Thanks very much for this codebase; it's been a great way to learn about flow matching. I have a question regarding conditional generation with OT-CFM.
When testing different FM approaches on my own data, I noticed that OT-CFM trains significantly more slowly and tends to perform much worse on tasks with conditioning. To isolate the problem, I tried conditional MNIST, comparing OT-CFM with FM (using the example provided).
After a single epoch of training, I visualized the generations of both approaches with a single Euler step and with the adaptive dopri5 solver. FM is on the left, OT-CFM is on the right.
One step generation (euler with 1 step):
Adaptive generation with dopri5:
After one epoch of training, FM produces much nicer generations both with one sampling step and with dopri5. Even with longer training, FM continues to outperform OT-CFM and converges much faster.
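For reference, the sampling was essentially the following (a minimal sketch rather than my exact script; the trained class-conditional UNet from the MNIST example is assumed to be callable as model(t, x, y), and a dummy module stands in for it here):

    import torch
    from torchdiffeq import odeint

    # Stand-in for the trained class-conditional velocity network; the real model
    # is assumed to take (t, x, y) and return the predicted velocity field.
    class DummyVelocity(torch.nn.Module):
        def forward(self, t, x, y):
            return torch.zeros_like(x)

    model = DummyVelocity()
    y = torch.arange(10).repeat_interleave(10)   # 100 labels, 10 per digit
    x0 = torch.randn(100, 1, 28, 28)             # Gaussian source samples

    with torch.no_grad():
        # one-step generation: a single Euler step from t=0 to t=1
        x1_euler = x0 + model(torch.zeros(100), x0, y)

        # adaptive generation: integrate the learned ODE with dopri5
        traj = odeint(
            lambda t, x: model(t.expand(x.shape[0]), x, y),
            x0,
            torch.linspace(0, 1, 2),
            method="dopri5",
        )
        x1_dopri5 = traj[-1]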
I wonder if the authors have studied this, whether there are any results for OT-CFM on conditional tasks, or whether there is a reason why OT-CFM should not work in this setting. My intuition was that adding conditioning makes the combinatorial space of the OT plan extremely hard to approximate from the limited samples in the batch, and this would be exaggerated further if the conditioning is not on simple class labels but on continuous values (for example, language embeddings for text-to-image generation).
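To make that concrete, this is roughly what the per-batch pairing boils down to (a minimal sketch using the POT library directly rather than the repo's internals; the batch size, dimensions, and squared Euclidean cost are assumptions for illustration):

    import numpy as np
    import ot      # POT: Python Optimal Transport
    import torch

    batch = 128
    x0 = torch.randn(batch, 784)              # noise samples
    x1 = torch.rand(batch, 784)               # stand-in for flattened MNIST images
    y = torch.randint(0, 10, (batch,))        # class labels attached to x1

    M = (torch.cdist(x0, x1) ** 2).numpy()    # pairwise squared distances as the cost
    a = b = np.full(batch, 1.0 / batch)       # uniform marginals over the batch
    pi = ot.emd(a, b, M)                      # exact OT plan; here a scaled permutation

    perm = torch.as_tensor(pi.argmax(axis=1)) # which x1 each x0 gets paired with
    x1_paired, y_paired = x1[perm], y[perm]   # the labels simply ride along with x1

The cost only sees the images, so with conditioning the pairing still has to be estimated from the same small batch, which is the part I suspect becomes hard.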
I would greatly appreciate any insight on this, and if there is an approach that is applicable to conditional generation. Thank you!
The code tweaks for this were:
from torchcfm.conditional_flow_matching import (
    ExactOptimalTransportConditionalFlowMatcher,
    TargetConditionalFlowMatcher,
)

sigma = 0.0
if args.fm_method == "fm":
    FM = TargetConditionalFlowMatcher(sigma=sigma)
elif args.fm_method == "otcfm":
    FM = ExactOptimalTransportConditionalFlowMatcher(sigma=sigma)

# inside the training loop, for a batch x0 (noise), x1 (images), y (labels):
if args.fm_method == "fm":
    t, xt, ut = FM.sample_location_and_conditional_flow(x0, x1)
    y1 = y
elif args.fm_method == "otcfm":
    # the OT pairing permutes the batch, so the labels are permuted along with x1
    t, xt, ut, _, y1 = FM.guided_sample_location_and_conditional_flow(x0, x1, y1=y)
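In both branches the outputs then feed the same regression loss, roughly as below (continuing the snippet above; model and optimizer are assumed to be defined as in the example, with the model called as model(t, xt, y)):

    vt = model(t, xt, y1)                  # velocity prediction, conditioned on y1
    loss = torch.mean((vt - ut) ** 2)      # conditional flow matching objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

With sigma=0, the main practical difference between the two branches is therefore how (x0, x1, y1) are paired before sampling xt and ut.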
We have not explored OT in this setting very much. We do use it in a text-conditioned model in our most recent work on protein generation (see https://arxiv.org/abs/2405.20313), but we did not test the extent to which OT helps there, since the conditional model already worked so well.
> My intuition was that adding conditioning makes the combinatorial space of the OT plan extremely hard to approximate from the limited samples in the batch, and this would be exaggerated further if the conditioning is not on simple class labels but on continuous values (for example, language embeddings for text-to-image generation).
I'm not sure about this intuition: even if the OT plan is not approximated well, that should just fall back to random pairings. Here it seems like the OT pairing is actively harmful.
It's also quite interesting that the one-step generations are all the same for FM but not for OT-CFM, and that the dopri5-generated samples seem more uniform for FM (line thickness especially). I suspect that FM first learns some averaged image, whereas OT-CFM may be forced to directly predict the diverse images from the noise, which is probably difficult to learn, especially early in training.
After reading more, I noticed that both OT-CFM and Multisample Flow Matching papers only report results for unconditional generation, while papers doing conditional generation such as Stable Diffusion 3 and Flow Matching in Latent Space seem to use standard flow matching without batch optimal transport.