sample generation with fixed noise #8
Comments
You could use PyTorch's `torch.random.fork_rng` and `torch.manual_seed` functions to achieve this, though it would be good to have this functionality built in for some use cases.
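A minimal sketch of that suggestion: draw the same noise on every call by seeding inside a forked RNG region, which leaves the global RNG stream untouched. The function name `sample_fixed_noise` is hypothetical, not part of any library.

```python
import torch

def sample_fixed_noise(shape, seed=0):
    # fork_rng snapshots and later restores the global RNG state,
    # so seeding here does not disturb the rest of the program.
    with torch.random.fork_rng():
        torch.manual_seed(seed)
        return torch.randn(shape)

a = sample_fixed_noise((4, 2))
b = sample_fixed_noise((4, 2))
# Same seed inside the fork -> identical samples on every call.
```

Because the fork restores the outer RNG state, calling this between training steps does not change the data-shuffling or dropout randomness of the run.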
If I'm not mistaken, you should be able to do something like:

```python
z_sample = fixed_noise.clone()
for transform in reversed(flow.transforms):
    z_sample = transform.inverse(z_sample)
```

Using this you can also experiment with different zs, e.g. reduce the std by multiplying it by a value < 1 so you get less diverse but more coherent results, etc...
Hey! As mentioned before, if you want to map between x and z, you can do this simply using:

```python
# x to z:
for t in flow.transforms:
    x, _ = t(x)

# z to x:
for t in reversed(flow.transforms):
    z = t.inverse(z)
```

Note that this simple approach generalizes if you want to map to intermediate representations (by not looping through all transforms), etc. I purposefully kept the code simple and hackable and thus did not add specific functions for this behavior.
Hi,
I want to generate a bunch of base-distribution samples z and keep them fixed, so that as training goes on I can compute their original-space representations and watch how they evolve. However, it seems to me that the current code does not support that, right? The only option right now is to sample from the learned distribution, but then I cannot keep the latent representation fixed; it changes every time I sample, I guess.
Many thanks.