Hi Aayush, thanks for reaching out. I'm afraid I don't understand your question; do you mean code for generating the mixture of Gaussians as they do in [1]? If that's the question, I unfortunately don't have anything nice and clean to commit for this toy problem. I did run the experiment back in the day (along with a Swiss roll) with good results, but didn't publish my findings on this toy dataset anywhere. I recall it was easy enough to get running with a 2-layer network with ReLU activations in both the generator and discriminator, no batch norm, no momentum or other fancy tricks, and more or less the same parameters as in the CelebA experiments.
[1] Srivastava, A., Valkov, L., Russell, C., Gutmann, M., and Sutton, C. VEEGAN: Reducing mode collapse in GANs using implicit variational learning. In Advances in Neural Information Processing Systems 30 (NIPS), 2017.
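For anyone looking to reproduce this, here is a minimal sketch of the setup described above: a mixture-of-Gaussians sampler (a ring of modes, as used for the 2D toy experiments in [1]) and plain two-layer MLPs with ReLU for the generator and discriminator. The mode count, radius, noise scale, and hidden width are my assumptions, not values from the original experiment, and the training loop itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gaussian_ring(n, n_modes=8, radius=1.0, std=0.05):
    """Sample 2D points from a mixture of Gaussians arranged on a ring.
    n_modes/radius/std are illustrative defaults, not the paper's values."""
    angles = 2 * np.pi * np.arange(n_modes) / n_modes
    centres = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    modes = rng.integers(0, n_modes, size=n)        # pick a mode per sample
    return centres[modes] + std * rng.normal(size=(n, 2))

def init_mlp(d_in, d_hidden, d_out):
    """Two-layer MLP parameters (Linear -> ReLU -> Linear)."""
    return {
        "W1": 0.1 * rng.normal(size=(d_in, d_hidden)), "b1": np.zeros(d_hidden),
        "W2": 0.1 * rng.normal(size=(d_hidden, d_out)), "b2": np.zeros(d_out),
    }

def mlp(p, x):
    h = np.maximum(x @ p["W1"] + p["b1"], 0.0)      # ReLU hidden layer
    return h @ p["W2"] + p["b2"]                    # raw output / logit

G = init_mlp(2, 128, 2)   # generator: 2D latent -> 2D sample
D = init_mlp(2, 128, 1)   # discriminator: 2D point -> real/fake logit

real = sample_gaussian_ring(256)
fake = mlp(G, rng.normal(size=(256, 2)))
logits = mlp(D, np.concatenate([real, fake]))
```

From here, a standard GAN training loop (e.g. alternating Adam or plain SGD steps on the usual binary cross-entropy objectives) should be enough to see mode coverage, or collapse, on this dataset.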