---
layout: hub_detail
background-class: hub-background
body-class: hub
title: DCGAN on FashionGen
summary: A simple generative image model for 64x64 images
category: researchers
image: dcgan_fashionGen.jpg
author: FAIR HDGAN
tags: [vision, generative]
github-link:
github-id: facebookresearch/pytorch_GAN_zoo
featured_image_1: dcgan_fashionGen.jpg
featured_image_2: no-image
accelerator: cuda-optional
order: 10
---
```python
import torch
use_gpu = True if torch.cuda.is_available() else False

model = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', 'DCGAN', pretrained=True, useGPU=use_gpu)
```

The input to the model is a noise vector of shape (N, 120) where N is the number of images to be generated. It can be constructed using the function .buildNoiseData. The model has a .test function that takes in the noise vector and generates images.

```python
num_images = 64
noise, _ = model.buildNoiseData(num_images)
with torch.no_grad():
    generated_images = model.test(noise)

# let's plot these images using torchvision and matplotlib
import matplotlib.pyplot as plt
import torchvision
plt.imshow(torchvision.utils.make_grid(generated_images).permute(1, 2, 0).cpu().numpy())
# plt.show()
```
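
If you would rather write the grid to disk than display it, torchvision's save_image utility can save the same batch in one call; a minimal sketch (the filename dcgan_samples.png is just an example):

```python
from torchvision.utils import save_image

# Save the generated batch as a single image grid; normalize=True rescales the
# generator's output range into [0, 1] before the file is written.
save_image(generated_images, 'dcgan_samples.png', normalize=True)
```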

You should see an image similar to the one on the left.

If you want to train your own DCGAN and other GANs from scratch, have a look at PyTorch GAN Zoo.

Model Description

In computer vision, generative models are networks trained to create images from a given input. In our case, we consider a specific kind of generative network: GANs (Generative Adversarial Networks), which learn to map a random vector to a realistic image.
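
Purely to illustrate that adversarial setup (a toy sketch on flat vectors, not the training code used by PyTorch GAN Zoo), a generator and a discriminator can be pitted against each other like this:

```python
import torch
import torch.nn as nn

# Toy GAN on flat vectors: G maps noise to samples, D scores real vs. generated.
latent_dim, data_dim = 16, 32
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))  # outputs a logit

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(512, data_dim) * 0.5 + 1.0  # stand-in for a "real" distribution

for step in range(200):
    batch = real_data[torch.randint(0, 512, (64,))]

    # Discriminator step: real samples labeled 1, generated samples labeled 0
    fake = G(torch.randn(64, latent_dim)).detach()
    d_loss = bce(D(batch), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: push D to label generated samples as real
    fake = G(torch.randn(64, latent_dim))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```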

DCGAN is a model designed in 2015 by Radford et al. in the paper Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. It is a GAN architecture that is both very simple and efficient for low-resolution image generation (up to 64x64).
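
To give a sense of how simple the architecture is, here is an illustrative DCGAN-style generator for 64x64 RGB images written in plain PyTorch, following the recipe from the paper (a stack of transposed convolutions with batch normalization and ReLU, finishing with a Tanh). It is a sketch, not the exact network shipped with PyTorch GAN Zoo; the latent size of 120 is only chosen to match the noise shape used above.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Illustrative DCGAN-style generator: latent vector -> 64x64 RGB image."""
    def __init__(self, latent_dim=120, feature_maps=64, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # (latent_dim, 1, 1) -> (feature_maps*8, 4, 4)
            nn.ConvTranspose2d(latent_dim, feature_maps * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feature_maps * 8),
            nn.ReLU(True),
            # -> (feature_maps*4, 8, 8)
            nn.ConvTranspose2d(feature_maps * 8, feature_maps * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 4),
            nn.ReLU(True),
            # -> (feature_maps*2, 16, 16)
            nn.ConvTranspose2d(feature_maps * 4, feature_maps * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 2),
            nn.ReLU(True),
            # -> (feature_maps, 32, 32)
            nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps),
            nn.ReLU(True),
            # -> (channels, 64, 64), values in [-1, 1]
            nn.ConvTranspose2d(feature_maps, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        # z has shape (N, latent_dim); reshape to (N, latent_dim, 1, 1) for the conv stack
        return self.net(z.view(z.size(0), z.size(1), 1, 1))

g = Generator()
print(g(torch.randn(4, 120)).shape)  # torch.Size([4, 3, 64, 64])
```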

Requirements

  • Currently only supports Python 3

References

  • Radford, A., Metz, L., and Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv:1511.06434, 2015. https://arxiv.org/abs/1511.06434