This repository provides the PyTorch code for the paper "Text-to-image synthesis with self-supervised bi-stage generative adversarial network" by Yong Xuan Tan, Chin Poo Lee, Mai Neo, Kian Ming Lim, and Jit Yan Lim.
The code is tested on Windows 10 with Anaconda3 and the following packages:
- Python 3.7.13
- PyTorch 1.4.0
- torchvision 0.5
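One possible way to reproduce this environment with Anaconda is sketched below; the environment name `ssbigan` is an illustrative choice, not something defined by this repo.

```shell
# Create and activate a conda environment with the tested Python version
conda create -n ssbigan python=3.7.13 -y
conda activate ssbigan

# Install the tested PyTorch and torchvision versions
pip install torch==1.4.0 torchvision==0.5.0
```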
We follow the same procedure and structure as SSTIS.
Download the preprocessed char-CNN-RNN text embeddings for flowers and birds, as well as the corresponding images. Put them into the `./data/oxford` and `./data/cub` folders.
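With the layout described above, the data folder should look roughly like this (the exact file names inside each folder depend on the downloaded archives):

```
data/
├── oxford/   # char-CNN-RNN embeddings and images for Oxford-102 flowers
└── cub/      # char-CNN-RNN embeddings and images for CUB birds
```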
To train on Oxford:

```shell
python main.py --dataset flowers --exp_num oxford_exp
```

To evaluate on Oxford:

```shell
python main.py --dataset flowers --exp_num oxford_exp --is_test true
```
Download the pretrained models and extract them to the `saved_model` folder.
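The repo's own checkpoint handling presumably lives in `main.py`; as a general illustration of how PyTorch checkpoints like those in `saved_model` are saved and restored, here is a minimal sketch (the module and file name are hypothetical stand-ins, not this repo's actual classes):

```python
import os
import torch
import torch.nn as nn

# A stand-in module; the actual generator/discriminator classes are defined in this repo.
model = nn.Linear(4, 2)

# Save only the state_dict, the usual way PyTorch checkpoints are stored.
os.makedirs("saved_model", exist_ok=True)
torch.save(model.state_dict(), "saved_model/example.pth")

# Restore into a freshly constructed module with the same architecture.
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("saved_model/example.pth"))
restored.eval()  # switch to inference mode before generating images
```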
Examples generated by SSBi-GAN:
If you find this repo useful for your research, please consider citing the paper:
```bibtex
@article{TAN202343,
  title    = {Text-to-image synthesis with self-supervised bi-stage generative adversarial network},
  journal  = {Pattern Recognition Letters},
  volume   = {169},
  pages    = {43-49},
  year     = {2023},
  issn     = {0167-8655},
  doi      = {https://doi.org/10.1016/j.patrec.2023.03.023},
  url      = {https://www.sciencedirect.com/science/article/pii/S0167865523000880},
  author   = {Yong Xuan Tan and Chin Poo Lee and Mai Neo and Kian Ming Lim and Jit Yan Lim},
  keywords = {Text-to-image-synthesis, Generative adversarial network, Self-supervised learning, GAN},
  abstract = {Text-to-image synthesis is challenging as generating images that are visually realistic and semantically consistent with the given text description involves multi-modal learning with text and image. To address the challenges, this paper presents a text-to-image synthesis model that utilizes self-supervision and bi-stage image distribution architecture, referred to as the Self-Supervised Bi-Stage Generative Adversarial Network (SSBi-GAN). The self-supervision diversifies the learned representation thus improving the quality of the synthesized images. Besides that, the bi-stage architecture with Residual network enables the generation of larger images with finer visual contents. Not only that, some enhancements including L1 distance, one-sided smoothing and feature matching are incorporated to enhance the visual realism and semantic consistency of the images as well as the training stability of the model. The empirical results on Oxford-102 and CUB datasets corroborate the ability of the proposed SSBi-GAN in generating visually realistic and semantically consistent images.}
}
```
For any questions, please contact:
- Yong Xuan Tan ([email protected])
- Jit Yan Lim ([email protected])
This code is released under the MIT License (refer to the LICENSE file for details).