# HOWTOs

English | 简体中文

## How to train StyleGAN2

  1. Prepare the training dataset: FFHQ. More details are in DatasetPreparation.md.

    1. Download the FFHQ dataset. We recommend downloading the tfrecords files from NVlabs/ffhq-dataset.

    2. Extract the tfrecords to images or LMDBs (TensorFlow is required to read tfrecords; a sketch of this step is shown after this list):

      python scripts/data_preparation/extract_images_from_tfrecords.py

  2. Modify the config file options/train/StyleGAN/train_StyleGAN2_256_Cmul2_FFHQ_800k.yml.

  3. Launch distributed training. More training commands are in TrainTest.md.

    python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/StyleGAN/train_StyleGAN2_256_Cmul2_FFHQ_800k.yml --launcher pytorch
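
If you need to adapt the tfrecords extraction in step 1, the sketch below shows the general idea, assuming the NVlabs/ffhq-dataset record layout (each example stores a 'shape' feature in CHW order and a 'data' feature with the raw uint8 bytes). The file names and output path are examples only; scripts/data_preparation/extract_images_from_tfrecords.py is the authoritative implementation.

```python
# Minimal sketch of tfrecords -> PNG extraction, assuming the NVlabs FFHQ
# record layout ('shape' = CHW int64 list, 'data' = raw uint8 bytes).
# Paths below are examples only.
import os

import numpy as np
import tensorflow as tf
from PIL import Image


def extract_tfrecord(tfrecord_path, save_dir):
    os.makedirs(save_dir, exist_ok=True)
    for idx, record in enumerate(tf.data.TFRecordDataset(tfrecord_path)):
        example = tf.train.Example()
        example.ParseFromString(record.numpy())
        shape = list(example.features.feature['shape'].int64_list.value)  # (C, H, W)
        data = example.features.feature['data'].bytes_list.value[0]
        img = np.frombuffer(data, dtype=np.uint8).reshape(shape)  # CHW
        img = img.transpose(1, 2, 0)  # HWC for saving
        Image.fromarray(img).save(os.path.join(save_dir, f'{idx:08d}.png'))


if __name__ == '__main__':
    # ffhq-r08.tfrecords corresponds to the 256x256 resolution (2^8).
    extract_tfrecord('datasets/ffhq/ffhq-r08.tfrecords', 'datasets/ffhq/images_256')
```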

## How to run inference with StyleGAN2

  1. Download pre-trained models from ModelZoo (Google Drive, 百度网盘) to the experiments/pretrained_models folder.

  2. Test (a rough sketch of this step is shown below).

    python inference/inference_stylegan2.py

  3. The results are in the samples folder.
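
For reference, the snippet below is a hypothetical sketch of what the inference script roughly does: build the generator, load the pretrained weights, and sample random faces. It assumes that BasicSR's StyleGAN2Generator follows the common rosinality-style interface (a list of latent codes in, an (image, latent) tuple out) and that the checkpoint stores EMA weights under a 'params_ema' key; the checkpoint path is a placeholder. Treat inference/inference_stylegan2.py as the source of truth for the exact arguments.

```python
# Hypothetical sketch of StyleGAN2 sampling with BasicSR. The checkpoint path,
# the 'params_ema' key and the forward signature are assumptions; check
# inference/inference_stylegan2.py for the actual usage. The import path may
# also differ across BasicSR versions.
import os

import torch
from torchvision.utils import save_image

from basicsr.archs.stylegan2_arch import StyleGAN2Generator

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = StyleGAN2Generator(out_size=256, channel_multiplier=2).to(device)
ckpt = torch.load('experiments/pretrained_models/stylegan2_ffhq_256.pth',  # placeholder path
                  map_location=device)
net.load_state_dict(ckpt['params_ema'])
net.eval()

os.makedirs('samples', exist_ok=True)
with torch.no_grad():
    z = torch.randn(16, 512, device=device)    # 16 random latent codes
    imgs, _ = net([z], randomize_noise=False)  # images roughly in [-1, 1]
    save_image(imgs, 'samples/random_faces.png', nrow=4, normalize=True, value_range=(-1, 1))
```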

## How to run inference with DFDNet

  1. Install dlib, since DFDNet uses dlib for face recognition and landmark detection (a usage sketch follows this list). Installation reference:

    1. Clone the dlib repo: git clone git@github.com:davisking/dlib.git
    2. cd dlib
    3. Install: python setup.py install
  2. Download the dlib pretrained models from ModelZoo (Google Drive, 百度网盘) to the experiments/pretrained_models/dlib folder.
    You can download them by running the following command, or download the pretrained models manually:

    python scripts/download_pretrained_models.py dlib

  3. Download pretrained DFDNet models, dictionary and face template from ModelZoo (Google Drive, 百度网盘) to the experiments/pretrained_models/DFDNet folder.
    You can download them by running the following command, or download the pretrained models manually:

    python scripts/download_pretrained_models.py DFDNet

  4. Prepare the testing dataset in the datasets folder. For example, put the test images in the datasets/TestWhole folder.

  5. Test.

    python inference/inference_dfdnet.py --upscale_factor=2 --test_path datasets/TestWhole

  6. The results are in the results/DFDNet folder.
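
As a reference for the dlib dependency in step 1, here is a minimal sketch of the face detection and 68-point landmark step that DFDNet builds on. The .dat file names and the image path are placeholders; use whichever model files scripts/download_pretrained_models.py dlib places in experiments/pretrained_models/dlib.

```python
# Minimal dlib face detection + landmark sketch. Model file names and the
# image path are placeholders, not the exact files shipped with BasicSR.
import dlib

detector = dlib.cnn_face_detection_model_v1(
    'experiments/pretrained_models/dlib/mmod_human_face_detector.dat')
predictor = dlib.shape_predictor(
    'experiments/pretrained_models/dlib/shape_predictor_68_face_landmarks.dat')

img = dlib.load_rgb_image('datasets/TestWhole/example.png')
for det in detector(img, 1):          # 1 = upsample the image once for small faces
    shape = predictor(img, det.rect)  # 68 facial landmarks
    landmarks = [(p.x, p.y) for p in shape.parts()]
    print(f'Detected a face with {len(landmarks)} landmarks (confidence {det.confidence:.2f}).')
```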