# pixel-styler

Refactored CycleGAN / pix2pix

<img src="docs/horse2zebra.gif" width="384">

This is a concise refactoring of the official PyTorch implementation for image-to-image translation (CycleGAN and pix2pix).

## Prerequisites

## Getting Started

### CycleGAN train/test

### pix2pix train/test

### Apply a pre-trained model (CycleGAN)

If you would like to apply a pre-trained model to a collection of input photos (without image pairs), use the --dataset_mode single and --model test options. Here is the command to apply a model to Facades label maps (stored in the directory facades/testB):

  python test.py --dataroot ./datasets/facades/testB/ --name {pre_trained_model_name} --model test --dataset_mode single

You might want to specify --which_model_netG to match the generator architecture of the trained model.
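If you would rather drive a saved generator from your own script than through test.py, the sketch below shows one possible way. It is a minimal, hedged example: it assumes the refactoring keeps the official repo's models.networks.define_G helper, a resnet_9blocks generator with instance norm, and the usual checkpoints/&lt;name&gt;/latest_net_G.pth checkpoint layout; adjust all of these to match your trained model.

```python
# Minimal sketch (not a supported entry point): load a saved CycleGAN
# generator and translate one image. The module layout, architecture
# choice, and paths below are assumptions; match them to your model.
import torch
from PIL import Image
from torchvision import transforms

from models import networks  # assumes the official repo's module layout

netG = networks.define_G(3, 3, 64, 'resnet_9blocks', norm='instance')
state = torch.load('checkpoints/my_pretrained/latest_net_G.pth', map_location='cpu')
netG.load_state_dict(state)
netG.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.ToTensor(),
    transforms.Normalize((0.5,) * 3, (0.5,) * 3),  # scale to [-1, 1], as in training
])
img = preprocess(Image.open('datasets/facades/testB/1.jpg').convert('RGB'))

with torch.no_grad():
    fake = netG(img.unsqueeze(0))[0]

# undo the [-1, 1] normalization before saving
transforms.ToPILImage()((fake * 0.5 + 0.5).clamp(0, 1)).save('result.png')
```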

### Apply a pre-trained model (pix2pix)

Download the pre-trained models using ./demo/pretrained_models/download_pix2pix_model.sh. For example, to download the label2photo model trained on the Facades dataset:

  bash demo/pretrained_models/download_pix2pix_model.sh facades_label2photo

Then generate the results using:

  python test.py --dataroot ./datasets/facades/ --name facades_label2photo_pretrained --model pix2pix --dataset_mode aligned \
  --which_model_netG unet_256 --which_direction BtoA --norm batch

Note that we specified --which_direction BtoA to accommodate the fact that the Facades dataset's A-to-B direction is photos to labels.

Also, the models currently available for download are listed in the output of bash demo/pretrained_models/download_pix2pix_model.sh.

## Training/test Details

### CycleGAN Datasets

Download the CycleGAN datasets using the following script. Some of the datasets are collected by other researchers. Please cite their papers if you use the data.

  bash ./datasets/download_cyclegan_dataset.sh dataset_name
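For example, `bash ./datasets/download_cyclegan_dataset.sh horse2zebra` fetches the horse2zebra dataset shown in the animation at the top. The dataset names follow the official repo's download script, so check the script itself for the current list.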

To train on your own datasets, create a data folder with two subdirectories, trainA and trainB, containing images from domain A and domain B respectively. You can test the model on the training set by setting --phase train in test.py, and you can also create subdirectories testA and testB if you have test data.
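As a concrete illustration, the sketch below builds that layout from two flat folders of images. It is a hypothetical helper, not part of the repo; the source paths and the 90/10 split are placeholders.

```python
# Hypothetical helper: arrange two flat image folders into the
# trainA/trainB (and testA/testB) layout that CycleGAN training expects.
import random
import shutil
from pathlib import Path

def make_cyclegan_folder(src_a, src_b, dst, test_fraction=0.1, seed=0):
    rng = random.Random(seed)
    for src, domain in [(Path(src_a), 'A'), (Path(src_b), 'B')]:
        images = sorted(p for p in src.iterdir() if p.suffix.lower() in {'.jpg', '.png'})
        rng.shuffle(images)
        n_test = int(len(images) * test_fraction)
        for split, files in [('test', images[:n_test]), ('train', images[n_test:])]:
            out = Path(dst) / f'{split}{domain}'
            out.mkdir(parents=True, exist_ok=True)
            for f in files:
                shutil.copy(f, out / f.name)

# e.g. make_cyclegan_folder('raw/photos', 'raw/paintings', 'datasets/mydata')
```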

### pix2pix datasets

Download the pix2pix datasets using the following script. Some of the datasets are collected by other researchers. Please cite their papers if you use the data.

  bash ./datasets/download_pix2pix_dataset.sh dataset_name
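For example, `bash ./datasets/download_pix2pix_dataset.sh facades` fetches the Facades dataset used in the examples above (again, the script itself is the authoritative list of dataset names).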

We provide a Python script to generate pix2pix training data in the form of pairs of images {A,B}:

Create a folder /path/to/data with subfolders A and B. A and B should each have their own subfolders train, val, test, etc. Put training images in style A at /path/to/data/A/train, and put the corresponding images in style B at /path/to/data/B/train. Repeat for the other splits (val, test, etc.).

Corresponding images in a pair {A,B} must be the same size and have the same filename, e.g., /path/to/data/A/train/1.jpg is considered to correspond to /path/to/data/B/train/1.jpg.
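Because mismatched names or sizes will break the combination step, it can be worth verifying the pairing first. The sketch below is one hypothetical way to do that; it assumes only the A/B folder layout described above.

```python
# Hypothetical sanity check: every image in A/<split> should have a
# same-named, same-sized counterpart in B/<split>.
from pathlib import Path
from PIL import Image

def check_pairs(root, split='train'):
    a_dir, b_dir = Path(root) / 'A' / split, Path(root) / 'B' / split
    for a_path in sorted(p for p in a_dir.iterdir() if p.is_file()):
        b_path = b_dir / a_path.name
        assert b_path.exists(), f'missing pair for {a_path.name}'
        if Image.open(a_path).size != Image.open(b_path).size:
            raise ValueError(f'size mismatch for {a_path.name}')

# e.g. check_pairs('/path/to/data', 'train')
```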

Once the data is formatted this way, use:

  python datasets/combine_A_and_B.py --fold_A /path/to/data/A --fold_B /path/to/data/B --fold_AB /path/to/data

This will combine each pair of images (A,B) into a single image file, ready for training.
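For reference, the combination step amounts to pasting A and B side by side into one image (the official combine script concatenates them horizontally). Below is a minimal sketch of that operation on a single pair, with illustrative paths taken from the layout above:

```python
# Minimal sketch of what combining one {A, B} pair produces: a single
# image with A on the left half and B on the right half.
from PIL import Image

im_a = Image.open('/path/to/data/A/train/1.jpg')
im_b = Image.open('/path/to/data/B/train/1.jpg')
assert im_a.size == im_b.size  # pairs must match in size

im_ab = Image.new('RGB', (im_a.width + im_b.width, im_a.height))
im_ab.paste(im_a, (0, 0))
im_ab.paste(im_b, (im_a.width, 0))
im_ab.save('/path/to/data/train/1.jpg')  # hypothetical output path
```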

## CycleGAN

[Project](https://junyanz.github.io/CycleGAN/) [Paper](https://arxiv.org/abs/1703.10593) [Torch](https://github.com/junyanz/CycleGAN)

## Pix2pix

[Project](https://phillipi.github.io/pix2pix/) [Paper](https://arxiv.org/abs/1611.07004) [Torch](https://github.com/phillipi/pix2pix)