Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency

TensorFlow implementation of the ICLR 2019 paper "Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency".

Network architecture

[Figure: network architecture]

Information flow diagrams

[Figure: information flow diagrams]

Dependencies

  • python 3.6.9
  • tensorflow-gpu (1.14.0)
  • numpy (1.14.0)
  • Pillow (5.0.0)
  • scikit-image (0.13.0)
  • scipy (1.0.1)
  • matplotlib (2.0.0)
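
If you set up the environment with pip, the pinned versions above can be installed in one step. This is only a sketch, assuming a CUDA-capable machine and an active Python 3.6 environment; the package names and versions are simply those from the list above.

  pip install tensorflow-gpu==1.14.0 numpy==1.14.0 Pillow==5.0.0 \
      scikit-image==0.13.0 scipy==1.0.1 matplotlib==2.0.0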

Resources

  • Pretrained models: MNIST, MNIST_multi, GTA<->BDD, CelebA, VGG19
  • Training & testing data in tf-record format: MNIST, MNIST_multi, GTA<->BDD, CelebA. Note: for the GTA<->BDD experiment, the data are prepared as RGB images of 512x1024 resolution with segmentation labels of 8 categories; they are provided for further research. In our paper, we use RGB images of 256x512 resolution and do not use segmentation labels.
  • Segmentation model: refer to DeepLab-ResNet-TensorFlow

TF-record data preparation steps (Optional)

You can skip this data preparation procedure if you use the provided tf-record data files directly. A combined sketch of the commands is given after the list below.

  1. cd datasets
  2. ./run_convert_mnist.sh to download MNIST and convert mnist and mnist_multi to tf-record format.
  3. ./run_convert_gta_bdd.sh to convert the images and segmentation labels to tf-record format. You need to download the data from the GTA5 and BDD websites first. Note: this script will reuse the GTA data already downloaded and processed by ./run_convert_gta_bdd.sh.
  4. ./run_convert_celeba.sh to convert the images to tf-record format. You can directly download the prepared data, or download and process the data from the CelebA website.
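
Putting the steps above together, a typical run could look like the following sketch; only the script names come from the steps above, and the raw GTA5, BDD, and CelebA downloads are assumed to already be in place.

  cd datasets
  ./run_convert_mnist.sh      # downloads MNIST and writes the mnist / mnist_multi tf-records
  ./run_convert_gta_bdd.sh    # assumes the GTA5 and BDD data have been downloaded beforehand
  ./run_convert_celeba.sh     # assumes the CelebA data have been downloaded beforehand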

Training steps

  1. Replace the data, logs, and weights links with your own directories or links.
  2. Download the VGG19 weights into the 'weights' directory.
  3. Download the tf-record training data into data_parent_dir (default ./data).
  4. Modify data_parent_dir and checkpoint_dir, and comment/uncomment the target experiment in the run_train_feaMask.sh and run_train_EGSCIT.sh scripts.
  5. Run run_train_feaMask.sh to pretrain the feature mask network, then run run_train_EGSCIT.sh (see the sketch after this list).
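
As a minimal sketch of step 5, assuming steps 1-4 are done and the default ./data and ./logs locations are used:

  ./run_train_feaMask.sh    # pretrain the feature mask network
  ./run_train_EGSCIT.sh     # train the full EGSC-IT model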

Testing steps

  1. Replace the data, logs, and weights links with your own directories or links.
  2. (Optional) Download the pretrained models into checkpoint_dir (default ./logs).
  3. Download the tf-record testing data into data_parent_dir (default ./data).
  4. Modify data_parent_dir and checkpoint_dir, and comment/uncomment the target experiment in the run_test_EGSCIT.sh script.
  5. Run run_test_EGSCIT.sh (see the sketch after this list).
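
A minimal sketch of a test run, assuming the checkpoints are already in ./logs and the tf-record test data in ./data:

  ./run_test_EGSCIT.sh    # runs inference for the experiment selected in the script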

Citation

@article{ma2018exemplar,
  title={Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency},
  author={Ma, Liqian and Jia, Xu and Georgoulis, Stamatios and Tuytelaars, Tinne and Van Gool, Luc},
  journal={ICLR},
  year={2019}
}

Related projects
