| author    | Taesung Park <taesung_park@berkeley.edu> | 2017-12-10 23:04:41 -0800 |
|-----------|------------------------------------------|---------------------------|
| committer | Taesung Park <taesung_park@berkeley.edu> | 2017-12-10 23:04:41 -0800 |
| commit    | f33f098be9b25c3b62523540c9c703af1db0b1c0 | |
| tree      | 9b51e547067b46ad8b55ddb34b207825550df867 /README.md | |
| parent    | 3d2c534933b356dc313a620639a713cb940dc756 | |
| parent    | 2d96edbee5a488a7861833731a2cb71b23b55727 | |
merged conflicts
Diffstat (limited to 'README.md')
| -rw-r--r-- | README.md | 48 |
1 file changed, 43 insertions, 5 deletions
@@ -34,7 +34,31 @@ Image-to-Image Translation with Conditional Adversarial Networks
 [Phillip Isola](https://people.eecs.berkeley.edu/~isola), [Jun-Yan Zhu](https://people.eecs.berkeley.edu/~junyanz), [Tinghui Zhou](https://people.eecs.berkeley.edu/~tinghuiz), [Alexei A. Efros](https://people.eecs.berkeley.edu/~efros)
 In CVPR 2017.
+## Other implementations:
+### CycleGAN
+<p><a href="https://github.com/leehomyc/cyclegan-1">[Tensorflow]</a> (by Harry Yang),
+<a href="https://github.com/architrathore/CycleGAN/">[Tensorflow]</a> (by Archit Rathore),
+<a href="https://github.com/vanhuyz/CycleGAN-TensorFlow">[Tensorflow]</a> (by Van Huy),
+<a href="https://github.com/XHUJOY/CycleGAN-tensorflow">[Tensorflow]</a> (by Xiaowei Hu),
+<a href="https://github.com/LynnHo/CycleGAN-Tensorflow-Simple">[Tensorflow-simple]</a> (by Zhenliang He),
+<a href="https://github.com/luoxier/CycleGAN_Tensorlayer">[TensorLayer]</a> (by luoxier),
+<a href="https://github.com/Aixile/chainer-cyclegan">[Chainer]</a> (by Yanghua Jin),
+<a href="https://github.com/yunjey/mnist-svhn-transfer">[Minimal PyTorch]</a> (by yunjey),
+<a href="https://github.com/Ldpe2G/DeepLearningForFun/tree/master/Mxnet-Scala/CycleGAN">[Mxnet]</a> (by Ldpe2G),
+<a href="https://github.com/tjwei/GANotebooks">[lasagne/keras]</a> (by tjwei)</p>
+
+### pix2pix
+<p><a href="https://github.com/affinelayer/pix2pix-tensorflow">[Tensorflow]</a> (by Christopher Hesse),
+<a href="https://github.com/Eyyub/tensorflow-pix2pix">[Tensorflow]</a> (by Eyyüb Sariu),
+<a href="https://github.com/datitran/face2face-demo">[Tensorflow (face2face)]</a> (by Dat Tran),
+<a href="https://github.com/awjuliani/Pix2Pix-Film">[Tensorflow (film)]</a> (by Arthur Juliani),
+<a href="https://github.com/kaonashi-tyc/zi2zi">[Tensorflow (zi2zi)]</a> (by Yuchen Tian),
+<a href="https://github.com/pfnet-research/chainer-pix2pix">[Chainer]</a> (by mattya),
+<a href="https://github.com/tjwei/GANotebooks">[tf/torch/keras/lasagne]</a> (by tjwei),
+<a href="https://github.com/taey16/pix2pixBEGAN.pytorch">[Pytorch]</a> (by taey16)
+</p>
+
 ## Prerequisites
 - Linux or macOS
@@ -100,17 +124,31 @@ The test results will be saved to a html file here: `./results/facades_pix2pix/l
 More example scripts can be found at `scripts` directory.
 
 ### Apply a pre-trained model (CycleGAN)
-If you would like to apply a pre-trained model to a collection of input photos (without image pairs), please use `--dataset_mode single` and `--model test` options. Here is a script to apply a pix2pix model to facade label maps (stored in the directory `facades/testB`).
+
+If you would like to apply a pre-trained model to a collection of input photos (without image pairs), please use the `--dataset_mode single` and `--model test` options. Here is a script to apply a model to facade photos (stored in the directory `facades/testA`).
 ``` bash
 #!./scripts/test_single.sh
-python test.py --dataroot ./datasets/facades/testB/ --name facades_pix2pix --model test --which_model_netG unet_256 --which_direction BtoA --dataset_mode single
+python test.py --dataroot ./datasets/facades/testA/ --name {my_trained_model_name} --model test --dataset_mode single
+```
+You might want to specify `--which_model_netG` to match the generator architecture of the trained model.
+
+You can download a few pretrained models from the authors. For example, if you would like to download the horse2zebra model,
+
+```bash
+bash pretrained_models/download_cyclegan_model.sh horse2zebra
+```
+The pretrained model is saved at `./checkpoints/{name}_pretrained/latest_net_G.pth`.
+Then generate the results using
+
+```bash
+python test.py --dataroot datasets/horse2zebra/testA --checkpoints_dir ./checkpoints/ --name horse2zebra_pretrained --no_dropout --model test --dataset_mode single --loadSize 256 --results_dir {directory_path_to_save_result}
 ```
-Note: We currently don't have pretrained models using PyTorch. This is in part because the models trained using Torch and PyTorch produce slightly different results, although we were not able to decide which result is better. If you would like to generate the same results that appeared in our paper, we recommend using the pretrained models in the Torch codebase.
+Note: We currently don't have all pretrained models available in PyTorch. This is in part because models trained in Torch and in PyTorch produce slightly different results, although we were not able to decide which result is better. If you would like to generate the same results that appeared in our paper, we recommend using the pretrained models from the Torch codebase.
 
 ### Apply a pre-trained model (pix2pix)
-Download the pre-trained models using `./pretrained_models/download_pix2pix_model.sh`. For example, if you would like to download label2photo model on the Facades dataset,
+Download the pre-trained models using `./pretrained_models/download_pix2pix_model.sh`. For example, if you would like to download the label2photo model for the Facades dataset,
 
 ```bash
 bash pretrained_models/download_pix2pix_model.sh facades_label2photo
@@ -120,7 +158,7 @@ Then generate the results using
 
 ```bash
 python test.py --dataroot ./datasets/facades/ --which_direction BtoA --model pix2pix --name facades_label2photo_pretrained --dataset_mode aligned --which_model_netG unet_256 --norm batch
 ```
-Note that we specified `--which_direction BtoA` to accomodate the fact that the Facades dataset's A to B direction is photos to labels.
+Note that we specified `--which_direction BtoA` to accommodate the fact that the Facades dataset's A-to-B direction is photos to labels. Also, the models currently available for download are listed in the output of `bash pretrained_models/download_pix2pix_model.sh`.
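To see how the `{...}` placeholders in the CycleGAN commands above fit together, here is a minimal end-to-end sketch. The flags are exactly the ones from the diff; the results directory `./results/horse2zebra_test/` is a hypothetical value chosen for illustration, not something the repository prescribes.

```bash
# Download the pretrained horse2zebra generator into ./checkpoints/.
bash pretrained_models/download_cyclegan_model.sh horse2zebra

# Run inference in single-image mode on the test set. All flags are taken
# from the README text above; ./results/horse2zebra_test/ is a hypothetical
# output path, and any writable directory works.
python test.py --dataroot datasets/horse2zebra/testA \
    --checkpoints_dir ./checkpoints/ \
    --name horse2zebra_pretrained \
    --no_dropout --model test --dataset_mode single --loadSize 256 \
    --results_dir ./results/horse2zebra_test/
```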

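Similarly, for the note about `--which_model_netG`: when testing a model you trained yourself, the generator flag has to match the architecture used at training time. A sketch, assuming a hypothetical model named `my_facades_model` that was trained with the `unet_256` generator (both are placeholders; substitute your own name and architecture):

```bash
# Single-image-mode test of a self-trained model. The name "my_facades_model"
# and the unet_256 generator are assumptions for illustration; use whatever
# you actually trained with.
python test.py --dataroot ./datasets/facades/testA/ \
    --name my_facades_model \
    --model test --dataset_mode single \
    --which_model_netG unet_256
```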