From 061226659f430a3c7b868ed7718ed25e58ebd420 Mon Sep 17 00:00:00 2001
From: junyanz
Date: Sat, 14 Oct 2017 22:01:14 +0800
Subject: update README

---
 README.md | 30 ++++++++++++++++++++++++++----
 1 file changed, 26 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 4e5e04e..5e4a967 100644
--- a/README.md
+++ b/README.md
@@ -34,7 +34,29 @@ Image-to-Image Translation with Conditional Adversarial Networks
 [Phillip Isola](https://people.eecs.berkeley.edu/~isola), [Jun-Yan Zhu](https://people.eecs.berkeley.edu/~junyanz), [Tinghui Zhou](https://people.eecs.berkeley.edu/~tinghuiz), [Alexei A. Efros](https://people.eecs.berkeley.edu/~efros)
 In CVPR 2017.
-
+## Other implementations:
+### CycleGAN
+[Tensorflow] (by Harry Yang),
+[Tensorflow] (by Archit Rathore),
+[Tensorflow] (by Van Huy),
+[Tensorflow] (by Xiaowei Hu),
+[Tensorflow-simple] (by Zhenliang He),
+[Chainer] (by Yanghua Jin),
+[Minimal PyTorch] (by yunjey),
+[Mxnet] (by Ldpe2G),
+[lasagne/keras] (by tjwei)
+
+### pix2pix
+[Tensorflow] (by Christopher Hesse),
+[tf/torch/keras/lasagne] (by tjwei),
+[Tensorflow] (by Eyyüb Sariu),
+[Tensorflow (face2face)] (by Dat Tran),
+[Tensorflow (film)] (by Arthur Juliani),
+[Tensorflow (zi2zi)] (by Yuchen Tian),
+[Chainer] (by mattya),
+[Pytorch] (by taey16)
+
+
 ## Prerequisites
 - Linux or macOS
@@ -106,11 +128,11 @@ If you would like to apply a pre-trained model to a collection of input photos (
 python test.py --dataroot ./datasets/facades/testB/ --name facades_pix2pix --model test --which_model_netG unet_256 --which_direction BtoA --dataset_mode single
 ```
-Note: We currently don't have pretrained models using PyTorch. This is in part because the models trained using Torch and PyTorch produce slightly different results, although we were not able to decide which result is better. If you would like to generate the same results that appeared in our paper, we recommend using the pretrained models in the Torch codebase.
+Note: We currently don't have pretrained models using PyTorch. This is in part because the models trained using Torch and PyTorch produce slightly different results, although we were not able to decide which result is better. If you would like to generate the same results that appeared in our paper, we recommend using the pretrained models in the Torch codebase.
 
 ### Apply a pre-trained model (pix2pix)
-Download the pre-trained models using `./pretrained_models/download_pix2pix_model.sh`. For example, if you would like to download the label2photo model on the Facades dataset,
+Download the pre-trained models using `./pretrained_models/download_pix2pix_model.sh`. For example, if you would like to download the label2photo model on the Facades dataset,
 
 ```bash
 bash pretrained_models/download_pix2pix_model.sh facades_label2photo
@@ -120,7 +142,7 @@ Then generate the results using
 ```bash
 python test.py --dataroot ./datasets/facades/ --which_direction BtoA --model pix2pix --name facades_label2photo_pretrained --dataset_mode aligned --which_model_netG unet_256 --norm batch
 ```
-Note that we specified `--which_direction BtoA` to accommodate the fact that the Facades dataset's A to B direction is photos to labels.
+Note that we specified `--which_direction BtoA` to accommodate the fact that the Facades dataset's A to B direction is photos to labels.
 
 Also, the models that are currently available to download can be found by reading the output of `bash pretrained_models/download_pix2pix_model.sh`

From bdbd2262c42b6b723e949e58bdad058ed3b97769 Mon Sep 17 00:00:00 2001
From: junyanz
Date: Sat, 14 Oct 2017 22:06:12 +0800
Subject: Update README

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 5e4a967..cd33e79 100644
--- a/README.md
+++ b/README.md
@@ -46,6 +46,7 @@ In CVPR 2017.
 [Mxnet] (by Ldpe2G),
 [lasagne/keras] (by tjwei)
+
 ### pix2pix
 [Tensorflow] (by Christopher Hesse),
 [tf/torch/keras/lasagne] (by tjwei),

From 01fea98d5d8cc5411ea5a0ff40e4a5e36a18b1c8 Mon Sep 17 00:00:00 2001
From: junyanz
Date: Sat, 14 Oct 2017 22:08:30 +0800
Subject: update README

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index cd33e79..3105f05 100644
--- a/README.md
+++ b/README.md
@@ -49,13 +49,13 @@ In CVPR 2017.
 ### pix2pix
 [Tensorflow] (by Christopher Hesse),
-[tf/torch/keras/lasagne] (by tjwei),
 [Tensorflow] (by Eyyüb Sariu),
 [Tensorflow (face2face)] (by Dat Tran),
 [Tensorflow (film)] (by Arthur Juliani),
 [Tensorflow (zi2zi)] (by Yuchen Tian),
 [Chainer] (by mattya),
-[Pytorch] (by taey16)
+[Pytorch] (by taey16),
+[tf/torch/keras/lasagne] (by tjwei)

From f34719c53419d283586c47ad1905b8144a89547a Mon Sep 17 00:00:00 2001
From: junyanz
Date: Sat, 14 Oct 2017 22:11:06 +0800
Subject: update README

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 3105f05..455c32a 100644
--- a/README.md
+++ b/README.md
@@ -54,8 +54,8 @@ In CVPR 2017.
 [Tensorflow (film)] (by Arthur Juliani),
 [Tensorflow (zi2zi)] (by Yuchen Tian),
 [Chainer] (by mattya),
-[Pytorch] (by taey16),
-[tf/torch/keras/lasagne] (by tjwei)
+[tf/torch/keras/lasagne] (by tjwei),
+[Pytorch] (by taey16)

From 276a568218c0d5331d8abb87d141a5c0f4bf6b3f Mon Sep 17 00:00:00 2001
From: taesung89
Date: Sun, 5 Nov 2017 01:03:05 +0900
Subject: Update README.md

Replaced the misleading description of applying a pretrained CycleGAN model.

---
 README.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 455c32a..ad371cb 100644
--- a/README.md
+++ b/README.md
@@ -123,12 +123,14 @@ The test results will be saved to an HTML file here: `./results/facades_pix2pix/l
 More example scripts can be found in the `scripts` directory.
 
 ### Apply a pre-trained model (CycleGAN)
-If you would like to apply a pre-trained model to a collection of input photos (without image pairs), please use `--dataset_mode single` and `--model test` options. Here is a script to apply a pix2pix model to facade label maps (stored in the directory `facades/testB`).
+If you would like to apply a pre-trained model to a collection of input photos (without image pairs), please use `--dataset_mode single` and `--model test` options. Here is a script to apply a model to facade label maps (stored in the directory `facades/testB`).
 
 ``` bash
 #!./scripts/test_single.sh
-python test.py --dataroot ./datasets/facades/testB/ --name facades_pix2pix --model test --which_model_netG unet_256 --which_direction BtoA --dataset_mode single
+python test.py --dataroot ./datasets/facades/testA/ --name {my_trained_model_name} --model test --dataset_mode single
 ```
+You might have to specify `--which_model_netG` to match the generator architecture of the trained model.
+
 Note: We currently don't have pretrained models using PyTorch. This is in part because the models trained using Torch and PyTorch produce slightly different results, although we were not able to decide which result is better. If you would like to generate the same results that appeared in our paper, we recommend using the pretrained models in the Torch codebase.

From b546d99d32d377287f7cfa9130aac1a3b1c980c2 Mon Sep 17 00:00:00 2001
From: taesung89
Date: Sun, 5 Nov 2017 01:03:55 +0900
Subject: Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index ad371cb..87e40ea 100644
--- a/README.md
+++ b/README.md
@@ -123,13 +123,13 @@ The test results will be saved to an HTML file here: `./results/facades_pix2pix/l
 More example scripts can be found in the `scripts` directory.
 
 ### Apply a pre-trained model (CycleGAN)
-If you would like to apply a pre-trained model to a collection of input photos (without image pairs), please use `--dataset_mode single` and `--model test` options. Here is a script to apply a model to facade label maps (stored in the directory `facades/testB`).
+If you would like to apply a pre-trained model to a collection of input photos (without image pairs), please use `--dataset_mode single` and `--model test` options. Here is a script to apply a model to Facade label maps (stored in the directory `facades/testB`).
 
 ``` bash
 #!./scripts/test_single.sh
 python test.py --dataroot ./datasets/facades/testA/ --name {my_trained_model_name} --model test --dataset_mode single
 ```
-You might have to specify `--which_model_netG` to match the generator architecture of the trained model.
+You might want to specify `--which_model_netG` to match the generator architecture of the trained model.
 
 Note: We currently don't have pretrained models using PyTorch. This is in part because the models trained using Torch and PyTorch produce slightly different results, although we were not able to decide which result is better. If you would like to generate the same results that appeared in our paper, we recommend using the pretrained models in the Torch codebase.
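A note on the `--which_model_netG` flag these two patches introduce: the value must match the generator used at training time. The sketch below is illustrative only; the model name `my_cyclegan_model` is hypothetical, and `resnet_9blocks` is assumed to be the architecture the model was trained with (substitute `unet_256`, `resnet_6blocks`, etc. as appropriate).

```bash
# Hypothetical invocation: run a single-direction test pass with the
# generator architecture spelled out explicitly (assumed, not from the patch).
python test.py --dataroot ./datasets/facades/testA/ --name my_cyclegan_model \
    --model test --dataset_mode single --which_model_netG resnet_9blocks
```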
From 197b38ba8885483445becb10d08f9dfa2ce55fe5 Mon Sep 17 00:00:00 2001
From: Jun-Yan Zhu
Date: Sat, 4 Nov 2017 22:24:59 -0700
Subject: Update README.md

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 87e40ea..ffd60fc 100644
--- a/README.md
+++ b/README.md
@@ -41,6 +41,7 @@ In CVPR 2017.
 [Tensorflow] (by Van Huy),
 [Tensorflow] (by Xiaowei Hu),
 [Tensorflow-simple] (by Zhenliang He),
+[TensorLayer] (by luoxier),
 [Chainer] (by Yanghua Jin),
 [Minimal PyTorch] (by yunjey),
 [Mxnet] (by Ldpe2G),

From 2d96edbee5a488a7861833731a2cb71b23b55727 Mon Sep 17 00:00:00 2001
From: taesung89
Date: Sun, 10 Dec 2017 22:56:39 -0800
Subject: Update README.md

---
 README.md | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index ffd60fc..788ad0f 100644
--- a/README.md
+++ b/README.md
@@ -124,15 +124,27 @@ The test results will be saved to an HTML file here: `./results/facades_pix2pix/l
 More example scripts can be found in the `scripts` directory.
 
 ### Apply a pre-trained model (CycleGAN)
+
 If you would like to apply a pre-trained model to a collection of input photos (without image pairs), please use `--dataset_mode single` and `--model test` options. Here is a script to apply a model to Facade label maps (stored in the directory `facades/testB`).
 
 ``` bash
 #!./scripts/test_single.sh
 python test.py --dataroot ./datasets/facades/testA/ --name {my_trained_model_name} --model test --dataset_mode single
 ```
-You might want to specify `--which_model_netG` to match the generator architecture of the trained model.
+You might want to specify `--which_model_netG` to match the generator architecture of the trained model.
+
+You can download a few pretrained models from the authors. For example, if you would like to download the horse2zebra model,
+
+```bash
+bash pretrained_models/download_cyclegan_model.sh horse2zebra
+```
+The pretrained model is saved at `./checkpoints/{name}_pretrained/latest_net_G.pth`.
+Then generate the results using
+
+```bash
+python test.py --dataroot datasets/horse2zebra/testA --checkpoints_dir ./checkpoints/ --name horse2zebra_pretrained --no_dropout --model test --dataset_mode single --loadSize 256 --results_dir {directory_path_to_save_result}
+```
+
-Note: We currently don't have pretrained models using PyTorch. This is in part because the models trained using Torch and PyTorch produce slightly different results, although we were not able to decide which result is better. If you would like to generate the same results that appeared in our paper, we recommend using the pretrained models in the Torch codebase.
+Note: We currently don't have all pretrained models using PyTorch. This is in part because the models trained using Torch and PyTorch produce slightly different results, although we were not able to decide which result is better. If you would like to generate the same results that appeared in our paper, we recommend using the pretrained models in the Torch codebase.
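Taken together, the pretrained-CycleGAN steps added in this last patch reduce to the sketch below. The commands are copied from the hunks above; only the `--results_dir` value is an illustrative stand-in for the `{directory_path_to_save_result}` placeholder.

```bash
# Download the pretrained horse2zebra generator (saved to
# ./checkpoints/horse2zebra_pretrained/latest_net_G.pth), then translate
# the horse test images with it.
bash pretrained_models/download_cyclegan_model.sh horse2zebra
python test.py --dataroot datasets/horse2zebra/testA --checkpoints_dir ./checkpoints/ \
    --name horse2zebra_pretrained --no_dropout --model test --dataset_mode single \
    --loadSize 256 --results_dir ./results/horse2zebra_pretrained
```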