author  taesung89 <taesung89@gmail.com>  2017-12-10 22:56:39 -0800
committer  GitHub <noreply@github.com>  2017-12-10 22:56:39 -0800
commit  2d96edbee5a488a7861833731a2cb71b23b55727 (patch)
tree  519eca3388079b3c9cb101accc04bf6876bc421a /README.md
parent  7bda734dd7f3466d5d55afe80b97542b1b12bdb5 (diff)
Update README.md
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  16
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index ffd60fc..788ad0f 100644
--- a/README.md
+++ b/README.md
@@ -124,15 +124,27 @@ The test results will be saved to a html file here: `./results/facades_pix2pix/l
More example scripts can be found in the `scripts` directory.
### Apply a pre-trained model (CycleGAN)
+
If you would like to apply a pre-trained model to a collection of input photos (without image pairs), please use the `--dataset_mode single` and `--model test` options. Here is a script that applies a model to Facade label maps (stored in the directory `facades/testB`).
``` bash
#!./scripts/test_single.sh
python test.py --dataroot ./datasets/facades/testA/ --name {my_trained_model_name} --model test --dataset_mode single
```
-
You might want to specify `--which_model_netG` to match the generator architecture of the trained model.
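+For instance, a minimal sketch of the command above with the generator architecture made explicit (assuming the model was trained with `resnet_9blocks`, the CycleGAN default; `{my_trained_model_name}` is the same placeholder as before):
+
+```bash
+# Sketch: explicitly match the generator architecture used at training time.
+# resnet_9blocks is the default CycleGAN generator; replace it with the
+# architecture your model was actually trained with (e.g. unet_256).
+python test.py --dataroot ./datasets/facades/testA/ --name {my_trained_model_name} \
+    --model test --dataset_mode single --which_model_netG resnet_9blocks
+```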
-Note: We currently don't have pretrained models using PyTorch. This is in part because the models trained using Torch and PyTorch produce slightly different results, although we were not able to decide which result is better. If you would like to generate the same results that appeared in our paper, we recommend using the pretrained models in the Torch codebase.
+You can download a few pretrained models provided by the authors. For example, to download the horse2zebra model:
+
+```bash
+bash pretrained_models/download_cyclegan_model.sh horse2zebra
+```
+The pretrained model is saved at `./checkpoints/{name}_pretrained/latest_net_G.pth`.
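+With the example above, `{name}` is `horse2zebra`, so the weights end up at `./checkpoints/horse2zebra_pretrained/latest_net_G.pth`.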
+Then generate the results using:
+
+```bash
+python test.py --dataroot datasets/horse2zebra/testA --checkpoints_dir ./checkpoints/ --name horse2zebra_pretrained --no_dropout --model test --dataset_mode single --loadSize 256 --results_dir {directory_path_to_save_result}
+```
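+The generated images, along with an HTML page for browsing them, are written under the directory passed to `--results_dir`, in the same format as the test results described above.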
+
+Note: We currently don't provide all pretrained models in PyTorch. This is in part because models trained with Torch and PyTorch produce slightly different results, and we were not able to determine which is better. If you would like to reproduce the results reported in our paper, we recommend using the pretrained models from the Torch codebase.
### Apply a pre-trained model (pix2pix)