Diffstat (limited to 'README.md')
-rw-r--r--  README.md  43
1 file changed, 25 insertions, 18 deletions
diff --git a/README.md b/README.md
index ce3ca97..02c2d36 100644
--- a/README.md
+++ b/README.md
@@ -124,43 +124,50 @@ The test results will be saved to a html file here: `./results/facades_pix2pix/l
More example scripts can be found in the `scripts` directory.
### Apply a pre-trained model (CycleGAN)
-
-If you would like to apply a pre-trained model to a collection of input photos (without image pairs), please use `--dataset_mode single` and `--model test` options. Here is a script to apply a model to Facade label maps (stored in the directory `facades/testB`).
-``` bash
-#!./scripts/test_single.sh
-python test.py --dataroot ./datasets/facades/testA/ --name {my_trained_model_name} --model test --dataset_mode single
-```
-You might want to specify `--which_model_netG` to match the generator architecture of the trained model.
-
-You can download a few pretrained models from the authors. For example, if you would like to download horse2zebra model,
-
+- You can download a pretrained model (e.g. horse2zebra) with the following script:
```bash
bash pretrained_models/download_cyclegan_model.sh horse2zebra
```
-The pretrained model is saved at `./checkpoints/{name}_pretrained/latest_net_G.pth`.
-Then generate the results using
+The pretrained model is saved at `./checkpoints/{name}_pretrained/latest_net_G.pth`.
+- To test the model, you also need to download the horse2zebra dataset:
+```bash
+bash ./datasets/download_cyclegan_dataset.sh horse2zebra
+```
+- Then generate the results using
```bash
-python test.py --dataroot datasets/horse2zebra/testA --checkpoints_dir ./checkpoints/ --name horse2zebra_pretrained --no_dropout --model test --dataset_mode single --loadSize 256 --results_dir {directory_path_to_save_result}
+python test.py --dataroot datasets/horse2zebra/testA --checkpoints_dir ./checkpoints/ --name horse2zebra_pretrained --no_dropout --model test --dataset_mode single --loadSize 256
```
+The results will be saved to `./results/` by default. Use `--results_dir {directory_path_to_save_result}` to specify a different results directory.
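+For example, to write the outputs to a different directory (the path below is only an illustration):
+```bash
+# --results_dir may point to any directory; ./results/horse2zebra_pretrained/ is just an example
+python test.py --dataroot datasets/horse2zebra/testA --checkpoints_dir ./checkpoints/ --name horse2zebra_pretrained --no_dropout --model test --dataset_mode single --loadSize 256 --results_dir ./results/horse2zebra_pretrained/
+```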
+- Note: The models trained using Torch and PyTorch produce slightly different results, although we were not able to decide which result is better. If you would like to reproduce the results reported in our paper, we recommend using the pretrained models from the Torch codebase.
-Note: We currently don't have all pretrained models using PyTorch. This is in part because the models trained using Torch and PyTorch produce slightly different results, although we were not able to decide which result is better. If you would like to generate the same results that appeared in our paper, we recommend using the pretrained models in the Torch codebase.
+- If you would like to apply a pre-trained model to a collection of input images (rather than image pairs), please use the `--dataset_mode single` and `--model test` options. Here is a script to apply a model to Facade label maps (stored in the directory `facades/testB`).
+``` bash
+#!./scripts/test_single.sh
+python test.py --dataroot ./datasets/facades/testB/ --name {your_trained_model_name} --model test --dataset_mode single
+```
+You might want to specify `--which_model_netG` to match the generator architecture of the trained model.
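+For example, assuming your model was trained with a U-Net 256 generator (adjust this to the architecture you actually used):
+```bash
+# --which_model_netG must match the generator used during training; unet_256 here is only an assumption
+python test.py --dataroot ./datasets/facades/testB/ --name {your_trained_model_name} --model test --dataset_mode single --which_model_netG unet_256
+```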
### Apply a pre-trained model (pix2pix)
-Download the pre-trained models using `./pretrained_models/download_pix2pix_model.sh`. For example, if you would like to download label2photo model on the Facades dataset,
+Download a pre-trained model with `./pretrained_models/download_pix2pix_model.sh`.
+- For example, if you would like to download the label2photo model on the Facades dataset:
```bash
bash pretrained_models/download_pix2pix_model.sh facades_label2photo
```
-Then generate the results using
+- Download the pix2pix facades dataset:
+```bash
+bash ./datasets/download_pix2pix_dataset.sh facades
+```
+- Then generate the results using
```bash
python test.py --dataroot ./datasets/facades/ --which_direction BtoA --model pix2pix --name facades_label2photo_pretrained --dataset_mode aligned --which_model_netG unet_256 --norm batch
```
-Note that we specified `--which_direction BtoA` to accomodate the fact that the Facades dataset's A to B direction is photos to labels.
+Note that we specified `--which_direction BtoA` because the Facades dataset's A-to-B direction is photos to labels.
-Also, the models that are currently available to download can be found by reading the output of `bash pretrained_models/download_pix2pix_model.sh`
+- A list of the models currently available for download can be found in the output of `bash pretrained_models/download_pix2pix_model.sh`.
## Training/test Details
- Flags: see `options/train_options.py` and `options/base_options.py` for all the training flags; see `options/test_options.py` and `options/base_options.py` for all the test flags.