author    leVirve <gae.m.project@gmail.com>  2017-12-13 09:42:44 +0800
committer leVirve <gae.m.project@gmail.com>  2017-12-13 09:43:24 +0800
commit    3be4b6ac0cf7fa4b5ae445c2b8b60d34481c9ea1 (patch)
tree      39a0b675b03b8d701c367948aa529a1a344b1a76 /README.md
parent    f33f098be9b25c3b62523540c9c703af1db0b1c0 (diff)
fix typos at command options in readme
Diffstat (limited to 'README.md')
-rw-r--r--  README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 788ad0f..ce3ca97 100644
--- a/README.md
+++ b/README.md
@@ -184,7 +184,7 @@ bash ./datasets/download_cyclegan_dataset.sh dataset_name
- `monet2photo`, `vangogh2photo`, `ukiyoe2photo`, `cezanne2photo`: The art images were downloaded from [Wikiart](https://www.wikiart.org/). The real photos were downloaded from Flickr using a combination of the tags *landscape* and *landscapephotography*. The training set size of each class is Monet:1074, Cezanne:584, Van Gogh:401, Ukiyo-e:1433, Photographs:6853.
- `iphone2dslr_flower`: Both classes of images were downloaded from Flickr. The training set size of each class is iPhone:1813, DSLR:3316. See more details in our paper.
-To train a model on your own datasets, you need to create a data folder with two subdirectories `trainA` and `trainB` that contain images from domain A and B. You can test your model on your training set by setting ``phase='train'`` in `test.lua`. You can also create subdirectories `testA` and `testB` if you have test data.
+To train a model on your own datasets, you need to create a data folder with two subdirectories `trainA` and `trainB` that contain images from domain A and B. You can test your model on your training set by setting `--phase train` in `test.py`. You can also create subdirectories `testA` and `testB` if you have test data.
You should **not** expect our method to work on just any random combination of input and output datasets (e.g. `cats<->keyboards`). From our experiments, we find it works better if two datasets share similar visual content. For example, `landscape painting<->landscape photographs` works much better than `portrait painting<->landscape photographs`. `zebras<->horses` achieves compelling results while `cats<->dogs` completely fails.
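
As a concrete illustration of the custom-dataset setup and the corrected test command, here is a minimal shell sketch. The `trainA`/`trainB`/`testA`/`testB` directory names and `--phase train` come from the patched README line above; the dataset name `mydata` and the `--dataroot` flag are hypothetical assumptions, so check `python test.py --help` for the options this repository actually accepts.

```bash
# Hypothetical custom-dataset layout; only the subdirectory names
# (trainA/trainB/testA/testB) are prescribed by the README.
mkdir -p datasets/mydata/{trainA,trainB,testA,testB}
cp /path/to/domainA/*.jpg datasets/mydata/trainA/   # images from domain A
cp /path/to/domainB/*.jpg datasets/mydata/trainB/   # images from domain B

# Evaluate on the training set by overriding the phase, as the fixed
# README line shows (`--phase train` in test.py). The --dataroot flag
# is an assumption based on typical usage, not taken from this diff.
python test.py --dataroot ./datasets/mydata --phase train
```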