path: root/README.md
author    junyanz <junyanz@berkeley.edu>  2017-06-19 22:15:33 -0700
committer junyanz <junyanz@berkeley.edu>  2017-06-19 22:15:33 -0700
commit    7d5251fd846d77991a6e73ce6badfe7bac3b3ff0 (patch)
tree      6ed780c0d6544f87a4c5696a9883409aefdde60d /README.md
parent    1941d1de23498ae68e98c13a433daa71d2cfa862 (diff)
update README
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  21
1 file changed, 18 insertions(+), 3 deletions(-)
diff --git a/README.md b/README.md
index dc21dd3..59f1ec0 100644
--- a/README.md
+++ b/README.md
@@ -154,10 +154,25 @@ python datasets/combine_A_and_B.py --fold_A /path/to/data/A --fold_B /path/to/da
This will combine each pair of images (A,B) into a single image file, ready for training.
-## TODO
-- add reflection and other padding layers.
+## Citation
+If you use this code for your research, please cite our papers.
+```
+@article{CycleGAN2017,
+ title={Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks},
+ author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
+ journal={arXiv preprint arXiv:1703.10593},
+ year={2017}
+}
+
+@article{pix2pix2016,
+ title={Image-to-Image Translation with Conditional Adversarial Networks},
+ author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
+  journal={arXiv},
+ year={2016}
+}
+```
-## Related Projects:
+## Related Projects
[CycleGAN](https://github.com/junyanz/CycleGAN): Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
[pix2pix](https://github.com/phillipi/pix2pix): Image-to-image translation with conditional adversarial nets
[iGAN](https://github.com/junyanz/iGAN): Interactive Image Generation via Generative Adversarial Networks
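The `combine_A_and_B.py` step referenced in the hunk context above stitches each (A,B) image pair into a single side-by-side file ready for training. As a rough illustration only, here is a minimal Python sketch of that kind of pairing, assuming both folders contain identically named, same-height RGB images and using PIL/numpy; the helper names `combine_pair` and `combine_folders` are hypothetical and not taken from the repository's script.

```python
# Hypothetical sketch of the A/B pairing step: for each filename present in
# both folders, concatenate image A and image B side by side and save the
# combined AB image. Folder layout and helper names are assumptions.
import os
import numpy as np
from PIL import Image

def combine_pair(path_a, path_b, path_ab):
    # Load the two aligned images and concatenate them along the width axis.
    im_a = np.array(Image.open(path_a).convert("RGB"))
    im_b = np.array(Image.open(path_b).convert("RGB"))
    im_ab = np.concatenate([im_a, im_b], axis=1)  # A on the left, B on the right
    Image.fromarray(im_ab).save(path_ab)

def combine_folders(fold_a, fold_b, fold_ab):
    os.makedirs(fold_ab, exist_ok=True)
    for name in sorted(os.listdir(fold_a)):
        path_a = os.path.join(fold_a, name)
        path_b = os.path.join(fold_b, name)
        if os.path.isfile(path_b):  # only combine files present in both folders
            combine_pair(path_a, path_b, os.path.join(fold_ab, name))
```

For example, `combine_folders('/path/to/data/A/train', '/path/to/data/B/train', '/path/to/data/AB/train')` would write one combined image per pair into the AB folder, which is the kind of output the training command above expects.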