author    Jules Laplace <julescarbon@gmail.com>    2019-12-08 21:43:30 +0100
committer Jules Laplace <julescarbon@gmail.com>    2019-12-08 21:43:30 +0100
commit    fb70ab05768fa4a54358dc1f304b68bc7aff6dae (patch)
tree      6ba4c805ce37b5b8827b08946f0b22f639fa3e14 /inversion/README.md
parent    326db345db13b1ab3a76406644654cb78b4d1b8d (diff)
inversion json files
Diffstat (limited to 'inversion/README.md')
-rw-r--r--  inversion/README.md | 43
1 file changed, 43 insertions, 0 deletions
diff --git a/inversion/README.md b/inversion/README.md
new file mode 100644
index 0000000..3be7b8d
--- /dev/null
+++ b/inversion/README.md
@@ -0,0 +1,43 @@
+Exploiting GAN Internal Capacity for High-Quality Reconstruction of Natural Images
+==================================================================================
+
+Code for reproducing experiments in ["Exploiting GAN Internal Capacity for High-Quality Reconstruction of Natural Images"](https://arxiv.org/abs/1911.05630)
+
+This directory contains the source code to invert the BigGAN generator at
+128x128 resolution. Requires TensorFlow.
+
+## Generation of Random Samples:
+Generate 1000 random samples from the BigGAN generator:
+```console
+ $> python random_sample.py random_sample.json
+```
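+
+The script is driven by the JSON config. For orientation, sampling the publicly
+released BigGAN-128 TF-Hub module looks roughly like this (a minimal sketch,
+not the repo's `random_sample.py`; batch size and class sampling are
+placeholders):
+
+```python
+import numpy as np
+import tensorflow as tf
+import tensorflow_hub as hub
+
+module = hub.Module('https://tfhub.dev/deepmind/biggan-128/2')
+batch, z_dim, n_classes = 16, 120, 1000              # BigGAN-128 input sizes
+
+z = tf.placeholder(tf.float32, [batch, z_dim])
+y = tf.placeholder(tf.float32, [batch, n_classes])   # one-hot class labels
+samples = module(dict(z=z, y=y, truncation=1.0))     # images in [-1, 1]
+
+with tf.Session() as sess:
+    sess.run(tf.global_variables_initializer())
+    z_val = np.random.standard_normal((batch, z_dim)).astype(np.float32)
+    y_val = np.eye(n_classes, dtype=np.float32)[np.random.randint(0, n_classes, batch)]
+    images = sess.run(samples, {z: z_val, y: y_val})
+```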
+
+## Inversion of the Generator:
+Following the paper, the optimization is split into two steps.
+First step, inversion to the latent space:
+```console
+ $> python inversion.py params_latent.json
+```
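+
+Conceptually, this step recovers a latent code for a target image by gradient
+descent on a reconstruction loss. A minimal sketch against the TF-Hub generator
+(the actual script is configured by `params_latent.json` and may use additional
+loss terms; the target image, class id, and step count below are placeholders):
+
+```python
+import numpy as np
+import tensorflow as tf
+import tensorflow_hub as hub
+
+module = hub.Module('https://tfhub.dev/deepmind/biggan-128/2')
+target = tf.placeholder(tf.float32, [1, 128, 128, 3])  # image scaled to [-1, 1]
+y = tf.placeholder(tf.float32, [1, 1000])              # class, assumed known
+
+z = tf.get_variable('z', [1, 120], initializer=tf.zeros_initializer())
+recon = module(dict(z=z, y=y, truncation=1.0))
+loss = tf.reduce_mean(tf.square(recon - target))       # plain pixel MSE
+step = tf.train.AdamOptimizer(0.01).minimize(loss, var_list=[z])
+
+with tf.Session() as sess:
+    sess.run(tf.global_variables_initializer())
+    x = np.load('target.npy')                          # hypothetical target image
+    y_val = np.eye(1000, dtype=np.float32)[[207]]      # hypothetical class id
+    for _ in range(500):
+        sess.run(step, {target: x, y: y_val})
+    z_opt = sess.run(z)
+```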
+
+Second step, inversion to the dense layer:
+```console
+ $> python inversion.py params_dense.json
+```
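+
+The second step re-optimizes the reconstruction at the output of the
+generator's first dense layer, which has far more capacity than the latent
+space. A conceptual sketch only, assuming a hypothetical split of the generator
+into `g_dense` (latent to dense activation) and `g_rest` (dense activation to
+image):
+
+```python
+import tensorflow as tf
+
+def invert_to_dense(g_dense, g_rest, target, z_init, lr=0.01):
+    """g_dense: z -> first dense-layer activation; g_rest: activation -> image."""
+    h = tf.get_variable('h', initializer=g_dense(z_init))  # warm start from step 1
+    recon = g_rest(h)                                      # decode from the dense code
+    loss = tf.reduce_mean(tf.square(recon - target))
+    train_op = tf.train.AdamOptimizer(lr).minimize(loss, var_list=[h])
+    return train_op, loss, recon
+```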
+
+## Interpolation:
+Generate interpolations between the inverted images and generated images:
+```console
+ $> python interpolation.py params_dense.json
+```
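+
+The idea reduces to decoding points along the line between two codes (latent
+or dense). An illustrative sketch, where `decode` is a stand-in for the
+generator:
+
+```python
+import numpy as np
+
+def interpolate(code_a, code_b, decode, num_steps=8):
+    """Decode images along the straight line between two inverted codes."""
+    return [decode((1.0 - a) * code_a + a * code_b)
+            for a in np.linspace(0.0, 1.0, num_steps)]
+```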
+
+## Segmentation:
+Segment inverted images by clustering the attention map:
+```console
+ $> python segmentation.py params_dense.json
+```
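+
+One plausible reading of this step, sketched below: each spatial location of
+the generator's self-attention map carries an attention distribution over all
+other locations, and clustering those rows (here with k-means) yields a coarse
+segmentation. Shapes, the clustering method, and `k` are assumptions:
+
+```python
+import numpy as np
+from sklearn.cluster import KMeans
+
+def segment_from_attention(attn, h, w, k=5, out_size=128):
+    """attn: [h*w, h*w] self-attention map; returns an out_size x out_size label map."""
+    labels = KMeans(n_clusters=k).fit_predict(attn)  # one cluster id per location
+    grid = labels.reshape(h, w)
+    # nearest-neighbour upsample to image resolution (assumes out_size % h == 0)
+    return np.kron(grid, np.ones((out_size // h, out_size // w), dtype=grid.dtype))
+```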
+
+Note: to replicate the experiments on real images from ImageNet, an HDF5 file
+with random images from the dataset must first be created, following a
+procedure similar to the one in "random_sample.py". Then, the two optimization
+steps must be executed (modify the "dataset:" parameter in params_latent.json
+to point to the custom dataset).
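+
+A sketch of building such an HDF5 file with `h5py` (file name, dataset key,
+and image count are assumptions; match them to what the `dataset:` parameter
+expects):
+
+```python
+import glob
+import h5py
+import numpy as np
+from PIL import Image
+
+paths = sorted(glob.glob('imagenet/**/*.JPEG', recursive=True))
+paths = list(np.random.permutation(paths))[:1000]       # 1000 random images
+with h5py.File('real_images.h5', 'w') as f:             # hypothetical file name
+    ds = f.create_dataset('images', (len(paths), 128, 128, 3), dtype='uint8')
+    for i, p in enumerate(paths):
+        img = Image.open(p).convert('RGB').resize((128, 128), Image.BILINEAR)
+        ds[i] = np.asarray(img)
+```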