Exploiting GAN Internal Capacity for High-Quality Reconstruction of Natural Images
==================================================================================

Code for reproducing experiments in ["Exploiting GAN Internal Capacity for High-Quality Reconstruction of Natural Images"](https://arxiv.org/abs/1911.05630)

This directory contains the source code for inverting the BigGAN generator at
128x128 resolution. Requires TensorFlow.

## Generation of Random Samples:
Generate 1000 random samples from the BigGAN generator:
```console
  $> python random_sample.py random_sample.json
```
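For reference, drawing samples from the publicly available TF Hub BigGAN-128 module looks roughly like the sketch below (assuming a TensorFlow 1.x environment with `tensorflow_hub` and `scipy`; the module handle, input names, and truncation value are illustrative, and `random_sample.py` with `random_sample.json` remains the authoritative path):

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from scipy.stats import truncnorm

# Assumed TF Hub handle for the 128x128 BigGAN generator.
module = hub.Module('https://tfhub.dev/deepmind/biggan-128/2')

batch_size, truncation = 8, 0.5
z = tf.placeholder(tf.float32, [batch_size, 120])         # latent vectors
y = tf.placeholder(tf.float32, [batch_size, 1000])        # one-hot class labels
samples = module(dict(z=z, y=y, truncation=truncation))   # images in [-1, 1]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # BigGAN expects latents drawn from a truncated normal distribution.
    z_val = truncation * truncnorm.rvs(-2, 2, size=(batch_size, 120)).astype(np.float32)
    y_val = np.eye(1000, dtype=np.float32)[np.random.randint(1000, size=batch_size)]
    images = sess.run(samples, feed_dict={z: z_val, y: y_val})
```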

## Inversion of the Generator:
Following the paper, the optimization is split into two steps; a conceptual sketch follows the commands below.
First step, inversion to the latent space:
```console
  $> python inversion.py params_latent.json
```

Second step, inversion to the dense layer:
```console
  $> python inversion.py params_dense.json
```
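Conceptually, the first step recovers a latent vector by gradient descent on a reconstruction loss, and the second step then refines the much larger activation after the generator's first dense layer, initialized from that recovered latent. The sketch below illustrates the latent step only, assuming the TF Hub BigGAN-128 module, a known class label, and a plain pixel-wise loss; the actual losses, learning rates, and iteration counts are configured in the JSON parameter files:

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

module = hub.Module('https://tfhub.dev/deepmind/biggan-128/2')

target = tf.placeholder(tf.float32, [1, 128, 128, 3])   # image to invert, scaled to [-1, 1]
y = tf.placeholder(tf.float32, [1, 1000])                # one-hot class label
z = tf.get_variable('z', [1, 120],
                    initializer=tf.random_normal_initializer(stddev=0.1))

recon = module(dict(z=z, y=y, truncation=1.0))
loss = tf.reduce_mean(tf.square(recon - target))         # pixel-wise MSE
step = tf.train.AdamOptimizer(0.01).minimize(loss, var_list=[z])

# Illustrative inputs; replace with a real image and its label.
target_image = np.random.uniform(-1, 1, (1, 128, 128, 3)).astype(np.float32)
label_onehot = np.eye(1000, dtype=np.float32)[[207]]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(500):
        sess.run(step, feed_dict={target: target_image, y: label_onehot})
    z_recovered = sess.run(z)
```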

## Interpolation:
Generate interpolations between the inverted images and generated images:
```console
  $> python interpolation.py params_dense.json
```
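Interpolation amounts to linearly blending two recovered codes (latent vectors or dense-layer activations) and decoding each intermediate point with the generator. A minimal numpy sketch, with illustrative names:

```python
import numpy as np

def interpolate(code_a, code_b, num_steps=8):
    """Linearly interpolate between two codes; each row is then decoded by the generator."""
    alphas = np.linspace(0.0, 1.0, num_steps)
    return np.stack([(1.0 - a) * code_a + a * code_b for a in alphas])
```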

## Segmentation:
Segment inverted images by clustering the attention map:
```console
  $> python segmentation.py params_dense.json
```
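The underlying idea is to group spatial locations whose rows of the generator's self-attention map are similar. A minimal sketch using k-means (the attention-map shape, how it is extracted, and the number of clusters are assumptions; `segmentation.py` is the authoritative implementation):

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_from_attention(attn, num_clusters=4):
    """Cluster spatial positions of a self-attention map into segments.

    attn: array of shape [H*W, H*W], where row i holds the attention
          weights of spatial position i over all positions.
    Returns an [H, W] label map (assumes a square spatial grid).
    """
    labels = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(attn)
    side = int(np.sqrt(attn.shape[0]))
    return labels.reshape(side, side)
```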

Note: to replicate the experiments on real ImageNet images, first create an
hdf5 file with random images from the dataset, following a procedure similar to
the one in "random_sample.py" (a sketch is given below). Then run the two
optimization steps above (modify the "dataset" parameter in params_latent.json
to point to custom datasets).
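
A minimal sketch of building such a file with `h5py` (the dataset key, image size, and preprocessing are assumptions; match whatever `params_latent.json` expects):

```python
import numpy as np
import h5py
from PIL import Image

def build_hdf5(image_paths, out_path='real_images.hdf5', size=128):
    """Center-crop, resize, and scale images to [-1, 1], then store them in an hdf5 dataset."""
    with h5py.File(out_path, 'w') as f:
        data = f.create_dataset('images', (len(image_paths), size, size, 3), dtype='float32')
        for i, path in enumerate(image_paths):
            img = Image.open(path).convert('RGB')
            side = min(img.size)
            left, top = (img.width - side) // 2, (img.height - side) // 2
            img = img.crop((left, top, left + side, top + side)).resize((size, size))
            data[i] = np.asarray(img, dtype=np.float32) / 127.5 - 1.0
```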