Exploiting GAN Internal Capacity for High-Quality Reconstruction of Natural Images
==================================================================================

Code for reproducing the experiments in ["Exploiting GAN Internal Capacity for High-Quality Reconstruction of Natural Images"](https://arxiv.org/abs/1911.05630).

This directory contains the source code to invert the BigGAN generator at 128x128 resolution. Requires TensorFlow.

## Generation of Random Samples:

Generate 1000 random samples from the BigGAN generator:

```console
$> python random_sample.py random_sample.json
```

## Inversion of the Generator:

The optimization is split into two steps, as described in the paper.

First step, inversion to the latent space:

```console
$> python inversion.py params_latent.json
```

Second step, inversion to the dense layer:

```console
$> python inversion.py params_dense.json
```

## Interpolation:

Generate interpolations between the inverted images and the generated images:

```console
$> python interpolation.py params_dense.json
```

## Segmentation:

Segment inverted images by clustering the attention map:

```console
$> python segmentation.py params_dense.json
```

Note: to replicate the experiments on real images from ImageNet, an HDF5 file must first be created with random images from the dataset, following a procedure similar to the one in `random_sample.py`. Then the two optimization steps above must be executed (modify the `"dataset"` parameter in `params_latent.json` to point at the custom dataset).
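
For reference, below is a minimal sketch of how such an HDF5 file could be assembled. The output filename, the `"images"` dataset key, the directory layout, and the 128x128 uint8 format are assumptions for illustration only; the exact layout expected by `inversion.py` should be matched to the one produced by `random_sample.py`.

```python
"""Minimal sketch: pack a random subset of ImageNet images into an HDF5 file.

The "images" dataset key and the 128x128 uint8 layout are assumptions for
illustration; check random_sample.py for the exact format expected by
inversion.py.
"""
import glob
import random

import h5py
import numpy as np
from PIL import Image

IMAGE_DIR = "/path/to/imagenet"   # assumed location of the ImageNet JPEGs
OUTPUT_H5 = "real_images.hdf5"    # hypothetical output filename
NUM_IMAGES = 1000
RESOLUTION = 128

# Pick a random subset of images from the dataset.
paths = sorted(glob.glob(f"{IMAGE_DIR}/**/*.JPEG", recursive=True))
paths = random.sample(paths, NUM_IMAGES)

images = np.zeros((NUM_IMAGES, RESOLUTION, RESOLUTION, 3), dtype=np.uint8)
for i, path in enumerate(paths):
    img = Image.open(path).convert("RGB")
    # Center-crop to a square, then resize to the generator resolution.
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img = img.resize((RESOLUTION, RESOLUTION), Image.BILINEAR)
    images[i] = np.asarray(img)

with h5py.File(OUTPUT_H5, "w") as f:
    f.create_dataset("images", data=images, compression="gzip")
```

The resulting file can then be referenced from the `"dataset"` parameter in `params_latent.json` before running the two inversion steps.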