README.md | 8 ++++++++
1 file changed, 8 insertions, 0 deletions
@@ -82,3 +82,11 @@ Using the error measurements outlined in the paper (Peak Signal to Noise Ratio a
 --test_freq=       <How often to test the model on test data, in # steps>
 --model_save_freq= <How often to save the model, in # steps>
 ```
+
+## FAQs
+
+> Why don't you train on patches larger than 32x32? Why not train on the whole image?
+
+Memory usage. Since the discriminator has fully-connected layers after the convolutions, the output of the last convolution must be flattened to connect to the first fully-connected layer. The size of this output depends on the input image size and blows up very quickly (e.g. for a 64x64 input, going from 128 feature maps to a fully-connected layer with 512 nodes requires a connection with 64*64*128*512 = 268,435,456 weights). Because of this, training on patches larger than 32x32 causes an out-of-memory error (at least on my machine).
+
+Luckily, you only need the discriminator for training, and the generator network is fully convolutional, so you can use the weights you trained on 32x32 patches on images of any size (which is why I'm able to do generations for the entire Ms. Pac-Man board).
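
The FAQ answer above comes down to a parameter count. The following is a minimal sketch, not code from this repo; it simply plugs in the illustrative numbers quoted in the answer (128 feature maps, 512 nodes in the first fully-connected layer) to show how the discriminator's first fully-connected layer scales with patch size:

```python
# Illustrative only: assumes the flattened conv output keeps the full spatial
# resolution, and reuses the numbers from the FAQ answer (128 feature maps,
# 512 fully-connected nodes). These are not the repo's actual layer sizes.

def fc_weight_count(patch_size, feature_maps=128, fc_nodes=512):
    """Weights connecting the flattened conv output to the first fully-connected layer."""
    flattened = patch_size * patch_size * feature_maps
    return flattened * fc_nodes

for size in (32, 64, 128):
    print(f"{size}x{size}: {fc_weight_count(size):,} weights")
# 32x32:   67,108,864 weights
# 64x64:   268,435,456 weights
# 128x128: 1,073,741,824 weights
```

The generator, by contrast, has no fully-connected layers, so its weight count is independent of the input resolution, which is why weights trained on 32x32 patches can be applied to full-size frames.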
