From 0d2c4ca76d85503d5d5ac7db5d4f65b89b058cb3 Mon Sep 17 00:00:00 2001
From: Matt Cooper
Date: Thu, 1 Sep 2016 17:15:39 -0400
Subject: Added FAQs section to README

---
 README.md | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/README.md b/README.md
index c875376..5390980 100644
--- a/README.md
+++ b/README.md
@@ -82,3 +82,11 @@ Using the error measurements outlined in the paper (Peak Signal to Noise Ratio a
 --test_freq=
 --model_save_freq=
 ```
+
+## FAQs
+
+> Why don't you train on patches larger than 32x32? Why not train on the whole image?
+
+Memory usage. Since the discriminator has fully-connected layers after the convolutions, the output of the last convolution must be flattened to connect to the first fully-connected layer. The size of this output depends on the input image size and blows up quickly (e.g., for a 64x64 input, going from 128 feature maps to a fully-connected layer with 512 nodes requires 64*64*128*512 = 268,435,456 weights). Because of this, training on patches larger than 32x32 causes an out-of-memory error (at least on my machine).
+
+Luckily, you only need the discriminator for training, and the generator network is fully convolutional, so you can run the weights trained on 32x32 patches on images of any size (which is why I'm able to generate frames for the entire Ms. Pac-Man board).
--
cgit v1.2.3-70-g09d2
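
To make the arithmetic above concrete, here is a minimal Python sketch of the flatten-to-fully-connected weight count. The 128 feature maps and 512 FC nodes are the FAQ's own numbers; the assumption that the last convolution preserves the input's spatial resolution is just for illustration (striding or pooling would shrink it, but the quadratic growth with patch size remains).

```python
# Weight count for the flatten -> fully-connected transition described in the
# FAQ. Assumes (for illustration) that the last conv output keeps the input's
# spatial resolution, so the count grows with the square of the patch size.
FEATURE_MAPS = 128  # channels out of the last conv layer (from the FAQ)
FC_NODES = 512      # width of the first fully-connected layer (from the FAQ)

for patch in (32, 64, 128):
    weights = patch * patch * FEATURE_MAPS * FC_NODES
    mib = weights * 4 / 2**20  # float32 weights, 4 bytes each
    print(f"{patch}x{patch}: {weights:>13,} weights (~{mib:,.0f} MiB)")
```

At 64x64 this reproduces the 268,435,456 figure from the FAQ, roughly 1 GiB of float32 weights for that single connection, before counting activations or gradients.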
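
The any-size property of the generator can be sketched as well. The snippet below is not the repo's actual model; it is a hypothetical two-layer fully convolutional network in TensorFlow/Keras, and the 210x160 frame size is only an example.

```python
# A hypothetical fully convolutional "generator" (not this repo's model):
# conv layers constrain only the channel count, not the spatial size, so
# weights trained on 32x32 patches run unchanged on full-size frames.
import tensorflow as tf

gen = tf.keras.Sequential([
    tf.keras.Input(shape=(None, None, 3)),  # any height/width, 3 channels
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(3, 3, padding="same"),
])

patch = tf.random.normal([1, 32, 32, 3])    # a training-sized patch
frame = tf.random.normal([1, 210, 160, 3])  # e.g., a full game frame
print(gen(patch).shape)  # (1, 32, 32, 3)
print(gen(frame).shape)  # (1, 210, 160, 3)
```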