author    Matt Cooper <matthew_cooper@brown.edu>  2016-08-18 13:46:02 -0400
committer Matt Cooper <matthew_cooper@brown.edu>  2016-08-18 13:46:02 -0400
commit    05347b3ba1b388e02259b04ced7f12f014e3c2ae (patch)
tree      c44efcc8cd6c6b23616c92a836abd7e688c7fba3
parent    14ad89afcfe776d6ca290856bd24256a6e9bdc89 (diff)
added official code to readme
-rw-r--r--  README.md  5
1 files changed, 4 insertions, 1 deletions
diff --git a/README.md b/README.md
index 2a4b0d1..c875376 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,8 @@
 # Adversarial Video Generation
-This project implements a generative adversarial network to predict future frames of video, as detailed in ["Deep Multi-Scale Video Prediction Beyond Mean Square Error"](https://arxiv.org/abs/1511.05440) by Mathieu, Couprie & LeCun.
+This project implements a generative adversarial network to predict future frames of video, as detailed in
+["Deep Multi-Scale Video Prediction Beyond Mean Square Error"](https://arxiv.org/abs/1511.05440) by Mathieu,
+Couprie & LeCun. Their official code (using Torch) can be found
+[here](https://github.com/coupriec/VideoPredictionICLR2016).
 Adversarial generation uses two networks – a generator and a discriminator – to improve the sharpness of generated images. Given the past four frames of video, the generator learns to generate accurate predictions for the next frame. Given either a generated or a real-world image, the discriminator learns to correctly classify between generated and real. The two networks "compete," with the generator attempting to fool the discriminator into classifying its output as real. This forces the generator to create frames that are very similar to what real frames in the domain might look like.
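The adversarial objective the paragraph describes can be sketched with the standard GAN cross-entropy losses. This is a minimal NumPy illustration, not the paper's implementation: the helper names `bce`, `discriminator_loss`, and `generator_adversarial_loss` are hypothetical, and in Mathieu et al. the adversarial term is combined with image-space and gradient-difference losses on the generator side.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy between discriminator scores in (0, 1) and labels."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def discriminator_loss(d_real, d_fake):
    # The discriminator is trained to score real frames as 1
    # and generated frames as 0.
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_adversarial_loss(d_fake):
    # The generator is trained to make the discriminator score
    # its generated frames as real (label 1) — the "competition".
    return bce(d_fake, np.ones_like(d_fake))
```

As a sanity check: a discriminator that scores real frames high and fakes low incurs a small loss, while the generator's adversarial loss shrinks as its outputs fool the discriminator (scores approach 1).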