author     Jules Laplace <julescarbon@gmail.com>   2018-11-18 15:03:27 +0100
committer  Jules Laplace <julescarbon@gmail.com>   2018-11-18 15:03:27 +0100
commit     0d2314ef1ce689a8281f89ffd1bcfc3a677cc3cd (patch)
tree       cf59fc211cbbfe685ac55505cd3eda58a71064ce /README.md
just code
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  34
1 file changed, 34 insertions, 0 deletions
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..f1e6019
--- /dev/null
+++ b/README.md
@@ -0,0 +1,34 @@
+# Deep Video Super-Resolution Network Using Dynamic Upsampling Filters Without Explicit Motion Compensation
+
+This is a TensorFlow implementation of the paper. [PDF](http://yhjo09.github.io/files/VSR-DUF_CVPR18.pdf)
+
+## directory
+`./inputs/G/` Ground-truth video frames
+`./inputs/L/` Low-resolution video frames
+
+`./results/<L>L/G/` Outputs for the given ground-truth video frames using the network of depth `<L>`
+`./results/<L>L/L/` Outputs for the given low-resolution video frames using the network of depth `<L>`
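+
+Below is a minimal sketch of how `./inputs/L/` could be prepared from `./inputs/G/` by bicubic downscaling. The ×4 scale factor, the PNG frame naming, and the use of Pillow are assumptions for illustration, not part of this repository.
+```
+# hypothetical helper: bicubically downscale GT frames into LR input frames
+import os
+from glob import glob
+from PIL import Image
+
+SCALE = 4  # assumed scale factor
+
+os.makedirs('./inputs/L/', exist_ok=True)
+for path in sorted(glob('./inputs/G/*.png')):
+    img = Image.open(path)
+    lr = img.resize((img.width // SCALE, img.height // SCALE), Image.BICUBIC)
+    lr.save(os.path.join('./inputs/L/', os.path.basename(path)))
+```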
+
+## test
+Put your video frames in the input directory and run `test.py` with arguments `<L>` and `<T>`.
+```
+python test.py <L> <T>
+```
+`<L>` is the depth of the network: 16, 28, or 52.
+`<T>` is the type of input frames: `G` denotes ground-truth (GT) inputs and `L` denotes low-resolution (LR) inputs.
+
+For example, `python test.py 16 G` super-resolves the input frames in `./inputs/G/*` using the depth-16 network.
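+
+For batch runs, here is a small sketch that shells out to `test.py` for every depth and input type; the script name `run_all.py` is hypothetical and not part of this repository.
+```
+# run_all.py (hypothetical): run test.py for every depth and input type
+import subprocess
+
+for depth in (16, 28, 52):        # <L>: network depth
+    for frames in ('G', 'L'):     # <T>: GT or LR inputs
+        subprocess.run(['python', 'test.py', str(depth), frames], check=True)
+```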
+
+## video
+[![supplementary video](./supple/title.png)](./supple/VSR_supple_crf28.mp4?raw=true)
+
+## bibtex
+```
+@InProceedings{Jo_2018_CVPR,
+ author = {Jo, Younghyun and Oh, Seoung Wug and Kang, Jaeyeon and Kim, Seon Joo},
+ title = {Deep Video Super-Resolution Network Using Dynamic Upsampling Filters Without Explicit Motion Compensation},
+ booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
+ year = {2018}
+}
+```
+