From 0d2314ef1ce689a8281f89ffd1bcfc3a677cc3cd Mon Sep 17 00:00:00 2001
From: Jules Laplace
Date: Sun, 18 Nov 2018 15:03:27 +0100
Subject: just code

---
 README.md | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)
 create mode 100644 README.md

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..f1e6019
--- /dev/null
+++ b/README.md
@@ -0,0 +1,34 @@
+# Deep Video Super-Resolution Network Using Dynamic Upsampling Filters Without Explicit Motion Compensation
+
+This is a TensorFlow implementation of the paper. [PDF](http://yhjo09.github.io/files/VSR-DUF_CVPR18.pdf)
+
+## directory
+`./inputs/G/` Ground-truth video frames
+`./inputs/L/` Low-resolution video frames
+
+`./results/L/G/` Outputs for ground-truth input frames at the given network depth
+`./results/L/L/` Outputs for low-resolution input frames at the given network depth
+
+## test
+Put your video frames in the input directory and run `test.py` with the arguments `<depth>` and `<type>`.
+```
+python test.py <depth> <type>
+```
+`<depth>` is the depth of the network: 16, 28, or 52.
+`<type>` is the type of input frames: `G` denotes GT inputs and `L` denotes LR inputs.
+
+For example, `python test.py 16 G` super-resolves the input frames in `./inputs/G/*` using the 16-depth network.
+
+## video
+[![supplementary video](./supple/title.png)](./supple/VSR_supple_crf28.mp4?raw=true)
+
+## bibtex
+```
+@InProceedings{Jo_2018_CVPR,
+  author = {Jo, Younghyun and Oh, Seoung Wug and Kang, Jaeyeon and Kim, Seon Joo},
+  title = {Deep Video Super-Resolution Network Using Dynamic Upsampling Filters Without Explicit Motion Compensation},
+  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
+  year = {2018}
+}
+```
+
-- 
cgit v1.2.3-70-g09d2
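
The test section above runs one (depth, input-type) configuration per invocation. As a minimal sketch of batching every supported combination, assuming the `test.py` script and `./inputs/{G,L}/` layout described in the README (this loop only echoes the commands, so it is safe to run anywhere):

```shell
# Hypothetical batch driver: iterate over every depth (16, 28, 52) and
# input type (G = ground truth, L = low resolution) from the README.
# Replace `echo` with a direct call once test.py and the inputs exist.
for depth in 16 28 52; do
  for type in G L; do
    echo python test.py "$depth" "$type"
  done
done
```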