# Deep Video Super-Resolution Network Using Dynamic Upsampling Filters Without Explicit Motion Compensation
This is a TensorFlow implementation of the paper. [PDF](http://yhjo09.github.io/files/VSR-DUF_CVPR18.pdf)
## directory
- `./inputs/G/` Ground-truth video frames
- `./inputs/L/` Low-resolution video frames
- `./results/<L>L/G/` Outputs from the ground-truth video frames using the depth-`<L>` network
- `./results/<L>L/L/` Outputs from the low-resolution video frames using the depth-`<L>` network
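For reference, a possible on-disk layout is sketched below; the frame filenames (`001.png`, `002.png`, ...) are illustrative assumptions, not names required by the code.
```
inputs/
├── G/            # ground-truth frames, e.g. 001.png, 002.png, ...
└── L/            # low-resolution frames
results/
└── 16L/
    ├── G/        # outputs for GT inputs with the depth-16 network
    └── L/        # outputs for LR inputs with the depth-16 network
```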
## test
Put your video frames into the input directory and run `test.py` with arguments `<L>` and `<T>`.
```
python test.py <L> <T>
```
`<L>` is the depth of the network: 16, 28, or 52.
`<T>` is the type of input frames: `G` denotes ground-truth (GT) inputs and `L` denotes low-resolution (LR) inputs.
For example, `python test.py 16 G` super-resolves the input frames in `./inputs/G/*` using the depth-16 network.
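As a minimal sketch (assuming the directory layout above and that pretrained models for all three depths are available), every configuration could be run in one shell loop:
```
# Run all three network depths on both GT and LR inputs.
for L in 16 28 52; do
    for T in G L; do
        python test.py $L $T
    done
done
```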
## video
[Supplementary video (CRF 28)](./supple/VSR_supple_crf28.mp4?raw=true)
## bibtex
```
@InProceedings{Jo_2018_CVPR,
  author    = {Jo, Younghyun and Oh, Seoung Wug and Kang, Jaeyeon and Kim, Seon Joo},
  title     = {Deep Video Super-Resolution Network Using Dynamic Upsampling Filters Without Explicit Motion Compensation},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2018}
}
```