| author | Cameron <cysmith1010@gmail.com> | 2016-10-11 16:44:43 -0600 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2016-10-11 16:44:43 -0600 |
| commit | 1b3e00480b9d99f34fea068009efe45bc76111fd (patch) | |
| tree | 07f30473beecdfabae8bfc4756e50ee556f23d5a /README.md | |
| parent | 7e62bbfd0b02dc9e56bcc98dd992aa55772cd2f6 (diff) | |
Modified README
Diffstat (limited to 'README.md')
| -rw-r--r-- | README.md | 5 |
1 file changed, 2 insertions(+), 3 deletions(-)
```diff
@@ -165,7 +165,6 @@ Animations can be rendered by applying the algorithm to each source frame. For
 * Download the [VGG-19 model weights](http://www.vlfeat.org/matconvnet/pretrained/) (see the "VGG-VD models from the *Very Deep Convolutional Networks for Large-Scale Visual Recognition* project" section). More info about the VGG-19 network can be found [here](http://www.robots.ox.ac.uk/~vgg/research/very_deep/).
 * After downloading, copy the weights file `imagenet-vgg-verydeep-19.mat` to the project directory.
-
 ## Usage
 ### Basic Usage
@@ -246,8 +245,8 @@ python neural_style.py --video \
 * `--init_img_type`: Image used to initialize the network. *Choices*: `content`, `random`, `style`. *Default*: `content`
 * `--max_size`: Maximum width or height of the input images. *Default*: `512`
 * `--content_weight`: Weight for the content loss function. *Default*: `5e0`
-* `--style_weight`: Weight for the style loss function. *Default*: `1e4`
-* `--tv_weight`: Weight for the total variational loss function. *Default*: `0`
+* `--style_weight`: Weight for the style loss function. *Default*: `1e3`
+* `--tv_weight`: Weight for the total variational loss function. *Default*: `1e-3`
 * `--temporal_weight`: Weight for the temporal loss function. *Default*: `2e2`
 * `--content_layers`: *Space-separated* VGG19 layer names used for the content image. *Default*: `conv4_2`
 * `--style_layers`: *Space-separated* VGG19 layer names used for the style image. *Default*: `relu1_1 relu2_1 relu3_1 relu4_1 relu5_1`
```
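The default change above turns on the total-variation term (`--tv_weight` goes from `0` to `1e-3`), which penalizes differences between neighboring pixels to smooth the stylized output. A minimal pure-Python sketch of that term, assuming the standard formulation (the `tv_loss` helper and the combined-loss arithmetic in the comments are illustrative, not this project's actual code):

```python
def tv_loss(img):
    """Total variation of a 2-D grayscale image given as a list of rows:
    the sum of squared differences between vertically and horizontally
    adjacent pixels."""
    h, w = len(img), len(img[0])
    loss = 0.0
    for i in range(h):
        for j in range(w):
            if i + 1 < h:  # vertical neighbor
                loss += (img[i + 1][j] - img[i][j]) ** 2
            if j + 1 < w:  # horizontal neighbor
                loss += (img[i][j + 1] - img[i][j]) ** 2
    return loss

# A checkerboard maximizes neighbor differences; a flat image has none.
checker = [[0.0, 1.0], [1.0, 0.0]]
flat = [[0.5, 0.5], [0.5, 0.5]]
print(tv_loss(checker))  # 4.0
print(tv_loss(flat))     # 0.0

# In the overall objective the three terms are combined roughly as
#   content_weight * L_content + style_weight * L_style + tv_weight * L_tv,
# so the new tv_weight of 1e-3 adds only a gentle smoothing pressure.
```

With `tv_weight` this small, the term mostly suppresses high-frequency noise in the output without blurring away stylized texture.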
