From 973e2e77a7514c7c4dfb338a021ef2c14b5efbd0 Mon Sep 17 00:00:00 2001
From: Cameron
Date: Tue, 11 Oct 2016 16:04:04 -0600
Subject: Modified README

---
 README.md | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 1625cc5..0c992a7 100644
--- a/README.md
+++ b/README.md
@@ -43,7 +43,7 @@ Here we reproduce Figure 2 from the first paper, which renders a photograph of t
 ### Content / Style Tradeoff
 
-The relative weights of the style and content transfer can be controlled.
+The algorithm allows the user to trade off the relative weights of the style and content reconstruction terms.
 Here we render with an increasing style weight applied to [Red Canna](http://www.georgiaokeeffe.net/red-canna.jsp):
 
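The content/style trade-off in the hunk above corresponds to the weighted objective of Gatys et al.; a minimal sketch in the paper's notation (the symbols below come from the paper, not from the repository's code):

```latex
% Total objective minimized while rendering (Gatys et al., 2015).
% \vec{p}: content photograph, \vec{a}: style artwork, \vec{x}: generated image.
% The ratio \alpha / \beta is what the style and content weights control.
\mathcal{L}_{total}(\vec{p}, \vec{a}, \vec{x}) =
    \alpha \, \mathcal{L}_{content}(\vec{p}, \vec{x})
  + \beta  \, \mathcal{L}_{style}(\vec{a}, \vec{x})
```

Increasing the style weight relative to the content weight pushes the optimizer toward reproducing the artwork's texture at the expense of the photograph's structure, which is what the Red Canna series referenced above illustrates.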
@@ -71,7 +71,7 @@ More than one style image can be used to blend multiple artistic styles.
 
 ### Style Interpolation
-When using multiple style images, the degree of blending between the images can be controlled.
+When using multiple style images, the degree to which they are blended can be controlled.
 
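One way to read the interpolation above: each style image contributes its own style loss, and user-chosen blend weights decide how much each contributes to the combined style term. A minimal sketch with hypothetical numbers (the names and values below are illustrative, not the repository's code):

```python
# Minimal sketch of style interpolation (hypothetical values, not the repository's code):
# each style image has its own style loss, and the blend weights set how much
# each image contributes to the combined style term.
style_losses = {"starry_night": 4.2, "the_scream": 3.7}   # hypothetical per-image losses
blend_weights = {"starry_night": 0.7, "the_scream": 0.3}  # interpolation weights, sum to 1

blended_style_loss = sum(blend_weights[name] * loss for name, loss in style_losses.items())
print(blended_style_loss)  # 0.7 * 4.2 + 0.3 * 3.7 ≈ 4.05
```

Sliding the weights from (1.0, 0.0) to (0.0, 1.0) interpolates smoothly between the two styles.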
@@ -83,7 +83,7 @@ When using multiple style images, the degree of blending between the images can
 
 ### Transfer style but not color
-The color scheme of the original image can be preserved by including the flag `--original_colors`.
+By including the flag `--original_colors`, the output image will retain the colors of the original image.
 *Left to right*: content image, stylized image, stylized image with the original colors of the content image
 
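Color preservation of this kind is usually done by luminance-only transfer: keep the stylized image's luminance and the content image's color channels. A rough sketch of that idea using Pillow (the filenames are placeholders, and the repository's actual implementation may differ):

```python
# Rough sketch of luminance-only color preservation (not the repository's code).
# Keeps the stylized image's luminance (Y) and the content image's color (Cb, Cr).
from PIL import Image

content = Image.open("content.jpg").convert("YCbCr")                          # placeholder path
stylized = Image.open("stylized.jpg").resize(content.size).convert("YCbCr")   # placeholder path

y, _, _ = stylized.split()    # luminance from the stylized result
_, cb, cr = content.split()   # chroma from the original content image
Image.merge("YCbCr", (y, cb, cr)).convert("RGB").save("stylized_original_colors.jpg")
```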
@@ -270,6 +270,10 @@ python neural_style.py --video \
 * `--learning_rate`: Learning-rate parameter for the Adam optimizer. *Default*: `1e1`
 * `--max_iterations`: Max number of iterations for the Adam or L-BFGS optimizer. *Default*: `1000`
 * `--print_iterations`: Number of iterations between optimizer print statements. *Default*: `50`
+* `--content_loss_function`: Different constants K in the content loss function. *Choices*: `1`, `2`, `3`. *Default*: `1`
+
+
+
 
 #### Video Frame Arguments
 * `--video`: Boolean flag indicating if the user is creating a video.
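For the `--content_loss_function` option added above, the loss has the general shape of a sum of squared differences between generated and content features, scaled by a constant K derived from the feature-map size. A hedged sketch follows; the mapping of choices `1`/`2`/`3` to the specific constants below is an assumption for illustration, not taken from the repository:

```python
import numpy as np

# Hedged sketch of a content loss with a selectable constant K (not the repository's exact code).
# p: content-image feature maps, x: generated-image feature maps, both shaped (h, w, d).
# The choice -> constant mapping below is assumed for illustration only.
def content_layer_loss(p, x, choice=1):
    h, w, d = p.shape
    M, N = h * w, d                          # spatial size and number of filters
    K = {1: 1.0 / (2.0 * N**0.5 * M**0.5),   # assumed constant for choice 1
         2: 1.0 / (N * M),                   # assumed constant for choice 2
         3: 0.5}[choice]                     # assumed constant for choice 3
    return K * np.sum((x - p) ** 2)

# Identical features give zero loss regardless of K.
feats = np.ones((4, 4, 3))
print(content_layer_loss(feats, feats, choice=2))  # 0.0
```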
@@ -291,6 +295,10 @@ python neural_style.py --video \
 
 Send questions or issues: cysmith1010@gmail.com
 
+If you want to contribute, please try to:
+* Avoid esoteric one-liners.
+* Avoid unnecessary or nested lambda expressions.
+
 ## Memory
 
 By default, `neural-style-tf` uses the NVIDIA cuDNN GPU backend for convolutions and L-BFGS for optimization. These produce better and faster results, but can consume a lot of memory. You can reduce memory usage with the following:
-- 
cgit v1.2.3-70-g09d2