author    Piotr Kozakowski <kozak000@gmail.com> 2017-11-19 20:23:26 +0100
committer Piotr Kozakowski <kozak000@gmail.com> 2017-11-19 20:23:26 +0100
commit    4167442627b1414ff8fdc86528812b46168c656b (patch)
tree      f5020d2161762fad2db56f3f9ddcb3ad2deec553 /README.md
parent    61e935ff5a90c8c7b9a5a5f2f54d4ec8f9742dc0 (diff)
Add weight normalization
Diffstat (limited to 'README.md')
-rw-r--r-- README.md | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 34e8530..4d39632 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@ A PyTorch implementation of [SampleRNN: An Unconditional End-to-End Neural Audio
![A visual representation of the SampleRNN architecture](http://deepsound.io/images/samplernn.png)
-It's based on the reference implementation in Theano: https://github.com/soroushmehr/sampleRNN_ICLR2017. Unlike the Theano version, our code allows training models with an arbitrary number of tiers, whereas the original implementation allows a maximum of 3 tiers. However, it doesn't have weight normalization and doesn't allow using LSTM units (only GRU). For more details and the motivation behind rewriting this model in PyTorch, see our blog post: http://deepsound.io/samplernn_pytorch.html.
+It's based on the reference implementation in Theano: https://github.com/soroushmehr/sampleRNN_ICLR2017. Unlike the Theano version, our code allows training models with an arbitrary number of tiers, whereas the original implementation allows a maximum of 3 tiers. However, it doesn't allow using LSTM units (only GRU). For more details and the motivation behind rewriting this model in PyTorch, see our blog post: http://deepsound.io/samplernn_pytorch.html.
## Dependencies
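
The commit adds weight normalization, a feature whose absence relative to the Theano reference implementation the old README line pointed out. As a hedged illustration only (not the actual code from this commit), the sketch below shows how weight normalization is typically applied to a layer in PyTorch via `torch.nn.utils.weight_norm`; the layer sizes and tensors are hypothetical.

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

# Hypothetical layer sizes, chosen only for illustration.
linear = weight_norm(nn.Linear(256, 1024))

# weight_norm reparameterizes `linear.weight` into a direction tensor
# (`weight_v`) and a per-output-unit magnitude (`weight_g`), which the
# optimizer trains in place of the raw weight matrix.
x = torch.randn(8, 256)   # dummy batch of 8 input vectors
y = linear(x)             # forward pass behaves like a plain nn.Linear
print(y.shape)            # torch.Size([8, 1024])
print(hasattr(linear, 'weight_g'), hasattr(linear, 'weight_v'))  # True True
```

Decoupling each weight vector's magnitude from its direction in this way is commonly used to stabilize and speed up RNN training, which is presumably why the reference Theano implementation includes it.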