author    Matt Cooper <matthew_cooper@brown.edu>    2016-08-12 19:48:22 -0400
committer GitHub <noreply@github.com>               2016-08-12 19:48:22 -0400
commit    54bd95e24253a21856f04796ef6699fd858bf0b4 (patch)
tree      d645001e729b526f5db44ede9c5b5e9a210a484e /README.md
parent    e8c64b06ab2d21eb8485837bc11a1725c6e3e880 (diff)
Update README.md
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  |  31
1 file changed, 28 insertions(+), 3 deletions(-)
diff --git a/README.md b/README.md
index 547ea91..596ef49 100644
--- a/README.md
+++ b/README.md
@@ -46,8 +46,33 @@ Using the error measurements outlined in the paper (Peak Signal to Noise Ratio a
- frame ...
```
3. Process training data:
- -
+ - The networks train on random 32x32-pixel crops of the input images, filtered to make sure that most clips contain some movement. To process your input data into this form, run `python process_data` from the `Code/` directory with the following options:
+ ```
+ -n/--num_clips= <# clips to process for training> (Default = 5000000)
+ -t/--train_dir= <Directory of full training frames>
+ -c/--clips_dir= <Save directory for processed clips>
+ (I suggest making this a hidden dir so the filesystem doesn't freeze
+ with so many files. DON'T `ls` THIS DIRECTORY!)
+ -o/--overwrite (Overwrites the previous data in clips_dir)
+ -H/--help (Prints usage)
+ ```
+ - This can take a few hours to complete, depending on the number of clips you want.
4. Train:
- - If you want to plug-and-play with the pacman dataset, you can [download my trained models here](https://drive.google.com/open?id=0Byf787GZQ7KvR2JvMUNIZnFlbm8).
-
+ - If you want to plug-and-play with the pacman dataset, you can [download my trained models here](https://drive.google.com/open?id=0Byf787GZQ7KvR2JvMUNIZnFlbm8). Load them with the `-l` option described below.
+ - Train and test your networks by running `python avg_runner.py` from the `Code/` directory, with the following options:
+ ```
+ -l/--load_path= <Relative/path/to/saved/model>
+ -t/--test_dir= <Directory of test images>
+ -r/--recursions= <# recursive predictions to make on test>
+ -a/--adversarial= <{t/f}> (Whether to use adversarial training. Default=True)
+ -n/--name= <Subdirectory of ../Data/Save/*/ in which to save output of this run>
+ -O/--overwrite (Overwrites all previous data for the model with this save name)
+ -T/--test_only (Only runs a test step -- no training)
+ -H/--help (Prints usage)
+ --stats_freq= <How often to print loss/train error stats, in # steps>
+ --summary_freq= <How often to save loss/error summaries, in # steps>
+ --img_save_freq= <How often to save generated images, in # steps>
+ --test_freq= <How often to test the model on test data, in # steps>
+ --model_save_freq= <How often to save the model, in # steps>
+ ```