-rw-r--r--  Codes/constant.py          |  2
-rw-r--r--  Codes/flownet2/.gitignore  |  9
-rw-r--r--  README.md                  | 26
3 files changed, 22 insertions, 15 deletions
diff --git a/Codes/constant.py b/Codes/constant.py
index eafeab9..f6f78df 100644
--- a/Codes/constant.py
+++ b/Codes/constant.py
@@ -92,7 +92,7 @@ const.EVALUATE = args.evaluate
# network constants
const.HEIGHT = 256
const.WIDTH = 256
-const.FLOWNET_CHECKPOINT = 'flownet2/checkpoints/FlowNetSD/flownet-SD.ckpt-0'
+const.FLOWNET_CHECKPOINT = 'models/pretrains/flownet-SD.ckpt-0'
const.FLOW_HEIGHT = 384
const.FLOW_WIDTH = 512

diff --git a/Codes/flownet2/.gitignore b/Codes/flownet2/.gitignore
new file mode 100644
index 0000000..31abf4e
--- /dev/null
+++ b/Codes/flownet2/.gitignore
@@ -0,0 +1,9 @@
+__pycache__/
+*.py[cod]
+*$py.class
+*.o
+*.so
+*.so.dSYM
+checkpoints/
+!checkpoints/download.sh
+!checkpoints/README.md
diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
This repo is the official open source of [Future Frame Prediction for Anomaly Detection -- A New Baseline, CVPR 2018](https://arxiv.org/pdf/1712.09867.pdf) by Wen Liu, Weixin Luo, Dongze Lian and Shenghua Gao. A **demo** is shown in *https://www.youtube.com/watch?v=M--wv-Y_h0A*. It is implemented in tensorflow. Please follow the instructions to run the code.

-#### 1. Installation (Anaconda with python3.6 installation is recommended)
+## 1. Installation (Anaconda with python3.6 installation is recommended)
* Install 3rd-package dependencies of python (listed in requirements.txt)
```
numpy==1.14.1
@@ -24,7 +24,7 @@ pip install -r requirements.txt
CUDA 8.0
Cudnn 6.0
```
-#### 2. Download datasets
+## 2. Download datasets
cd into Data folder of project and run the shell scripts (**ped1.sh, ped2.sh, avenue.sh, shanghaitech.sh**) under the Data folder.
```shell
cd Data
@@ -34,7 +34,7 @@ cd Data
./shanghaitech.sh
```

-#### 3. Testing on saved models
+## 3. Testing on saved models
* Download the trained models
```shell
cd models
@@ -56,7 +56,7 @@ python inference.py --dataset avenue \
```

-#### 4. Training from scratch (here we use ped2 and avenue datasets for examples)
+## 4. Training from scratch (here we use ped2 and avenue datasets for examples)
* Set hyper-parameters
The default hyper-parameters, such as $\lambda_{init}$, $\lambda_{gd}$, $\lambda_{op}$, $\lambda_{adv}$ and the learning rate of G, as well as D, are all initialized in **training_hyper_params/hyper_params.ini**.
* Running script (as ped2 or avenue for instances) and cd into **Codes** folder at first.
@@ -111,20 +111,18 @@ Open the browser and type **https://ip:10086**. Following is the screen shot of
+Since all frames are loaded into BGR channels in training and testing, the visualized images look different from RGB channels.

-#### Notes
+## Notes
The flow loss (temporal loss) module is based on [a TensorFlow implementation of FlowNet2](https://github.com/sampepose/flownet2-tf). Thanks for their nice work.

-#### Citation
+## Citation
If you find this useful, please cite our work as follows:
```code
-@article{liu2018ano_pred,
-Author = {Wen Liu and Weixin Luo and Dongze Lian and Shenghua Gao},
-Title = {Future Frame Prediction for Anomaly Detection -- A New Baseline},
-Journal = {ArXiv e-prints},
-Year = {2017},
-Eprint = {arXiv:1712.09867},
+@INPROCEEDINGS{liu2018ano_pred,
+  author={W. Liu and W. Luo and D. Lian and S. Gao},
+  booktitle={2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
+  title={Future Frame Prediction for Anomaly Detection -- A New Baseline},
+  year={2018}
}
```
-While the open access of CVPR 2018 is available, welcome to cite the CVPR version.
-Please contact with us if you have any questions.
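A note on the relocated checkpoint: `const.FLOWNET_CHECKPOINT` now points at `models/pretrains/flownet-SD.ckpt-0`, so the flow-loss graph can only be restored if the FlowNet-SD files actually sit under that prefix. Below is a minimal sanity-check sketch, not part of the repository, assuming the usual TF1 checkpoint files (`.index`, `.meta`, `.data-*`) share that prefix:

```python
import glob

# Checkpoint prefix introduced by this commit in Codes/constant.py.
FLOWNET_CHECKPOINT = 'models/pretrains/flownet-SD.ckpt-0'

# A TF1 checkpoint is a set of files sharing this prefix, not a single file,
# so glob for the prefix instead of testing one path.
if not glob.glob(FLOWNET_CHECKPOINT + '*'):
    raise FileNotFoundError(
        'FlowNet-SD checkpoint not found at %r; copy flownet-SD.ckpt-0.* into '
        'models/pretrains/ or adjust const.FLOWNET_CHECKPOINT.' % FLOWNET_CHECKPOINT)
```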
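The added README sentence about BGR channels refers to OpenCV's default channel order. The sketch below, also not from the repository, shows the reordering needed before an OpenCV-loaded or predicted frame is displayed with an RGB-based tool such as matplotlib; the zero array is just a stand-in for a real frame:

```python
import cv2
import numpy as np

# Stand-in for a frame as the pipeline sees it: OpenCV returns H x W x 3 arrays in BGR order.
frame_bgr = np.zeros((256, 256, 3), dtype=np.uint8)
frame_bgr[..., 0] = 255  # this is pure blue in BGR ordering

# Swap channels before visualization, otherwise red and blue appear exchanged.
frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
```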
