| author | Jules Laplace <julescarbon@gmail.com> | 2018-10-31 02:15:42 +0100 |
|---|---|---|
| committer | Jules Laplace <julescarbon@gmail.com> | 2018-10-31 02:15:42 +0100 |
| commit | a16c3cf801b70670dffc7041d92f7ccec56a0e18 (patch) | |
| tree | 189c6f52c347cad780aba982c04efb8668eaa57f /datasets/scholar/entries/From Facial Parts Responses to Face Detection: A Deep Learning Approach.csv | |
| parent | 640fb390baf494571114bc50b8059c9823ee899e (diff) | |
| parent | ab81e78a0bca427ba9b0283ec3a1b5fc2d98cf2d (diff) | |
Merge branch 'master' of asdf.us:megapixels_dev
Diffstat (limited to 'datasets/scholar/entries/From Facial Parts Responses to Face Detection: A Deep Learning Approach.csv')
| mode | file | lines |
|---|---|---|
| -rw-r--r-- | datasets/scholar/entries/From Facial Parts Responses to Face Detection: A Deep Learning Approach.csv | 1 |
1 file changed, 1 insertion, 0 deletions
```diff
diff --git a/datasets/scholar/entries/From Facial Parts Responses to Face Detection: A Deep Learning Approach.csv b/datasets/scholar/entries/From Facial Parts Responses to Face Detection: A Deep Learning Approach.csv
new file mode 100644
index 00000000..e22f032b
--- /dev/null
+++ b/datasets/scholar/entries/From Facial Parts Responses to Face Detection: A Deep Learning Approach.csv
@@ -0,0 +1 @@
+From facial parts responses to face detection: A deep learning approach|http://scholar.google.com/https://www.cv-foundation.org/openaccess/content_iccv_2015/html/Yang_From_Facial_Parts_ICCV_2015_paper.html|2015|213|12|1818335115841631894|None|http://scholar.google.com/scholar?cites=1818335115841631894&as_sdt=2005&sciodt=0,5&hl=en|http://scholar.google.com/scholar?cluster=1818335115841631894&hl=en&as_sdt=0,5|None|In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically, our method achieves a high recall rate of 90.99% on the challenging FDDB benchmark, outperforming the state-of-the-art method by a large margin of 2.91%. Importantly, we consider finding faces from a new perspective through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is carefully formulated considering challenging …
```
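The added file is a single pipe-delimited record describing the paper's Google Scholar entry. A minimal parsing sketch is below; the `|` delimiter is taken from the row above, but the field names are guesses inferred from the values (title, URL, year, counts, Scholar cluster id, Scholar links, abstract snippet) and are not documented in this diff.

```python
# Minimal sketch for reading one scholar entry file from this repo.
# Assumption: the row is pipe-delimited; the field names below are guesses
# inferred from the values in the diff, not names defined by the repository.
import csv

GUESSED_FIELDS = [
    "title", "paper_url", "year", "citation_count", "version_count",
    "cluster_id", "pdf_url", "cited_by_url", "cluster_url",
    "related_url", "abstract",
]

def read_entry(path):
    """Read the single pipe-delimited row of an entry file into a dict."""
    with open(path, newline="", encoding="utf-8") as f:
        row = next(csv.reader(f, delimiter="|"))
    return dict(zip(GUESSED_FIELDS, row))

if __name__ == "__main__":
    entry = read_entry(
        "datasets/scholar/entries/"
        "From Facial Parts Responses to Face Detection: A Deep Learning Approach.csv"
    )
    print(entry["title"], entry["year"], entry["citation_count"])
```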
