author	Adam Harvey <adam@ahprojects.com>	2018-12-23 01:37:03 +0100
committer	Adam Harvey <adam@ahprojects.com>	2018-12-23 01:37:03 +0100
commit	4452e02e8b04f3476273574a875bb60cfbb4568b (patch)
tree	3ffa44f9621b736250a8b94da14a187dc785c2fe /scraper/datasets/scholar/entries/From Facial Parts Responses to Face Detection: A Deep Learning Approach.csv
parent	2a65f7a157bd4bace970cef73529867b0e0a374d (diff)
parent	5340bee951c18910fd764241945f1f136b5a22b4 (diff)
.
Diffstat (limited to 'scraper/datasets/scholar/entries/From Facial Parts Responses to Face Detection: A Deep Learning Approach.csv')
-rw-r--r--	scraper/datasets/scholar/entries/From Facial Parts Responses to Face Detection: A Deep Learning Approach.csv	1
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/scraper/datasets/scholar/entries/From Facial Parts Responses to Face Detection: A Deep Learning Approach.csv b/scraper/datasets/scholar/entries/From Facial Parts Responses to Face Detection: A Deep Learning Approach.csv
new file mode 100644
index 00000000..e22f032b
--- /dev/null
+++ b/scraper/datasets/scholar/entries/From Facial Parts Responses to Face Detection: A Deep Learning Approach.csv
@@ -0,0 +1 @@
+From facial parts responses to face detection: A deep learning approach|http://scholar.google.com/https://www.cv-foundation.org/openaccess/content_iccv_2015/html/Yang_From_Facial_Parts_ICCV_2015_paper.html|2015|213|12|1818335115841631894|None|http://scholar.google.com/scholar?cites=1818335115841631894&as_sdt=2005&sciodt=0,5&hl=en|http://scholar.google.com/scholar?cluster=1818335115841631894&hl=en&as_sdt=0,5|None|In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically, our method achieves a high recall rate of 90.99% on the challenging FDDB benchmark, outperforming the state-of-the-art method by a large margin of 2.91%. Importantly, we consider finding faces from a new perspective through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is carefully formulated considering challenging …
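The added row is a single pipe-delimited record. A minimal sketch of how such a line could be parsed follows; the field names (`title`, `url`, `year`, `num_citations`, `num_versions`, `cluster_id`, `pdf_url`, `citations_url`, `cluster_url`, `excerpt_url`, `abstract`) are guesses inferred from the values in this one row, not names confirmed by the scraper's source.

```python
# Hypothetical parser for the pipe-delimited Scholar entry format in this
# commit. Field names are assumptions inferred from the sample row above.
FIELDS = [
    "title", "url", "year", "num_citations", "num_versions",
    "cluster_id", "pdf_url", "citations_url", "cluster_url",
    "excerpt_url", "abstract",
]

def parse_entry(line: str) -> dict:
    """Split one row on '|' and map values onto the assumed field names.

    Literal 'None' strings are converted to Python None.
    """
    values = [None if v == "None" else v
              for v in line.rstrip("\n").split("|")]
    return dict(zip(FIELDS, values))

# Example using values from the row added in this commit (abstract elided):
entry = parse_entry(
    "From facial parts responses to face detection: A deep learning approach"
    "|https://www.cv-foundation.org/openaccess/content_iccv_2015/html/"
    "Yang_From_Facial_Parts_ICCV_2015_paper.html"
    "|2015|213|12|1818335115841631894|None"
    "|http://scholar.google.com/scholar?cites=1818335115841631894"
    "|http://scholar.google.com/scholar?cluster=1818335115841631894"
    "|None|In this paper, we propose a novel deep convolutional network ..."
)
print(entry["year"], entry["num_citations"])  # 2015 213
```

Note that a plain `split("|")` is only safe if no field can itself contain a pipe; a real parser would need an escaping or quoting rule, which this sample row does not reveal.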