Diffstat (limited to 'datasets/scholar/entries/Level Playing Field for Million Scale Face Recognition.csv')
-rw-r--r--  datasets/scholar/entries/Level Playing Field for Million Scale Face Recognition.csv | 1 -
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/datasets/scholar/entries/Level Playing Field for Million Scale Face Recognition.csv b/datasets/scholar/entries/Level Playing Field for Million Scale Face Recognition.csv
deleted file mode 100644
index f7130a67..00000000
--- a/datasets/scholar/entries/Level Playing Field for Million Scale Face Recognition.csv
+++ /dev/null
@@ -1 +0,0 @@
-Level playing field for million scale face recognition|http://openaccess.thecvf.com/content_cvpr_2017/papers/Nech_Level_Playing_Field_CVPR_2017_paper.pdf|2017|35|11|12932836311624990730|http://openaccess.thecvf.com/content_cvpr_2017/papers/Nech_Level_Playing_Field_CVPR_2017_paper.pdf|http://scholar.google.com/scholar?cites=12932836311624990730&as_sdt=2005&sciodt=0,5&hl=en|http://scholar.google.com/scholar?cluster=12932836311624990730&hl=en&as_sdt=0,5|None|Face recognition has the perception of a solved problem, however when tested at the million-scale exhibits dramatic variation in accuracies across the different algorithms [11]. Are the algorithms very different? Is access to good/big training data their secret weapon? Where should face recognition improve? To address those questions, we created a benchmark, MF2, that requires all algorithms to be trained on same data, and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7 M photos created with the goal to …
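The deleted row is a single pipe-delimited record. As a minimal sketch, such a row could be parsed in Python as below; the field names are assumptions inferred from the values in this record (title, PDF link, year, what appear to be citation and version counts, a Scholar cluster id, Scholar links, and a truncated abstract), not a documented schema.

# Minimal sketch for parsing one pipe-delimited scholar entry row.
# Field names are assumptions inferred from this record, not a documented schema.
FIELDS = [
    "title", "pdf_url", "year", "citations", "versions", "cluster_id",
    "pdf_url_2", "cited_by_url", "cluster_url", "related", "abstract",
]

def parse_entry(line: str) -> dict:
    # Split on at most len(FIELDS) - 1 pipes so a '|' inside the trailing
    # abstract field would not shift the earlier columns.
    values = line.rstrip("\n").split("|", maxsplit=len(FIELDS) - 1)
    return dict(zip(FIELDS, values))

Applied to the deleted line above (without the leading '-'), this yields a title of 'Level playing field for million scale face recognition' and a year of '2017'; reading 35 and 11 as citation and version counts is a guess from context.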