Diffstat (limited to 'scraper/datasets/scholar/entries/Level Playing Field for Million Scale Face Recognition.csv')
| -rw-r--r-- | scraper/datasets/scholar/entries/Level Playing Field for Million Scale Face Recognition.csv | 1 |
1 files changed, 1 insertions, 0 deletions
diff --git a/scraper/datasets/scholar/entries/Level Playing Field for Million Scale Face Recognition.csv b/scraper/datasets/scholar/entries/Level Playing Field for Million Scale Face Recognition.csv
new file mode 100644
index 00000000..f7130a67
--- /dev/null
+++ b/scraper/datasets/scholar/entries/Level Playing Field for Million Scale Face Recognition.csv
@@ -0,0 +1 @@
+Level playing field for million scale face recognition|http://openaccess.thecvf.com/content_cvpr_2017/papers/Nech_Level_Playing_Field_CVPR_2017_paper.pdf|2017|35|11|12932836311624990730|http://openaccess.thecvf.com/content_cvpr_2017/papers/Nech_Level_Playing_Field_CVPR_2017_paper.pdf|http://scholar.google.com/scholar?cites=12932836311624990730&as_sdt=2005&sciodt=0,5&hl=en|http://scholar.google.com/scholar?cluster=12932836311624990730&hl=en&as_sdt=0,5|None|Face recognition has the perception of a solved problem, however when tested at the million-scale exhibits dramatic variation in accuracies across the different algorithms [11]. Are the algorithms very different? Is access to good/big training data their secret weapon? Where should face recognition improve? To address those questions, we created a benchmark, MF2, that requires all algorithms to be trained on same data, and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7 M photos created with the goal to … |
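The single added line is a pipe-delimited Scholar record rather than a comma-separated one, with the literal string "None" standing in for missing fields. Below is a minimal parsing sketch; the field names (title, pdf_url, year, num_citations, num_versions, cluster_id, resource_url, cited_by_url, versions_url, related_url, abstract) are assumptions inferred from the values visible in this row, not the scraper's documented schema.

    # parse_entry.py -- hypothetical reader for one entry file like the one added here.
    # Field names are guesses based on the row's contents; adjust to the scraper's real schema.
    from pathlib import Path

    FIELDS = [
        "title", "pdf_url", "year", "num_citations", "num_versions",
        "cluster_id", "resource_url", "cited_by_url", "versions_url",
        "related_url", "abstract",
    ]

    def parse_entry(line: str) -> dict:
        # Entries use "|" as the delimiter; the row may end with a trailing "|".
        parts = [p.strip() for p in line.rstrip("\n").split("|")]
        if parts and parts[-1] == "":
            parts = parts[:-1]
        record = dict(zip(FIELDS, parts))
        # Map the literal "None" back to a real null value.
        return {k: (None if v == "None" else v) for k, v in record.items()}

    if __name__ == "__main__":
        path = Path("scraper/datasets/scholar/entries/"
                    "Level Playing Field for Million Scale Face Recognition.csv")
        entry = parse_entry(path.read_text())
        print(entry["title"], entry["year"], entry["num_citations"])

Run against the file added by this commit, the sketch would yield the title, the year 2017, and the citation count 35 seen in the row above.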
