34 files changed, 2062 insertions, 557 deletions
diff --git a/site/content/pages/about/assets/megapixels_license.pdf b/site/content/pages/about/assets/megapixels_license.pdf Binary files differ new file mode 100755 index 00000000..42771b32 --- /dev/null +++ b/site/content/pages/about/assets/megapixels_license.pdf diff --git a/site/content/pages/about/attribution.md b/site/content/pages/about/attribution.md index 3cab8d57..bf190478 100644 --- a/site/content/pages/about/attribution.md +++ b/site/content/pages/about/attribution.md @@ -36,11 +36,9 @@ If you use the MegaPixels data or any data derived from it, please cite the orig } </pre> -and include this license and attribution protocol within any derivative work. +If you redistribute any data from this site, you must also include this [license](assets/megapixels_license.pdf) in PDF format. -If you publish data derived from MegaPixels, the original dataset creators should first be notified. - -The MegaPixels dataset is made available under the Open Data Commons Attribution License (https://opendatacommons.org/licenses/by/1.0/) and for academic use only. +The MegaPixels dataset is made available under the Open Data Commons Attribution License (https://opendatacommons.org/licenses/by/1.0/) and for academic use only. READABLE SUMMARY OF Open Data Commons Attribution License diff --git a/site/content/pages/about/index.md b/site/content/pages/about/index.md index 4cf390fc..36e28d22 100644 --- a/site/content/pages/about/index.md +++ b/site/content/pages/about/index.md @@ -45,12 +45,10 @@ MegaPixels is made possible with support from <a href="http://mozilla.org">Mozil MegaPixels is an art and research project first launched in 2017 for an [installation](https://ahprojects.com/megapixels-glassroom/) at Tactical Technology Collective's [GlassRoom](https://tacticaltech.org/pages/glass-room-london-press/) about face recognition datasets. In 2018 MegaPixels was extended to cover pedestrian analysis datasets for a [commission by Elevate Arts festival](https://esc.mur.at/de/node/2370) in Austria. Since then MegaPixels has evolved into a large-scale interrogation of hundreds of publicly-available face and person analysis datasets, the first of which launched on this site in April 2019. -MegaPixels aims to provide a critical perspective on machine learning image datasets, one that might otherwise escape academia and industry funded artificial intelligence think tanks that are often supported by the several of the same technology companies who have created datasets presented on this site. +MegaPixels aims to provide a critical perspective on machine learning image datasets, one that might otherwise escape academia and industry-funded artificial intelligence think tanks that are often supported by the same technology companies who created many of the datasets presented on this site. MegaPixels is an independent project, designed as a public resource for educators, students, journalists, and researchers. Each dataset presented on this site undergoes a thorough review of its images, intent, and funding sources. Though the goals are similar to publishing an academic paper, MegaPixels is a website-first research project, with an academic publication to follow. -One of the main focuses of the dataset investigations presented on this site is to uncover where funding originated. Because of our emphasis on other researcher's funding sources, it is important that we are transparent about our own. This site and the past year of research have been primarily funded by a privacy art grant from Mozilla in 2018.
The original MegaPixels installation in 2017 was built as a commission for and with support from Tactical Technology Collective and Mozilla. The research into pedestrian analysis datasets was funded by a commission from Elevate Arts, and continued development in 2019 is supported in part by a 1-year Researcher-in-Residence grant from Karlsruhe HfG, as well as lecture and workshop fees. - === columns 3 ##### Team diff --git a/site/content/pages/about/updates.md b/site/content/pages/about/updates.md new file mode 100644 index 00000000..3cac2143 --- /dev/null +++ b/site/content/pages/about/updates.md @@ -0,0 +1,38 @@ +------------ + +status: published +title: MegaPixels Site Updates +desc: MegaPixels Site Updates +slug: updates +cssclass: about +published: 2019-06-02 +updated: 2019-06-02 +authors: Adam Harvey + +------------ + +# Updates and Responses + +<section class="about-menu"> +<ul> +<li><a href="/about/">About</a></li> +<li><a class="current" href="/about/updates/">Updates</a></li> +<li><a href="/about/press/">Press</a></li> +<li><a href="/about/attribution/">Attribution</a></li> +<li><a href="/about/legal/">Legal / Privacy</a></li> +</ul> +</section> + +Since publishing this project, several of the datasets have disappeared. Below is a chronicle of recent events related to the datasets on this site. + +June 2019 + +- June 2: The Duke MTMC main webpage was deactivated and the entire dataset seems to be no longer available from Duke +- June 2: The has been https://reid-mct.github.io/2019/ +- June 1: The Brainwash face/head dataset has been taken down by its author after we posted about it + +May 2019 + +- May 31: Semantic Scholar appears to be censoring citations used in this project. Two of the citations linking the Brainwash dataset to military research in China have been intentionally disabled. +- May 28: The Microsoft Celeb (MS Celeb) face dataset website is now 404 and all the download links are deactivated. It appears that Microsoft Research has shuttered access to their MS Celeb dataset. Yet it remains available, as of June 2, on [Imperial College London's website](https://ibug.doc.ic.ac.uk/resources/lightweight-face-recognition-challenge-workshop/) +-
\ No newline at end of file diff --git a/site/content/pages/datasets/index.md b/site/content/pages/datasets/index.md index 2c7def38..6e96f19e 100644 --- a/site/content/pages/datasets/index.md +++ b/site/content/pages/datasets/index.md @@ -11,6 +11,6 @@ sync: false ------------ -# Face Recognition Datasets +# Dataset Analyses -Explore face recognition datasets contributing to the growing crisis of authoritarian biometric surveillance technologies. This first group of 5 datasets focuses on image usage connected to foreign surveillance and defense organizations. +Explore face and person recognition datasets contributing to the growing crisis of authoritarian biometric surveillance. This first group of 5 datasets focuses on image usage connected to foreign surveillance and defense organizations. Since publishing this project in April 2019, the [Brainwash](https://purl.stanford.edu/sx925dc9385), [Duke MTMC](http://vision.cs.duke.edu/DukeMTMC/), and [MS Celeb](http://msceleb.org/) datasets have been taken down by their authors. The [UCCS](https://vast.uccs.edu/Opensetface/) dataset was temporarily deactivated due to metadata exposure and the [Town Centre data](http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html) remains active. diff --git a/site/content/pages/datasets/msceleb/index.md b/site/content/pages/datasets/msceleb/index.md index 909e56ec..22a799e0 100644 --- a/site/content/pages/datasets/msceleb/index.md +++ b/site/content/pages/datasets/msceleb/index.md @@ -115,8 +115,6 @@ To provide insight into where these 10 million faces images have traveled, we ma {% include 'dashboard.html' %} -{% include 'supplementary_header.html' %} - ### Footnotes [^msceleb_orig]: MS-Celeb-1M: A Dataset and Benchmark for Large-Scale Face Recognition. Accessed April 18, 2019. 
http://web.archive.org/web/20190418151913/http://msceleb.org/ diff --git a/site/datasets/test/helen.csv b/site/datasets/test/helen.csv new file mode 100644 index 00000000..19fb12fb --- /dev/null +++ b/site/datasets/test/helen.csv @@ -0,0 +1,324 @@ +id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year +0,,Helen,helen,0.0,0.0,,,,main,,Interactive Facial Feature Localization,2012 +1,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,bae86526b3b0197210b64cdd95cb5aca4209c98a,citation,https://arxiv.org/pdf/1802.01777.pdf,"Brute-Force Facial Landmark Analysis With a 140, 000-Way Classifier",2018 +2,China,Helen,helen,28.2290209,112.99483204,"National University of Defense Technology, China",mil,1b8541ec28564db66a08185510c8b300fa4dc793,citation,,Affine-Transformation Parameters Regression for Face Alignment,2016 +3,China,Helen,helen,31.83907195,117.26420748,University of Science and Technology of China,edu,084bd02d171e36458f108f07265386f22b34a1ae,citation,http://7xrqgw.com1.z0.glb.clouddn.com/3000fps.pdf,Face Alignment at 3000 FPS via Regressing Local Binary Features,2014 +4,United States,Helen,helen,47.6423318,-122.1369302,Microsoft,company,084bd02d171e36458f108f07265386f22b34a1ae,citation,http://7xrqgw.com1.z0.glb.clouddn.com/3000fps.pdf,Face Alignment at 3000 FPS via Regressing Local Binary Features,2014 +5,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,5bd3d08335bb4e444a86200c5e9f57fd9d719e14,citation,https://pdfs.semanticscholar.org/5bd3/d08335bb4e444a86200c5e9f57fd9d719e14.pdf,3 D Face Morphable Models “ Inthe-Wild ”,0 +6,United States,Helen,helen,38.7768106,-94.9442982,Amazon,company,5bd3d08335bb4e444a86200c5e9f57fd9d719e14,citation,https://pdfs.semanticscholar.org/5bd3/d08335bb4e444a86200c5e9f57fd9d719e14.pdf,3 D Face Morphable Models “ Inthe-Wild ”,0 +7,Finland,Helen,helen,65.0592157,25.46632601,University of Oulu,edu,5bd3d08335bb4e444a86200c5e9f57fd9d719e14,citation,https://pdfs.semanticscholar.org/5bd3/d08335bb4e444a86200c5e9f57fd9d719e14.pdf,3 D Face Morphable Models “ Inthe-Wild ”,0 +8,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,12095f9b35ee88272dd5abc2d942a4f55804b31e,citation,https://pdfs.semanticscholar.org/1209/5f9b35ee88272dd5abc2d942a4f55804b31e.pdf,DenseReg : Fully Convolutional Dense Shape Regression Inthe-Wild Rıza,0 +9,United States,Helen,helen,38.7768106,-94.9442982,Amazon,company,12095f9b35ee88272dd5abc2d942a4f55804b31e,citation,https://pdfs.semanticscholar.org/1209/5f9b35ee88272dd5abc2d942a4f55804b31e.pdf,DenseReg : Fully Convolutional Dense Shape Regression Inthe-Wild Rıza,0 +10,United Kingdom,Helen,helen,51.5231607,-0.1282037,University College London,edu,12095f9b35ee88272dd5abc2d942a4f55804b31e,citation,https://pdfs.semanticscholar.org/1209/5f9b35ee88272dd5abc2d942a4f55804b31e.pdf,DenseReg : Fully Convolutional Dense Shape Regression Inthe-Wild Rıza,0 +11,United Kingdom,Helen,helen,51.24303255,-0.59001382,University of Surrey,edu,2d2e1d1f50645fe20c051339e9a0fca7b176422a,citation,https://arxiv.org/pdf/1803.05536.pdf,Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild,2018 +12,United Kingdom,Helen,helen,56.1454119,-3.9205713,University of Stirling,edu,2d2e1d1f50645fe20c051339e9a0fca7b176422a,citation,https://arxiv.org/pdf/1803.05536.pdf,Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild,2018 +13,China,Helen,helen,31.4854255,120.2739581,Jiangnan 
University,edu,2d2e1d1f50645fe20c051339e9a0fca7b176422a,citation,https://arxiv.org/pdf/1803.05536.pdf,Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild,2018 +14,China,Helen,helen,30.642769,104.06751175,"Sichuan University, Chengdu",edu,2d2e1d1f50645fe20c051339e9a0fca7b176422a,citation,https://arxiv.org/pdf/1803.05536.pdf,Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild,2018 +15,Germany,Helen,helen,48.48187645,9.18682404,Reutlingen University,edu,2d2e1d1f50645fe20c051339e9a0fca7b176422a,citation,https://arxiv.org/pdf/1803.05536.pdf,Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild,2018 +16,United States,Helen,helen,45.57022705,-122.63709346,Concordia University,edu,266ed43dcea2e7db9f968b164ca08897539ca8dd,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/3B_037.pdf,Beyond Principal Components: Deep Boltzmann Machines for face modeling,2015 +17,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,266ed43dcea2e7db9f968b164ca08897539ca8dd,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/3B_037.pdf,Beyond Principal Components: Deep Boltzmann Machines for face modeling,2015 +18,Germany,Helen,helen,52.5098686,13.3984513,"Amazon Research, Berlin",company,ba1c0600d3bdb8ed9d439e8aa736a96214156284,citation,http://www.eurasip.org/Proceedings/Eusipco/Eusipco2017/papers/1570347043.pdf,Complex representations for learning statistical shape priors,2017 +19,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,ba1c0600d3bdb8ed9d439e8aa736a96214156284,citation,http://www.eurasip.org/Proceedings/Eusipco/Eusipco2017/papers/1570347043.pdf,Complex representations for learning statistical shape priors,2017 +20,United States,Helen,helen,40.47913175,-74.43168868,Rutgers University,edu,3b470b76045745c0ef5321e0f1e0e6a4b1821339,citation,https://pdfs.semanticscholar.org/8e72/fa02f2d90ba31f31e0a7aa96a6d3e10a66fc.pdf,Consensus of Regression for Occlusion-Robust Facial Feature Localization,2014 +21,United States,Helen,helen,37.3309307,-121.8940485,"Adobe Research, San Jose, CA",company,3b470b76045745c0ef5321e0f1e0e6a4b1821339,citation,https://pdfs.semanticscholar.org/8e72/fa02f2d90ba31f31e0a7aa96a6d3e10a66fc.pdf,Consensus of Regression for Occlusion-Robust Facial Feature Localization,2014 +22,Spain,Helen,helen,41.40657415,2.1945341,Universitat Oberta de Catalunya,edu,cc4fc9a309f300e711e09712701b1509045a8e04,citation,https://pdfs.semanticscholar.org/cea6/9010a2f75f7a057d56770e776dec206ed705.pdf,Continuous Supervised Descent Method for Facial Landmark Localisation,2016 +23,Spain,Helen,helen,41.386608,2.16402,Universitat de Barcelona,edu,cc4fc9a309f300e711e09712701b1509045a8e04,citation,https://pdfs.semanticscholar.org/cea6/9010a2f75f7a057d56770e776dec206ed705.pdf,Continuous Supervised Descent Method for Facial Landmark Localisation,2016 +24,Thailand,Helen,helen,13.65450525,100.49423171,Robotics Institute,edu,cc4fc9a309f300e711e09712701b1509045a8e04,citation,https://pdfs.semanticscholar.org/cea6/9010a2f75f7a057d56770e776dec206ed705.pdf,Continuous Supervised Descent Method for Facial Landmark Localisation,2016 +25,United States,Helen,helen,40.44415295,-79.96243993,University of Pittsburgh,edu,cc4fc9a309f300e711e09712701b1509045a8e04,citation,https://pdfs.semanticscholar.org/cea6/9010a2f75f7a057d56770e776dec206ed705.pdf,Continuous Supervised Descent Method for Facial Landmark Localisation,2016 +26,Canada,Helen,helen,43.0095971,-81.2737336,University of Western 
Ontario,edu,f7ae38a073be7c9cd1b92359131b9c8374579b13,citation,http://www.digitalimaginggroup.ca/members/Shuo/07487053.pdf,Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression,2017 +27,Canada,Helen,helen,42.960348,-81.226628,"London Healthcare Sciences Centre, Ontario, Canada",edu,f7ae38a073be7c9cd1b92359131b9c8374579b13,citation,http://www.digitalimaginggroup.ca/members/Shuo/07487053.pdf,Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression,2017 +28,United Kingdom,Helen,helen,55.0030632,-1.57463231,Northumbria University,edu,f7ae38a073be7c9cd1b92359131b9c8374579b13,citation,http://www.digitalimaginggroup.ca/members/Shuo/07487053.pdf,Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression,2017 +29,Canada,Helen,helen,43.0012953,-81.2550455,"St. Joseph's Health Care, Ontario, Canada",edu,f7ae38a073be7c9cd1b92359131b9c8374579b13,citation,http://www.digitalimaginggroup.ca/members/Shuo/07487053.pdf,Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression,2017 +30,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,2a4153655ad1169d482e22c468d67f3bc2c49f12,citation,http://cseweb.ucsd.edu/~mkchandraker/classes/CSE291/Winter2018/Lectures/FaceAlignment.pdf,Face Alignment Across Large Poses: A 3D Solution,2016 +31,United States,Helen,helen,42.718568,-84.47791571,Michigan State University,edu,2a4153655ad1169d482e22c468d67f3bc2c49f12,citation,http://cseweb.ucsd.edu/~mkchandraker/classes/CSE291/Winter2018/Lectures/FaceAlignment.pdf,Face Alignment Across Large Poses: A 3D Solution,2016 +32,China,Helen,helen,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,655ad6ed99277b3bba1f2ea7e5da4709d6e6cf44,citation,https://arxiv.org/pdf/1803.06598.pdf,Facial Landmarks Detection by Self-Iterative Regression Based Landmarks-Attention Network,2018 +33,United States,Helen,helen,42.3614256,-71.0812092,Microsoft Research Asia,company,655ad6ed99277b3bba1f2ea7e5da4709d6e6cf44,citation,https://arxiv.org/pdf/1803.06598.pdf,Facial Landmarks Detection by Self-Iterative Regression Based Landmarks-Attention Network,2018 +34,United Kingdom,Helen,helen,53.22853665,-0.54873472,University of Lincoln,edu,232b6e2391c064d483546b9ee3aafe0ba48ca519,citation,http://doc.utwente.nl/89696/1/Pantic_Optimization_problems_for_fast_AAM_fitting.pdf,Optimization Problems for Fast AAM Fitting in-the-Wild,2013 +35,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,232b6e2391c064d483546b9ee3aafe0ba48ca519,citation,http://doc.utwente.nl/89696/1/Pantic_Optimization_problems_for_fast_AAM_fitting.pdf,Optimization Problems for Fast AAM Fitting in-the-Wild,2013 +36,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,75fd9acf5e5b7ed17c658cc84090c4659e5de01d,citation,http://eprints.nottingham.ac.uk/31442/1/tzimiro_CVPR15.pdf,Project-Out Cascaded Regression with an application to face alignment,2015 +37,Denmark,Helen,helen,57.01590275,9.97532827,Aalborg University,edu,087002ab569e35432cdeb8e63b2c94f1abc53ea9,citation,http://openaccess.thecvf.com/content_cvpr_workshops_2015/W09/papers/Irani_Spatiotemporal_Analysis_of_2015_CVPR_paper.pdf,Spatiotemporal analysis of RGB-D-T facial images for multimodal pain level recognition,2015 +38,Spain,Helen,helen,41.5008957,2.111553,"Computer Vision Center, UAB, Barcelona, 
Spain",edu,087002ab569e35432cdeb8e63b2c94f1abc53ea9,citation,http://openaccess.thecvf.com/content_cvpr_workshops_2015/W09/papers/Irani_Spatiotemporal_Analysis_of_2015_CVPR_paper.pdf,Spatiotemporal analysis of RGB-D-T facial images for multimodal pain level recognition,2015 +39,China,Helen,helen,39.9041999,116.4073963,Key Lab of Intelligent Information Processing of Chinese Academy of Sciences,edu,090ff8f992dc71a1125636c1adffc0634155b450,citation,https://pdfs.semanticscholar.org/090f/f8f992dc71a1125636c1adffc0634155b450.pdf,Topic-Aware Deep Auto-Encoders (TDA) for Face Alignment,2014 +40,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,090ff8f992dc71a1125636c1adffc0634155b450,citation,https://pdfs.semanticscholar.org/090f/f8f992dc71a1125636c1adffc0634155b450.pdf,Topic-Aware Deep Auto-Encoders (TDA) for Face Alignment,2014 +41,China,Helen,helen,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,090ff8f992dc71a1125636c1adffc0634155b450,citation,https://pdfs.semanticscholar.org/090f/f8f992dc71a1125636c1adffc0634155b450.pdf,Topic-Aware Deep Auto-Encoders (TDA) for Face Alignment,2014 +42,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,090ff8f992dc71a1125636c1adffc0634155b450,citation,https://pdfs.semanticscholar.org/090f/f8f992dc71a1125636c1adffc0634155b450.pdf,Topic-Aware Deep Auto-Encoders (TDA) for Face Alignment,2014 +43,Israel,Helen,helen,32.77824165,34.99565673,Open University of Israel,edu,62e913431bcef5983955e9ca160b91bb19d9de42,citation,https://arxiv.org/pdf/1511.04031.pdf,Facial Landmark Detection with Tweaked Convolutional Neural Networks,2018 +44,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,034b3f3bac663fb814336a69a9fd3514ca0082b9,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/alabort_cvpr2015.pdf,Unifying holistic and Parts-Based Deformable Model fitting,2015 +45,China,Helen,helen,39.9808333,116.34101249,Beihang University,edu,86b6afc667bb14ff4d69e7a5e8bb2454a6bbd2cd,citation,https://pdfs.semanticscholar.org/86b6/afc667bb14ff4d69e7a5e8bb2454a6bbd2cd.pdf,Attentional Alignment Networks,2018 +46,United States,Helen,helen,32.7283683,-97.11201835,University of Texas at Arlington,edu,86b6afc667bb14ff4d69e7a5e8bb2454a6bbd2cd,citation,https://pdfs.semanticscholar.org/86b6/afc667bb14ff4d69e7a5e8bb2454a6bbd2cd.pdf,Attentional Alignment Networks,2018 +47,China,Helen,helen,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,86b6afc667bb14ff4d69e7a5e8bb2454a6bbd2cd,citation,https://pdfs.semanticscholar.org/86b6/afc667bb14ff4d69e7a5e8bb2454a6bbd2cd.pdf,Attentional Alignment Networks,2018 +48,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,4068574b8678a117d9a434360e9c12fe6232dae0,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos_automatic_2014.pdf,Automatic Construction of Deformable Models In-the-Wild,2014 +49,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,1d0128b9f96f4c11c034d41581f23eb4b4dd7780,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/robust_spherical_harmonics.pdf,Automatic construction Of robust spherical harmonic subspaces,2015 +50,China,Helen,helen,39.9041999,116.4073963,Key Lab of Intelligent Information Processing of Chinese Academy of Sciences,edu,22e2066acfb795ac4db3f97d2ac176d6ca41836c,citation,https://pdfs.semanticscholar.org/26f5/3a1abb47b1f0ea1f213dc7811257775dc6e6.pdf,Coarse-to-Fine Auto-Encoder Networks (CFAN) for Real-Time Face Alignment,2014 
+51,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,22e2066acfb795ac4db3f97d2ac176d6ca41836c,citation,https://pdfs.semanticscholar.org/26f5/3a1abb47b1f0ea1f213dc7811257775dc6e6.pdf,Coarse-to-Fine Auto-Encoder Networks (CFAN) for Real-Time Face Alignment,2014 +52,China,Helen,helen,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,22e2066acfb795ac4db3f97d2ac176d6ca41836c,citation,https://pdfs.semanticscholar.org/26f5/3a1abb47b1f0ea1f213dc7811257775dc6e6.pdf,Coarse-to-Fine Auto-Encoder Networks (CFAN) for Real-Time Face Alignment,2014 +53,China,Helen,helen,22.4162632,114.2109318,Chinese University of Hong Kong,edu,ac6c3b3e92ff5fbcd8f7967696c7aae134bea209,citation,https://arxiv.org/pdf/1607.05046.pdf,Deep Cascaded Bi-Network for Face Hallucination,2016 +54,China,Helen,helen,22.59805605,113.98533784,Shenzhen Institutes of Advanced Technology,edu,ac6c3b3e92ff5fbcd8f7967696c7aae134bea209,citation,https://arxiv.org/pdf/1607.05046.pdf,Deep Cascaded Bi-Network for Face Hallucination,2016 +55,United States,Helen,helen,37.36566745,-120.42158888,"University of California, Merced",edu,ac6c3b3e92ff5fbcd8f7967696c7aae134bea209,citation,https://arxiv.org/pdf/1607.05046.pdf,Deep Cascaded Bi-Network for Face Hallucination,2016 +56,United States,Helen,helen,42.3614256,-71.0812092,Microsoft Research Asia,company,63d865c66faaba68018defee0daf201db8ca79ed,citation,https://arxiv.org/pdf/1409.5230.pdf,Deep Regression for Face Alignment,2014 +57,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,35f921def890210dda4b72247849ad7ba7d35250,citation,http://www.cv-foundation.org/openaccess/content_iccv_2013/papers/Zhou_Exemplar-Based_Graph_Matching_2013_ICCV_paper.pdf,Exemplar-Based Graph Matching for Robust Facial Landmark Localization,2013 +58,United States,Helen,helen,42.3614256,-71.0812092,Microsoft Research Asia,company,898ff1bafee2a6fb3c848ad07f6f292416b5f07d,citation,,Face Alignment via Regressing Local Binary Features,2016 +59,China,Helen,helen,31.83907195,117.26420748,University of Science and Technology of China,edu,898ff1bafee2a6fb3c848ad07f6f292416b5f07d,citation,,Face Alignment via Regressing Local Binary Features,2016 +60,United States,Helen,helen,47.6423318,-122.1369302,Microsoft,company,898ff1bafee2a6fb3c848ad07f6f292416b5f07d,citation,,Face Alignment via Regressing Local Binary Features,2016 +61,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,71b07c537a9e188b850192131bfe31ef206a39a0,citation,https://pdfs.semanticscholar.org/71b0/7c537a9e188b850192131bfe31ef206a39a0.pdf,Faces InThe-Wild Challenge : database and results,2016 +62,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,71b07c537a9e188b850192131bfe31ef206a39a0,citation,https://pdfs.semanticscholar.org/71b0/7c537a9e188b850192131bfe31ef206a39a0.pdf,Faces InThe-Wild Challenge : database and results,2016 +63,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,71b07c537a9e188b850192131bfe31ef206a39a0,citation,https://pdfs.semanticscholar.org/71b0/7c537a9e188b850192131bfe31ef206a39a0.pdf,Faces InThe-Wild Challenge : database and results,2016 +64,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,f095b5770f0ff13ba9670e3d480743c5e9ad1036,citation,http://doc.utwente.nl/103789/1/Pantic_Fast_Algorithms_for_Fitting_Active_Appearance_Models.pdf,Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images,2016 +65,Netherlands,Helen,helen,52.2380139,6.8566761,University of 
Twente,edu,f095b5770f0ff13ba9670e3d480743c5e9ad1036,citation,http://doc.utwente.nl/103789/1/Pantic_Fast_Algorithms_for_Fitting_Active_Appearance_Models.pdf,Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images,2016 +66,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,f095b5770f0ff13ba9670e3d480743c5e9ad1036,citation,http://doc.utwente.nl/103789/1/Pantic_Fast_Algorithms_for_Fitting_Active_Appearance_Models.pdf,Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images,2016 +67,United Kingdom,Helen,helen,53.22853665,-0.54873472,University of Lincoln,edu,624496296af19243d5f05e7505fd927db02fd0ce,citation,http://eprints.eemcs.utwente.nl/25815/01/Pantic_Gauss-Newton_Deformable_Part_Models.pdf,Gauss-Newton Deformable Part Models for Face Alignment In-the-Wild,2014 +68,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,624496296af19243d5f05e7505fd927db02fd0ce,citation,http://eprints.eemcs.utwente.nl/25815/01/Pantic_Gauss-Newton_Deformable_Part_Models.pdf,Gauss-Newton Deformable Part Models for Face Alignment In-the-Wild,2014 +69,United Kingdom,Helen,helen,53.22853665,-0.54873472,University of Lincoln,edu,6a4ebd91c4d380e21da0efb2dee276897f56467a,citation,http://eprints.nottingham.ac.uk/31441/1/tzimiroICIP14b.pdf,HOG active appearance models,2014 +70,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,696236fb6f986f6d5565abb01f402d09db68e5fa,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Wei_Learning_Adaptive_Receptive_CVPR_2017_paper.pdf,Learning adaptive receptive fields for deep image parsing networks,2017 +71,China,Helen,helen,32.0565957,118.77408833,Nanjing University,edu,696236fb6f986f6d5565abb01f402d09db68e5fa,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Wei_Learning_Adaptive_Receptive_CVPR_2017_paper.pdf,Learning adaptive receptive fields for deep image parsing networks,2017 +72,China,Helen,helen,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,696236fb6f986f6d5565abb01f402d09db68e5fa,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Wei_Learning_Adaptive_Receptive_CVPR_2017_paper.pdf,Learning adaptive receptive fields for deep image parsing networks,2017 +73,United Kingdom,Helen,helen,52.17638955,0.14308882,University of Cambridge,edu,c17a332e59f03b77921942d487b4b102b1ee73b6,citation,https://pdfs.semanticscholar.org/c17a/332e59f03b77921942d487b4b102b1ee73b6.pdf,Learning an appearance-based gaze estimator from one million synthesised images,2016 +74,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,c17a332e59f03b77921942d487b4b102b1ee73b6,citation,https://pdfs.semanticscholar.org/c17a/332e59f03b77921942d487b4b102b1ee73b6.pdf,Learning an appearance-based gaze estimator from one million synthesised images,2016 +75,Germany,Helen,helen,49.2579566,7.04577417,Max Planck Institute for Informatics,edu,c17a332e59f03b77921942d487b4b102b1ee73b6,citation,https://pdfs.semanticscholar.org/c17a/332e59f03b77921942d487b4b102b1ee73b6.pdf,Learning an appearance-based gaze estimator from one million synthesised images,2016 +76,United States,Helen,helen,45.55236,-122.9142988,Intel Corporation,company,9ef2b2db11ed117521424c275c3ce1b5c696b9b3,citation,https://arxiv.org/pdf/1511.04404.pdf,Robust Face Alignment Using a Mixture of Invariant Experts,2016 +77,Germany,Helen,helen,48.7863462,9.2380718,Daimler 
AG,company,3a8846ca16df5dfb2daadc189ed40c13d2ddc0c5,citation,https://arxiv.org/pdf/1901.10143.pdf,Validation loss for landmark detection,2019 +78,South Africa,Helen,helen,-33.95828745,18.45997349,University of Cape Town,edu,3bc376f29bc169279105d33f59642568de36f17f,citation,http://www.dip.ee.uct.ac.za/~nicolls/publish/sm14-visapp.pdf,Active shape models with SIFT descriptors and MARS,2014 +79,United States,Helen,helen,33.9832526,-118.40417,USC Institute for Creative Technologies,edu,0a6d344112b5af7d1abbd712f83c0d70105211d0,citation,http://ict.usc.edu/pubs/Constrained%20local%20neural%20fields%20for%20robust%20facial%20landmark%20detection%20in%20the%20wild.pdf,Constrained Local Neural Fields for Robust Facial Landmark Detection in the Wild,2013 +80,China,Helen,helen,23.09461185,113.28788994,Sun Yat-Sen University,edu,3be8f1f7501978287af8d7ebfac5963216698249,citation,https://pdfs.semanticscholar.org/3be8/f1f7501978287af8d7ebfac5963216698249.pdf,Deep Cascaded Regression for Face Alignment,2015 +81,Singapore,Helen,helen,1.2962018,103.77689944,National University of Singapore,edu,3be8f1f7501978287af8d7ebfac5963216698249,citation,https://pdfs.semanticscholar.org/3be8/f1f7501978287af8d7ebfac5963216698249.pdf,Deep Cascaded Regression for Face Alignment,2015 +82,China,Helen,helen,40.00229045,116.32098908,Tsinghua University,edu,329d58e8fb30f1bf09acb2f556c9c2f3e768b15c,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/Wu_Leveraging_Intra_and_CVPR_2017_paper.pdf,Leveraging Intra and Inter-Dataset Variations for Robust Face Alignment,2017 +83,China,Helen,helen,22.4162632,114.2109318,Chinese University of Hong Kong,edu,329d58e8fb30f1bf09acb2f556c9c2f3e768b15c,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/Wu_Leveraging_Intra_and_CVPR_2017_paper.pdf,Leveraging Intra and Inter-Dataset Variations for Robust Face Alignment,2017 +84,France,Helen,helen,48.8407791,2.5873259,University of Paris-Est,edu,0293721d276856f0425d4417e22381de3350ac32,citation,https://hal-upec-upem.archives-ouvertes.fr/hal-01790317/file/RK_SSD_2018.pdf,Customer Satisfaction Measuring Based on the Most Significant Facial Emotion,2018 +85,Tunisia,Helen,helen,34.7361066,10.7427275,"University of Sfax, Tunisia",edu,0293721d276856f0425d4417e22381de3350ac32,citation,https://hal-upec-upem.archives-ouvertes.fr/hal-01790317/file/RK_SSD_2018.pdf,Customer Satisfaction Measuring Based on the Most Significant Facial Emotion,2018 +86,United States,Helen,helen,42.4505507,-76.4783513,Cornell University,edu,ce9e1dfa7705623bb67df3a91052062a0a0ca456,citation,https://arxiv.org/pdf/1611.05507.pdf,Deep Feature Interpolation for Image Content Changes,2017 +87,United States,Helen,helen,38.8997145,-77.0485992,George Washington University,edu,ce9e1dfa7705623bb67df3a91052062a0a0ca456,citation,https://arxiv.org/pdf/1611.05507.pdf,Deep Feature Interpolation for Image Content Changes,2017 +88,China,Helen,helen,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,2d294bde112b892068636f3a48300b3c033d98da,citation,https://arxiv.org/pdf/1808.01558.pdf,Deep Multi-Center Learning for Face Alignment,2018 +89,China,Helen,helen,31.2284923,121.40211389,East China Normal University,edu,2d294bde112b892068636f3a48300b3c033d98da,citation,https://arxiv.org/pdf/1808.01558.pdf,Deep Multi-Center Learning for Face Alignment,2018 +90,China,Helen,helen,23.09461185,113.28788994,Sun Yat-Sen University,edu,30cd39388b5c1aae7d8153c0ab9d54b61b474ffe,citation,https://arxiv.org/pdf/1510.09083.pdf,Deep Recurrent Regression for 
Facial Landmark Detection,2018 +91,Singapore,Helen,helen,1.2962018,103.77689944,National University of Singapore,edu,30cd39388b5c1aae7d8153c0ab9d54b61b474ffe,citation,https://arxiv.org/pdf/1510.09083.pdf,Deep Recurrent Regression for Facial Landmark Detection,2018 +92,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,0209389b8369aaa2a08830ac3b2036d4901ba1f1,citation,https://arxiv.org/pdf/1612.01202.pdf,DenseReg: Fully Convolutional Dense Shape Regression In-the-Wild,2017 +93,United Kingdom,Helen,helen,51.5231607,-0.1282037,University College London,edu,0209389b8369aaa2a08830ac3b2036d4901ba1f1,citation,https://arxiv.org/pdf/1612.01202.pdf,DenseReg: Fully Convolutional Dense Shape Regression In-the-Wild,2017 +94,United States,Helen,helen,42.7298459,-73.67950216,Rensselaer Polytechnic Institute,edu,191d30e7e7360d565b0c1e2814b5bcbd86a11d41,citation,http://homepages.rpi.edu/~wuy9/DiscriminativeDeepFaceShape/DiscriminativeDeepFaceShape_IJCV.pdf,Discriminative Deep Face Shape Model for Facial Point Detection,2014 +95,United States,Helen,helen,39.2899685,-76.62196103,University of Maryland,edu,ceeb67bf53ffab1395c36f1141b516f893bada27,citation,https://arxiv.org/pdf/1601.07950.pdf,Face Alignment by Local Deep Descriptor Regression,2016 +96,United States,Helen,helen,40.47913175,-74.43168868,Rutgers University,edu,ceeb67bf53ffab1395c36f1141b516f893bada27,citation,https://arxiv.org/pdf/1601.07950.pdf,Face Alignment by Local Deep Descriptor Regression,2016 +97,United States,Helen,helen,43.1576969,-77.58829158,University of Rochester,edu,beb8d7c128ccbdc6b63959a763ebc505a5313c06,citation,https://arxiv.org/pdf/1812.03252.pdf,Face Completion with Semantic Knowledge and Collaborative Adversarial Learning,2018 +98,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,beb8d7c128ccbdc6b63959a763ebc505a5313c06,citation,https://arxiv.org/pdf/1812.03252.pdf,Face Completion with Semantic Knowledge and Collaborative Adversarial Learning,2018 +99,United Kingdom,Helen,helen,51.24303255,-0.59001382,University of Surrey,edu,438e7999c937b94f0f6384dbeaa3febff6d283b6,citation,https://arxiv.org/pdf/1705.02402.pdf,"Face Detection, Bounding Box Aggregation and Pose Estimation for Robust Facial Landmark Localisation in the Wild",2017 +100,China,Helen,helen,31.4854255,120.2739581,Jiangnan University,edu,438e7999c937b94f0f6384dbeaa3febff6d283b6,citation,https://arxiv.org/pdf/1705.02402.pdf,"Face Detection, Bounding Box Aggregation and Pose Estimation for Robust Facial Landmark Localisation in the Wild",2017 +101,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,84e6669b47670f9f4f49c0085311dce0e178b685,citation,https://arxiv.org/pdf/1502.00852.pdf,Face frontalization for Alignment and Recognition,2015 +102,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,84e6669b47670f9f4f49c0085311dce0e178b685,citation,https://arxiv.org/pdf/1502.00852.pdf,Face frontalization for Alignment and Recognition,2015 +103,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,2f7aa942313b1eb12ebfab791af71d0a3830b24c,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2015feature.pdf,Feature-Based Lucas–Kanade and Active Appearance Models,2015 +104,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,2f7aa942313b1eb12ebfab791af71d0a3830b24c,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2015feature.pdf,Feature-Based Lucas–Kanade and Active Appearance Models,2015 +105,United 
Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,1c1a98df3d0d5e2034ea723994bdc85af45934db,citation,http://www.cs.nott.ac.uk/~pszmv/Documents/ICCV-300w_cameraready.pdf,Guided Unsupervised Learning of Mode Specific Models for Facial Point Detection in the Wild,2013 +106,China,Helen,helen,22.4162632,114.2109318,Chinese University of Hong Kong,edu,f070d739fb812d38571ec77490ccd8777e95ce7a,citation,https://zhzhanp.github.io/papers/PR2015.pdf,Hierarchical facial landmark localization via cascaded random binary patterns,2015 +107,China,Helen,helen,22.53521465,113.9315911,Shenzhen University,edu,f070d739fb812d38571ec77490ccd8777e95ce7a,citation,https://zhzhanp.github.io/papers/PR2015.pdf,Hierarchical facial landmark localization via cascaded random binary patterns,2015 +108,United States,Helen,helen,34.0224149,-118.28634407,University of Southern California,edu,87e6cb090aecfc6f03a3b00650a5c5f475dfebe1,citation,https://pdfs.semanticscholar.org/87e6/cb090aecfc6f03a3b00650a5c5f475dfebe1.pdf,Holistically Constrained Local Model: Going Beyond Frontal Poses for Facial Landmark Detection,2016 +109,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,87e6cb090aecfc6f03a3b00650a5c5f475dfebe1,citation,https://pdfs.semanticscholar.org/87e6/cb090aecfc6f03a3b00650a5c5f475dfebe1.pdf,Holistically Constrained Local Model: Going Beyond Frontal Poses for Facial Landmark Detection,2016 +110,Singapore,Helen,helen,1.2962018,103.77689944,National University of Singapore,edu,0ea7b7fff090c707684fd4dc13e0a8f39b300a97,citation,https://arxiv.org/pdf/1711.06055.pdf,Integrated Face Analytics Networks through Cross-Dataset Hybrid Training,2017 +111,China,Helen,helen,39.9586652,116.30971281,Beijing Institute of Technology,edu,0ea7b7fff090c707684fd4dc13e0a8f39b300a97,citation,https://arxiv.org/pdf/1711.06055.pdf,Integrated Face Analytics Networks through Cross-Dataset Hybrid Training,2017 +112,China,Helen,helen,23.0490047,113.3971571,South China University of China,edu,7d7be6172fc2884e1da22d1e96d5899a29831ad2,citation,https://arxiv.org/pdf/1703.01605.pdf,L2GSCI: Local to Global Seam Cutting and Integrating for Accurate Face Contour Extraction,2017 +113,China,Helen,helen,22.46935655,114.19474194,Education University of Hong Kong,edu,7d7be6172fc2884e1da22d1e96d5899a29831ad2,citation,https://arxiv.org/pdf/1703.01605.pdf,L2GSCI: Local to Global Seam Cutting and Integrating for Accurate Face Contour Extraction,2017 +114,United States,Helen,helen,34.0224149,-118.28634407,University of Southern California,edu,d28d32af7ef9889ef9cb877345a90ea85e70f7f1,citation,http://multicomp.cs.cmu.edu/wp-content/uploads/2017/10/2017_FG_Kim_Local.pdf,Local-Global Landmark Confidences for Face Recognition,2017 +115,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,d28d32af7ef9889ef9cb877345a90ea85e70f7f1,citation,http://multicomp.cs.cmu.edu/wp-content/uploads/2017/10/2017_FG_Kim_Local.pdf,Local-Global Landmark Confidences for Face Recognition,2017 +116,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,303a7099c01530fa0beb197eb1305b574168b653,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Zhang_Occlusion-Free_Face_Alignment_CVPR_2016_paper.pdf,Occlusion-Free Face Alignment: Deep Regression Networks Coupled with De-Corrupt AutoEncoders,2016 +117,China,Helen,helen,39.9082804,116.2458527,University of Chinese Academy of 
Sciences,edu,303a7099c01530fa0beb197eb1305b574168b653,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Zhang_Occlusion-Free_Face_Alignment_CVPR_2016_paper.pdf,Occlusion-Free Face Alignment: Deep Regression Networks Coupled with De-Corrupt AutoEncoders,2016 +118,Sweden,Helen,helen,59.34986645,18.07063213,"KTH Royal Institute of Technology, Stockholm",edu,1824b1ccace464ba275ccc86619feaa89018c0ad,citation,http://www.csc.kth.se/~vahidk/face/KazemiCVPR14.pdf,One millisecond face alignment with an ensemble of regression trees,2014 +119,United States,Helen,helen,35.3103441,-80.73261617,University of North Carolina at Charlotte,edu,89002a64e96a82486220b1d5c3f060654b24ef2a,citation,http://research.rutgers.edu/~shaoting/paper/ICCV15_face.pdf,PIEFA: Personalized Incremental and Ensemble Face Alignment,2015 +120,United States,Helen,helen,45.57022705,-122.63709346,Concordia University,edu,6d0fe30444c6f4e4db3ad8b02fb2c87e2b33c58d,citation,https://arxiv.org/pdf/1607.00659.pdf,Robust Deep Appearance Models,2016 +121,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,6d0fe30444c6f4e4db3ad8b02fb2c87e2b33c58d,citation,https://arxiv.org/pdf/1607.00659.pdf,Robust Deep Appearance Models,2016 +122,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,7fcfd72ba6bc14bbb90b31fe14c2c77a8b220ab2,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/He_Robust_FEC-CNN_A_CVPR_2017_paper.pdf,Robust FEC-CNN: A High Accuracy Facial Landmark Detection System,2017 +123,China,Helen,helen,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,7fcfd72ba6bc14bbb90b31fe14c2c77a8b220ab2,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/He_Robust_FEC-CNN_A_CVPR_2017_paper.pdf,Robust FEC-CNN: A High Accuracy Facial Landmark Detection System,2017 +124,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,788a7b59ea72e23ef4f86dc9abb4450efefeca41,citation,http://eprints.eemcs.utwente.nl/26840/01/Pantic_Robust_Statistical_Face_Frontalization.pdf,Robust Statistical Face Frontalization,2015 +125,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,788a7b59ea72e23ef4f86dc9abb4450efefeca41,citation,http://eprints.eemcs.utwente.nl/26840/01/Pantic_Robust_Statistical_Face_Frontalization.pdf,Robust Statistical Face Frontalization,2015 +126,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,7cdf3bc1de6c7948763c0c2dfa4384dcbd3677a0,citation,http://eprints.eemcs.utwente.nl/27129/01/sagonas2016robust.pdf,Robust Statistical Frontalization of Human and Animal Faces,2016 +127,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,7cdf3bc1de6c7948763c0c2dfa4384dcbd3677a0,citation,http://eprints.eemcs.utwente.nl/27129/01/sagonas2016robust.pdf,Robust Statistical Frontalization of Human and Animal Faces,2016 +128,United States,Helen,helen,40.47913175,-74.43168868,Rutgers University,edu,04ff69aa20da4eeccdabbe127e3641b8e6502ec0,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2016_workshops/w28/papers/Peng_Sequential_Face_Alignment_CVPR_2016_paper.pdf,Sequential Face Alignment via Person-Specific Modeling in the Wild,2016 +129,United States,Helen,helen,32.7283683,-97.11201835,University of Texas at Arlington,edu,04ff69aa20da4eeccdabbe127e3641b8e6502ec0,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2016_workshops/w28/papers/Peng_Sequential_Face_Alignment_CVPR_2016_paper.pdf,Sequential Face Alignment via Person-Specific 
Modeling in the Wild,2016 +130,United States,Helen,helen,40.47913175,-74.43168868,Rutgers University,edu,c8ca6a2dc41516c16ea0747e9b3b7b1db788dbdd,citation,https://arxiv.org/pdf/1609.02825.pdf,Track Facial Points in Unconstrained Videos,2016 +131,United States,Helen,helen,32.7298718,-97.1140116,The University of Texas at Arlington,edu,c8ca6a2dc41516c16ea0747e9b3b7b1db788dbdd,citation,https://arxiv.org/pdf/1609.02825.pdf,Track Facial Points in Unconstrained Videos,2016 +132,China,Helen,helen,22.4162632,114.2109318,Chinese University of Hong Kong,edu,433a6d6d2a3ed8a6502982dccc992f91d665b9b3,citation,https://arxiv.org/pdf/1409.0602.pdf,Transferring Landmark Annotations for Cross-Dataset Face Alignment.,2014 +133,China,Helen,helen,40.00229045,116.32098908,Tsinghua University,edu,433a6d6d2a3ed8a6502982dccc992f91d665b9b3,citation,https://arxiv.org/pdf/1409.0602.pdf,Transferring Landmark Annotations for Cross-Dataset Face Alignment.,2014 +134,Canada,Helen,helen,49.8091536,-97.13304179,University of Manitoba,edu,3bf249f716a384065443abc6172f4bdef88738d9,citation,https://arxiv.org/pdf/1812.01063.pdf,A Hybrid Instance-based Transfer Learning Method,2018 +135,United States,Helen,helen,40.47913175,-74.43168868,Rutgers University,edu,afdf9a3464c3b015f040982750f6b41c048706f5,citation,https://arxiv.org/pdf/1608.05477.pdf,A Recurrent Encoder-Decoder Network for Sequential Face Alignment,2016 +136,South Korea,Helen,helen,37.26728,126.9841151,Seoul National University,edu,b4362cd87ad219790800127ddd366cc465606a78,citation,https://pdfs.semanticscholar.org/b436/2cd87ad219790800127ddd366cc465606a78.pdf,A Smartphone-Based Automatic Diagnosis System for Facial Nerve Palsy,2015 +137,Canada,Helen,helen,43.66333345,-79.39769975,University of Toronto,edu,3a54b23cdbd159bb32c39c3adcba8229e3237e56,citation,https://arxiv.org/pdf/1805.12302.pdf,Adversarial Attacks on Face Detectors Using Neural Net Based Constrained Optimization,2018 +138,United States,Helen,helen,32.8800604,-117.2340135,University of California San Diego,edu,3ac0aefb379dedae4a6054e649e98698b3e5fb82,citation,https://arxiv.org/pdf/1802.02137.pdf,An Occluded Stacked Hourglass Approach to Facial Landmark Localization and Occlusion Estimation,2017 +139,United Kingdom,Helen,helen,53.8066815,-1.5550328,The University of Leeds,edu,c5ea084531212284ce3f1ca86a6209f0001de9d1,citation,https://pdfs.semanticscholar.org/c5ea/084531212284ce3f1ca86a6209f0001de9d1.pdf,Audio-visual speech processing for multimedia localisation,2016 +140,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,06c2dfe1568266ad99368fc75edf79585e29095f,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/joan_cvpr2014.pdf,Bayesian Active Appearance Models,2014 +141,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,ccf16bcf458e4d7a37643b8364594656287f5bfc,citation,https://pdfs.semanticscholar.org/ccf1/6bcf458e4d7a37643b8364594656287f5bfc.pdf,Cascade for Landmark Guided Semantic Part Segmentation,2016 +142,China,Helen,helen,31.4854255,120.2739581,Jiangnan University,edu,60824ee635777b4ee30fcc2485ef1e103b8e7af9,citation,http://epubs.surrey.ac.uk/808177/1/Feng-TIP-2015.pdf,Cascaded Collaborative Regression for Robust Facial Landmark Detection Trained Using a Mixture of Synthetic and Real Images With Dynamic Weighting,2015 +143,United Kingdom,Helen,helen,51.2421839,-0.5905421,University of Surrey Guildford,edu,60824ee635777b4ee30fcc2485ef1e103b8e7af9,citation,http://epubs.surrey.ac.uk/808177/1/Feng-TIP-2015.pdf,Cascaded Collaborative Regression 
for Robust Facial Landmark Detection Trained Using a Mixture of Synthetic and Real Images With Dynamic Weighting,2015 +144,China,Helen,helen,22.304572,114.17976285,Hong Kong Polytechnic University,edu,4836b084a583d2e794eb6a94982ea30d7990f663,citation,https://arxiv.org/pdf/1611.06642.pdf,Cascaded Face Alignment via Intimacy Definition Feature,2017 +145,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,72a1852c78b5e95a57efa21c92bdc54219975d8f,citation,http://eprints.nottingham.ac.uk/31303/1/prl_blockwise_SDM.pdf,Cascaded regression with sparsified feature covariance matrix for facial landmark detection,2016 +146,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,4140498e96a5ff3ba816d13daf148fffb9a2be3f,citation,http://multicomp.cs.cmu.edu/wp-content/uploads/2017/10/2017_FG_Li_Constrained.pdf,Constrained Ensemble Initialization for Facial Landmark Tracking in Video,2017 +147,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,963d0d40de8780161b70d28d2b125b5222e75596,citation,https://arxiv.org/pdf/1611.08657.pdf,Convolutional Experts Constrained Local Model for Facial Landmark Detection,2017 +148,United States,Helen,helen,32.87935255,-117.23110049,"University of California, San Diego",edu,ee418372b0038bd3b8ae82bd1518d5c01a33a7ec,citation,https://pdfs.semanticscholar.org/ee41/8372b0038bd3b8ae82bd1518d5c01a33a7ec.pdf,CSE 255 Winter 2015 Assignment 1 : Eye Detection using Histogram of Oriented Gradients and Adaboost Classifier,2015 +149,Poland,Helen,helen,52.22165395,21.00735776,Warsaw University of Technology,edu,f27b8b8f2059248f77258cf8595e9434cf0b0228,citation,https://arxiv.org/pdf/1706.01789.pdf,Deep Alignment Network: A Convolutional Neural Network for Robust Face Alignment,2017 +150,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,a0b1990dd2b4cd87e4fd60912cc1552c34792770,citation,https://pdfs.semanticscholar.org/a0b1/990dd2b4cd87e4fd60912cc1552c34792770.pdf,Deep Constrained Local Models for Facial Landmark Detection,2016 +151,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,38cbb500823057613494bacd0078aa0e57b30af8,citation,https://arxiv.org/pdf/1704.08772.pdf,Deep Face Deblurring,2017 +152,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,9b8f7a6850d991586b7186f0bb7e424924a9fd74,citation,https://ibug.doc.ic.ac.uk/media/uploads/documents/disentangling-modes-variation.pdf,Disentangling the Modes of Variation in Unlabelled Data,2018 +153,China,Helen,helen,30.642769,104.06751175,"Sichuan University, Chengdu",edu,b29b42f7ab8d25d244bfc1413a8d608cbdc51855,citation,https://arxiv.org/pdf/1702.02719.pdf,Effective face landmark localization via single deep network,2017 +154,China,Helen,helen,22.304572,114.17976285,Hong Kong Polytechnic University,edu,4cfa8755fe23a8a0b19909fa4dec54ce6c1bd2f7,citation,https://arxiv.org/pdf/1611.09956.pdf,Efficient likelihood Bayesian constrained local model,2017 +155,China,Helen,helen,39.9601488,116.35193921,Beijing University of Posts and Telecommunications,edu,5c820e47981d21c9dddde8d2f8020146e600368f,citation,https://pdfs.semanticscholar.org/5c82/0e47981d21c9dddde8d2f8020146e600368f.pdf,Extended Supervised Descent Method for Robust Face Alignment,2014 +156,China,Helen,helen,32.0565957,118.77408833,Nanjing University,edu,f633d6dc02b2e55eb24b89f2b8c6df94a2de86dd,citation,http://parnec.nuaa.edu.cn/pubs/xiaoyang%20tan/journal/2016/JXPR-2016.pdf,Face alignment by robust discriminative Hough 
voting,2016 +157,Romania,Helen,helen,46.7723581,23.5852075,Technical University,edu,f0ae807627f81acb63eb5837c75a1e895a92c376,citation,https://pdfs.semanticscholar.org/f0ae/807627f81acb63eb5837c75a1e895a92c376.pdf,Facial Landmark Detection using Ensemble of Cascaded Regressions,2016 +158,Czech Republic,Helen,helen,50.0764296,14.41802312,Czech Technical University,edu,37c8514df89337f34421dc27b86d0eb45b660a5e,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Uricar_Facial_Landmark_Tracking_ICCV_2015_paper.pdf,Facial Landmark Tracking by Tree-Based Deformable Part Model Based Detector,2015 +159,China,Helen,helen,32.0565957,118.77408833,Nanjing University,edu,5b0bf1063b694e4b1575bb428edb4f3451d9bf04,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Yang_Facial_Shape_Tracking_ICCV_2015_paper.pdf,Facial Shape Tracking via Spatio-Temporal Cascade Shape Regression,2015 +160,Switzerland,Helen,helen,47.376313,8.5476699,ETH Zurich,edu,a66d89357ada66d98d242c124e1e8d96ac9b37a0,citation,https://arxiv.org/pdf/1608.06451.pdf,Failure Detection for Facial Landmark Detectors,2016 +161,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,f1b4583c576d6d8c661b4b2c82bdebf3ba3d7e53,citation,https://arxiv.org/pdf/1707.05653.pdf,Faster than Real-Time Facial Alignment: A 3D Spatial Transformer Network Approach in Unconstrained Poses,2017 +162,United Kingdom,Helen,helen,51.24303255,-0.59001382,University of Surrey,edu,70a69569ba61f3585cd90c70ca5832e838fa1584,citation,https://pdfs.semanticscholar.org/70a6/9569ba61f3585cd90c70ca5832e838fa1584.pdf,Friendly Faces: Weakly Supervised Character Identification,2014 +163,United States,Helen,helen,37.36566745,-120.42158888,"University of California, Merced",edu,f0a4a3fb6997334511d7b8fc090f9ce894679faf,citation,https://arxiv.org/pdf/1704.05838.pdf,Generative Face Completion,2017 +164,United States,Helen,helen,28.0599999,-82.41383619,University of South Florida,edu,ba21fd28003994480f713b0a1276160fea2e89b5,citation,https://pdfs.semanticscholar.org/ba21/fd28003994480f713b0a1276160fea2e89b5.pdf,Identification of Individuals from Ears in Real World Conditions,2018 +165,United States,Helen,helen,28.59899755,-81.19712501,University of Central Florida,edu,a40edf6eb979d1ddfe5894fac7f2cf199519669f,citation,https://arxiv.org/pdf/1704.08740.pdf,Improving Facial Attribute Prediction Using Semantic Segmentation,2017 +166,Germany,Helen,helen,48.263011,11.666857,Technical University of Munich,edu,e6178de1ef15a6a973aad2791ce5fbabc2cb8ae5,citation,https://pdfs.semanticscholar.org/e617/8de1ef15a6a973aad2791ce5fbabc2cb8ae5.pdf,Improving Facial Landmark Detection via a Super-Resolution Inception Network,2017 +167,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,9ca0626366e136dac6bfd628cec158e26ed959c7,citation,https://arxiv.org/pdf/1811.02194.pdf,In-the-wild Facial Expression Recognition in Extreme Poses,2017 +168,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,500b92578e4deff98ce20e6017124e6d2053b451,citation,http://eprints.eemcs.utwente.nl/25818/01/Pantic_Incremental_Face_Alignment_in_the_Wild.pdf,Incremental Face Alignment in the Wild,2014 +169,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,500b92578e4deff98ce20e6017124e6d2053b451,citation,http://eprints.eemcs.utwente.nl/25818/01/Pantic_Incremental_Face_Alignment_in_the_Wild.pdf,Incremental Face Alignment in the Wild,2014 
+170,China,Helen,helen,40.00229045,116.32098908,Tsinghua University,edu,8dd162c9419d29564e9777dd523382a20c683d89,citation,https://arxiv.org/pdf/1806.02479.pdf,Interlinked Convolutional Neural Networks for Face Parsing,2015 +171,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,2c14c3bb46275da5706c466f9f51f4424ffda914,citation,http://braismartinez.com/media/documents/2015ivc_-_l21-based_regression_and_prediction_accumulation_across_views_for_robust_facial_landmark_detection.pdf,"L2, 1-based regression and prediction accumulation across views for robust facial landmark detection",2016 +172,China,Helen,helen,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,c00f402b9cfc3f8dd2c74d6b3552acbd1f358301,citation,https://arxiv.org/pdf/1608.00207.pdf,Learning deep representation from coarse to fine for face alignment,2016 +173,China,Helen,helen,31.83907195,117.26420748,University of Science and Technology of China,edu,b5f79df712ad535d88ae784a617a30c02e0551ca,citation,http://staff.ustc.edu.cn/~juyong/Papers/FaceAlignment-2015.pdf,Locating Facial Landmarks Using Probabilistic Random Forest,2015 +174,United Kingdom,Helen,helen,52.3793131,-1.5604252,University of Warwick,edu,0bc53b338c52fc635687b7a6c1e7c2b7191f42e5,citation,https://pdfs.semanticscholar.org/a32a/8d6d4c3b4d69544763be48ffa7cb0d7f2f23.pdf,Loglet SIFT for Part Description in Deformable Part Models: Application to Face Alignment,2016 +175,United Kingdom,Helen,helen,53.4717306,-2.2399239,Manchester Metropolitan University,edu,6fd4048bfe3123e94c2648e53a56bc6bf8ff4cdd,citation,https://pdfs.semanticscholar.org/6fd4/048bfe3123e94c2648e53a56bc6bf8ff4cdd.pdf,Micro-facial movement detection using spatio-temporal features,2016 +176,United Kingdom,Helen,helen,51.5247272,-0.03931035,Queen Mary University of London,edu,0f81b0fa8df5bf3fcfa10f20120540342a0c92e5,citation,https://arxiv.org/pdf/1501.05152.pdf,"Mirror, mirror on the wall, tell me, is the error small?",2015 +177,South Africa,Helen,helen,-33.95828745,18.45997349,University of Cape Town,edu,36e8ef2e5d52a78dddf0002e03918b101dcdb326,citation,http://www.milbo.org/stasm-files/multiview-active-shape-models-with-sift-for-300w.pdf,Multiview Active Shape Models with SIFT Descriptors for the 300-W Face Landmark Challenge,2013 +178,United States,Helen,helen,40.51865195,-74.44099801,State University of New Jersey,edu,bbc5f4052674278c96abe7ff9dc2d75071b6e3f3,citation,https://pdfs.semanticscholar.org/287b/7baff99d6995fd5852002488eb44659be6c1.pdf,Nonlinear Hierarchical Part-Based Regression for Unconstrained Face Alignment,2016 +179,United States,Helen,helen,33.6404952,-117.8442962,University of California at Irvine,edu,bd13f50b8997d0733169ceba39b6eb1bda3eb1aa,citation,https://arxiv.org/pdf/1506.08347.pdf,Occlusion Coherence: Detecting and Localizing Occluded Faces,2015 +180,United States,Helen,helen,33.6404952,-117.8442962,University of California Irvine,edu,65126e0b1161fc8212643b8ff39c1d71d262fbc1,citation,http://vision.ics.uci.edu/papers/GhiasiF_CVPR_2014/GhiasiF_CVPR_2014.pdf,Occlusion Coherence: Localizing Occluded Faces with a Hierarchical Deformable Part Model,2014 +181,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,4a8480d58c30dc484bda08969e754cd13a64faa1,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/paper_offline.pdf,Offline Deformable Face Tracking in Arbitrary Videos,2015 +182,Germany,Helen,helen,52.14005065,11.64471248,Otto von Guericke 
University,edu,7d1688ce0b48096e05a66ead80e9270260cb8082,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w44/Saxen_Real_vs._Fake_ICCV_2017_paper.pdf,Real vs. Fake Emotion Challenge: Learning to Rank Authenticity from Facial Activity Descriptors,2017 +183,United Kingdom,Helen,helen,51.24303255,-0.59001382,University of Surrey,edu,3c6cac7ecf546556d7c6050f7b693a99cc8a57b3,citation,https://pdfs.semanticscholar.org/3c6c/ac7ecf546556d7c6050f7b693a99cc8a57b3.pdf,Robust facial landmark detection in the wild,2016 +184,Germany,Helen,helen,53.8338371,10.7035939,Institute of Systems and Robotics,edu,4a04d4176f231683fd68ccf0c76fcc0c44d05281,citation,http://home.isr.uc.pt/~pedromartins/Publications/pmartins_icip2018.pdf,Simultaneous Cascaded Regression,2018 +185,United States,Helen,helen,34.0224149,-118.28634407,University of Southern California,edu,11fc332bdcc843aad7475bb4566e73a957dffda5,citation,https://arxiv.org/pdf/1805.03356.pdf,SPG-Net: Segmentation Prediction and Guidance Network for Image Inpainting,2018 +186,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,d140c5add2cddd4a572f07358d666fe00e8f4fe1,citation,https://pdfs.semanticscholar.org/d140/c5add2cddd4a572f07358d666fe00e8f4fe1.pdf,Statistically Learned Deformable Eye Models,2014 +187,Australia,Helen,helen,-33.8809651,151.20107299,University of Technology Sydney,edu,77875d6e4d8c7ed3baeb259fd5696e921f59d7ad,citation,https://arxiv.org/pdf/1803.04108.pdf,Style Aggregated Network for Facial Landmark Detection,2018 +188,Germany,Helen,helen,50.7791703,6.06728733,RWTH Aachen University,edu,d32b155138dafd0a9099980eceec6081ab51b861,citation,https://arxiv.org/pdf/1902.03459.pdf,Super-realtime facial landmark detection and shape fitting by deep regression of shape model parameters,2019 +189,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,59d8fa6fd91cdb72cd0fa74c04016d79ef5a752b,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/Zafeiriou_The_Menpo_Facial_CVPR_2017_paper.pdf,The Menpo Facial Landmark Localisation Challenge: A Step Towards the Solution,2017 +190,Sweden,Helen,helen,55.7039571,13.1902011,Lund University,edu,995d55fdf5b6fe7fb630c93a424700d4bc566104,citation,http://openaccess.thecvf.com/content_iccv_2015/papers/Nilsson_The_One_Triangle_ICCV_2015_paper.pdf,The One Triangle Three Parallelograms Sampling Strategy and Its Application in Shape Regression,2015 +191,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,671bfefb22d2044ab3e4402703bb88a10a7da78a,citation,https://arxiv.org/pdf/1811.03492.pdf,Triple consistency loss for pairing distributions in GAN-based face synthesis.,2018 +192,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,5c124b57699be19cd4eb4e1da285b4a8c84fc80d,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Zhao_Unified_Face_Analysis_2014_CVPR_paper.pdf,Unified Face Analysis by Iterative Multi-output Random Forests,2014 +193,France,Helen,helen,49.3849757,1.0683257,"INSA Rouen, France",edu,891b10c4b3b92ca30c9b93170ec9abd71f6099c4,citation,https://pdfs.semanticscholar.org/891b/10c4b3b92ca30c9b93170ec9abd71f6099c4.pdf,2 New Statement for Structured Output Regression Problems,2015 +194,France,Helen,helen,49.4583047,1.0688892,Rouen University,edu,891b10c4b3b92ca30c9b93170ec9abd71f6099c4,citation,https://pdfs.semanticscholar.org/891b/10c4b3b92ca30c9b93170ec9abd71f6099c4.pdf,2 New Statement for Structured Output Regression Problems,2015 +195,United 
Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,e4754afaa15b1b53e70743880484b8d0736990ff,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885616000147-main.pdf,300 Faces In-The-Wild Challenge: database and results,2016 +196,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,e4754afaa15b1b53e70743880484b8d0736990ff,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885616000147-main.pdf,300 Faces In-The-Wild Challenge: database and results,2016 +197,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,e4754afaa15b1b53e70743880484b8d0736990ff,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885616000147-main.pdf,300 Faces In-The-Wild Challenge: database and results,2016 +198,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,303065c44cf847849d04da16b8b1d9a120cef73a,citation,https://arxiv.org/pdf/1701.05360.pdf,"3D Face Morphable Models ""In-the-Wild""",2017 +199,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,2e3d081c8f0e10f138314c4d2c11064a981c1327,citation,https://arxiv.org/pdf/1603.06015.pdf,A Comprehensive Performance Evaluation of Deformable Face Tracking “In-the-Wild”,2017 +200,Italy,Helen,helen,40.3515155,18.1750161,"National Research Council of Italy, Institute of Applied Sciences and Intelligent Systems, Lecce, Italy",edu,6e38011e38a1c893b90a48e8f8eae0e22d2008e8,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w22/Del_Coco_A_Computer_Vision_ICCV_2017_paper.pdf,A Computer Vision Based Approach for Understanding Emotional Involvements in Children with Autism Spectrum Disorders,2017 +201,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,131e395c94999c55c53afead65d81be61cd349a4,citation,https://arxiv.org/pdf/1612.02203.pdf,A Functional Regression Approach to Facial Landmark Tracking,2018 +202,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,131e395c94999c55c53afead65d81be61cd349a4,citation,https://arxiv.org/pdf/1612.02203.pdf,A Functional Regression Approach to Facial Landmark Tracking,2018 +203,France,Helen,helen,49.4583047,1.0688892,Normandie University,edu,2df4d05119fe3fbf1f8112b3ad901c33728b498a,citation,https://pdfs.semanticscholar.org/2df4/d05119fe3fbf1f8112b3ad901c33728b498a.pdf,A regularization scheme for structured output problems : an application to facial landmark detection,2016 +204,United States,Helen,helen,40.00471095,-83.02859368,Ohio State University,edu,9993f1a7cfb5b0078f339b9a6bfa341da76a3168,citation,https://arxiv.org/pdf/1609.09058.pdf,"A Simple, Fast and Highly-Accurate Algorithm to Recover 3D Shape from 2D Landmarks on a Single Image",2018 +205,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,5f5906168235613c81ad2129e2431a0e5ef2b6e4,citation,https://arxiv.org/pdf/1601.00199.pdf,A Unified Framework for Compositional Fitting of Active Appearance Models,2016 +206,France,Helen,helen,49.4583047,1.0688892,Rouen University,edu,0b0958493e43ca9c131315bcfb9a171d52ecbb8a,citation,https://pdfs.semanticscholar.org/0b09/58493e43ca9c131315bcfb9a171d52ecbb8a.pdf,A Unified Neural Based Model for Structured Output Problems,2015 +207,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,b730908bc1f80b711c031f3ea459e4de09a3d324,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/tifs_aoms.pdf,Active Orientation Models for Face Alignment In-the-Wild,2014 
+208,United Kingdom,Helen,helen,53.22853665,-0.54873472,University of Lincoln,edu,b730908bc1f80b711c031f3ea459e4de09a3d324,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/tifs_aoms.pdf,Active Orientation Models for Face Alignment In-the-Wild,2014 +209,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,293ade202109c7f23637589a637bdaed06dc37c9,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2016adaptive.pdf,Adaptive cascaded regression,2016 +210,Finland,Helen,helen,65.0592157,25.46632601,University of Oulu,edu,293ade202109c7f23637589a637bdaed06dc37c9,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2016adaptive.pdf,Adaptive cascaded regression,2016 +211,Australia,Helen,helen,-34.920603,138.6062277,Adelaide University,edu,45e7ddd5248977ba8ec61be111db912a4387d62f,citation,https://arxiv.org/pdf/1711.00253.pdf,Adversarial Learning of Structure-Aware Fully Convolutional Networks for Landmark Localization,2017 +212,China,Helen,helen,32.0565957,118.77408833,Nanjing University,edu,45e7ddd5248977ba8ec61be111db912a4387d62f,citation,https://arxiv.org/pdf/1711.00253.pdf,Adversarial Learning of Structure-Aware Fully Convolutional Networks for Landmark Localization,2017 +213,China,Helen,helen,32.035225,118.855317,Nanjing University of Science & Technology,edu,45e7ddd5248977ba8ec61be111db912a4387d62f,citation,https://arxiv.org/pdf/1711.00253.pdf,Adversarial Learning of Structure-Aware Fully Convolutional Networks for Landmark Localization,2017 +214,United States,Helen,helen,38.99203005,-76.9461029,University of Maryland College Park,edu,3504907a2e3c81d78e9dfe71c93ac145b1318f9c,citation,https://arxiv.org/pdf/1605.02686.pdf,An End-to-End System for Unconstrained Face Verification with Deep Convolutional Neural Networks,2015 +215,United States,Helen,helen,39.738444,-84.17918747,University of Dayton,edu,1f9ae272bb4151817866511bd970bffb22981a49,citation,https://arxiv.org/pdf/1709.03170.pdf,An Iterative Regression Approach for Face Pose Estimation from RGB Images,2017 +216,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,86c053c162c08bc3fe093cc10398b9e64367a100,citation,https://pdfs.semanticscholar.org/86c0/53c162c08bc3fe093cc10398b9e64367a100.pdf,Cascade of forests for face alignment,2015 +217,United Kingdom,Helen,helen,51.5247272,-0.03931035,Queen Mary University of London,edu,86c053c162c08bc3fe093cc10398b9e64367a100,citation,https://pdfs.semanticscholar.org/86c0/53c162c08bc3fe093cc10398b9e64367a100.pdf,Cascade of forests for face alignment,2015 +218,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,056ba488898a1a1b32daec7a45e0d550e0c51ae4,citation,https://arxiv.org/pdf/1608.01137.pdf,Cascaded Continuous Regression for Real-Time Incremental Face Tracking,2016 +219,United States,Helen,helen,43.07982815,-89.43066425,University of Wisconsin Madison,edu,2e091b311ac48c18aaedbb5117e94213f1dbb529,citation,http://pages.cs.wisc.edu/~lizhang/projects/collab-face-landmarks/SmithECCV2014.pdf,Collaborative Facial Landmark Localization for Transferring Annotations Across Datasets,2014 +220,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,faead8f2eb54c7bc33bc7d0569adc7a4c2ec4c3b,citation,https://arxiv.org/pdf/1611.10152.pdf,Combining Data-Driven and Model-Driven Methods for Robust Facial Landmark Detection,2018 +221,Canada,Helen,helen,45.3290959,-75.6619858,"National Research Council, 
Italy",edu,08ecc281cdf954e405524287ee5920e7c4fb597e,citation,https://pdfs.semanticscholar.org/08ec/c281cdf954e405524287ee5920e7c4fb597e.pdf,Computational Assessment of Facial Expression Production in ASD Children,2018 +222,United Kingdom,Helen,helen,51.5247272,-0.03931035,Queen Mary University of London,edu,dee406a7aaa0f4c9d64b7550e633d81bc66ff451,citation,https://arxiv.org/pdf/1710.01453.pdf,Content-Adaptive Sketch Portrait Generation by Decompositional Representation Learning,2017 +223,China,Helen,helen,23.09461185,113.28788994,Sun Yat-Sen University,edu,dee406a7aaa0f4c9d64b7550e633d81bc66ff451,citation,https://arxiv.org/pdf/1710.01453.pdf,Content-Adaptive Sketch Portrait Generation by Decompositional Representation Learning,2017 +224,United Kingdom,Helen,helen,52.17638955,0.14308882,University of Cambridge,edu,029b53f32079063047097fa59cfc788b2b550c4b,citation,https://pdfs.semanticscholar.org/f4e3/c42df13aeed9196647d4e3fe0f84fa725252.pdf,Continuous Conditional Neural Fields for Structured Regression,2014 +225,United States,Helen,helen,34.0224149,-118.28634407,University of Southern California,edu,029b53f32079063047097fa59cfc788b2b550c4b,citation,https://pdfs.semanticscholar.org/f4e3/c42df13aeed9196647d4e3fe0f84fa725252.pdf,Continuous Conditional Neural Fields for Structured Regression,2014 +226,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,88e2efab01e883e037a416c63a03075d66625c26,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w36/Zadeh_Convolutional_Experts_Constrained_ICCV_2017_paper.pdf,Convolutional Experts Constrained Local Model for 3D Facial Landmark Detection,2017 +227,Sweden,Helen,helen,59.34986645,18.07063213,"KTH Royal Institute of Technology, Stockholm",edu,656a59954de3c9fcf82ffcef926af6ade2f3fdb5,citation,https://pdfs.semanticscholar.org/656a/59954de3c9fcf82ffcef926af6ade2f3fdb5.pdf,Convolutional Network Representation for Visual Recognition,2017 +228,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,7360a2adcd6e3fe744b7d7aec5c08ee31094dfd4,citation,https://ibug.doc.ic.ac.uk/media/uploads/documents/deep-deformable-convolutional.pdf,Deep and Deformable: Convolutional Mixtures of Deformable Part-Based Models,2018 +229,Finland,Helen,helen,65.0592157,25.46632601,University of Oulu,edu,7360a2adcd6e3fe744b7d7aec5c08ee31094dfd4,citation,https://ibug.doc.ic.ac.uk/media/uploads/documents/deep-deformable-convolutional.pdf,Deep and Deformable: Convolutional Mixtures of Deformable Part-Based Models,2018 +230,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,5239001571bc64de3e61be0be8985860f08d7e7e,citation,https://arxiv.org/pdf/1607.06871.pdf,Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling,2016 +231,United States,Helen,helen,45.57022705,-122.63709346,Concordia University,edu,5239001571bc64de3e61be0be8985860f08d7e7e,citation,https://arxiv.org/pdf/1607.06871.pdf,Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling,2016 +232,United States,Helen,helen,37.3239177,-122.0129693,"NEC Labs, Cupertino, CA",company,61f04606528ecf4a42b49e8ac2add2e9f92c0def,citation,https://arxiv.org/pdf/1605.01014.pdf,Deep Deformation Network for Object Landmark Localization,2016 +233,France,Helen,helen,49.4583047,1.0688892,Normandie University,edu,9ca7899338129f4ba6744f801e722d53a44e4622,citation,https://arxiv.org/pdf/1504.07550.pdf,Deep neural networks regularization for structured output prediction,2018 
+234,China,Helen,helen,39.9808333,116.34101249,Beihang University,edu,5a7e62fdea39a4372e25cbbadc01d9b2204af95a,citation,http://openaccess.thecvf.com/content_cvpr_2018/papers/Miao_Direct_Shape_Regression_CVPR_2018_paper.pdf,Direct Shape Regression Networks for End-to-End Face Alignment,2018 +235,United States,Helen,helen,32.7283683,-97.11201835,University of Texas at Arlington,edu,5a7e62fdea39a4372e25cbbadc01d9b2204af95a,citation,http://openaccess.thecvf.com/content_cvpr_2018/papers/Miao_Direct_Shape_Regression_CVPR_2018_paper.pdf,Direct Shape Regression Networks for End-to-End Face Alignment,2018 +236,China,Helen,helen,34.1235825,108.83546,Xidian University,edu,5a7e62fdea39a4372e25cbbadc01d9b2204af95a,citation,http://openaccess.thecvf.com/content_cvpr_2018/papers/Miao_Direct_Shape_Regression_CVPR_2018_paper.pdf,Direct Shape Regression Networks for End-to-End Face Alignment,2018 +237,United States,Helen,helen,43.07982815,-89.43066425,University of Wisconsin Madison,edu,0eac652139f7ab44ff1051584b59f2dc1757f53b,citation,https://arxiv.org/pdf/1611.01584.pdf,Efficient Branching Cascaded Regression for Face Alignment under Significant Head Rotation,2016 +238,Brazil,Helen,helen,-13.0024602,-38.5089752,Federal University of Bahia,edu,b07582d1a59a9c6f029d0d8328414c7bef64dca0,citation,https://arxiv.org/pdf/1710.07662.pdf,Employing Fusion of Learned and Handcrafted Features for Unconstrained Ear Recognition,2018 +239,United States,Helen,helen,28.0599999,-82.41383619,University of South Florida,edu,b07582d1a59a9c6f029d0d8328414c7bef64dca0,citation,https://arxiv.org/pdf/1710.07662.pdf,Employing Fusion of Learned and Handcrafted Features for Unconstrained Ear Recognition,2018 +240,Spain,Helen,helen,41.5008957,2.111553,Autonomous University of Barcelona,edu,a40f8881a36bc01f3ae356b3e57eac84e989eef0,citation,https://arxiv.org/pdf/1703.03305.pdf,"End-to-end semantic face segmentation with conditional random fields as convolutional, recurrent and adversarial networks",2017 +241,Netherlands,Helen,helen,51.816701,5.865272,Radboud University Nijmegen,edu,a40f8881a36bc01f3ae356b3e57eac84e989eef0,citation,https://arxiv.org/pdf/1703.03305.pdf,"End-to-end semantic face segmentation with conditional random fields as convolutional, recurrent and adversarial networks",2017 +242,Spain,Helen,helen,41.40657415,2.1945341,Universitat Oberta de Catalunya,edu,a40f8881a36bc01f3ae356b3e57eac84e989eef0,citation,https://arxiv.org/pdf/1703.03305.pdf,"End-to-end semantic face segmentation with conditional random fields as convolutional, recurrent and adversarial networks",2017 +243,United States,Helen,helen,34.0224149,-118.28634407,University of Southern California,edu,49258cc3979103681848284470056956b77caf80,citation,https://5443dcab-a-62cb3a1a-s-sites.googlegroups.com/site/tuftsyuewu/epat-euclidean-perturbation.pdf?attachauth=ANoY7crlk9caZscfn0KRjed81DVoV-Ec6ZHI7txQrJiM_NBic36WKIg-ODwefcBtfgfKdS1iX28MlSXNyB7pE0D7opPjlGqxBVVa1UuIiydhFOgkXlXGfrYqSPS6749JeYWDkfvwWraRfB_CK8bu77jAEA2sIVNgaVRa_7zvmzwnstLwSUowbYC1LRc5yDt8ieT_jdEb_TuhMgR2j03BdHgyUkVjl0TXRukYHWglDOxzHAKwj0vsb4U%3D&attredirects=0,EPAT: Euclidean Perturbation Analysis and Transform - An Agnostic Data Adaptation Framework for Improving Facial Landmark Detectors,2017 +244,United States,Helen,helen,37.3307703,-121.8940951,Adobe,company,992ebd81eb448d1eef846bfc416fc929beb7d28b,citation,https://pdfs.semanticscholar.org/992e/bd81eb448d1eef846bfc416fc929beb7d28b.pdf,Exemplar-Based Face Parsing Supplementary Material,2013 +245,United 
States,Helen,helen,43.07982815,-89.43066425,University of Wisconsin Madison,edu,992ebd81eb448d1eef846bfc416fc929beb7d28b,citation,https://pdfs.semanticscholar.org/992e/bd81eb448d1eef846bfc416fc929beb7d28b.pdf,Exemplar-Based Face Parsing Supplementary Material,2013 +246,China,Helen,helen,35.86166,104.195397,"Megvii Inc. (Face++), China",company,1a8ccc23ed73db64748e31c61c69fe23c48a2bb1,citation,http://www.cv-foundation.org/openaccess/content_iccv_workshops_2013/W11/papers/Zhou_Extensive_Facial_Landmark_2013_ICCV_paper.pdf,Extensive Facial Landmark Localization with Coarse-to-Fine Convolutional Network Cascade,2013 +247,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,6d8c9a1759e7204eacb4eeb06567ad0ef4229f93,citation,https://arxiv.org/pdf/1707.05938.pdf,"Face Alignment Robust to Pose, Expressions and Occlusions",2016 +248,United States,Helen,helen,42.718568,-84.47791571,Michigan State University,edu,6d8c9a1759e7204eacb4eeb06567ad0ef4229f93,citation,https://arxiv.org/pdf/1707.05938.pdf,"Face Alignment Robust to Pose, Expressions and Occlusions",2016 +249,Poland,Helen,helen,52.22165395,21.00735776,Warsaw University of Technology,edu,eb48a58b873295d719827e746d51b110f5716d6c,citation,https://arxiv.org/pdf/1706.01820.pdf,Face Alignment Using K-Cluster Regression Forests With Weighted Splitting,2016 +250,France,Helen,helen,48.8507603,2.3412757,"Sorbonne Universités, Paris, France",edu,31e57fa83ac60c03d884774d2b515813493977b9,citation,https://arxiv.org/pdf/1703.01597.pdf,Face Alignment with Cascaded Semi-Parametric Deep Greedy Neural Forests,2018 +251,United States,Helen,helen,30.44235995,-84.29747867,Florida State University,edu,9207671d9e2b668c065e06d9f58f597601039e5e,citation,https://pdfs.semanticscholar.org/9207/671d9e2b668c065e06d9f58f597601039e5e.pdf,Face Detection Using a 3D Model on Face Keypoints,2014 +252,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,bc704680b5032eadf78c4e49f548ba14040965bf,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Trigeorgis_Face_Normals_In-The-Wild_CVPR_2017_paper.pdf,"Face Normals ""In-the-Wild"" Using Fully Convolutional Networks",2017 +253,United Kingdom,Helen,helen,51.5231607,-0.1282037,University College London,edu,bc704680b5032eadf78c4e49f548ba14040965bf,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Trigeorgis_Face_Normals_In-The-Wild_CVPR_2017_paper.pdf,"Face Normals ""In-the-Wild"" Using Fully Convolutional Networks",2017 +254,China,Helen,helen,23.09461185,113.28788994,Sun Yat-Sen University,edu,a4ce0f8cfa7d9aa343cb30b0792bb379e20ef41b,citation,https://arxiv.org/pdf/1812.03887.pdf,Facial Landmark Machines: A Backbone-Branches Architecture with Progressive Representation Learning,2018 +255,China,Helen,helen,22.2081469,114.25964115,University of Hong Kong,edu,a4ce0f8cfa7d9aa343cb30b0792bb379e20ef41b,citation,https://arxiv.org/pdf/1812.03887.pdf,Facial Landmark Machines: A Backbone-Branches Architecture with Progressive Representation Learning,2018 +256,Israel,Helen,helen,32.06932925,34.84334339,Bar-Ilan University,edu,e4f032ee301d4a4b3d598e6fa6cffbcdb9cdfdd1,citation,https://arxiv.org/pdf/1805.01760.pdf,Facial Landmark Point Localization using Coarse-to-Fine Deep Recurrent Neural Network,2018 +257,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,ebedc841a2c1b3a9ab7357de833101648281ff0e,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885615000116-main.pdf,Facial landmarking for in-the-wild images with local 
inference based on global appearance,2015 +258,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,ebedc841a2c1b3a9ab7357de833101648281ff0e,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885615000116-main.pdf,Facial landmarking for in-the-wild images with local inference based on global appearance,2015 +259,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,375435fb0da220a65ac9e82275a880e1b9f0a557,citation,http://eprints.lincoln.ac.uk/17528/7/__ddat02_staffhome_jpartridge_tzimiroTPAMI15.pdf,From Pixels to Response Maps: Discriminative Image Filtering for Face Alignment in the Wild,2015 +260,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,375435fb0da220a65ac9e82275a880e1b9f0a557,citation,http://eprints.lincoln.ac.uk/17528/7/__ddat02_staffhome_jpartridge_tzimiroTPAMI15.pdf,From Pixels to Response Maps: Discriminative Image Filtering for Face Alignment in the Wild,2015 +261,China,Helen,helen,31.30104395,121.50045497,Fudan University,edu,37381718559f767fc496cc34ceb98ff18bc7d3e1,citation,https://pdfs.semanticscholar.org/3738/1718559f767fc496cc34ceb98ff18bc7d3e1.pdf,Harnessing Synthesized Abstraction Images to Improve Facial Attribute Recognition,2018 +262,China,Helen,helen,31.19884,121.432567,Jiaotong University,edu,37381718559f767fc496cc34ceb98ff18bc7d3e1,citation,https://pdfs.semanticscholar.org/3738/1718559f767fc496cc34ceb98ff18bc7d3e1.pdf,Harnessing Synthesized Abstraction Images to Improve Facial Attribute Recognition,2018 +263,Spain,Helen,helen,42.797263,-1.6321518,Public University of Navarra,edu,8c0a47c61143ceb5bbabef403923e4bf92fb854d,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w22/Larumbe_Improved_Strategies_for_ICCV_2017_paper.pdf,Improved Strategies for HPE Employing Learning-by-Synthesis Approaches,2017 +264,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,3352426a67eabe3516812cb66a77aeb8b4df4d1b,citation,https://arxiv.org/pdf/1708.06023.pdf,Joint Multi-view Face Alignment in the Wild,2017 +265,China,Helen,helen,22.4162632,114.2109318,Chinese University of Hong Kong,edu,390f3d7cdf1ce127ecca65afa2e24c563e9db93b,citation,https://pdfs.semanticscholar.org/6e80/a3558f9170f97c103137ea2e18ddd782e8d7.pdf,Learning and Transferring Multi-task Deep Representation for Face Alignment,2014 +266,China,Helen,helen,40.00229045,116.32098908,Tsinghua University,edu,df80fed59ffdf751a20af317f265848fe6bfb9c9,citation,http://ivg.au.tsinghua.edu.cn/paper/2017_Learning%20deep%20sharable%20and%20structural%20detectors%20for%20face%20alignment.pdf,Learning Deep Sharable and Structural Detectors for Face Alignment,2017 +267,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,d9deafd9d9e60657a7f34df5f494edff546c4fb8,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Wang_Learning_the_Multilinear_CVPR_2017_paper.pdf,Learning the Multilinear Structure of Visual Data,2017 +268,Canada,Helen,helen,45.504384,-73.6128829,Polytechnique Montréal,edu,4f77a37753c03886ca9c9349723ec3bbfe4ee967,citation,http://www.cv-foundation.org/openaccess/content_iccv_workshops_2013/W11/papers/Hasan_Localizing_Facial_Keypoints_2013_ICCV_paper.pdf,"Localizing Facial Keypoints with Global Descriptor Search, Neighbour Alignment and Locally Linear Models",2013 +269,Canada,Helen,helen,43.66333345,-79.39769975,University of 
Toronto,edu,4f77a37753c03886ca9c9349723ec3bbfe4ee967,citation,http://www.cv-foundation.org/openaccess/content_iccv_workshops_2013/W11/papers/Hasan_Localizing_Facial_Keypoints_2013_ICCV_paper.pdf,"Localizing Facial Keypoints with Global Descriptor Search, Neighbour Alignment and Locally Linear Models",2013 +270,United States,Helen,helen,38.7768106,-94.9442982,Amazon,company,e7265c560b3f10013bf70aacbbf0eb4631b7e2aa,citation,https://arxiv.org/pdf/1805.10483.pdf,Look at Boundary: A Boundary-Aware Face Alignment Algorithm,2018 +271,China,Helen,helen,39.993008,116.329882,SenseTime,company,e7265c560b3f10013bf70aacbbf0eb4631b7e2aa,citation,https://arxiv.org/pdf/1805.10483.pdf,Look at Boundary: A Boundary-Aware Face Alignment Algorithm,2018 +272,China,Helen,helen,40.00229045,116.32098908,Tsinghua University,edu,e7265c560b3f10013bf70aacbbf0eb4631b7e2aa,citation,https://arxiv.org/pdf/1805.10483.pdf,Look at Boundary: A Boundary-Aware Face Alignment Algorithm,2018 +273,United States,Helen,helen,32.87935255,-117.23110049,"University of California, San Diego",edu,1b0a071450c419138432c033f722027ec88846ea,citation,http://cvrr.ucsd.edu/publications/2016/YuenMartinTrivediITSC2016.pdf,Looking at faces in a vehicle: A deep CNN based approach and evaluation,2016 +274,Iran,Helen,helen,35.704514,51.40972058,Amirkabir University of Technology,edu,6f5ce5570dc2960b8b0e4a0a50eab84b7f6af5cb,citation,https://arxiv.org/pdf/1706.06247.pdf,Low Resolution Face Recognition Using a Two-Branch Deep Convolutional Neural Network Architecture,2017 +275,United States,Helen,helen,42.3583961,-71.09567788,MIT,edu,6f5ce5570dc2960b8b0e4a0a50eab84b7f6af5cb,citation,https://arxiv.org/pdf/1706.06247.pdf,Low Resolution Face Recognition Using a Two-Branch Deep Convolutional Neural Network Architecture,2017 +276,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,47e8db3d9adb79a87c8c02b88f432f911eb45dc5,citation,https://arxiv.org/pdf/1509.05715.pdf,MAGMA: Multilevel Accelerated Gradient Mirror Descent Algorithm for Large-Scale Convex Composite Minimization,2016 +277,United Kingdom,Helen,helen,53.46600455,-2.23300881,University of Manchester,edu,daa4cfde41d37b2ab497458e331556d13dd14d0b,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Rajamanoharan_Multi-View_Constrained_Local_ICCV_2015_paper.pdf,Multi-view Constrained Local Models for Large Head Angle Facial Tracking,2015 +278,China,Helen,helen,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,d03265ea9200a993af857b473c6bf12a095ca178,citation,https://pdfs.semanticscholar.org/d032/65ea9200a993af857b473c6bf12a095ca178.pdf,Multiple deep convolutional neural networks averaging for face alignment,2015 +279,France,Helen,helen,49.3849757,1.0683257,"INSA Rouen, France",edu,0a6a25ee84fc0bf7284f41eaa6fefaa58b5b329a,citation,https://arxiv.org/pdf/1807.05292.pdf,Neural Networks Regularization Through Representation Learning,2018 +280,France,Helen,helen,49.4583047,1.0688892,"LITIS, Université de Rouen, Rouen, France",edu,0a6a25ee84fc0bf7284f41eaa6fefaa58b5b329a,citation,https://arxiv.org/pdf/1807.05292.pdf,Neural Networks Regularization Through Representation Learning,2018 +281,United Kingdom,Helen,helen,53.7641378,-2.7092453,University of Central Lancashire,edu,ef52f1e2b52fd84a7e22226ed67132c6ce47b829,citation,https://pdfs.semanticscholar.org/ef52/f1e2b52fd84a7e22226ed67132c6ce47b829.pdf,Online Eye Status Detection in the Wild with Convolutional Neural Networks,2017 +282,United 
Kingdom,Helen,helen,50.7944026,-1.0971748,Cambridge University,edu,2fda461869f84a9298a0e93ef280f79b9fb76f94,citation,http://multicomp.cs.cmu.edu/wp-content/uploads/2017/09/2016_WACV_Baltrusaitis_OpenFace.pdf,OpenFace: An open source facial behavior analysis toolkit,2016 +283,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,2fda461869f84a9298a0e93ef280f79b9fb76f94,citation,http://multicomp.cs.cmu.edu/wp-content/uploads/2017/09/2016_WACV_Baltrusaitis_OpenFace.pdf,OpenFace: An open source facial behavior analysis toolkit,2016 +284,Sweden,Helen,helen,59.34986645,18.07063213,"KTH Royal Institute of Technology, Stockholm",edu,12d8730da5aab242795bdff17b30b6e0bac82998,citation,https://arxiv.org/pdf/1411.6509.pdf,Persistent Evidence of Local Image Properties in Generic ConvNets,2015 +285,United States,Helen,helen,33.6404952,-117.8442962,UC Irvine,edu,5711400c59a162112c57e9f899147d457537f701,citation,https://pdfs.semanticscholar.org/5711/400c59a162112c57e9f899147d457537f701.pdf,Recognizing and Segmenting Objects in the Presence of Occlusion and Clutter,2016 +286,United States,Helen,helen,41.2097516,-73.8026467,IBM Research T. J. Watson Center,company,ac5d0705a9ddba29151fd539c668ba2c0d16deb6,citation,https://arxiv.org/pdf/1801.06066.pdf,RED-Net: A Recurrent Encoder–Decoder Network for Video-Based Face Alignment,2018 +287,United States,Helen,helen,40.47913175,-74.43168868,Rutgers University,edu,ac5d0705a9ddba29151fd539c668ba2c0d16deb6,citation,https://arxiv.org/pdf/1801.06066.pdf,RED-Net: A Recurrent Encoder–Decoder Network for Video-Based Face Alignment,2018 +288,Singapore,Helen,helen,1.3484104,103.68297965,Nanyang Technological University,edu,2bfccbf6f4e88a92a7b1f2b5c588b68c5fa45a92,citation,https://arxiv.org/pdf/1807.11079.pdf,ReenactGAN: Learning to Reenact Faces via Boundary Transfer,2018 +289,China,Helen,helen,39.993008,116.329882,SenseTime,company,2bfccbf6f4e88a92a7b1f2b5c588b68c5fa45a92,citation,https://arxiv.org/pdf/1807.11079.pdf,ReenactGAN: Learning to Reenact Faces via Boundary Transfer,2018 +290,Italy,Helen,helen,46.0658836,11.1159894,University of Trento,edu,f61829274cfe64b94361e54351f01a0376cd1253,citation,http://openaccess.thecvf.com/content_iccv_2015/papers/Tulyakov_Regressing_a_3D_ICCV_2015_paper.pdf,Regressing a 3D Face Shape from a Single Image,2015 +291,Singapore,Helen,helen,1.3484104,103.68297965,Nanyang Technological University,edu,4d23bb65c6772cb374fc05b1f10dedf9b43e63cf,citation,https://pdfs.semanticscholar.org/4d23/bb65c6772cb374fc05b1f10dedf9b43e63cf.pdf,Robust face alignment and partial face recognition,2016 +292,United States,Helen,helen,34.13710185,-118.12527487,California Institute of Technology,edu,2724ba85ec4a66de18da33925e537f3902f21249,citation,,Robust Face Landmark Estimation under Occlusion,2013 +293,United States,Helen,helen,47.6423318,-122.1369302,Microsoft,company,2724ba85ec4a66de18da33925e537f3902f21249,citation,,Robust Face Landmark Estimation under Occlusion,2013 +294,United States,Helen,helen,42.7298459,-73.67950216,Rensselaer Polytechnic Institute,edu,1c1f957d85b59d23163583c421755869f248ceef,citation,https://arxiv.org/pdf/1709.08127.pdf,Robust Facial Landmark Detection Under Significant Head Poses and Occlusion,2015 +295,Germany,Helen,helen,48.263011,11.666857,Technical University of Munich,edu,1121873326ab0c9f324b004aa0970a31d4f83eb8,citation,http://openaccess.thecvf.com/content_cvpr_2018/papers/Merget_Robust_Facial_Landmark_CVPR_2018_paper.pdf,Robust Facial Landmark Detection via a Fully-Convolutional Local-Global Context 
Network,2018 +296,United States,Helen,helen,42.7298459,-73.67950216,Rensselaer Polytechnic Institute,edu,c3d3d2229500c555c7a7150a8b126ef874cbee1c,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Wu_Shape_Augmented_Regression_ICCV_2015_paper.pdf,Shape Augmented Regression Method for Face Alignment,2015 +297,Canada,Helen,helen,43.66333345,-79.39769975,University of Toronto,edu,33ae696546eed070717192d393f75a1583cd8e2c,citation,https://arxiv.org/pdf/1708.08508.pdf,Subspace selection to suppress confounding source domain information in AAM transfer learning,2017 +298,Finland,Helen,helen,65.0592157,25.46632601,University of Oulu,edu,f3745aa4a723d791d3a04ddf7a5546e411226459,citation,,The Menpo Benchmark for Multi-pose 2D and 3D Facial Landmark Localisation and Tracking,2018 +299,United Kingdom,Helen,helen,51.59029705,-0.22963221,Middlesex University,edu,f3745aa4a723d791d3a04ddf7a5546e411226459,citation,,The Menpo Benchmark for Multi-pose 2D and 3D Facial Landmark Localisation and Tracking,2018 +300,United Kingdom,Helen,helen,50.7369302,-3.53647672,University of Exeter,edu,f3745aa4a723d791d3a04ddf7a5546e411226459,citation,,The Menpo Benchmark for Multi-pose 2D and 3D Facial Landmark Localisation and Tracking,2018 +301,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,f3745aa4a723d791d3a04ddf7a5546e411226459,citation,,The Menpo Benchmark for Multi-pose 2D and 3D Facial Landmark Localisation and Tracking,2018 +302,Germany,Helen,helen,49.01546,8.4257999,Fraunhofer,company,50ccc98d9ce06160cdf92aaf470b8f4edbd8b899,citation,http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W08/papers/Qu_Towards_Robust_Cascaded_2015_CVPR_paper.pdf,Towards robust cascaded regression for face alignment in the wild,2015 +303,Germany,Helen,helen,49.10184375,8.4331256,Karlsruhe Institute of Technology,edu,50ccc98d9ce06160cdf92aaf470b8f4edbd8b899,citation,http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W08/papers/Qu_Towards_Robust_Cascaded_2015_CVPR_paper.pdf,Towards robust cascaded regression for face alignment in the wild,2015 +304,Switzerland,Helen,helen,46.5184121,6.5684654,École Polytechnique Fédérale de Lausanne,edu,50ccc98d9ce06160cdf92aaf470b8f4edbd8b899,citation,http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W08/papers/Qu_Towards_Robust_Cascaded_2015_CVPR_paper.pdf,Towards robust cascaded regression for face alignment in the wild,2015 +305,Poland,Helen,helen,52.22165395,21.00735776,Warsaw University of Technology,edu,e52272f92fa553687f1ac068605f1de929efafc2,citation,https://repo.pw.edu.pl/docstore/download/WUT8aeb20bbb6964b7da1cfefbf2e370139/1-s2.0-S0952197617301227-main.pdf,Using a Probabilistic Neural Network for lip-based biometric verification,2017 +306,United States,Helen,helen,33.6404952,-117.8442962,UC Irvine,edu,397085122a5cade71ef6c19f657c609f0a4f7473,citation,https://pdfs.semanticscholar.org/db11/4901d09a07ab66bffa6986bc81303e133ae1.pdf,Using Segmentation to Predict the Absence of Occluded Parts,2015 +307,China,Helen,helen,39.980196,116.333305,"CASIA, China",edu,708f4787bec9d7563f4bb8b33834de445147133b,citation,http://openaccess.thecvf.com/content_ICCV_2017/papers/Huang_Wavelet-SRNet_A_Wavelet-Based_ICCV_2017_paper.pdf,Wavelet-SRNet: A Wavelet-Based CNN for Multi-scale Face Super Resolution,2017 +308,China,Helen,helen,40.0044795,116.370238,Chinese Academy of 
Sciences,edu,708f4787bec9d7563f4bb8b33834de445147133b,citation,http://openaccess.thecvf.com/content_ICCV_2017/papers/Huang_Wavelet-SRNet_A_Wavelet-Based_ICCV_2017_paper.pdf,Wavelet-SRNet: A Wavelet-Based CNN for Multi-scale Face Super Resolution,2017 +309,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,044d9a8c61383312cdafbcc44b9d00d650b21c70,citation,,300 Faces in-the-Wild Challenge: The First Facial Landmark Localization Challenge,2013 +310,United Kingdom,Helen,helen,53.22853665,-0.54873472,University of Lincoln,edu,044d9a8c61383312cdafbcc44b9d00d650b21c70,citation,,300 Faces in-the-Wild Challenge: The First Facial Landmark Localization Challenge,2013 +311,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,044d9a8c61383312cdafbcc44b9d00d650b21c70,citation,,300 Faces in-the-Wild Challenge: The First Facial Landmark Localization Challenge,2013 +312,Italy,Helen,helen,46.0658836,11.1159894,University of Trento,edu,b48d3694a8342b6efc18c9c9124c62406e6bf3b3,citation,,Recurrent Convolutional Shape Regression,2018 +313,United States,Helen,helen,33.9850469,-118.4694832,"Snapchat Research, Venice, CA",company,b48d3694a8342b6efc18c9c9124c62406e6bf3b3,citation,,Recurrent Convolutional Shape Regression,2018 +314,Italy,Helen,helen,40.3515155,18.1750161,"National Research Council of Italy, Institute of Applied Sciences and Intelligent Systems, Lecce, Italy",edu,523db6dee0e60a2d513759fa04aa96f2fed40ff4,citation,,Study of Mechanisms of Social Interaction Stimulation in Autism Spectrum Disorder by Assisted Humanoid Robot,2018 +315,Italy,Helen,helen,38.1937335,15.5542057,"National Research Council of Italy, Institute of Applied Sciences and Intelligent Systems, Messina, Italy",edu,523db6dee0e60a2d513759fa04aa96f2fed40ff4,citation,,Study of Mechanisms of Social Interaction Stimulation in Autism Spectrum Disorder by Assisted Humanoid Robot,2018 +316,United States,Helen,helen,37.3307703,-121.8940951,Adobe,company,95f12d27c3b4914e0668a268360948bce92f7db3,citation,https://pdfs.semanticscholar.org/95f1/2d27c3b4914e0668a268360948bce92f7db3.pdf,Interactive Facial Feature Localization,2012 +317,United States,Helen,helen,37.3936717,-122.0807262,Facebook,company,95f12d27c3b4914e0668a268360948bce92f7db3,citation,https://pdfs.semanticscholar.org/95f1/2d27c3b4914e0668a268360948bce92f7db3.pdf,Interactive Facial Feature Localization,2012 +318,United States,Helen,helen,40.11116745,-88.22587665,"University of Illinois, Urbana-Champaign",edu,95f12d27c3b4914e0668a268360948bce92f7db3,citation,https://pdfs.semanticscholar.org/95f1/2d27c3b4914e0668a268360948bce92f7db3.pdf,Interactive Facial Feature Localization,2012 +319,United States,Helen,helen,33.9832526,-118.40417,USC,edu,0a34fe39e9938ae8c813a81ae6d2d3a325600e5c,citation,https://arxiv.org/pdf/1708.07517.pdf,FacePoseNet: Making a Case for Landmark-Free Face Alignment,2017 +320,Israel,Helen,helen,32.77824165,34.99565673,Open University of Israel,edu,0a34fe39e9938ae8c813a81ae6d2d3a325600e5c,citation,https://arxiv.org/pdf/1708.07517.pdf,FacePoseNet: Making a Case for Landmark-Free Face Alignment,2017 +321,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,c46a4db7247d26aceafed3e4f38ce52d54361817,citation,https://arxiv.org/pdf/1609.09642.pdf,A CNN Cascade for Landmark Guided Semantic Part Segmentation,2016 +322,United States,Helen,helen,38.9869183,-76.9425543,"Maryland Univ., College Park, MD, 
USA",edu,59b6e9320a4e1de9216c6fc49b4b0309211b17e8,citation,https://pdfs.semanticscholar.org/59b6/e9320a4e1de9216c6fc49b4b0309211b17e8.pdf,Robust Representations for unconstrained Face Recognition and its Applications,2016 diff --git a/site/datasets/test/helen.json b/site/datasets/test/helen.json new file mode 100644 index 00000000..59065050 --- /dev/null +++ b/site/datasets/test/helen.json @@ -0,0 +1 @@ +{"id": "95f12d27c3b4914e0668a268360948bce92f7db3", "paper": {"key": "helen", "name": "Helen", "title": "Interactive Facial Feature Localization", "year": "2012", "addresses": [{"name": "Adobe", "source_name": "Adobe2", "street_adddress": "345 Park Ave, San Jose, CA 95110, USA", "lat": "37.33077030", "lng": "-121.89409510", "type": "company", "country": "United States"}]}, "citations": [{"id": "bae86526b3b0197210b64cdd95cb5aca4209c98a", "title": "Brute-Force Facial Landmark Analysis With a 140, 000-Way Classifier", "addresses": [{"name": "Carnegie Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1802.01777.pdf"], "doi": []}, {"id": "1b8541ec28564db66a08185510c8b300fa4dc793", "title": "Affine-Transformation Parameters Regression for Face Alignment", "addresses": [{"name": "National University of Defense Technology, China", "source_name": "National University of Defence Technology, Changsha 410000, China", "street_adddress": "\u56fd\u9632\u79d1\u5b66\u6280\u672f\u5927\u5b66, \u4e09\u4e00\u5927\u9053, \u5f00\u798f\u533a, \u5f00\u798f\u533a (Kaifu), \u957f\u6c99\u5e02 / Changsha, \u6e56\u5357\u7701, 410073, \u4e2d\u56fd", "lat": "28.22902090", "lng": "112.99483204", "type": "mil", "country": "China"}], "year": "2016", "pdf": [], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7328260", "http://doi.org/10.1109/LSP.2015.2499778"]}, {"id": "084bd02d171e36458f108f07265386f22b34a1ae", "title": "Face Alignment at 3000 FPS via Regressing Local Binary Features", "addresses": [{"name": "University of Science and Technology of China", "source_name": "University of Science and Technology of China", "street_adddress": "\u4e2d\u56fd\u79d1\u5b66\u6280\u672f\u5927\u5b66 \u4e1c\u6821\u533a, 96\u53f7, \u91d1\u5be8\u8def, \u6c5f\u6dee\u5316\u80a5\u5382\u5c0f\u533a, \u829c\u6e56\u8def\u8857\u9053, \u5408\u80a5\u5e02\u533a, \u5408\u80a5\u5e02, \u5b89\u5fbd\u7701, 230026, \u4e2d\u56fd", "lat": "31.83907195", "lng": "117.26420748", "type": "edu", "country": "China"}, {"name": "Microsoft", "source_name": "Microsoft Corporation, Redmond, WA, USA", "street_adddress": "One Microsoft Way, Redmond, WA 98052, USA", "lat": "47.64233180", "lng": "-122.13693020", "type": "company", "country": "United States"}], "year": "2014", "pdf": ["http://7xrqgw.com1.z0.glb.clouddn.com/3000fps.pdf", "http://research.microsoft.com/en-US/people/yichenw/cvpr14_facealignment.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Ren_Face_Alignment_at_2014_CVPR_paper.pdf", "https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/yichenw-cvpr14_facealignment.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6909614", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2014.218", "http://doi.org/10.1109/CVPR.2014.218"]}, {"id": "5bd3d08335bb4e444a86200c5e9f57fd9d719e14", 
"title": "3 D Face Morphable Models \u201c Inthe-Wild \u201d", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "Amazon", "source_name": "Amazon, USA", "street_adddress": "Montrose Road, Gardner, KS 66030, USA", "lat": "38.77681060", "lng": "-94.94429820", "type": "company", "country": "United States"}, {"name": "University of Oulu", "source_name": "University of Oulu", "street_adddress": "Oulun yliopisto, Biologintie, Linnanmaa, Oulu, Oulun seutukunta, Pohjois-Pohjanmaa, Pohjois-Suomen aluehallintovirasto, Pohjois-Suomi, Manner-Suomi, 90540, Suomi", "lat": "65.05921570", "lng": "25.46632601", "type": "edu", "country": "Finland"}], "year": "", "pdf": ["https://pdfs.semanticscholar.org/5bd3/d08335bb4e444a86200c5e9f57fd9d719e14.pdf"], "doi": []}, {"id": "12095f9b35ee88272dd5abc2d942a4f55804b31e", "title": "DenseReg : Fully Convolutional Dense Shape Regression Inthe-Wild R\u0131za", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "Amazon", "source_name": "Amazon, USA", "street_adddress": "Montrose Road, Gardner, KS 66030, USA", "lat": "38.77681060", "lng": "-94.94429820", "type": "company", "country": "United States"}, {"name": "University College London", "source_name": "University College London", "street_adddress": "UCL Institute of Education, 20, Bedford Way, Holborn, Bloomsbury, London Borough of Camden, London, Greater London, England, WC1H 0AL, UK", "lat": "51.52316070", "lng": "-0.12820370", "type": "edu", "country": "United Kingdom"}], "year": "", "pdf": ["https://pdfs.semanticscholar.org/1209/5f9b35ee88272dd5abc2d942a4f55804b31e.pdf"], "doi": []}, {"id": "2d2e1d1f50645fe20c051339e9a0fca7b176422a", "title": "Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild", "addresses": [{"name": "University of Surrey", "source_name": "University of Surrey", "street_adddress": "University of Surrey, Spine Road, Guildford Park, Guildford, Surrey, South East, England, GU2 7XH, UK", "lat": "51.24303255", "lng": "-0.59001382", "type": "edu", "country": "United Kingdom"}, {"name": "University of Stirling", "source_name": "Division of Computing Science & Maths, University of Stirling, Stirling, UK", "street_adddress": "University of, Stirling FK9 4LA, United Kingdom", "lat": "56.14541190", "lng": "-3.92057130", "type": "edu", "country": "United Kingdom"}, {"name": "Jiangnan University", "source_name": "Jiangnan University", "street_adddress": "\u6c5f\u5357\u5927\u5b66\u7ad9, \u8821\u6e56\u5927\u9053, \u6ee8\u6e56\u533a, \u5357\u573a\u6751, \u6ee8\u6e56\u533a (Binhu), \u65e0\u9521\u5e02 / Wuxi, \u6c5f\u82cf\u7701, 214121, \u4e2d\u56fd", "lat": "31.48542550", "lng": "120.27395810", "type": "edu", "country": "China"}, {"name": "Sichuan University, Chengdu", "source_name": "Sichuan Univ., Chengdu", "street_adddress": "\u56db\u5ddd\u5927\u5b66\uff08\u534e\u897f\u6821\u533a\uff09, \u6821\u4e1c\u8def, \u6b66\u4faf\u533a, \u6b66\u4faf\u533a (Wuhou), \u6210\u90fd\u5e02 / Chengdu, \u56db\u5ddd\u7701, 
610014, \u4e2d\u56fd", "lat": "30.64276900", "lng": "104.06751175", "type": "edu", "country": "China"}, {"name": "Reutlingen University", "source_name": "Reutlingen University", "street_adddress": "Campus Hohbuch, Campus Hochschule Reutlingen, Reutlingen, Landkreis Reutlingen, Regierungsbezirk T\u00fcbingen, Baden-W\u00fcrttemberg, 72762, Deutschland", "lat": "48.48187645", "lng": "9.18682404", "type": "edu", "country": "Germany"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1803.05536.pdf"], "doi": []}, {"id": "266ed43dcea2e7db9f968b164ca08897539ca8dd", "title": "Beyond Principal Components: Deep Boltzmann Machines for face modeling", "addresses": [{"name": "Concordia University", "source_name": "Concordia University", "street_adddress": "Concordia University, 2811, Northeast Holman Street, Concordia, Portland, Multnomah County, Oregon, 97211, USA", "lat": "45.57022705", "lng": "-122.63709346", "type": "edu", "country": "United States"}, {"name": "Carnegie Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}], "year": "2015", "pdf": ["http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/3B_037.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/3B_037_ext.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2015/ext/3B_037_ext.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Duong_Beyond_Principal_Components_2015_CVPR_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299111", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2015.7299111", "http://doi.org/10.1109/CVPR.2015.7299111"]}, {"id": "ba1c0600d3bdb8ed9d439e8aa736a96214156284", "title": "Complex representations for learning statistical shape priors", "addresses": [{"name": "Amazon Research, Berlin", "source_name": "Amazon Research, Berlin, Germany", "street_adddress": "Krausenstra\u00dfe 38, 10117 Berlin, Germany", "lat": "52.50986860", "lng": "13.39845130", "type": "company", "country": "Germany"}, {"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2017", "pdf": ["http://www.eurasip.org/Proceedings/Eusipco/Eusipco2017/papers/1570347043.pdf", "https://ibug.doc.ic.ac.uk/media/uploads/documents/eusipco_2017.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8081394", "http://doi.org/10.23919/EUSIPCO.2017.8081394", "https://doi.org/10.23919/EUSIPCO.2017.8081394"]}, {"id": "3b470b76045745c0ef5321e0f1e0e6a4b1821339", "title": "Consensus of Regression for Occlusion-Robust Facial Feature Localization", "addresses": [{"name": "Rutgers University", "source_name": "Rutgers University", "street_adddress": "Rutgers Cook Campus - North, Biel Road, New Brunswick, Middlesex County, New Jersey, 08901, USA", "lat": "40.47913175", "lng": "-74.43168868", "type": "edu", "country": "United States"}, {"name": "Adobe Research, San Jose, CA", "source_name": "Adobe Research, San Jose, CA 95110, USA", "street_adddress": "345 Park Ave, San Jose, CA 95110, USA", "lat": "37.33093070", "lng": "-121.89404850", "type": 
"company", "country": "United States"}], "year": "2014", "pdf": ["https://pdfs.semanticscholar.org/8e72/fa02f2d90ba31f31e0a7aa96a6d3e10a66fc.pdf"], "doi": []}, {"id": "cc4fc9a309f300e711e09712701b1509045a8e04", "title": "Continuous Supervised Descent Method for Facial Landmark Localisation", "addresses": [{"name": "Universitat Oberta de Catalunya", "source_name": "Universitat Oberta de Catalunya", "street_adddress": "Universitat Oberta de Catalunya, 156, Rambla del Poblenou, Proven\u00e7als del Poblenou, Sant Mart\u00ed, Barcelona, BCN, CAT, 08018, Espa\u00f1a", "lat": "41.40657415", "lng": "2.19453410", "type": "edu", "country": "Spain"}, {"name": "Universitat de Barcelona", "source_name": "Universitat de Barcelona & Computer Vision Center, Barcelona, Spain", "street_adddress": "Gran Via de les Corts Catalanes, 585, 08007 Barcelona, Spain", "lat": "41.38660800", "lng": "2.16402000", "type": "edu", "country": "Spain"}, {"name": "Robotics Institute", "source_name": "Robotics Institute", "street_adddress": "Institute for Field Robotics, \u0e1b\u0e23\u0e30\u0e0a\u0e32\u0e2d\u0e38\u0e17\u0e34\u0e28, \u0e01\u0e23\u0e38\u0e07\u0e40\u0e17\u0e1e\u0e21\u0e2b\u0e32\u0e19\u0e04\u0e23, \u0e40\u0e02\u0e15\u0e23\u0e32\u0e29\u0e0e\u0e23\u0e4c\u0e1a\u0e39\u0e23\u0e13\u0e30, \u0e01\u0e23\u0e38\u0e07\u0e40\u0e17\u0e1e\u0e21\u0e2b\u0e32\u0e19\u0e04\u0e23, 10140, \u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e44\u0e17\u0e22", "lat": "13.65450525", "lng": "100.49423171", "type": "edu", "country": "Thailand"}, {"name": "University of Pittsburgh", "source_name": "University of Pittsburgh", "street_adddress": "University of Pittsburgh, Sutherland Drive, West Oakland, PGH, Allegheny County, Pennsylvania, 15240, USA", "lat": "40.44415295", "lng": "-79.96243993", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/cea6/9010a2f75f7a057d56770e776dec206ed705.pdf"], "doi": []}, {"id": "f7ae38a073be7c9cd1b92359131b9c8374579b13", "title": "Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression", "addresses": [{"name": "University of Western Ontario", "source_name": "University of Western Ontario, London, ON, Canada", "street_adddress": "1151 Richmond St, London, ON N6A 3K7, Canada", "lat": "43.00959710", "lng": "-81.27373360", "type": "edu", "country": "Canada"}, {"name": "London Healthcare Sciences Centre, Ontario, Canada", "source_name": "London Healthcare Sciences Centre, London, ON, Canada", "street_adddress": "800 Commissioners Rd E, London, ON N6A 5W9, Canada", "lat": "42.96034800", "lng": "-81.22662800", "type": "edu", "country": "Canada"}, {"name": "Northumbria University", "source_name": "Northumbria University", "street_adddress": "Northumbria University, Birkdale Close, High Heaton, Newcastle upon Tyne, Tyne and Wear, North East England, England, NE7 7TP, UK", "lat": "55.00306320", "lng": "-1.57463231", "type": "edu", "country": "United Kingdom"}, {"name": "St. Joseph's Health Care, Ontario, Canada", "source_name": "St. 
Joseph\u2019s Health Care, London, ON, Canada", "street_adddress": "268 Grosvenor St, London, ON N6A 4V2, Canada", "lat": "43.00129530", "lng": "-81.25504550", "type": "edu", "country": "Canada"}], "year": "2017", "pdf": ["http://www.digitalimaginggroup.ca/members/Shuo/07487053.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7487053", "http://doi.org/10.1109/TNNLS.2016.2573260", "https://doi.org/10.1109/TNNLS.2016.2573260", "https://www.ncbi.nlm.nih.gov/pubmed/27295694"]}, {"id": "2a4153655ad1169d482e22c468d67f3bc2c49f12", "title": "Face Alignment Across Large Poses: A 3D Solution", "addresses": [{"name": "Chinese Academy of Sciences", "source_name": "Chinese Academy of Sciences", "street_adddress": "\u4e2d\u56fd\u79d1\u5b66\u9662\u5fc3\u7406\u7814\u7a76\u6240, 16, \u6797\u8403\u8def, \u671d\u9633\u533a / Chaoyang, \u5317\u4eac\u5e02, 100101, \u4e2d\u56fd", "lat": "40.00447950", "lng": "116.37023800", "type": "edu", "country": "China"}, {"name": "Michigan State University", "source_name": "Michigan State University", "street_adddress": "Michigan State University, Farm Lane, East Lansing, Ingham County, Michigan, 48824, USA", "lat": "42.71856800", "lng": "-84.47791571", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["http://cseweb.ucsd.edu/~mkchandraker/classes/CSE291/Winter2018/Lectures/FaceAlignment.pdf", "http://cvlab.cse.msu.edu/pdfs/Liu_StanLi_CVPR2016.pdf", "http://openaccess.thecvf.com/content_cvpr_2016/supplemental/Zhu_Face_Alignment_Across_2016_CVPR_supplemental.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Zhu_Face_Alignment_Across_CVPR_2016_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7780392", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2016.23", "http://doi.org/10.1109/CVPR.2016.23"]}, {"id": "655ad6ed99277b3bba1f2ea7e5da4709d6e6cf44", "title": "Facial Landmarks Detection by Self-Iterative Regression Based Landmarks-Attention Network", "addresses": [{"name": "University of Chinese Academy of Sciences", "source_name": "University of Chinese Academy of Sciences", "street_adddress": "University of Chinese Academy of Sciences, UCAS, Yuquanlu, \u7389\u6cc9\u8def, \u7530\u6751, \u6d77\u6dc0\u533a, 100049, \u4e2d\u56fd", "lat": "39.90828040", "lng": "116.24585270", "type": "edu", "country": "China"}, {"name": "Microsoft Research Asia", "source_name": "Microsoft Research Asia", "street_adddress": "1 Memorial Dr, Cambridge, MA 02142, USA", "lat": "42.36142560", "lng": "-71.08120920", "type": "company", "country": "United States"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1803.06598.pdf"], "doi": []}, {"id": "232b6e2391c064d483546b9ee3aafe0ba48ca519", "title": "Optimization Problems for Fast AAM Fitting in-the-Wild", "addresses": [{"name": "University of Lincoln", "source_name": "University of Lincoln", "street_adddress": "University of Lincoln, Brayford Way, Whitton Park, New Boultham, Lincoln, Lincolnshire, East Midlands, England, LN6 7TS, UK", "lat": "53.22853665", "lng": "-0.54873472", "type": "edu", "country": "United Kingdom"}, {"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2013", "pdf": ["http://doc.utwente.nl/89696/1/Pantic_Optimization_problems_for_fast_AAM_fitting.pdf", 
"http://eprints.eemcs.utwente.nl/24238/01/Pantic_Optimization_problems_for_fast_AAM_fitting.pdf", "http://ibug.doc.ic.ac.uk/media/uploads/documents/tzimiro_pantic_iccv2013.pdf", "http://www.cv-foundation.org/openaccess/content_iccv_2013/papers/Tzimiropoulos_Optimization_Problems_for_2013_ICCV_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6751183", "http://doi.ieeecomputersociety.org/10.1109/ICCV.2013.79", "http://doi.org/10.1109/ICCV.2013.79"]}, {"id": "75fd9acf5e5b7ed17c658cc84090c4659e5de01d", "title": "Project-Out Cascaded Regression with an application to face alignment", "addresses": [{"name": "University of Nottingham", "source_name": "University of Nottingham", "street_adddress": "University of Nottingham, Lenton Abbey, Wollaton, City of Nottingham, East Midlands, England, UK", "lat": "52.93874280", "lng": "-1.20029569", "type": "edu", "country": "United Kingdom"}], "year": "2015", "pdf": ["http://eprints.nottingham.ac.uk/31442/1/tzimiro_CVPR15.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/2B_035.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/2B_035_ext.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2015/ext/2B_035_ext.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Tzimiropoulos_Project-Out_Cascaded_Regression_2015_CVPR_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7298989", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2015.7298989", "http://doi.org/10.1109/CVPR.2015.7298989"]}, {"id": "087002ab569e35432cdeb8e63b2c94f1abc53ea9", "title": "Spatiotemporal analysis of RGB-D-T facial images for multimodal pain level recognition", "addresses": [{"name": "Aalborg University", "source_name": "Aalborg University", "street_adddress": "AAU, Pontoppidanstr\u00e6de, S\u00f8nder Tranders, Aalborg, Aalborg Kommune, Region Nordjylland, 9220, Danmark", "lat": "57.01590275", "lng": "9.97532827", "type": "edu", "country": "Denmark"}, {"name": "Computer Vision Center, UAB, Barcelona, Spain", "source_name": "Computer Vision Center, UAB, Barcelona, Spain", "street_adddress": "Campus UAB, Edifici O, s/n, 08193 Cerdanyola del Vall\u00e8s, Barcelona, Spain", "lat": "41.50089570", "lng": "2.11155300", "type": "edu", "country": "Spain"}], "year": "2015", "pdf": ["http://openaccess.thecvf.com/content_cvpr_workshops_2015/W09/papers/Irani_Spatiotemporal_Analysis_of_2015_CVPR_paper.pdf", "http://sergioescalera.com/wp-content/uploads/2015/07/CVPR2015MoeslundSlides.pdf", "http://vbn.aau.dk/files/210011403/PID3686207.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W09/papers/Irani_Spatiotemporal_Analysis_of_2015_CVPR_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301341", "http://doi.ieeecomputersociety.org/10.1109/CVPRW.2015.7301341", "http://doi.org/10.1109/CVPRW.2015.7301341"]}, {"id": "090ff8f992dc71a1125636c1adffc0634155b450", "title": "Topic-Aware Deep Auto-Encoders (TDA) for Face Alignment", "addresses": [{"name": "Key Lab of Intelligent Information Processing of Chinese Academy of Sciences", "source_name": "Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology (ICT), CAS, Beijing, China", "street_adddress": "Beijing, China", "lat": "39.90419990", "lng": "116.40739630", "type": "edu", "country": "China"}, {"name": "Chinese Academy of Sciences", "source_name": "Chinese Academy of Sciences", "street_adddress": 
"\u4e2d\u56fd\u79d1\u5b66\u9662\u5fc3\u7406\u7814\u7a76\u6240, 16, \u6797\u8403\u8def, \u671d\u9633\u533a / Chaoyang, \u5317\u4eac\u5e02, 100101, \u4e2d\u56fd", "lat": "40.00447950", "lng": "116.37023800", "type": "edu", "country": "China"}, {"name": "University of Chinese Academy of Sciences", "source_name": "University of Chinese Academy of Sciences", "street_adddress": "University of Chinese Academy of Sciences, UCAS, Yuquanlu, \u7389\u6cc9\u8def, \u7530\u6751, \u6d77\u6dc0\u533a, 100049, \u4e2d\u56fd", "lat": "39.90828040", "lng": "116.24585270", "type": "edu", "country": "China"}, {"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2014", "pdf": ["https://pdfs.semanticscholar.org/090f/f8f992dc71a1125636c1adffc0634155b450.pdf"], "doi": []}, {"id": "62e913431bcef5983955e9ca160b91bb19d9de42", "title": "Facial Landmark Detection with Tweaked Convolutional Neural Networks", "addresses": [{"name": "Open University of Israel", "source_name": "Open University of Israel", "street_adddress": "\u05d4\u05d0\u05d5\u05e0\u05d9\u05d1\u05e8\u05e1\u05d9\u05d8\u05d4 \u05d4\u05e4\u05ea\u05d5\u05d7\u05d4, 15, \u05d0\u05d1\u05d0 \u05d7\u05d5\u05e9\u05d9, \u05d7\u05d9\u05e4\u05d4, \u05d2\u05d1\u05e2\u05ea \u05d3\u05d0\u05d5\u05e0\u05e1, \u05d7\u05d9\u05e4\u05d4, \u05de\u05d7\u05d5\u05d6 \u05d7\u05d9\u05e4\u05d4, NO, \u05d9\u05e9\u05e8\u05d0\u05dc", "lat": "32.77824165", "lng": "34.99565673", "type": "edu", "country": "Israel"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1511.04031.pdf"], "doi": []}, {"id": "034b3f3bac663fb814336a69a9fd3514ca0082b9", "title": "Unifying holistic and Parts-Based Deformable Model fitting", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2015", "pdf": ["http://ibug.doc.ic.ac.uk/media/uploads/documents/alabort_cvpr2015.pdf", "http://openaccess.thecvf.com/content_cvpr_2015/supplemental/Alabort-i-Medina_Unifying_Holistic_and_2015_CVPR_supplemental.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/2B_037.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/2B_037_ext.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2015/ext/2B_037_ext.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Alabort-i-Medina_Unifying_Holistic_and_2015_CVPR_paper.pdf", "https://ibug.doc.ic.ac.uk/media/uploads/documents/alabort_cvpr2015.pdf", "https://ibug.doc.ic.ac.uk/media/uploads/documents/unifying_holistic_parts_based.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7298991", "http://doi.org/10.1109/CVPR.2015.7298991", "https://doi.org/10.1109/CVPR.2015.7298991"]}, {"id": "86b6afc667bb14ff4d69e7a5e8bb2454a6bbd2cd", "title": "Attentional Alignment Networks", "addresses": [{"name": "Beihang University", "source_name": "Beihang University", "street_adddress": "\u5317\u4eac\u822a\u7a7a\u822a\u5929\u5927\u5b66, 37, \u5b66\u9662\u8def, \u4e94\u9053\u53e3, \u540e\u516b\u5bb6, \u6d77\u6dc0\u533a, 100083, \u4e2d\u56fd", "lat": "39.98083330", 
"lng": "116.34101249", "type": "edu", "country": "China"}, {"name": "University of Texas at Arlington", "source_name": "University of Texas at Arlington", "street_adddress": "University of Texas at Arlington, South Nedderman Drive, Arlington, Tarrant County, Texas, 76010, USA", "lat": "32.72836830", "lng": "-97.11201835", "type": "edu", "country": "United States"}, {"name": "Shanghai Jiao Tong University", "source_name": "Shanghai Jiao Tong University", "street_adddress": "\u4e0a\u6d77\u4ea4\u901a\u5927\u5b66\uff08\u5f90\u6c47\u6821\u533a\uff09, \u6dee\u6d77\u897f\u8def, \u756a\u79ba\u5c0f\u533a, \u5e73\u9634\u6865, \u5f90\u6c47\u533a, \u4e0a\u6d77\u5e02, 200052, \u4e2d\u56fd", "lat": "31.20081505", "lng": "121.42840681", "type": "edu", "country": "China"}], "year": "2018", "pdf": ["https://pdfs.semanticscholar.org/86b6/afc667bb14ff4d69e7a5e8bb2454a6bbd2cd.pdf"], "doi": []}, {"id": "4068574b8678a117d9a434360e9c12fe6232dae0", "title": "Automatic Construction of Deformable Models In-the-Wild", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2014", "pdf": ["http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos_automatic_2014.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Antonakos_Automatic_Construction_of_2014_CVPR_paper.pdf", "http://www.visionmeetscognition.org/fpic2014/Camera_Ready/Paper%2031.pdf", "https://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos_automatic_2014_poster.pdf", "https://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos_automatic_2014_supp.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6909630", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2014.234", "http://doi.org/10.1109/CVPR.2014.234"]}, {"id": "1d0128b9f96f4c11c034d41581f23eb4b4dd7780", "title": "Automatic construction Of robust spherical harmonic subspaces", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2015", "pdf": ["http://ibug.doc.ic.ac.uk/media/uploads/documents/robust_spherical_harmonics.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/1A_011.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/1A_011_ext.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Snape_Automatic_Construction_Of_2015_CVPR_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7298604", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2015.7298604", "http://doi.org/10.1109/CVPR.2015.7298604"]}, {"id": "22e2066acfb795ac4db3f97d2ac176d6ca41836c", "title": "Coarse-to-Fine Auto-Encoder Networks (CFAN) for Real-Time Face Alignment", "addresses": [{"name": "Key Lab of Intelligent Information Processing of Chinese Academy of Sciences", "source_name": "Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology (ICT), CAS, Beijing, China", "street_adddress": "Beijing, China", "lat": "39.90419990", "lng": "116.40739630", "type": "edu", "country": 
"China"}, {"name": "Chinese Academy of Sciences", "source_name": "Chinese Academy of Sciences", "street_adddress": "\u4e2d\u56fd\u79d1\u5b66\u9662\u5fc3\u7406\u7814\u7a76\u6240, 16, \u6797\u8403\u8def, \u671d\u9633\u533a / Chaoyang, \u5317\u4eac\u5e02, 100101, \u4e2d\u56fd", "lat": "40.00447950", "lng": "116.37023800", "type": "edu", "country": "China"}, {"name": "University of Chinese Academy of Sciences", "source_name": "University of Chinese Academy of Sciences", "street_adddress": "University of Chinese Academy of Sciences, UCAS, Yuquanlu, \u7389\u6cc9\u8def, \u7530\u6751, \u6d77\u6dc0\u533a, 100049, \u4e2d\u56fd", "lat": "39.90828040", "lng": "116.24585270", "type": "edu", "country": "China"}], "year": "2014", "pdf": ["https://pdfs.semanticscholar.org/26f5/3a1abb47b1f0ea1f213dc7811257775dc6e6.pdf"], "doi": []}, {"id": "ac6c3b3e92ff5fbcd8f7967696c7aae134bea209", "title": "Deep Cascaded Bi-Network for Face Hallucination", "addresses": [{"name": "Chinese University of Hong Kong", "source_name": "Chinese University of Hong Kong", "street_adddress": "Hong Kong, \u99ac\u6599\u6c34\u6c60\u65c1\u8def", "lat": "22.41626320", "lng": "114.21093180", "type": "edu", "country": "China"}, {"name": "Shenzhen Institutes of Advanced Technology", "source_name": "Shenzhen Institutes of Advanced Technology", "street_adddress": "\u4e2d\u56fd\u79d1\u5b66\u9662\u6df1\u5733\u5148\u8fdb\u6280\u672f\u7814\u7a76\u9662, 1068, \u79d1\u7814\u8def, \u6df1\u5733\u5927\u5b66\u57ce, \u4e09\u5751\u6751, \u5357\u5c71\u533a, \u6df1\u5733\u5e02, \u5e7f\u4e1c\u7701, 518000, \u4e2d\u56fd", "lat": "22.59805605", "lng": "113.98533784", "type": "edu", "country": "China"}, {"name": "University of California, Merced", "source_name": "University of California, Merced", "street_adddress": "University of California, Merced, Ansel Adams Road, Merced County, California, USA", "lat": "37.36566745", "lng": "-120.42158888", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1607.05046.pdf"], "doi": []}, {"id": "63d865c66faaba68018defee0daf201db8ca79ed", "title": "Deep Regression for Face Alignment", "addresses": [{"name": "Microsoft Research Asia", "source_name": "Microsoft Research Asia", "street_adddress": "1 Memorial Dr, Cambridge, MA 02142, USA", "lat": "42.36142560", "lng": "-71.08120920", "type": "company", "country": "United States"}], "year": "2014", "pdf": ["https://arxiv.org/pdf/1409.5230.pdf"], "doi": []}, {"id": "35f921def890210dda4b72247849ad7ba7d35250", "title": "Exemplar-Based Graph Matching for Robust Facial Landmark Localization", "addresses": [{"name": "Carnegie Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}], "year": "2013", "pdf": ["http://www.cv-foundation.org/openaccess/content_iccv_2013/papers/Zhou_Exemplar-Based_Graph_Matching_2013_ICCV_paper.pdf", "http://www.f-zhou.com/fa/2013_ICCV_EGM.pdf", "http://www.ri.cmu.edu/pub_files/2013/12/2013_ICCV_EGM.pdf", "https://www.ri.cmu.edu/pub_files/2013/12/ICCV_EGM.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6751237", "http://doi.ieeecomputersociety.org/10.1109/ICCV.2013.131", "http://doi.org/10.1109/ICCV.2013.131"]}, {"id": "898ff1bafee2a6fb3c848ad07f6f292416b5f07d", "title": "Face Alignment via Regressing Local Binary Features", "addresses": 
[{"name": "Microsoft Research Asia", "source_name": "Microsoft Research Asia", "street_adddress": "1 Memorial Dr, Cambridge, MA 02142, USA", "lat": "42.36142560", "lng": "-71.08120920", "type": "company", "country": "United States"}, {"name": "University of Science and Technology of China", "source_name": "University of Science and Technology of China", "street_adddress": "\u4e2d\u56fd\u79d1\u5b66\u6280\u672f\u5927\u5b66 \u4e1c\u6821\u533a, 96\u53f7, \u91d1\u5be8\u8def, \u6c5f\u6dee\u5316\u80a5\u5382\u5c0f\u533a, \u829c\u6e56\u8def\u8857\u9053, \u5408\u80a5\u5e02\u533a, \u5408\u80a5\u5e02, \u5b89\u5fbd\u7701, 230026, \u4e2d\u56fd", "lat": "31.83907195", "lng": "117.26420748", "type": "edu", "country": "China"}, {"name": "Microsoft", "source_name": "Microsoft Corporation, Redmond, WA, USA", "street_adddress": "One Microsoft Way, Redmond, WA 98052, USA", "lat": "47.64233180", "lng": "-122.13693020", "type": "company", "country": "United States"}], "year": "2016", "pdf": [], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7384759", "http://doi.org/10.1109/TIP.2016.2518867", "https://www.ncbi.nlm.nih.gov/pubmed/26800539", "https://www.wikidata.org/entity/Q50538837"]}, {"id": "71b07c537a9e188b850192131bfe31ef206a39a0", "title": "Faces InThe-Wild Challenge : database and results", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University of Nottingham", "source_name": "University of Nottingham", "street_adddress": "University of Nottingham, Lenton Abbey, Wollaton, City of Nottingham, East Midlands, England, UK", "lat": "52.93874280", "lng": "-1.20029569", "type": "edu", "country": "United Kingdom"}, {"name": "University of Twente", "source_name": "University of Twente", "street_adddress": "University of Twente, De Achterhorst;Hallenweg, Enschede, Regio Twente, Overijssel, Nederland, 7522NH, Nederland", "lat": "52.23801390", "lng": "6.85667610", "type": "edu", "country": "Netherlands"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/71b0/7c537a9e188b850192131bfe31ef206a39a0.pdf"], "doi": []}, {"id": "f095b5770f0ff13ba9670e3d480743c5e9ad1036", "title": "Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University of Twente", "source_name": "University of Twente", "street_adddress": "University of Twente, De Achterhorst;Hallenweg, Enschede, Regio Twente, Overijssel, Nederland, 7522NH, Nederland", "lat": "52.23801390", "lng": "6.85667610", "type": "edu", "country": "Netherlands"}, {"name": "University of Nottingham", "source_name": "University of Nottingham", "street_adddress": "University of Nottingham, Lenton Abbey, Wollaton, City of Nottingham, East Midlands, England, UK", "lat": "52.93874280", "lng": "-1.20029569", "type": "edu", "country": "United Kingdom"}], "year": "2016", "pdf": ["http://doc.utwente.nl/103789/1/Pantic_Fast_Algorithms_for_Fitting_Active_Appearance_Models.pdf", 
"http://eprints.eemcs.utwente.nl/27574/01/Pantic_Fast_Algorithms_for_Fitting_Active_Appearance_Models.pdf"], "doi": ["http://doi.org/10.1007/s11263-016-0950-1", "https://doi.org/10.1007/s11263-016-0950-1"]}, {"id": "624496296af19243d5f05e7505fd927db02fd0ce", "title": "Gauss-Newton Deformable Part Models for Face Alignment In-the-Wild", "addresses": [{"name": "University of Lincoln", "source_name": "University of Lincoln", "street_adddress": "University of Lincoln, Brayford Way, Whitton Park, New Boultham, Lincoln, Lincolnshire, East Midlands, England, LN6 7TS, UK", "lat": "53.22853665", "lng": "-0.54873472", "type": "edu", "country": "United Kingdom"}, {"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2014", "pdf": ["http://eprints.eemcs.utwente.nl/25815/01/Pantic_Gauss-Newton_Deformable_Part_Models.pdf", "http://ibug.doc.ic.ac.uk/media/uploads/documents/tzimiro_pantic_cvpr_2014.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Tzimiropoulos_Gauss-Newton_Deformable_Part_2014_CVPR_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6909635", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2014.239", "http://doi.org/10.1109/CVPR.2014.239"]}, {"id": "6a4ebd91c4d380e21da0efb2dee276897f56467a", "title": "HOG active appearance models", "addresses": [{"name": "University of Lincoln", "source_name": "University of Lincoln", "street_adddress": "University of Lincoln, Brayford Way, Whitton Park, New Boultham, Lincoln, Lincolnshire, East Midlands, England, LN6 7TS, UK", "lat": "53.22853665", "lng": "-0.54873472", "type": "edu", "country": "United Kingdom"}], "year": "2014", "pdf": ["http://eprints.nottingham.ac.uk/31441/1/tzimiroICIP14b.pdf", "http://ibug.doc.ic.ac.uk/media/uploads/documents/07025044.pdf", "http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2014hog.pdf", "http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2014hog_poster.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7025044", "http://doi.org/10.1109/ICIP.2014.7025044"]}, {"id": "696236fb6f986f6d5565abb01f402d09db68e5fa", "title": "Learning adaptive receptive fields for deep image parsing networks", "addresses": [{"name": "Chinese Academy of Sciences", "source_name": "Chinese Academy of Sciences", "street_adddress": "\u4e2d\u56fd\u79d1\u5b66\u9662\u5fc3\u7406\u7814\u7a76\u6240, 16, \u6797\u8403\u8def, \u671d\u9633\u533a / Chaoyang, \u5317\u4eac\u5e02, 100101, \u4e2d\u56fd", "lat": "40.00447950", "lng": "116.37023800", "type": "edu", "country": "China"}, {"name": "Nanjing University", "source_name": "Nanjing University", "street_adddress": "NJU, \u4e09\u6c5f\u8def, \u9f13\u697c\u533a, \u5357\u4eac\u5e02, \u6c5f\u82cf\u7701, 210093, \u4e2d\u56fd", "lat": "32.05659570", "lng": "118.77408833", "type": "edu", "country": "China"}, {"name": "University of Chinese Academy of Sciences", "source_name": "University of Chinese Academy of Sciences", "street_adddress": "University of Chinese Academy of Sciences, UCAS, Yuquanlu, \u7389\u6cc9\u8def, \u7530\u6751, \u6d77\u6dc0\u533a, 100049, \u4e2d\u56fd", "lat": "39.90828040", "lng": "116.24585270", "type": "edu", "country": "China"}], "year": "2017", "pdf": 
["http://openaccess.thecvf.com/content_cvpr_2017/papers/Wei_Learning_Adaptive_Receptive_CVPR_2017_paper.pdf", "http://openaccess.thecvf.com/content_cvpr_2017/supplemental/Wei_Learning_Adaptive_Receptive_2017_CVPR_supplemental.pdf"], "doi": ["http://doi.org/10.1007/s41095-018-0112-1", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2017.420", "http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8099903", "https://doi.org/10.1007/s41095-018-0112-1"]}, {"id": "c17a332e59f03b77921942d487b4b102b1ee73b6", "title": "Learning an appearance-based gaze estimator from one million synthesised images", "addresses": [{"name": "University of Cambridge", "source_name": "University of Cambridge", "street_adddress": "Clifford Allbutt Lecture Theatre, Robinson Way, Romsey, Cambridge, Cambridgeshire, East of England, England, CB2 0QH, UK", "lat": "52.17638955", "lng": "0.14308882", "type": "edu", "country": "United Kingdom"}, {"name": "Carnegie Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}, {"name": "Max Planck Institute for Informatics", "source_name": "Max Planck Institute for Informatics", "street_adddress": "MPII, E1 4, Campus, Universit\u00e4t, Sankt Johann, Bezirk Mitte, Saarbr\u00fccken, Regionalverband Saarbr\u00fccken, Saarland, 66123, Deutschland", "lat": "49.25795660", "lng": "7.04577417", "type": "edu", "country": "Germany"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/c17a/332e59f03b77921942d487b4b102b1ee73b6.pdf"], "doi": []}, {"id": "9ef2b2db11ed117521424c275c3ce1b5c696b9b3", "title": "Robust Face Alignment Using a Mixture of Invariant Experts", "addresses": [{"name": "Intel Corporation", "source_name": "Intel Corporation & Portland State University, Hillsboro, OR, USA", "street_adddress": "6397 NE Evergreen Pkwy, Hillsboro, OR 97124, USA", "lat": "45.55236000", "lng": "-122.91429880", "type": "company", "country": "United States"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1511.04404.pdf"], "doi": []}, {"id": "3a8846ca16df5dfb2daadc189ed40c13d2ddc0c5", "title": "Validation loss for landmark detection", "addresses": [{"name": "Daimler AG", "source_name": "Daimler AG, 70327 Stuttgart-Untertuerkheim, Germany", "street_adddress": "Mercedesstra\u00dfe 128, 70327 Stuttgart, Germany", "lat": "48.78634620", "lng": "9.23807180", "type": "company", "country": "Germany"}], "year": "2019", "pdf": ["https://arxiv.org/pdf/1901.10143.pdf"], "doi": []}, {"id": "3bc376f29bc169279105d33f59642568de36f17f", "title": "Active shape models with SIFT descriptors and MARS", "addresses": [{"name": "University of Cape Town", "source_name": "University of Cape Town", "street_adddress": "University of Cape Town, Engineering Mall, Cape Town Ward 59, Cape Town, City of Cape Town, Western Cape, CAPE TOWN, South Africa", "lat": "-33.95828745", "lng": "18.45997349", "type": "edu", "country": "South Africa"}], "year": "2014", "pdf": ["http://www.dip.ee.uct.ac.za/~nicolls/publish/sm14-visapp.pdf", "http://www.milbo.org/stasm-files/active-shape-models-with-sift-and-mars.pdf", "http://www.scitepress.org/Papers/2014/46800/46800.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7294955", "http://doi.org/10.5220/0004680003800387"]}, {"id": "0a6d344112b5af7d1abbd712f83c0d70105211d0", "title": "Constrained Local Neural 
Fields for Robust Facial Landmark Detection in the Wild", "addresses": [{"name": "USC Institute for Creative Technologies", "source_name": "USC Institute for Creative Technologies", "street_adddress": "12015 E Waterfront Dr, Los Angeles, CA 90094, USA", "lat": "33.98325260", "lng": "-118.40417000", "type": "edu", "country": "United States"}], "year": "2013", "pdf": ["http://ict.usc.edu/pubs/Constrained%20local%20neural%20fields%20for%20robust%20facial%20landmark%20detection%20in%20the%20wild.pdf", "http://www.cl.cam.ac.uk/research/rainbow/projects/ccnf/files/iccv2014.pdf", "http://www.cv-foundation.org/openaccess/content_iccv_workshops_2013/W11/papers/Baltrusaitis_Constrained_Local_Neural_2013_ICCV_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755919"]}, {"id": "3be8f1f7501978287af8d7ebfac5963216698249", "title": "Deep Cascaded Regression for Face Alignment", "addresses": [{"name": "Sun Yat-Sen University", "source_name": "Sun Yat-Sen University", "street_adddress": "\u4e2d\u5927, \u65b0\u6e2f\u897f\u8def, \u9f99\u8239\u6ed8, \u5eb7\u4e50, \u6d77\u73e0\u533a (Haizhu), \u5e7f\u5dde\u5e02, \u5e7f\u4e1c\u7701, 510105, \u4e2d\u56fd", "lat": "23.09461185", "lng": "113.28788994", "type": "edu", "country": "China"}, {"name": "National University of Singapore", "source_name": "National University of Singapore", "street_adddress": "NUS, Former 1936 British Outpost, Nepal Hill, Clementi, Southwest, 117542, Singapore", "lat": "1.29620180", "lng": "103.77689944", "type": "edu", "country": "Singapore"}], "year": "2015", "pdf": ["https://pdfs.semanticscholar.org/3be8/f1f7501978287af8d7ebfac5963216698249.pdf"], "doi": []}, {"id": "329d58e8fb30f1bf09acb2f556c9c2f3e768b15c", "title": "Leveraging Intra and Inter-Dataset Variations for Robust Face Alignment", "addresses": [{"name": "Tsinghua University", "source_name": "Tsinghua University", "street_adddress": "\u6e05\u534e\u5927\u5b66, 30, \u53cc\u6e05\u8def, \u4e94\u9053\u53e3, \u540e\u516b\u5bb6, \u6d77\u6dc0\u533a, 100084, \u4e2d\u56fd", "lat": "40.00229045", "lng": "116.32098908", "type": "edu", "country": "China"}, {"name": "Chinese University of Hong Kong", "source_name": "Chinese University of Hong Kong", "street_adddress": "Hong Kong, \u99ac\u6599\u6c34\u6c60\u65c1\u8def", "lat": "22.41626320", "lng": "114.21093180", "type": "edu", "country": "China"}], "year": "2017", "pdf": ["http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/Wu_Leveraging_Intra_and_CVPR_2017_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8014995", "http://doi.ieeecomputersociety.org/10.1109/CVPRW.2017.261", "http://doi.org/10.1109/CVPRW.2017.261"]}, {"id": "0293721d276856f0425d4417e22381de3350ac32", "title": "Customer Satisfaction Measuring Based on the Most Significant Facial Emotion", "addresses": [{"name": "University of Paris-Est", "source_name": "University of Paris-Est, Gaspard-Monge Computer Science Laboratory A3SI, ESIEE Paris, CNRS, France", "street_adddress": "6-8 Avenue Blaise Pascal, 77420 Champs-sur-Marne, France", "lat": "48.84077910", "lng": "2.58732590", "type": "edu", "country": "France"}, {"name": "University of Sfax, Tunisia", "source_name": "REGIM-Labo: REsearch Groups in Intelligent Machines, University of Sfax, ENIS, BP 1173, Sfax, 3038, Tunisia", "street_adddress": "Universit\u00e9 de Route de l'A\u00e9roport Km 0.5 BP 1169 .3029 Sfax, Sfax, Tunisia", "lat": "34.73610660", "lng": "10.74272750", "type": "edu", "country": "Tunisia"}], "year": "2018", "pdf": 
["https://hal-upec-upem.archives-ouvertes.fr/hal-01790317/file/RK_SSD_2018.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8570588", "https://hal-upec-upem.archives-ouvertes.fr/hal-01790317/document"]}, {"id": "ce9e1dfa7705623bb67df3a91052062a0a0ca456", "title": "Deep Feature Interpolation for Image Content Changes", "addresses": [{"name": "Cornell University", "source_name": "Cornell University", "street_adddress": "Cornell University, Forest Home Drive, Forest Home, Tompkins County, New York, 14853, USA", "lat": "42.45055070", "lng": "-76.47835130", "type": "edu", "country": "United States"}, {"name": "George Washington University", "source_name": "George Washington University, USA", "street_adddress": "2121 I St NW, Washington, DC 20052, USA", "lat": "38.89971450", "lng": "-77.04859920", "type": "edu", "country": "United States"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1611.05507.pdf"], "doi": []}, {"id": "2d294bde112b892068636f3a48300b3c033d98da", "title": "Deep Multi-Center Learning for Face Alignment", "addresses": [{"name": "Shanghai Jiao Tong University", "source_name": "Shanghai Jiao Tong University", "street_adddress": "\u4e0a\u6d77\u4ea4\u901a\u5927\u5b66\uff08\u5f90\u6c47\u6821\u533a\uff09, \u6dee\u6d77\u897f\u8def, \u756a\u79ba\u5c0f\u533a, \u5e73\u9634\u6865, \u5f90\u6c47\u533a, \u4e0a\u6d77\u5e02, 200052, \u4e2d\u56fd", "lat": "31.20081505", "lng": "121.42840681", "type": "edu", "country": "China"}, {"name": "East China Normal University", "source_name": "East China Normal University", "street_adddress": "\u534e\u4e1c\u5e08\u8303\u5927\u5b66, 3663, \u4e2d\u5c71\u5317\u8def, \u66f9\u5bb6\u6e21, \u666e\u9640\u533a, \u666e\u9640\u533a (Putuo), \u4e0a\u6d77\u5e02, 200062, \u4e2d\u56fd", "lat": "31.22849230", "lng": "121.40211389", "type": "edu", "country": "China"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1808.01558.pdf"], "doi": []}, {"id": "30cd39388b5c1aae7d8153c0ab9d54b61b474ffe", "title": "Deep Recurrent Regression for Facial Landmark Detection", "addresses": [{"name": "Sun Yat-Sen University", "source_name": "Sun Yat-Sen University", "street_adddress": "\u4e2d\u5927, \u65b0\u6e2f\u897f\u8def, \u9f99\u8239\u6ed8, \u5eb7\u4e50, \u6d77\u73e0\u533a (Haizhu), \u5e7f\u5dde\u5e02, \u5e7f\u4e1c\u7701, 510105, \u4e2d\u56fd", "lat": "23.09461185", "lng": "113.28788994", "type": "edu", "country": "China"}, {"name": "National University of Singapore", "source_name": "National University of Singapore", "street_adddress": "NUS, Former 1936 British Outpost, Nepal Hill, Clementi, Southwest, 117542, Singapore", "lat": "1.29620180", "lng": "103.77689944", "type": "edu", "country": "Singapore"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1510.09083.pdf"], "doi": []}, {"id": "0209389b8369aaa2a08830ac3b2036d4901ba1f1", "title": "DenseReg: Fully Convolutional Dense Shape Regression In-the-Wild", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University College London", "source_name": "University College London", "street_adddress": "UCL Institute of Education, 20, Bedford Way, Holborn, Bloomsbury, London Borough of Camden, London, Greater London, England, WC1H 0AL, UK", "lat": "51.52316070", "lng": "-0.12820370", "type": "edu", "country": "United 
Kingdom"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1612.01202.pdf"], "doi": []}, {"id": "191d30e7e7360d565b0c1e2814b5bcbd86a11d41", "title": "Discriminative Deep Face Shape Model for Facial Point Detection", "addresses": [{"name": "Rensselaer Polytechnic Institute", "source_name": "Rensselaer Polytechnic Institute", "street_adddress": "Rensselaer Polytechnic Institute, Sage Avenue, Downtown, City of Troy, Rensselaer County, New York, 12180, USA", "lat": "42.72984590", "lng": "-73.67950216", "type": "edu", "country": "United States"}], "year": "2014", "pdf": ["http://homepages.rpi.edu/~wuy9/DiscriminativeDeepFaceShape/DiscriminativeDeepFaceShape_IJCV.pdf", "https://www.ecse.rpi.edu/~cvrl/Publication/pdf/Wu2014.pdf", "https://www.ecse.rpi.edu/~cvrl/wuy/Face_shape_prior_RBM.pdf"], "doi": ["http://doi.org/10.1007/s11263-014-0775-8"]}, {"id": "ceeb67bf53ffab1395c36f1141b516f893bada27", "title": "Face Alignment by Local Deep Descriptor Regression", "addresses": [{"name": "University of Maryland", "source_name": "University of Maryland", "street_adddress": "The Grand Garage, 5, North Paca Street, Seton Hill, Baltimore, Maryland, 21201, USA", "lat": "39.28996850", "lng": "-76.62196103", "type": "edu", "country": "United States"}, {"name": "Rutgers University", "source_name": "Rutgers University", "street_adddress": "Rutgers Cook Campus - North, Biel Road, New Brunswick, Middlesex County, New Jersey, 08901, USA", "lat": "40.47913175", "lng": "-74.43168868", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1601.07950.pdf"], "doi": []}, {"id": "beb8d7c128ccbdc6b63959a763ebc505a5313c06", "title": "Face Completion with Semantic Knowledge and Collaborative Adversarial Learning", "addresses": [{"name": "University of Rochester", "source_name": "University of Rochester", "street_adddress": "Memorial Art Gallery, 500, University Avenue, East End, Rochester, Monroe County, New York, 14607, USA", "lat": "43.15769690", "lng": "-77.58829158", "type": "edu", "country": "United States"}, {"name": "Chinese Academy of Sciences", "source_name": "Chinese Academy of Sciences", "street_adddress": "\u4e2d\u56fd\u79d1\u5b66\u9662\u5fc3\u7406\u7814\u7a76\u6240, 16, \u6797\u8403\u8def, \u671d\u9633\u533a / Chaoyang, \u5317\u4eac\u5e02, 100101, \u4e2d\u56fd", "lat": "40.00447950", "lng": "116.37023800", "type": "edu", "country": "China"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1812.03252.pdf"], "doi": []}, {"id": "438e7999c937b94f0f6384dbeaa3febff6d283b6", "title": "Face Detection, Bounding Box Aggregation and Pose Estimation for Robust Facial Landmark Localisation in the Wild", "addresses": [{"name": "University of Surrey", "source_name": "University of Surrey", "street_adddress": "University of Surrey, Spine Road, Guildford Park, Guildford, Surrey, South East, England, GU2 7XH, UK", "lat": "51.24303255", "lng": "-0.59001382", "type": "edu", "country": "United Kingdom"}, {"name": "Jiangnan University", "source_name": "Jiangnan University", "street_adddress": "\u6c5f\u5357\u5927\u5b66\u7ad9, \u8821\u6e56\u5927\u9053, \u6ee8\u6e56\u533a, \u5357\u573a\u6751, \u6ee8\u6e56\u533a (Binhu), \u65e0\u9521\u5e02 / Wuxi, \u6c5f\u82cf\u7701, 214121, \u4e2d\u56fd", "lat": "31.48542550", "lng": "120.27395810", "type": "edu", "country": "China"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1705.02402.pdf"], "doi": []}, {"id": "84e6669b47670f9f4f49c0085311dce0e178b685", "title": "Face frontalization for Alignment and Recognition", "addresses": [{"name": "Imperial College 
London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University of Twente", "source_name": "University of Twente", "street_adddress": "University of Twente, De Achterhorst;Hallenweg, Enschede, Regio Twente, Overijssel, Nederland, 7522NH, Nederland", "lat": "52.23801390", "lng": "6.85667610", "type": "edu", "country": "Netherlands"}], "year": "2015", "pdf": ["https://arxiv.org/pdf/1502.00852.pdf"], "doi": []}, {"id": "2f7aa942313b1eb12ebfab791af71d0a3830b24c", "title": "Feature-Based Lucas\u2013Kanade and Active Appearance Models", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University of Nottingham", "source_name": "University of Nottingham", "street_adddress": "University of Nottingham, Lenton Abbey, Wollaton, City of Nottingham, East Midlands, England, UK", "lat": "52.93874280", "lng": "-1.20029569", "type": "edu", "country": "United Kingdom"}], "year": "2015", "pdf": ["http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2015feature.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7104116", "http://doi.org/10.1109/TIP.2015.2431445", "https://www.ncbi.nlm.nih.gov/pubmed/25966479", "https://www.wikidata.org/entity/Q47652714"]}, {"id": "1c1a98df3d0d5e2034ea723994bdc85af45934db", "title": "Guided Unsupervised Learning of Mode Specific Models for Facial Point Detection in the Wild", "addresses": [{"name": "University of Nottingham", "source_name": "University of Nottingham", "street_adddress": "University of Nottingham, Lenton Abbey, Wollaton, City of Nottingham, East Midlands, England, UK", "lat": "52.93874280", "lng": "-1.20029569", "type": "edu", "country": "United Kingdom"}], "year": "2013", "pdf": ["http://www.cs.nott.ac.uk/~pszmv/Documents/ICCV-300w_cameraready.pdf", "http://www.cv-foundation.org/openaccess/content_iccv_workshops_2013/W11/papers/Jaiswal_Guided_Unsupervised_Learning_2013_ICCV_paper.pdf", "http://www.researchgate.net/profile/Michel_Valstar/publication/262361649_Guided_Unsupervised_Learning_of_Mode_Specific_Models_for_Facial_Point_Detection_in_the_Wild/links/54006a5b0cf24c81027deadb.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755921", "http://doi.org/10.1109/ICCVW.2013.56"]}, {"id": "f070d739fb812d38571ec77490ccd8777e95ce7a", "title": "Hierarchical facial landmark localization via cascaded random binary patterns", "addresses": [{"name": "Chinese University of Hong Kong", "source_name": "Chinese University of Hong Kong", "street_adddress": "Hong Kong, \u99ac\u6599\u6c34\u6c60\u65c1\u8def", "lat": "22.41626320", "lng": "114.21093180", "type": "edu", "country": "China"}, {"name": "Shenzhen University", "source_name": "Shenzhen University", "street_adddress": "\u6df1\u5733\u5927\u5b66, 3688, \u5357\u6d77\u5927\u9053, \u86c7\u53e3, \u540c\u4e50\u6751, \u5357\u5c71\u533a, \u6df1\u5733\u5e02, \u5e7f\u4e1c\u7701, 518060, \u4e2d\u56fd", "lat": "22.53521465", "lng": "113.93159110", "type": "edu", "country": "China"}], "year": "2015", "pdf": 
["https://zhzhanp.github.io/papers/PR2015.pdf"], "doi": ["http://doi.org/10.1016/j.patcog.2014.09.007", "https://doi.org/10.1016/j.patcog.2014.09.007"]}, {"id": "87e6cb090aecfc6f03a3b00650a5c5f475dfebe1", "title": "Holistically Constrained Local Model: Going Beyond Frontal Poses for Facial Landmark Detection", "addresses": [{"name": "University of Southern California", "source_name": "University of Southern California", "street_adddress": "University of Southern California, Watt Way, Saint James Park, LA, Los Angeles County, California, 90089, USA", "lat": "34.02241490", "lng": "-118.28634407", "type": "edu", "country": "United States"}, {"name": "Carnegie Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/87e6/cb090aecfc6f03a3b00650a5c5f475dfebe1.pdf"], "doi": []}, {"id": "0ea7b7fff090c707684fd4dc13e0a8f39b300a97", "title": "Integrated Face Analytics Networks through Cross-Dataset Hybrid Training", "addresses": [{"name": "National University of Singapore", "source_name": "National University of Singapore", "street_adddress": "NUS, Former 1936 British Outpost, Nepal Hill, Clementi, Southwest, 117542, Singapore", "lat": "1.29620180", "lng": "103.77689944", "type": "edu", "country": "Singapore"}, {"name": "Beijing Institute of Technology", "source_name": "Beijing Institute of Technology University", "street_adddress": "\u5317\u4eac\u7406\u5de5\u5927\u5b66, 5, \u4e2d\u5173\u6751\u5357\u5927\u8857, \u4e2d\u5173\u6751, \u7a3b\u9999\u56ed\u5357\u793e\u533a, \u6d77\u6dc0\u533a, \u5317\u4eac\u5e02, 100872, \u4e2d\u56fd", "lat": "39.95866520", "lng": "116.30971281", "type": "edu", "country": "China"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1711.06055.pdf"], "doi": []}, {"id": "7d7be6172fc2884e1da22d1e96d5899a29831ad2", "title": "L2GSCI: Local to Global Seam Cutting and Integrating for Accurate Face Contour Extraction", "addresses": [{"name": "South China University of China", "source_name": "South China University of China", "street_adddress": "\u534e\u5de5\u7ad9, \u5927\u5b66\u57ce\u4e2d\u73af\u4e1c\u8def, \u5e7f\u5dde\u5927\u5b66\u57ce, \u65b0\u9020, \u756a\u79ba\u533a (Panyu), \u5e7f\u5dde\u5e02, \u5e7f\u4e1c\u7701, 510006, \u4e2d\u56fd", "lat": "23.04900470", "lng": "113.39715710", "type": "edu", "country": "China"}, {"name": "Education University of Hong Kong", "source_name": "The Education University of Hong Kong", "street_adddress": "\u9999\u6e2f\u6559\u80b2\u5927\u5b78 The Education University of Hong Kong, \u9732\u5c4f\u8def Lo Ping Road, \u9cf3\u5712 Fung Yuen, \u4e0b\u5751 Ha Hang, \u65b0\u754c New Territories, HK, DD5 1119, \u4e2d\u56fd", "lat": "22.46935655", "lng": "114.19474194", "type": "edu", "country": "China"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1703.01605.pdf"], "doi": []}, {"id": "d28d32af7ef9889ef9cb877345a90ea85e70f7f1", "title": "Local-Global Landmark Confidences for Face Recognition", "addresses": [{"name": "University of Southern California", "source_name": "University of Southern California", "street_adddress": "University of Southern California, Watt Way, Saint James Park, LA, Los Angeles County, California, 90089, USA", "lat": "34.02241490", "lng": "-118.28634407", "type": "edu", "country": "United States"}, {"name": "Carnegie 
Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}], "year": "2017", "pdf": ["http://multicomp.cs.cmu.edu/wp-content/uploads/2017/10/2017_FG_Kim_Local.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7961805", "http://doi.ieeecomputersociety.org/10.1109/FG.2017.84", "http://doi.org/10.1109/FG.2017.84"]}, {"id": "303a7099c01530fa0beb197eb1305b574168b653", "title": "Occlusion-Free Face Alignment: Deep Regression Networks Coupled with De-Corrupt AutoEncoders", "addresses": [{"name": "Chinese Academy of Sciences", "source_name": "Chinese Academy of Sciences", "street_adddress": "\u4e2d\u56fd\u79d1\u5b66\u9662\u5fc3\u7406\u7814\u7a76\u6240, 16, \u6797\u8403\u8def, \u671d\u9633\u533a / Chaoyang, \u5317\u4eac\u5e02, 100101, \u4e2d\u56fd", "lat": "40.00447950", "lng": "116.37023800", "type": "edu", "country": "China"}, {"name": "University of Chinese Academy of Sciences", "source_name": "University of Chinese Academy of Sciences", "street_adddress": "University of Chinese Academy of Sciences, UCAS, Yuquanlu, \u7389\u6cc9\u8def, \u7530\u6751, \u6d77\u6dc0\u533a, 100049, \u4e2d\u56fd", "lat": "39.90828040", "lng": "116.24585270", "type": "edu", "country": "China"}], "year": "2016", "pdf": ["http://openaccess.thecvf.com/content_cvpr_2016/papers/Zhang_Occlusion-Free_Face_Alignment_CVPR_2016_paper.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Zhang_Occlusion-Free_Face_Alignment_CVPR_2016_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7780742", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2016.373", "http://doi.org/10.1109/CVPR.2016.373"]}, {"id": "1824b1ccace464ba275ccc86619feaa89018c0ad", "title": "One millisecond face alignment with an ensemble of regression trees", "addresses": [{"name": "KTH Royal Institute of Technology, Stockholm", "source_name": "KTH Royal Institute of Technology, Stockholm", "street_adddress": "KTH, Teknikringen, L\u00e4rkstaden, Norra Djurg\u00e5rden, \u00d6stermalms stadsdelsomr\u00e5de, Sthlm, Stockholm, Stockholms l\u00e4n, Svealand, 114 28, Sverige", "lat": "59.34986645", "lng": "18.07063213", "type": "edu", "country": "Sweden"}], "year": "2014", "pdf": ["http://www.csc.kth.se/~vahidk/face/KazemiCVPR14.pdf", "http://www.csc.kth.se/~vahidk/papers/KazemiCVPR14.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Kazemi_One_Millisecond_Face_2014_CVPR_paper.pdf", "http://www.nada.kth.se/~sullivan/Papers/Kazemi_cvpr14.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6909637", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2014.241", "http://doi.org/10.1109/CVPR.2014.241"]}, {"id": "89002a64e96a82486220b1d5c3f060654b24ef2a", "title": "PIEFA: Personalized Incremental and Ensemble Face Alignment", "addresses": [{"name": "University of North Carolina at Charlotte", "source_name": "University of North Carolina at Charlotte", "street_adddress": "Lot 20, Poplar Terrace Drive, Charlotte, Mecklenburg County, North Carolina, 28223, USA", "lat": "35.31034410", "lng": "-80.73261617", "type": "edu", "country": "United States"}], "year": "2015", "pdf": ["http://research.rutgers.edu/~shaoting/paper/ICCV15_face.pdf", 
"http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Peng_PIEFA_Personalized_Incremental_ICCV_2015_paper.pdf", "https://webpages.uncc.edu/~szhang16/paper/ICCV15_face.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7410799", "http://doi.ieeecomputersociety.org/10.1109/ICCV.2015.442", "http://doi.org/10.1109/ICCV.2015.442"]}, {"id": "6d0fe30444c6f4e4db3ad8b02fb2c87e2b33c58d", "title": "Robust Deep Appearance Models", "addresses": [{"name": "Concordia University", "source_name": "Concordia University", "street_adddress": "Concordia University, 2811, Northeast Holman Street, Concordia, Portland, Multnomah County, Oregon, 97211, USA", "lat": "45.57022705", "lng": "-122.63709346", "type": "edu", "country": "United States"}, {"name": "Carnegie Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1607.00659.pdf"], "doi": []}, {"id": "7fcfd72ba6bc14bbb90b31fe14c2c77a8b220ab2", "title": "Robust FEC-CNN: A High Accuracy Facial Landmark Detection System", "addresses": [{"name": "Chinese Academy of Sciences", "source_name": "Chinese Academy of Sciences", "street_adddress": "\u4e2d\u56fd\u79d1\u5b66\u9662\u5fc3\u7406\u7814\u7a76\u6240, 16, \u6797\u8403\u8def, \u671d\u9633\u533a / Chaoyang, \u5317\u4eac\u5e02, 100101, \u4e2d\u56fd", "lat": "40.00447950", "lng": "116.37023800", "type": "edu", "country": "China"}, {"name": "University of Chinese Academy of Sciences", "source_name": "University of Chinese Academy of Sciences", "street_adddress": "University of Chinese Academy of Sciences, UCAS, Yuquanlu, \u7389\u6cc9\u8def, \u7530\u6751, \u6d77\u6dc0\u533a, 100049, \u4e2d\u56fd", "lat": "39.90828040", "lng": "116.24585270", "type": "edu", "country": "China"}], "year": "2017", "pdf": ["http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/He_Robust_FEC-CNN_A_CVPR_2017_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8014989", "http://doi.ieeecomputersociety.org/10.1109/CVPRW.2017.255", "http://doi.org/10.1109/CVPRW.2017.255"]}, {"id": "788a7b59ea72e23ef4f86dc9abb4450efefeca41", "title": "Robust Statistical Face Frontalization", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University of Twente", "source_name": "University of Twente", "street_adddress": "University of Twente, De Achterhorst;Hallenweg, Enschede, Regio Twente, Overijssel, Nederland, 7522NH, Nederland", "lat": "52.23801390", "lng": "6.85667610", "type": "edu", "country": "Netherlands"}], "year": "2015", "pdf": ["http://eprints.eemcs.utwente.nl/26840/01/Pantic_Robust_Statistical_Face_Frontalization.pdf", "http://eprints.mdx.ac.uk/23776/1/C23.pdf", "http://ibug.doc.ic.ac.uk/media/uploads/documents/robust_frontalization.pdf", "http://openaccess.thecvf.com/content_iccv_2015/papers/Sagonas_Robust_Statistical_Face_ICCV_2015_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7410798", "http://doi.ieeecomputersociety.org/10.1109/ICCV.2015.441", 
"http://doi.org/10.1109/ICCV.2015.441"]}, {"id": "7cdf3bc1de6c7948763c0c2dfa4384dcbd3677a0", "title": "Robust Statistical Frontalization of Human and Animal Faces", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University of Twente", "source_name": "University of Twente", "street_adddress": "University of Twente, De Achterhorst;Hallenweg, Enschede, Regio Twente, Overijssel, Nederland, 7522NH, Nederland", "lat": "52.23801390", "lng": "6.85667610", "type": "edu", "country": "Netherlands"}], "year": "2016", "pdf": ["http://eprints.eemcs.utwente.nl/27129/01/sagonas2016robust.pdf", "https://ibug.doc.ic.ac.uk/media/uploads/documents/sagonas2016rsf.pdf"], "doi": ["http://doi.org/10.1007/s11263-016-0920-7", "https://doi.org/10.1007/s11263-016-0920-7"]}, {"id": "04ff69aa20da4eeccdabbe127e3641b8e6502ec0", "title": "Sequential Face Alignment via Person-Specific Modeling in the Wild", "addresses": [{"name": "Rutgers University", "source_name": "Rutgers University", "street_adddress": "Rutgers Cook Campus - North, Biel Road, New Brunswick, Middlesex County, New Jersey, 08901, USA", "lat": "40.47913175", "lng": "-74.43168868", "type": "edu", "country": "United States"}, {"name": "University of Texas at Arlington", "source_name": "University of Texas at Arlington", "street_adddress": "University of Texas at Arlington, South Nedderman Drive, Arlington, Tarrant County, Texas, 76010, USA", "lat": "32.72836830", "lng": "-97.11201835", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["http://www.cv-foundation.org/openaccess/content_cvpr_2016_workshops/w28/papers/Peng_Sequential_Face_Alignment_CVPR_2016_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7789684", "http://doi.ieeecomputersociety.org/10.1109/CVPRW.2016.194", "http://doi.org/10.1109/CVPRW.2016.194"]}, {"id": "c8ca6a2dc41516c16ea0747e9b3b7b1db788dbdd", "title": "Track Facial Points in Unconstrained Videos", "addresses": [{"name": "Rutgers University", "source_name": "Rutgers University", "street_adddress": "Rutgers Cook Campus - North, Biel Road, New Brunswick, Middlesex County, New Jersey, 08901, USA", "lat": "40.47913175", "lng": "-74.43168868", "type": "edu", "country": "United States"}, {"name": "The University of Texas at Arlington", "source_name": "The University of Texas at Arlington", "street_adddress": "701 W Nedderman Dr, Arlington, TX 76019, USA", "lat": "32.72987180", "lng": "-97.11401160", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1609.02825.pdf"], "doi": []}, {"id": "433a6d6d2a3ed8a6502982dccc992f91d665b9b3", "title": "Transferring Landmark Annotations for Cross-Dataset Face Alignment.", "addresses": [{"name": "Chinese University of Hong Kong", "source_name": "Chinese University of Hong Kong", "street_adddress": "Hong Kong, \u99ac\u6599\u6c34\u6c60\u65c1\u8def", "lat": "22.41626320", "lng": "114.21093180", "type": "edu", "country": "China"}, {"name": "Tsinghua University", "source_name": "Tsinghua University", "street_adddress": "\u6e05\u534e\u5927\u5b66, 30, \u53cc\u6e05\u8def, \u4e94\u9053\u53e3, \u540e\u516b\u5bb6, \u6d77\u6dc0\u533a, 100084, \u4e2d\u56fd", "lat": "40.00229045", "lng": "116.32098908", "type": "edu", "country": 
"China"}], "year": "2014", "pdf": ["https://arxiv.org/pdf/1409.0602.pdf"], "doi": []}, {"id": "3bf249f716a384065443abc6172f4bdef88738d9", "title": "A Hybrid Instance-based Transfer Learning Method", "addresses": [{"name": "University of Manitoba", "source_name": "University of Manitoba", "street_adddress": "University of Manitoba, Gillson Street, Normand Park, Saint Vital, Winnipeg, Manitoba, R3T 2N2, Canada", "lat": "49.80915360", "lng": "-97.13304179", "type": "edu", "country": "Canada"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1812.01063.pdf"], "doi": []}, {"id": "afdf9a3464c3b015f040982750f6b41c048706f5", "title": "A Recurrent Encoder-Decoder Network for Sequential Face Alignment", "addresses": [{"name": "Rutgers University", "source_name": "Rutgers University", "street_adddress": "Rutgers Cook Campus - North, Biel Road, New Brunswick, Middlesex County, New Jersey, 08901, USA", "lat": "40.47913175", "lng": "-74.43168868", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1608.05477.pdf"], "doi": []}, {"id": "b4362cd87ad219790800127ddd366cc465606a78", "title": "A Smartphone-Based Automatic Diagnosis System for Facial Nerve Palsy", "addresses": [{"name": "Seoul National University", "source_name": "Seoul National University", "street_adddress": "\uc11c\uc6b8\ub300\ud559\uad50, \uc11c\ud638\ub3d9\ub85c, \uc11c\ub454\ub3d9, \uad8c\uc120\uad6c, \uc218\uc6d0\uc2dc, \uacbd\uae30, 16614, \ub300\ud55c\ubbfc\uad6d", "lat": "37.26728000", "lng": "126.98411510", "type": "edu", "country": "South Korea"}], "year": "2015", "pdf": ["https://pdfs.semanticscholar.org/b436/2cd87ad219790800127ddd366cc465606a78.pdf"], "doi": []}, {"id": "3a54b23cdbd159bb32c39c3adcba8229e3237e56", "title": "Adversarial Attacks on Face Detectors Using Neural Net Based Constrained Optimization", "addresses": [{"name": "University of Toronto", "source_name": "University of Toronto", "street_adddress": "University of Toronto, St. 
George Street, Bloor Street Culture Corridor, Old Toronto, Toronto, Ontario, M5S 1A5, Canada", "lat": "43.66333345", "lng": "-79.39769975", "type": "edu", "country": "Canada"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1805.12302.pdf"], "doi": []}, {"id": "3ac0aefb379dedae4a6054e649e98698b3e5fb82", "title": "An Occluded Stacked Hourglass Approach to Facial Landmark Localization and Occlusion Estimation", "addresses": [{"name": "University of California San Diego", "source_name": "Laboratory for Intelligent and Safe Automobiles, University of California San Diego (UCSD), USA", "street_adddress": "9500 Gilman Dr, La Jolla, CA 92093, USA", "lat": "32.88006040", "lng": "-117.23401350", "type": "edu", "country": "United States"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1802.02137.pdf"], "doi": []}, {"id": "c5ea084531212284ce3f1ca86a6209f0001de9d1", "title": "Audio-visual speech processing for multimedia localisation", "addresses": [{"name": "The University of Leeds", "source_name": "The University of Leeds, United Kingdom", "street_adddress": "Leeds LS2 9JT, UK", "lat": "53.80668150", "lng": "-1.55503280", "type": "edu", "country": "United Kingdom"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/c5ea/084531212284ce3f1ca86a6209f0001de9d1.pdf"], "doi": []}, {"id": "06c2dfe1568266ad99368fc75edf79585e29095f", "title": "Bayesian Active Appearance Models", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2014", "pdf": ["http://ibug.doc.ic.ac.uk/media/uploads/documents/joan_cvpr2014.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6909835", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2014.439", "http://doi.org/10.1109/CVPR.2014.439"]}, {"id": "ccf16bcf458e4d7a37643b8364594656287f5bfc", "title": "Cascade for Landmark Guided Semantic Part Segmentation", "addresses": [{"name": "University of Nottingham", "source_name": "University of Nottingham", "street_adddress": "University of Nottingham, Lenton Abbey, Wollaton, City of Nottingham, East Midlands, England, UK", "lat": "52.93874280", "lng": "-1.20029569", "type": "edu", "country": "United Kingdom"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/ccf1/6bcf458e4d7a37643b8364594656287f5bfc.pdf"], "doi": []}, {"id": "60824ee635777b4ee30fcc2485ef1e103b8e7af9", "title": "Cascaded Collaborative Regression for Robust Facial Landmark Detection Trained Using a Mixture of Synthetic and Real Images With Dynamic Weighting", "addresses": [{"name": "Jiangnan University", "source_name": "Jiangnan University", "street_adddress": "\u6c5f\u5357\u5927\u5b66\u7ad9, \u8821\u6e56\u5927\u9053, \u6ee8\u6e56\u533a, \u5357\u573a\u6751, \u6ee8\u6e56\u533a (Binhu), \u65e0\u9521\u5e02 / Wuxi, \u6c5f\u82cf\u7701, 214121, \u4e2d\u56fd", "lat": "31.48542550", "lng": "120.27395810", "type": "edu", "country": "China"}, {"name": "University of Surrey Guildford", "source_name": "University of Surrey Guildford, UK", "street_adddress": "388 Stag Hill, Guildford GU2 7XH, UK", "lat": "51.24218390", "lng": "-0.59054210", "type": "edu", "country": "United Kingdom"}], "year": "2015", "pdf": ["http://epubs.surrey.ac.uk/808177/1/Feng-TIP-2015.pdf", "http://personal.ee.surrey.ac.uk/Personal/Z.Feng/pdf/IEEETIP2015.pdf", 
"http://www.ee.surrey.ac.uk/CVSSP/Publications/papers/Feng-TIP-2015.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7126999", "http://doi.org/10.1109/TIP.2015.2446944", "https://www.ncbi.nlm.nih.gov/pubmed/26087493", "https://www.wikidata.org/entity/Q40823182"]}, {"id": "4836b084a583d2e794eb6a94982ea30d7990f663", "title": "Cascaded Face Alignment via Intimacy Definition Feature", "addresses": [{"name": "Hong Kong Polytechnic University", "source_name": "Hong Kong Polytechnic University", "street_adddress": "hong kong, 11, \u80b2\u624d\u9053 Yuk Choi Road, \u5c16\u6c99\u5480 Tsim Sha Tsui, \u6cb9\u5c16\u65fa\u5340 Yau Tsim Mong District, \u4e5d\u9f8d Kowloon, HK, 00000, \u4e2d\u56fd", "lat": "22.30457200", "lng": "114.17976285", "type": "edu", "country": "China"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1611.06642.pdf"], "doi": []}, {"id": "72a1852c78b5e95a57efa21c92bdc54219975d8f", "title": "Cascaded regression with sparsified feature covariance matrix for facial landmark detection", "addresses": [{"name": "University of Nottingham", "source_name": "University of Nottingham", "street_adddress": "University of Nottingham, Lenton Abbey, Wollaton, City of Nottingham, East Midlands, England, UK", "lat": "52.93874280", "lng": "-1.20029569", "type": "edu", "country": "United Kingdom"}], "year": "2016", "pdf": ["http://eprints.nottingham.ac.uk/31303/1/prl_blockwise_SDM.pdf", "http://www.cs.nott.ac.uk/~pszmv/Documents/prl_blockwise_SDM.pdf"], "doi": ["http://doi.org/10.1016/j.patrec.2015.11.014"]}, {"id": "4140498e96a5ff3ba816d13daf148fffb9a2be3f", "title": "Constrained Ensemble Initialization for Facial Landmark Tracking in Video", "addresses": [{"name": "Carnegie Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}], "year": "2017", "pdf": ["http://multicomp.cs.cmu.edu/wp-content/uploads/2017/10/2017_FG_Li_Constrained.pdf", "https://www.cl.cam.ac.uk/~tb346/pub/papers/fg2017_ensemble.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7961809", "http://doi.ieeecomputersociety.org/10.1109/FG.2017.88", "http://doi.org/10.1109/FG.2017.88"]}, {"id": "963d0d40de8780161b70d28d2b125b5222e75596", "title": "Convolutional Experts Constrained Local Model for Facial Landmark Detection", "addresses": [{"name": "Carnegie Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1611.08657.pdf"], "doi": []}, {"id": "ee418372b0038bd3b8ae82bd1518d5c01a33a7ec", "title": "CSE 255 Winter 2015 Assignment 1 : Eye Detection using Histogram of Oriented Gradients and Adaboost Classifier", "addresses": [{"name": "University of California, San Diego", "source_name": "University of California, San Diego", "street_adddress": "UCSD, 9500, Gilman Drive, Sixth College, University City, San Diego, San Diego County, California, 92093, USA", "lat": "32.87935255", "lng": "-117.23110049", "type": "edu", "country": "United States"}], "year": "2015", "pdf": 
["https://pdfs.semanticscholar.org/ee41/8372b0038bd3b8ae82bd1518d5c01a33a7ec.pdf"], "doi": []}, {"id": "f27b8b8f2059248f77258cf8595e9434cf0b0228", "title": "Deep Alignment Network: A Convolutional Neural Network for Robust Face Alignment", "addresses": [{"name": "Warsaw University of Technology", "source_name": "Warsaw University of Technology", "street_adddress": "Politechnika Warszawska, 1, Plac Politechniki, VIII, \u015ar\u00f3dmie\u015bcie, Warszawa, mazowieckie, 00-661, RP", "lat": "52.22165395", "lng": "21.00735776", "type": "edu", "country": "Poland"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1706.01789.pdf"], "doi": []}, {"id": "a0b1990dd2b4cd87e4fd60912cc1552c34792770", "title": "Deep Constrained Local Models for Facial Landmark Detection", "addresses": [{"name": "Carnegie Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/a0b1/990dd2b4cd87e4fd60912cc1552c34792770.pdf"], "doi": []}, {"id": "38cbb500823057613494bacd0078aa0e57b30af8", "title": "Deep Face Deblurring", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1704.08772.pdf"], "doi": []}, {"id": "9b8f7a6850d991586b7186f0bb7e424924a9fd74", "title": "Disentangling the Modes of Variation in Unlabelled Data", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2018", "pdf": ["https://ibug.doc.ic.ac.uk/media/uploads/documents/disentangling-modes-variation.pdf", "https://www.doc.ic.ac.uk/~ipanagak/pub/jour/J14.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8214214", "http://doi.ieeecomputersociety.org/10.1109/TPAMI.2017.2783940", "http://doi.org/10.1109/TPAMI.2017.2783940", "https://www.ncbi.nlm.nih.gov/pubmed/29990016"]}, {"id": "b29b42f7ab8d25d244bfc1413a8d608cbdc51855", "title": "Effective face landmark localization via single deep network", "addresses": [{"name": "Sichuan University, Chengdu", "source_name": "Sichuan Univ., Chengdu", "street_adddress": "\u56db\u5ddd\u5927\u5b66\uff08\u534e\u897f\u6821\u533a\uff09, \u6821\u4e1c\u8def, \u6b66\u4faf\u533a, \u6b66\u4faf\u533a (Wuhou), \u6210\u90fd\u5e02 / Chengdu, \u56db\u5ddd\u7701, 610014, \u4e2d\u56fd", "lat": "30.64276900", "lng": "104.06751175", "type": "edu", "country": "China"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1702.02719.pdf"], "doi": []}, {"id": "4cfa8755fe23a8a0b19909fa4dec54ce6c1bd2f7", "title": "Efficient likelihood Bayesian constrained local model", "addresses": [{"name": "Hong Kong Polytechnic University", "source_name": "Hong Kong Polytechnic University", "street_adddress": "hong kong, 11, \u80b2\u624d\u9053 Yuk Choi Road, \u5c16\u6c99\u5480 Tsim Sha Tsui, 
\u6cb9\u5c16\u65fa\u5340 Yau Tsim Mong District, \u4e5d\u9f8d Kowloon, HK, 00000, \u4e2d\u56fd", "lat": "22.30457200", "lng": "114.17976285", "type": "edu", "country": "China"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1611.09956.pdf"], "doi": []}, {"id": "5c820e47981d21c9dddde8d2f8020146e600368f", "title": "Extended Supervised Descent Method for Robust Face Alignment", "addresses": [{"name": "Beijing University of Posts and Telecommunications", "source_name": "Beijing University of Posts and Telecommunications", "street_adddress": "\u5317\u4eac\u90ae\u7535\u5927\u5b66, \u897f\u571f\u57ce\u8def, \u6d77\u6dc0\u533a, \u5317\u4eac\u5e02, 100082, \u4e2d\u56fd", "lat": "39.96014880", "lng": "116.35193921", "type": "edu", "country": "China"}], "year": "2014", "pdf": ["https://pdfs.semanticscholar.org/5c82/0e47981d21c9dddde8d2f8020146e600368f.pdf"], "doi": []}, {"id": "f633d6dc02b2e55eb24b89f2b8c6df94a2de86dd", "title": "Face alignment by robust discriminative Hough voting", "addresses": [{"name": "Nanjing University", "source_name": "Nanjing University", "street_adddress": "NJU, \u4e09\u6c5f\u8def, \u9f13\u697c\u533a, \u5357\u4eac\u5e02, \u6c5f\u82cf\u7701, 210093, \u4e2d\u56fd", "lat": "32.05659570", "lng": "118.77408833", "type": "edu", "country": "China"}], "year": "2016", "pdf": ["http://parnec.nuaa.edu.cn/pubs/xiaoyang%20tan/journal/2016/JXPR-2016.pdf", "http://parnec.nuaa.edu.cn/xtan/paper/x-jin-pr.pdf"], "doi": ["http://doi.org/10.1016/j.patcog.2016.05.017"]}, {"id": "f0ae807627f81acb63eb5837c75a1e895a92c376", "title": "Facial Landmark Detection using Ensemble of Cascaded Regressions", "addresses": [{"name": "Technical University", "source_name": "Faculty of Computer Science, Technical University, Cluj-Napoca, Romania", "street_adddress": "Strada George Bari\u021biu 26-28, Cluj-Napoca 400027, Romania", "lat": "46.77235810", "lng": "23.58520750", "type": "edu", "country": "Romania"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/f0ae/807627f81acb63eb5837c75a1e895a92c376.pdf"], "doi": []}, {"id": "37c8514df89337f34421dc27b86d0eb45b660a5e", "title": "Facial Landmark Tracking by Tree-Based Deformable Part Model Based Detector", "addresses": [{"name": "Czech Technical University", "source_name": "Czech Technical University", "street_adddress": "\u010cesk\u00e9 vysok\u00e9 u\u010den\u00ed technick\u00e9 v Praze, Resslova, Nov\u00e9 M\u011bsto, Praha, okres Hlavn\u00ed m\u011bsto Praha, Hlavn\u00ed m\u011bsto Praha, Praha, 11121, \u010cesko", "lat": "50.07642960", "lng": "14.41802312", "type": "edu", "country": "Czech Republic"}], "year": "2015", "pdf": ["http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Uricar_Facial_Landmark_Tracking_ICCV_2015_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7406476", "http://doi.ieeecomputersociety.org/10.1109/ICCVW.2015.127", "http://doi.org/10.1109/ICCVW.2015.127"]}, {"id": "5b0bf1063b694e4b1575bb428edb4f3451d9bf04", "title": "Facial Shape Tracking via Spatio-Temporal Cascade Shape Regression", "addresses": [{"name": "Nanjing University", "source_name": "Nanjing University", "street_adddress": "NJU, \u4e09\u6c5f\u8def, \u9f13\u697c\u533a, \u5357\u4eac\u5e02, \u6c5f\u82cf\u7701, 210093, \u4e2d\u56fd", "lat": "32.05659570", "lng": "118.77408833", "type": "edu", "country": "China"}], "year": "2015", "pdf": ["http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Yang_Facial_Shape_Tracking_ICCV_2015_paper.pdf"], "doi": 
["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7406480", "http://doi.ieeecomputersociety.org/10.1109/ICCVW.2015.131", "http://doi.org/10.1109/ICCVW.2015.131"]}, {"id": "a66d89357ada66d98d242c124e1e8d96ac9b37a0", "title": "Failure Detection for Facial Landmark Detectors", "addresses": [{"name": "ETH Zurich", "source_name": "ETH Zurich", "street_adddress": "R\u00e4mistrasse 101, 8092 Z\u00fcrich, Switzerland", "lat": "47.37631300", "lng": "8.54766990", "type": "edu", "country": "Switzerland"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1608.06451.pdf"], "doi": []}, {"id": "f1b4583c576d6d8c661b4b2c82bdebf3ba3d7e53", "title": "Faster than Real-Time Facial Alignment: A 3D Spatial Transformer Network Approach in Unconstrained Poses", "addresses": [{"name": "Carnegie Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1707.05653.pdf"], "doi": []}, {"id": "70a69569ba61f3585cd90c70ca5832e838fa1584", "title": "Friendly Faces: Weakly Supervised Character Identification", "addresses": [{"name": "University of Surrey", "source_name": "University of Surrey", "street_adddress": "University of Surrey, Spine Road, Guildford Park, Guildford, Surrey, South East, England, GU2 7XH, UK", "lat": "51.24303255", "lng": "-0.59001382", "type": "edu", "country": "United Kingdom"}], "year": "2014", "pdf": ["https://pdfs.semanticscholar.org/70a6/9569ba61f3585cd90c70ca5832e838fa1584.pdf"], "doi": []}, {"id": "f0a4a3fb6997334511d7b8fc090f9ce894679faf", "title": "Generative Face Completion", "addresses": [{"name": "University of California, Merced", "source_name": "University of California, Merced", "street_adddress": "University of California, Merced, Ansel Adams Road, Merced County, California, USA", "lat": "37.36566745", "lng": "-120.42158888", "type": "edu", "country": "United States"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1704.05838.pdf"], "doi": []}, {"id": "ba21fd28003994480f713b0a1276160fea2e89b5", "title": "Identification of Individuals from Ears in Real World Conditions", "addresses": [{"name": "University of South Florida", "source_name": "University of South Florida", "street_adddress": "University of South Florida, Leroy Collins Boulevard, Tampa, Hillsborough County, Florida, 33620, USA", "lat": "28.05999990", "lng": "-82.41383619", "type": "edu", "country": "United States"}], "year": "2018", "pdf": ["https://pdfs.semanticscholar.org/ba21/fd28003994480f713b0a1276160fea2e89b5.pdf"], "doi": []}, {"id": "a40edf6eb979d1ddfe5894fac7f2cf199519669f", "title": "Improving Facial Attribute Prediction Using Semantic Segmentation", "addresses": [{"name": "University of Central Florida", "source_name": "University of Central Florida", "street_adddress": "University of Central Florida, Libra Drive, University Park, Orange County, Florida, 32816, USA", "lat": "28.59899755", "lng": "-81.19712501", "type": "edu", "country": "United States"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1704.08740.pdf"], "doi": []}, {"id": "e6178de1ef15a6a973aad2791ce5fbabc2cb8ae5", "title": "Improving Facial Landmark Detection via a Super-Resolution Inception Network", "addresses": [{"name": "Technical University of Munich", "source_name": "Computer Aided Medical Procedures, Technical University of Munich, 
Garching, Germany", "street_adddress": "Boltzmannstra\u00dfe 3, 85748 Garching bei M\u00fcnchen, Germany", "lat": "48.26301100", "lng": "11.66685700", "type": "edu", "country": "Germany"}], "year": "2017", "pdf": ["https://pdfs.semanticscholar.org/e617/8de1ef15a6a973aad2791ce5fbabc2cb8ae5.pdf"], "doi": []}, {"id": "9ca0626366e136dac6bfd628cec158e26ed959c7", "title": "In-the-wild Facial Expression Recognition in Extreme Poses", "addresses": [{"name": "University of Nottingham", "source_name": "University of Nottingham", "street_adddress": "University of Nottingham, Lenton Abbey, Wollaton, City of Nottingham, East Midlands, England, UK", "lat": "52.93874280", "lng": "-1.20029569", "type": "edu", "country": "United Kingdom"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1811.02194.pdf"], "doi": []}, {"id": "500b92578e4deff98ce20e6017124e6d2053b451", "title": "Incremental Face Alignment in the Wild", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University of Twente", "source_name": "University of Twente", "street_adddress": "University of Twente, De Achterhorst;Hallenweg, Enschede, Regio Twente, Overijssel, Nederland, 7522NH, Nederland", "lat": "52.23801390", "lng": "6.85667610", "type": "edu", "country": "Netherlands"}], "year": "2014", "pdf": ["http://eprints.eemcs.utwente.nl/25818/01/Pantic_Incremental_Face_Alignment_in_the_Wild.pdf", "http://ibug.doc.ic.ac.uk/media/uploads/aasthanacvpr2014.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Asthana_Incremental_Face_Alignment_2014_CVPR_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6909636", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2014.240", "http://doi.org/10.1109/CVPR.2014.240"]}, {"id": "8dd162c9419d29564e9777dd523382a20c683d89", "title": "Interlinked Convolutional Neural Networks for Face Parsing", "addresses": [{"name": "Tsinghua University", "source_name": "Tsinghua University", "street_adddress": "\u6e05\u534e\u5927\u5b66, 30, \u53cc\u6e05\u8def, \u4e94\u9053\u53e3, \u540e\u516b\u5bb6, \u6d77\u6dc0\u533a, 100084, \u4e2d\u56fd", "lat": "40.00229045", "lng": "116.32098908", "type": "edu", "country": "China"}], "year": "2015", "pdf": ["https://arxiv.org/pdf/1806.02479.pdf"], "doi": []}, {"id": "2c14c3bb46275da5706c466f9f51f4424ffda914", "title": "L2, 1-based regression and prediction accumulation across views for robust facial landmark detection", "addresses": [{"name": "University of Nottingham", "source_name": "University of Nottingham", "street_adddress": "University of Nottingham, Lenton Abbey, Wollaton, City of Nottingham, East Midlands, England, UK", "lat": "52.93874280", "lng": "-1.20029569", "type": "edu", "country": "United Kingdom"}], "year": "2016", "pdf": ["http://braismartinez.com/media/documents/2015ivc_-_l21-based_regression_and_prediction_accumulation_across_views_for_robust_facial_landmark_detection.pdf", "http://www.cs.nott.ac.uk/~pszmv/Documents/2015IVC_L21.pdf"], "doi": ["http://doi.org/10.1016/j.imavis.2015.09.003"]}, {"id": "c00f402b9cfc3f8dd2c74d6b3552acbd1f358301", "title": "Learning deep representation from coarse to fine for face alignment", "addresses": [{"name": "Shanghai Jiao Tong University", "source_name": "Shanghai Jiao Tong University", 
"street_adddress": "\u4e0a\u6d77\u4ea4\u901a\u5927\u5b66\uff08\u5f90\u6c47\u6821\u533a\uff09, \u6dee\u6d77\u897f\u8def, \u756a\u79ba\u5c0f\u533a, \u5e73\u9634\u6865, \u5f90\u6c47\u533a, \u4e0a\u6d77\u5e02, 200052, \u4e2d\u56fd", "lat": "31.20081505", "lng": "121.42840681", "type": "edu", "country": "China"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1608.00207.pdf"], "doi": []}, {"id": "b5f79df712ad535d88ae784a617a30c02e0551ca", "title": "Locating Facial Landmarks Using Probabilistic Random Forest", "addresses": [{"name": "University of Science and Technology of China", "source_name": "University of Science and Technology of China", "street_adddress": "\u4e2d\u56fd\u79d1\u5b66\u6280\u672f\u5927\u5b66 \u4e1c\u6821\u533a, 96\u53f7, \u91d1\u5be8\u8def, \u6c5f\u6dee\u5316\u80a5\u5382\u5c0f\u533a, \u829c\u6e56\u8def\u8857\u9053, \u5408\u80a5\u5e02\u533a, \u5408\u80a5\u5e02, \u5b89\u5fbd\u7701, 230026, \u4e2d\u56fd", "lat": "31.83907195", "lng": "117.26420748", "type": "edu", "country": "China"}], "year": "2015", "pdf": ["http://staff.ustc.edu.cn/~juyong/Papers/FaceAlignment-2015.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273853", "http://doi.org/10.1109/LSP.2015.2480758"]}, {"id": "0bc53b338c52fc635687b7a6c1e7c2b7191f42e5", "title": "Loglet SIFT for Part Description in Deformable Part Models: Application to Face Alignment", "addresses": [{"name": "University of Warwick", "source_name": "University of Warwick", "street_adddress": "University of Warwick, University Road, Kirby Corner, Cannon Park, Coventry, West Midlands Combined Authority, West Midlands, England, CV4 7AL, UK", "lat": "52.37931310", "lng": "-1.56042520", "type": "edu", "country": "United Kingdom"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/a32a/8d6d4c3b4d69544763be48ffa7cb0d7f2f23.pdf"], "doi": []}, {"id": "6fd4048bfe3123e94c2648e53a56bc6bf8ff4cdd", "title": "Micro-facial movement detection using spatio-temporal features", "addresses": [{"name": "Manchester Metropolitan University", "source_name": "Mathematics and Digital Technology, Manchester Metropolitan University, Chester Street, Manchester, M1 5GD, UK", "street_adddress": "John Dalton Building, Manchester Metropolitan University, Chester St, Manchester M1 5GD, United Kingdom", "lat": "53.47173060", "lng": "-2.23992390", "type": "edu", "country": "United Kingdom"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/6fd4/048bfe3123e94c2648e53a56bc6bf8ff4cdd.pdf"], "doi": []}, {"id": "0f81b0fa8df5bf3fcfa10f20120540342a0c92e5", "title": "Mirror, mirror on the wall, tell me, is the error small?", "addresses": [{"name": "Queen Mary University of London", "source_name": "Queen Mary University of London", "street_adddress": "Queen Mary (University of London), Mile End Road, Globe Town, Mile End, London Borough of Tower Hamlets, London, Greater London, England, E1 4NS, UK", "lat": "51.52472720", "lng": "-0.03931035", "type": "edu", "country": "United Kingdom"}], "year": "2015", "pdf": ["https://arxiv.org/pdf/1501.05152.pdf"], "doi": []}, {"id": "36e8ef2e5d52a78dddf0002e03918b101dcdb326", "title": "Multiview Active Shape Models with SIFT Descriptors for the 300-W Face Landmark Challenge", "addresses": [{"name": "University of Cape Town", "source_name": "University of Cape Town", "street_adddress": "University of Cape Town, Engineering Mall, Cape Town Ward 59, Cape Town, City of Cape Town, Western Cape, CAPE TOWN, South Africa", "lat": "-33.95828745", "lng": "18.45997349", "type": "edu", "country": "South Africa"}], "year": 
"2013", "pdf": ["http://www.milbo.org/stasm-files/multiview-active-shape-models-with-sift-for-300w.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755922"]}, {"id": "bbc5f4052674278c96abe7ff9dc2d75071b6e3f3", "title": "Nonlinear Hierarchical Part-Based Regression for Unconstrained Face Alignment", "addresses": [{"name": "State University of New Jersey", "source_name": "The State University of New Jersey", "street_adddress": "Rutgers New Brunswick: Livingston Campus, Joyce Kilmer Avenue, Piscataway Township, Middlesex County, New Jersey, 08854, USA", "lat": "40.51865195", "lng": "-74.44099801", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/287b/7baff99d6995fd5852002488eb44659be6c1.pdf"], "doi": []}, {"id": "bd13f50b8997d0733169ceba39b6eb1bda3eb1aa", "title": "Occlusion Coherence: Detecting and Localizing Occluded Faces", "addresses": [{"name": "University of California at Irvine", "source_name": "Computational Vision Group, University of California at Irvine, Irvine, CA, USA", "street_adddress": "Irvine, CA 92697, USA", "lat": "33.64049520", "lng": "-117.84429620", "type": "edu", "country": "United States"}], "year": "2015", "pdf": ["https://arxiv.org/pdf/1506.08347.pdf"], "doi": []}, {"id": "65126e0b1161fc8212643b8ff39c1d71d262fbc1", "title": "Occlusion Coherence: Localizing Occluded Faces with a Hierarchical Deformable Part Model", "addresses": [{"name": "University of California Irvine", "source_name": "University of California Irvine, Irvine", "street_adddress": "Irvine, CA 92697, USA", "lat": "33.64049520", "lng": "-117.84429620", "type": "edu", "country": "United States"}], "year": "2014", "pdf": ["http://vision.ics.uci.edu/papers/GhiasiF_CVPR_2014/GhiasiF_CVPR_2014.pdf", "http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Ghiasi_Occlusion_Coherence_Localizing_2014_CVPR_paper.pdf", "http://www.ics.uci.edu/~gghiasi/papers/gf-cvpr14-poster.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6909641", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2014.306", "http://doi.org/10.1109/CVPR.2014.306"]}, {"id": "4a8480d58c30dc484bda08969e754cd13a64faa1", "title": "Offline Deformable Face Tracking in Arbitrary Videos", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2015", "pdf": ["http://ibug.doc.ic.ac.uk/media/uploads/documents/paper_offline.pdf", "http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Chrysos_Offline_Deformable_Face_ICCV_2015_paper.pdf", "https://ibug.doc.ic.ac.uk/media/uploads/documents/paper_offline.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7406475", "http://doi.ieeecomputersociety.org/10.1109/ICCVW.2015.126", "http://doi.org/10.1109/ICCVW.2015.126"]}, {"id": "7d1688ce0b48096e05a66ead80e9270260cb8082", "title": "Real vs. 
Fake Emotion Challenge: Learning to Rank Authenticity from Facial Activity Descriptors", "addresses": [{"name": "Otto von Guericke University", "source_name": "Otto von Guericke University", "street_adddress": "Otto-von-Guericke-Universit\u00e4t Magdeburg, 2, Universit\u00e4tsplatz, Kr\u00f6kentorviertel/Breiter Weg NA, Alte Neustadt, Magdeburg, Sachsen-Anhalt, 39106, Deutschland", "lat": "52.14005065", "lng": "11.64471248", "type": "edu", "country": "Germany"}], "year": "2017", "pdf": ["http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w44/Saxen_Real_vs._Fake_ICCV_2017_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8265574", "http://doi.ieeecomputersociety.org/10.1109/ICCVW.2017.363", "http://doi.org/10.1109/ICCVW.2017.363"]}, {"id": "3c6cac7ecf546556d7c6050f7b693a99cc8a57b3", "title": "Robust facial landmark detection in the wild", "addresses": [{"name": "University of Surrey", "source_name": "University of Surrey", "street_adddress": "University of Surrey, Spine Road, Guildford Park, Guildford, Surrey, South East, England, GU2 7XH, UK", "lat": "51.24303255", "lng": "-0.59001382", "type": "edu", "country": "United Kingdom"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/3c6c/ac7ecf546556d7c6050f7b693a99cc8a57b3.pdf"], "doi": []}, {"id": "4a04d4176f231683fd68ccf0c76fcc0c44d05281", "title": "Simultaneous Cascaded Regression", "addresses": [{"name": "Institute of Systems and Robotics", "source_name": "Institute of Systems and Robotics", "street_adddress": "Institut f\u00fcr Robotik und Kognitive Systeme, 160, Ratzeburger Allee, Strecknitz, Sankt J\u00fcrgen, Strecknitz, L\u00fcbeck, Schleswig-Holstein, 23562, Deutschland", "lat": "53.83383710", "lng": "10.70359390", "type": "edu", "country": "Germany"}], "year": "2018", "pdf": ["http://home.isr.uc.pt/~pedromartins/Publications/pmartins_icip2018.pdf", "http://home.isr.uc.pt/~pedromartins/Publications/pmartins_icip2018_slides.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8451313", "http://doi.org/10.1109/ICIP.2018.8451313"]}, {"id": "11fc332bdcc843aad7475bb4566e73a957dffda5", "title": "SPG-Net: Segmentation Prediction and Guidance Network for Image Inpainting", "addresses": [{"name": "University of Southern California", "source_name": "University of Southern California", "street_adddress": "University of Southern California, Watt Way, Saint James Park, LA, Los Angeles County, California, 90089, USA", "lat": "34.02241490", "lng": "-118.28634407", "type": "edu", "country": "United States"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1805.03356.pdf"], "doi": []}, {"id": "d140c5add2cddd4a572f07358d666fe00e8f4fe1", "title": "Statistically Learned Deformable Eye Models", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2014", "pdf": ["https://pdfs.semanticscholar.org/d140/c5add2cddd4a572f07358d666fe00e8f4fe1.pdf"], "doi": []}, {"id": "77875d6e4d8c7ed3baeb259fd5696e921f59d7ad", "title": "Style Aggregated Network for Facial Landmark Detection", "addresses": [{"name": "University of Technology Sydney", "source_name": "University of Technology Sydney", "street_adddress": "University of Technology Sydney, Omnibus Lane, Ultimo, Sydney, NSW, 2007, Australia", "lat": 
"-33.88096510", "lng": "151.20107299", "type": "edu", "country": "Australia"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1803.04108.pdf"], "doi": []}, {"id": "d32b155138dafd0a9099980eceec6081ab51b861", "title": "Super-realtime facial landmark detection and shape fitting by deep regression of shape model parameters", "addresses": [{"name": "RWTH Aachen University", "source_name": "RWTH Aachen University", "street_adddress": "RWTH Aachen, Mies-van-der-Rohe-Stra\u00dfe, K\u00f6nigsh\u00fcgel, Aachen-Mitte, Aachen, St\u00e4dteregion Aachen, Regierungsbezirk K\u00f6ln, Nordrhein-Westfalen, 52074, Deutschland", "lat": "50.77917030", "lng": "6.06728733", "type": "edu", "country": "Germany"}], "year": "2019", "pdf": ["https://arxiv.org/pdf/1902.03459.pdf"], "doi": []}, {"id": "59d8fa6fd91cdb72cd0fa74c04016d79ef5a752b", "title": "The Menpo Facial Landmark Localisation Challenge: A Step Towards the Solution", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2017", "pdf": ["http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/Zafeiriou_The_Menpo_Facial_CVPR_2017_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8014997", "http://doi.ieeecomputersociety.org/10.1109/CVPRW.2017.263", "http://doi.org/10.1109/CVPRW.2017.263"]}, {"id": "995d55fdf5b6fe7fb630c93a424700d4bc566104", "title": "The One Triangle Three Parallelograms Sampling Strategy and Its Application in Shape Regression", "addresses": [{"name": "Lund University", "source_name": "Lund University", "street_adddress": "TEM at Lund University, 9, Klostergatan, Stadsk\u00e4rnan, Centrum, Lund, Sk\u00e5ne, G\u00f6taland, 22222, Sverige", "lat": "55.70395710", "lng": "13.19020110", "type": "edu", "country": "Sweden"}], "year": "2015", "pdf": ["http://openaccess.thecvf.com/content_iccv_2015/papers/Nilsson_The_One_Triangle_ICCV_2015_paper.pdf", "http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Nilsson_The_One_Triangle_ICCV_2015_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7410537", "http://doi.ieeecomputersociety.org/10.1109/ICCV.2015.180", "http://doi.org/10.1109/ICCV.2015.180"]}, {"id": "671bfefb22d2044ab3e4402703bb88a10a7da78a", "title": "Triple consistency loss for pairing distributions in GAN-based face synthesis.", "addresses": [{"name": "University of Nottingham", "source_name": "University of Nottingham", "street_adddress": "University of Nottingham, Lenton Abbey, Wollaton, City of Nottingham, East Midlands, England, UK", "lat": "52.93874280", "lng": "-1.20029569", "type": "edu", "country": "United Kingdom"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1811.03492.pdf"], "doi": []}, {"id": "5c124b57699be19cd4eb4e1da285b4a8c84fc80d", "title": "Unified Face Analysis by Iterative Multi-output Random Forests", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2014", "pdf": 
["http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Zhao_Unified_Face_Analysis_2014_CVPR_paper.pdf", "http://www.iis.ee.ic.ac.uk/icvl/doc/cvpr14_xiaowei.pdf", "https://labicvl.github.io/docs/pubs/Xiaowei_CVPR_2014.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6909624", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2014.228", "http://doi.org/10.1109/CVPR.2014.228"]}, {"id": "891b10c4b3b92ca30c9b93170ec9abd71f6099c4", "title": "2 New Statement for Structured Output Regression Problems", "addresses": [{"name": "INSA Rouen, France", "source_name": "Laboratoire d'Informatique, de Traitement de l'Information et des Systemes, INSA Rouen, Avenue de l'Universite, 76800, Saint-Etienne-du-Rouvray, France", "street_adddress": "685 Avenue de l'Universit\u00e9, 76800 Saint-\u00c9tienne-du-Rouvray, France", "lat": "49.38497570", "lng": "1.06832570", "type": "edu", "country": "France"}, {"name": "Rouen University", "source_name": "LITIS Laboratory, Rouen University, Rouen, France", "street_adddress": "1 Rue Thomas Becket, 76130 Mont-Saint-Aignan, France", "lat": "49.45830470", "lng": "1.06888920", "type": "edu", "country": "France"}], "year": "2015", "pdf": ["https://pdfs.semanticscholar.org/891b/10c4b3b92ca30c9b93170ec9abd71f6099c4.pdf"], "doi": []}, {"id": "e4754afaa15b1b53e70743880484b8d0736990ff", "title": "300 Faces In-The-Wild Challenge: database and results", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University of Nottingham", "source_name": "University of Nottingham", "street_adddress": "University of Nottingham, Lenton Abbey, Wollaton, City of Nottingham, East Midlands, England, UK", "lat": "52.93874280", "lng": "-1.20029569", "type": "edu", "country": "United Kingdom"}, {"name": "University of Twente", "source_name": "University of Twente", "street_adddress": "University of Twente, De Achterhorst;Hallenweg, Enschede, Regio Twente, Overijssel, Nederland, 7522NH, Nederland", "lat": "52.23801390", "lng": "6.85667610", "type": "edu", "country": "Netherlands"}], "year": "2016", "pdf": ["http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885616000147-main.pdf", "https://spiral.imperial.ac.uk:8443/bitstream/10044/1/32322/2/300w.pdf"], "doi": ["https://doi.org/10.1016/j.imavis.2016.01.002"]}, {"id": "303065c44cf847849d04da16b8b1d9a120cef73a", "title": "3D Face Morphable Models \"In-the-Wild\"", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1701.05360.pdf"], "doi": []}, {"id": "2e3d081c8f0e10f138314c4d2c11064a981c1327", "title": "A Comprehensive Performance Evaluation of Deformable Face Tracking \u201cIn-the-Wild\u201d", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", 
"type": "edu", "country": "United Kingdom"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1603.06015.pdf"], "doi": []}, {"id": "6e38011e38a1c893b90a48e8f8eae0e22d2008e8", "title": "A Computer Vision Based Approach for Understanding Emotional Involvements in Children with Autism Spectrum Disorders", "addresses": [{"name": "National Research Council of Italy, Institute of Applied Sciences and Intelligent Systems, Lecce, Italy", "source_name": "National Research Council of Italy, Institute of Applied Sciences and Intelligent Systems, Lecce, Italy", "street_adddress": "73100 Lecce, Province of Lecce, Italy", "lat": "40.35151550", "lng": "18.17501610", "type": "edu", "country": "Italy"}], "year": "2017", "pdf": ["http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w22/Del_Coco_A_Computer_Vision_ICCV_2017_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8265376", "http://doi.ieeecomputersociety.org/10.1109/ICCVW.2017.166", "http://doi.org/10.1109/ICCVW.2017.166"]}, {"id": "131e395c94999c55c53afead65d81be61cd349a4", "title": "A Functional Regression Approach to Facial Landmark Tracking", "addresses": [{"name": "Carnegie Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}, {"name": "University of Nottingham", "source_name": "University of Nottingham", "street_adddress": "University of Nottingham, Lenton Abbey, Wollaton, City of Nottingham, East Midlands, England, UK", "lat": "52.93874280", "lng": "-1.20029569", "type": "edu", "country": "United Kingdom"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1612.02203.pdf"], "doi": []}, {"id": "2df4d05119fe3fbf1f8112b3ad901c33728b498a", "title": "A regularization scheme for structured output problems : an application to facial landmark detection", "addresses": [{"name": "Normandie University", "source_name": "Normandie Univ, INSA Rouen, LITIS, 76000 Rouen, France", "street_adddress": "1 Rue Thomas Becket, 76130 Mont-Saint-Aignan, France", "lat": "49.45830470", "lng": "1.06888920", "type": "edu", "country": "France"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/2df4/d05119fe3fbf1f8112b3ad901c33728b498a.pdf"], "doi": []}, {"id": "9993f1a7cfb5b0078f339b9a6bfa341da76a3168", "title": "A Simple, Fast and Highly-Accurate Algorithm to Recover 3D Shape from 2D Landmarks on a Single Image", "addresses": [{"name": "Ohio State University", "source_name": "The Ohio State University", "street_adddress": "The Ohio State University, Woody Hayes Drive, Columbus, Franklin County, Ohio, 43210, USA", "lat": "40.00471095", "lng": "-83.02859368", "type": "edu", "country": "United States"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1609.09058.pdf"], "doi": []}, {"id": "5f5906168235613c81ad2129e2431a0e5ef2b6e4", "title": "A Unified Framework for Compositional Fitting of Active Appearance Models", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1601.00199.pdf"], "doi": []}, {"id": 
"0b0958493e43ca9c131315bcfb9a171d52ecbb8a", "title": "A Unified Neural Based Model for Structured Output Problems", "addresses": [{"name": "Rouen University", "source_name": "LITIS Laboratory, Rouen University, Rouen, France", "street_adddress": "1 Rue Thomas Becket, 76130 Mont-Saint-Aignan, France", "lat": "49.45830470", "lng": "1.06888920", "type": "edu", "country": "France"}], "year": "2015", "pdf": ["https://pdfs.semanticscholar.org/0b09/58493e43ca9c131315bcfb9a171d52ecbb8a.pdf"], "doi": []}, {"id": "b730908bc1f80b711c031f3ea459e4de09a3d324", "title": "Active Orientation Models for Face Alignment In-the-Wild", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University of Lincoln", "source_name": "University of Lincoln", "street_adddress": "University of Lincoln, Brayford Way, Whitton Park, New Boultham, Lincoln, Lincolnshire, East Midlands, England, LN6 7TS, UK", "lat": "53.22853665", "lng": "-0.54873472", "type": "edu", "country": "United Kingdom"}], "year": "2014", "pdf": ["http://ibug.doc.ic.ac.uk/media/uploads/documents/tifs_aoms.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6914605", "http://doi.org/10.1109/TIFS.2014.2361018"]}, {"id": "293ade202109c7f23637589a637bdaed06dc37c9", "title": "Adaptive cascaded regression", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University of Oulu", "source_name": "University of Oulu", "street_adddress": "Oulun yliopisto, Biologintie, Linnanmaa, Oulu, Oulun seutukunta, Pohjois-Pohjanmaa, Pohjois-Suomen aluehallintovirasto, Pohjois-Suomi, Manner-Suomi, 90540, Suomi", "lat": "65.05921570", "lng": "25.46632601", "type": "edu", "country": "Finland"}], "year": "2016", "pdf": ["http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2016adaptive.pdf", "http://ibug.doc.ic.ac.uk/media/uploads/sup/antonakos2016adaptive_supp.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7532638", "http://doi.org/10.1109/ICIP.2016.7532638"]}, {"id": "45e7ddd5248977ba8ec61be111db912a4387d62f", "title": "Adversarial Learning of Structure-Aware Fully Convolutional Networks for Landmark Localization", "addresses": [{"name": "Adelaide University", "source_name": "Adelaide University, Australia", "street_adddress": "Adelaide SA 5005, Australia", "lat": "-34.92060300", "lng": "138.60622770", "type": "edu", "country": "Australia"}, {"name": "Nanjing University", "source_name": "Nanjing University", "street_adddress": "NJU, \u4e09\u6c5f\u8def, \u9f13\u697c\u533a, \u5357\u4eac\u5e02, \u6c5f\u82cf\u7701, 210093, \u4e2d\u56fd", "lat": "32.05659570", "lng": "118.77408833", "type": "edu", "country": "China"}, {"name": "Nanjing University of Science & Technology", "source_name": "Nanjing University of Science & Technology, Nanjing, People\u2019s Republic of China", "street_adddress": "China, Jiangsu, Nanjing, Xuanwu, \u4e2d\u5c71\u95e8\u5916\u5927\u8857", "lat": "32.03522500", "lng": "118.85531700", "type": "edu", "country": "China"}], 
"year": "2017", "pdf": ["https://arxiv.org/pdf/1711.00253.pdf"], "doi": []}, {"id": "3504907a2e3c81d78e9dfe71c93ac145b1318f9c", "title": "An End-to-End System for Unconstrained Face Verification with Deep Convolutional Neural Networks", "addresses": [{"name": "University of Maryland College Park", "source_name": "University of Maryland College Park", "street_adddress": "University of Maryland, College Park, Farm Drive, Acredale, College Park, Prince George's County, Maryland, 20742, USA", "lat": "38.99203005", "lng": "-76.94610290", "type": "edu", "country": "United States"}], "year": "2015", "pdf": ["https://arxiv.org/pdf/1605.02686.pdf"], "doi": []}, {"id": "1f9ae272bb4151817866511bd970bffb22981a49", "title": "An Iterative Regression Approach for Face Pose Estimation from RGB Images", "addresses": [{"name": "University of Dayton", "source_name": "University of Dayton", "street_adddress": "University of Dayton, Caldwell Street, South Park Historic District, Dayton, Montgomery, Ohio, 45409, USA", "lat": "39.73844400", "lng": "-84.17918747", "type": "edu", "country": "United States"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1709.03170.pdf"], "doi": []}, {"id": "86c053c162c08bc3fe093cc10398b9e64367a100", "title": "Cascade of forests for face alignment", "addresses": [{"name": "Chinese Academy of Sciences", "source_name": "Chinese Academy of Sciences", "street_adddress": "\u4e2d\u56fd\u79d1\u5b66\u9662\u5fc3\u7406\u7814\u7a76\u6240, 16, \u6797\u8403\u8def, \u671d\u9633\u533a / Chaoyang, \u5317\u4eac\u5e02, 100101, \u4e2d\u56fd", "lat": "40.00447950", "lng": "116.37023800", "type": "edu", "country": "China"}, {"name": "Queen Mary University of London", "source_name": "Queen Mary University of London", "street_adddress": "Queen Mary (University of London), Mile End Road, Globe Town, Mile End, London Borough of Tower Hamlets, London, Greater London, England, E1 4NS, UK", "lat": "51.52472720", "lng": "-0.03931035", "type": "edu", "country": "United Kingdom"}], "year": "2015", "pdf": ["https://pdfs.semanticscholar.org/86c0/53c162c08bc3fe093cc10398b9e64367a100.pdf"], "doi": []}, {"id": "056ba488898a1a1b32daec7a45e0d550e0c51ae4", "title": "Cascaded Continuous Regression for Real-Time Incremental Face Tracking", "addresses": [{"name": "University of Nottingham", "source_name": "University of Nottingham", "street_adddress": "University of Nottingham, Lenton Abbey, Wollaton, City of Nottingham, East Midlands, England, UK", "lat": "52.93874280", "lng": "-1.20029569", "type": "edu", "country": "United Kingdom"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1608.01137.pdf"], "doi": []}, {"id": "2e091b311ac48c18aaedbb5117e94213f1dbb529", "title": "Collaborative Facial Landmark Localization for Transferring Annotations Across Datasets", "addresses": [{"name": "University of Wisconsin Madison", "source_name": "University of Wisconsin Madison", "street_adddress": "University of Wisconsin-Madison, Marsh Lane, Madison, Dane County, Wisconsin, 53705-2221, USA", "lat": "43.07982815", "lng": "-89.43066425", "type": "edu", "country": "United States"}], "year": "2014", "pdf": ["http://pages.cs.wisc.edu/~lizhang/projects/collab-face-landmarks/SmithECCV2014.pdf", "http://vigir.missouri.edu/~gdesouza/Research/Conference_CDs/ECCV_2014/papers/8694/86940078.pdf"], "doi": ["http://doi.org/10.1007/978-3-319-10599-4_6"]}, {"id": "faead8f2eb54c7bc33bc7d0569adc7a4c2ec4c3b", "title": "Combining Data-Driven and Model-Driven Methods for Robust Facial Landmark Detection", "addresses": [{"name": "Chinese Academy of 
Sciences", "source_name": "Chinese Academy of Sciences", "street_adddress": "\u4e2d\u56fd\u79d1\u5b66\u9662\u5fc3\u7406\u7814\u7a76\u6240, 16, \u6797\u8403\u8def, \u671d\u9633\u533a / Chaoyang, \u5317\u4eac\u5e02, 100101, \u4e2d\u56fd", "lat": "40.00447950", "lng": "116.37023800", "type": "edu", "country": "China"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1611.10152.pdf"], "doi": []}, {"id": "08ecc281cdf954e405524287ee5920e7c4fb597e", "title": "Computational Assessment of Facial Expression Production in ASD Children", "addresses": [{"name": "National Research Council, Italy", "source_name": "National Research Council, Italy", "street_adddress": "Research Private, Ottawa, ON K1V, Canada", "lat": "45.32909590", "lng": "-75.66198580", "type": "edu", "country": "Canada"}], "year": "2018", "pdf": ["https://pdfs.semanticscholar.org/08ec/c281cdf954e405524287ee5920e7c4fb597e.pdf"], "doi": []}, {"id": "dee406a7aaa0f4c9d64b7550e633d81bc66ff451", "title": "Content-Adaptive Sketch Portrait Generation by Decompositional Representation Learning", "addresses": [{"name": "Queen Mary University of London", "source_name": "Queen Mary University of London", "street_adddress": "Queen Mary (University of London), Mile End Road, Globe Town, Mile End, London Borough of Tower Hamlets, London, Greater London, England, E1 4NS, UK", "lat": "51.52472720", "lng": "-0.03931035", "type": "edu", "country": "United Kingdom"}, {"name": "Sun Yat-Sen University", "source_name": "Sun Yat-Sen University", "street_adddress": "\u4e2d\u5927, \u65b0\u6e2f\u897f\u8def, \u9f99\u8239\u6ed8, \u5eb7\u4e50, \u6d77\u73e0\u533a (Haizhu), \u5e7f\u5dde\u5e02, \u5e7f\u4e1c\u7701, 510105, \u4e2d\u56fd", "lat": "23.09461185", "lng": "113.28788994", "type": "edu", "country": "China"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1710.01453.pdf"], "doi": []}, {"id": "029b53f32079063047097fa59cfc788b2b550c4b", "title": "Continuous Conditional Neural Fields for Structured Regression", "addresses": [{"name": "University of Cambridge", "source_name": "University of Cambridge", "street_adddress": "Clifford Allbutt Lecture Theatre, Robinson Way, Romsey, Cambridge, Cambridgeshire, East of England, England, CB2 0QH, UK", "lat": "52.17638955", "lng": "0.14308882", "type": "edu", "country": "United Kingdom"}, {"name": "University of Southern California", "source_name": "University of Southern California", "street_adddress": "University of Southern California, Watt Way, Saint James Park, LA, Los Angeles County, California, 90089, USA", "lat": "34.02241490", "lng": "-118.28634407", "type": "edu", "country": "United States"}], "year": "2014", "pdf": ["https://pdfs.semanticscholar.org/f4e3/c42df13aeed9196647d4e3fe0f84fa725252.pdf"], "doi": []}, {"id": "88e2efab01e883e037a416c63a03075d66625c26", "title": "Convolutional Experts Constrained Local Model for 3D Facial Landmark Detection", "addresses": [{"name": "Carnegie Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}], "year": "2017", "pdf": ["http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w36/Zadeh_Convolutional_Experts_Constrained_ICCV_2017_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8265507", "http://doi.ieeecomputersociety.org/10.1109/ICCVW.2017.296", 
"http://doi.org/10.1109/ICCVW.2017.296"]}, {"id": "656a59954de3c9fcf82ffcef926af6ade2f3fdb5", "title": "Convolutional Network Representation for Visual Recognition", "addresses": [{"name": "KTH Royal Institute of Technology, Stockholm", "source_name": "KTH Royal Institute of Technology, Stockholm", "street_adddress": "KTH, Teknikringen, L\u00e4rkstaden, Norra Djurg\u00e5rden, \u00d6stermalms stadsdelsomr\u00e5de, Sthlm, Stockholm, Stockholms l\u00e4n, Svealand, 114 28, Sverige", "lat": "59.34986645", "lng": "18.07063213", "type": "edu", "country": "Sweden"}], "year": "2017", "pdf": ["https://pdfs.semanticscholar.org/656a/59954de3c9fcf82ffcef926af6ade2f3fdb5.pdf"], "doi": []}, {"id": "7360a2adcd6e3fe744b7d7aec5c08ee31094dfd4", "title": "Deep and Deformable: Convolutional Mixtures of Deformable Part-Based Models", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University of Oulu", "source_name": "University of Oulu", "street_adddress": "Oulun yliopisto, Biologintie, Linnanmaa, Oulu, Oulun seutukunta, Pohjois-Pohjanmaa, Pohjois-Suomen aluehallintovirasto, Pohjois-Suomi, Manner-Suomi, 90540, Suomi", "lat": "65.05921570", "lng": "25.46632601", "type": "edu", "country": "Finland"}], "year": "2018", "pdf": ["https://ibug.doc.ic.ac.uk/media/uploads/documents/deep-deformable-convolutional.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8373833", "http://doi.ieeecomputersociety.org/10.1109/FG.2018.00040", "http://doi.org/10.1109/FG.2018.00040"]}, {"id": "5239001571bc64de3e61be0be8985860f08d7e7e", "title": "Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling", "addresses": [{"name": "Carnegie Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}, {"name": "Concordia University", "source_name": "Concordia University", "street_adddress": "Concordia University, 2811, Northeast Holman Street, Concordia, Portland, Multnomah County, Oregon, 97211, USA", "lat": "45.57022705", "lng": "-122.63709346", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1607.06871.pdf"], "doi": []}, {"id": "61f04606528ecf4a42b49e8ac2add2e9f92c0def", "title": "Deep Deformation Network for Object Landmark Localization", "addresses": [{"name": "NEC Labs, Cupertino, CA", "source_name": "NEC Labs, Cupertino, CA", "street_adddress": "10080 N Wolfe Rd # Sw3350, Cupertino, CA 95014, USA", "lat": "37.32391770", "lng": "-122.01296930", "type": "company", "country": "United States"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1605.01014.pdf"], "doi": []}, {"id": "9ca7899338129f4ba6744f801e722d53a44e4622", "title": "Deep neural networks regularization for structured output prediction", "addresses": [{"name": "Normandie University", "source_name": "Normandie Univ, INSA Rouen, LITIS, 76000 Rouen, France", "street_adddress": "1 Rue Thomas Becket, 76130 Mont-Saint-Aignan, France", "lat": "49.45830470", "lng": "1.06888920", "type": "edu", "country": "France"}], "year": "2018", "pdf": 
["https://arxiv.org/pdf/1504.07550.pdf"], "doi": []}, {"id": "5a7e62fdea39a4372e25cbbadc01d9b2204af95a", "title": "Direct Shape Regression Networks for End-to-End Face Alignment", "addresses": [{"name": "Beihang University", "source_name": "Beihang University", "street_adddress": "\u5317\u4eac\u822a\u7a7a\u822a\u5929\u5927\u5b66, 37, \u5b66\u9662\u8def, \u4e94\u9053\u53e3, \u540e\u516b\u5bb6, \u6d77\u6dc0\u533a, 100083, \u4e2d\u56fd", "lat": "39.98083330", "lng": "116.34101249", "type": "edu", "country": "China"}, {"name": "University of Texas at Arlington", "source_name": "University of Texas at Arlington", "street_adddress": "University of Texas at Arlington, South Nedderman Drive, Arlington, Tarrant County, Texas, 76010, USA", "lat": "32.72836830", "lng": "-97.11201835", "type": "edu", "country": "United States"}, {"name": "Xidian University", "source_name": "Xidian University", "street_adddress": "Xidian University (New Campus), 266\u53f7, \u94f6\u674f\u5927\u9053, \u5357\u96f7\u6751, \u957f\u5b89\u533a (Chang'an), \u897f\u5b89\u5e02, \u9655\u897f\u7701, 710126, \u4e2d\u56fd", "lat": "34.12358250", "lng": "108.83546000", "type": "edu", "country": "China"}], "year": "2018", "pdf": ["http://openaccess.thecvf.com/content_cvpr_2018/papers/Miao_Direct_Shape_Regression_CVPR_2018_paper.pdf", "http://see.xidian.edu.cn/faculty/chdeng/Welcome%20to%20Cheng%20Deng's%20Homepage_files/Papers/Conference/CVPR2018_Miao.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8578627", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2018.00529", "http://doi.org/10.1109/CVPR.2018.00529"]}, {"id": "0eac652139f7ab44ff1051584b59f2dc1757f53b", "title": "Efficient Branching Cascaded Regression for Face Alignment under Significant Head Rotation", "addresses": [{"name": "University of Wisconsin Madison", "source_name": "University of Wisconsin Madison", "street_adddress": "University of Wisconsin-Madison, Marsh Lane, Madison, Dane County, Wisconsin, 53705-2221, USA", "lat": "43.07982815", "lng": "-89.43066425", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1611.01584.pdf"], "doi": []}, {"id": "b07582d1a59a9c6f029d0d8328414c7bef64dca0", "title": "Employing Fusion of Learned and Handcrafted Features for Unconstrained Ear Recognition", "addresses": [{"name": "Federal University of Bahia", "source_name": "Federal University of Bahia, Salvador, Bahia, Brazil", "street_adddress": "Av. 
Adhemar de Barros, s/n\u00ba - Ondina, Salvador - BA, 40170-110, Brazil", "lat": "-13.00246020", "lng": "-38.50897520", "type": "edu", "country": "Brazil"}, {"name": "University of South Florida", "source_name": "University of South Florida", "street_adddress": "University of South Florida, Leroy Collins Boulevard, Tampa, Hillsborough County, Florida, 33620, USA", "lat": "28.05999990", "lng": "-82.41383619", "type": "edu", "country": "United States"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1710.07662.pdf"], "doi": []}, {"id": "a40f8881a36bc01f3ae356b3e57eac84e989eef0", "title": "End-to-end semantic face segmentation with conditional random fields as convolutional, recurrent and adversarial networks", "addresses": [{"name": "Autonomous University of Barcelona", "source_name": "Computer Vision Center, Autonomous University of Barcelona, Edifici O, 08193 Bellaterra, Barcelona, Spain", "street_adddress": "Campus UAB, Edifici O, s/n, 08193 Cerdanyola del Vall\u00e8s, Barcelona, Spain", "lat": "41.50089570", "lng": "2.11155300", "type": "edu", "country": "Spain"}, {"name": "Radboud University Nijmegen", "source_name": "Radboud University Nijmegen, Nijmegen, The Netherlands", "street_adddress": "Houtlaan 4, 6525 XZ Nijmegen, Netherlands", "lat": "51.81670100", "lng": "5.86527200", "type": "edu", "country": "Netherlands"}, {"name": "Universitat Oberta de Catalunya", "source_name": "Universitat Oberta de Catalunya", "street_adddress": "Universitat Oberta de Catalunya, 156, Rambla del Poblenou, Proven\u00e7als del Poblenou, Sant Mart\u00ed, Barcelona, BCN, CAT, 08018, Espa\u00f1a", "lat": "41.40657415", "lng": "2.19453410", "type": "edu", "country": "Spain"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1703.03305.pdf"], "doi": []}, {"id": "49258cc3979103681848284470056956b77caf80", "title": "EPAT: Euclidean Perturbation Analysis and Transform - An Agnostic Data Adaptation Framework for Improving Facial Landmark Detectors", "addresses": [{"name": "University of Southern California", "source_name": "University of Southern California", "street_adddress": "University of Southern California, Watt Way, Saint James Park, LA, Los Angeles County, California, 90089, USA", "lat": "34.02241490", "lng": "-118.28634407", "type": "edu", "country": "United States"}], "year": "2017", "pdf": ["https://5443dcab-a-62cb3a1a-s-sites.googlegroups.com/site/tuftsyuewu/epat-euclidean-perturbation.pdf?attachauth=ANoY7crlk9caZscfn0KRjed81DVoV-Ec6ZHI7txQrJiM_NBic36WKIg-ODwefcBtfgfKdS1iX28MlSXNyB7pE0D7opPjlGqxBVVa1UuIiydhFOgkXlXGfrYqSPS6749JeYWDkfvwWraRfB_CK8bu77jAEA2sIVNgaVRa_7zvmzwnstLwSUowbYC1LRc5yDt8ieT_jdEb_TuhMgR2j03BdHgyUkVjl0TXRukYHWglDOxzHAKwj0vsb4U%3D&attredirects=0"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7961745", "http://doi.ieeecomputersociety.org/10.1109/FG.2017.36", "http://doi.org/10.1109/FG.2017.36"]}, {"id": "992ebd81eb448d1eef846bfc416fc929beb7d28b", "title": "Exemplar-Based Face Parsing Supplementary Material", "addresses": [{"name": "Adobe", "source_name": "Adobe2", "street_adddress": "345 Park Ave, San Jose, CA 95110, USA", "lat": "37.33077030", "lng": "-121.89409510", "type": "company", "country": "United States"}, {"name": "University of Wisconsin Madison", "source_name": "University of Wisconsin Madison", "street_adddress": "University of Wisconsin-Madison, Marsh Lane, Madison, Dane County, Wisconsin, 53705-2221, USA", "lat": "43.07982815", "lng": "-89.43066425", "type": "edu", "country": "United States"}], "year": "2013", "pdf": 
["https://pdfs.semanticscholar.org/992e/bd81eb448d1eef846bfc416fc929beb7d28b.pdf"], "doi": []}, {"id": "1a8ccc23ed73db64748e31c61c69fe23c48a2bb1", "title": "Extensive Facial Landmark Localization with Coarse-to-Fine Convolutional Network Cascade", "addresses": [{"name": "Megvii Inc. (Face++), China", "source_name": "Megvii Inc. (Face++), China", "street_adddress": "China", "lat": "35.86166000", "lng": "104.19539700", "type": "company", "country": "China"}], "year": "2013", "pdf": ["http://www.cv-foundation.org/openaccess/content_iccv_workshops_2013/W11/papers/Zhou_Extensive_Facial_Landmark_2013_ICCV_paper.pdf", "http://www.faceplusplus.com/wp-content/uploads/FacialLandmarkpaper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755923"]}, {"id": "6d8c9a1759e7204eacb4eeb06567ad0ef4229f93", "title": "Face Alignment Robust to Pose, Expressions and Occlusions", "addresses": [{"name": "Carnegie Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}, {"name": "Michigan State University", "source_name": "Michigan State University", "street_adddress": "Michigan State University, Farm Lane, East Lansing, Ingham County, Michigan, 48824, USA", "lat": "42.71856800", "lng": "-84.47791571", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1707.05938.pdf"], "doi": []}, {"id": "eb48a58b873295d719827e746d51b110f5716d6c", "title": "Face Alignment Using K-Cluster Regression Forests With Weighted Splitting", "addresses": [{"name": "Warsaw University of Technology", "source_name": "Warsaw University of Technology", "street_adddress": "Politechnika Warszawska, 1, Plac Politechniki, VIII, \u015ar\u00f3dmie\u015bcie, Warszawa, mazowieckie, 00-661, RP", "lat": "52.22165395", "lng": "21.00735776", "type": "edu", "country": "Poland"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1706.01820.pdf"], "doi": []}, {"id": "31e57fa83ac60c03d884774d2b515813493977b9", "title": "Face Alignment with Cascaded Semi-Parametric Deep Greedy Neural Forests", "addresses": [{"name": "Sorbonne Universit\u00e9s, Paris, France", "source_name": "Sorbonne Universit\u00e9s, Paris, France", "street_adddress": "15-21 Rue de l'\u00c9cole de M\u00e9decine, 75006 Paris, France", "lat": "48.85076030", "lng": "2.34127570", "type": "edu", "country": "France"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1703.01597.pdf"], "doi": []}, {"id": "9207671d9e2b668c065e06d9f58f597601039e5e", "title": "Face Detection Using a 3D Model on Face Keypoints", "addresses": [{"name": "Florida State University", "source_name": "Florida State University", "street_adddress": "Florida State University, 600, West College Avenue, Tallahassee, Leon County, Florida, 32306-1058, USA", "lat": "30.44235995", "lng": "-84.29747867", "type": "edu", "country": "United States"}], "year": "2014", "pdf": ["https://pdfs.semanticscholar.org/9207/671d9e2b668c065e06d9f58f597601039e5e.pdf"], "doi": []}, {"id": "bc704680b5032eadf78c4e49f548ba14040965bf", "title": "Face Normals \"In-the-Wild\" Using Fully Convolutional Networks", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, 
SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University College London", "source_name": "University College London", "street_adddress": "UCL Institute of Education, 20, Bedford Way, Holborn, Bloomsbury, London Borough of Camden, London, Greater London, England, WC1H 0AL, UK", "lat": "51.52316070", "lng": "-0.12820370", "type": "edu", "country": "United Kingdom"}], "year": "2017", "pdf": ["http://openaccess.thecvf.com/content_cvpr_2017/papers/Trigeorgis_Face_Normals_In-The-Wild_CVPR_2017_paper.pdf", "https://ibug.doc.ic.ac.uk/media/uploads/documents/normal_estimation__cvpr_2017_-4.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8099527", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2017.44", "http://doi.org/10.1109/CVPR.2017.44"]}, {"id": "a4ce0f8cfa7d9aa343cb30b0792bb379e20ef41b", "title": "Facial Landmark Machines: A Backbone-Branches Architecture with Progressive Representation Learning", "addresses": [{"name": "Sun Yat-Sen University", "source_name": "Sun Yat-Sen University", "street_adddress": "\u4e2d\u5927, \u65b0\u6e2f\u897f\u8def, \u9f99\u8239\u6ed8, \u5eb7\u4e50, \u6d77\u73e0\u533a (Haizhu), \u5e7f\u5dde\u5e02, \u5e7f\u4e1c\u7701, 510105, \u4e2d\u56fd", "lat": "23.09461185", "lng": "113.28788994", "type": "edu", "country": "China"}, {"name": "University of Hong Kong", "source_name": "University of Hong Kong", "street_adddress": "\u6d77\u6d0b\u79d1\u5b78\u7814\u7a76\u6240 The Swire Institute of Marine Science, \u9db4\u5480\u9053 Cape D'Aguilar Road, \u9db4\u5480\u4f4e\u96fb\u53f0 Cape D'Aguilar Low-Level Radio Station, \u77f3\u6fb3 Shek O, \u82bd\u83dc\u5751\u6751 Nga Choy Hang Tsuen, \u5357\u5340 Southern District, \u9999\u6e2f\u5cf6 Hong Kong Island, HK, \u4e2d\u56fd", "lat": "22.20814690", "lng": "114.25964115", "type": "edu", "country": "China"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1812.03887.pdf"], "doi": []}, {"id": "e4f032ee301d4a4b3d598e6fa6cffbcdb9cdfdd1", "title": "Facial Landmark Point Localization using Coarse-to-Fine Deep Recurrent Neural Network", "addresses": [{"name": "Bar-Ilan University", "source_name": "Bar-Ilan University", "street_adddress": "\u05d0\u05d5\u05e0\u05d9\u05d1\u05e8\u05e1\u05d9\u05d8\u05ea \u05d1\u05e8 \u05d0\u05d9\u05dc\u05df, \u05db\u05d1\u05d9\u05e9 \u05d2\u05d4\u05d4, \u05d2\u05d1\u05e2\u05ea \u05e9\u05de\u05d5\u05d0\u05dc, \u05e7\u05e8\u05d9\u05d9\u05ea \u05de\u05d8\u05dc\u05d5\u05df, \u05d2\u05d1\u05e2\u05ea \u05e9\u05de\u05d5\u05d0\u05dc, \u05de\u05d7\u05d5\u05d6 \u05ea\u05dc \u05d0\u05d1\u05d9\u05d1, NO, \u05d9\u05e9\u05e8\u05d0\u05dc", "lat": "32.06932925", "lng": "34.84334339", "type": "edu", "country": "Israel"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1805.01760.pdf"], "doi": []}, {"id": "ebedc841a2c1b3a9ab7357de833101648281ff0e", "title": "Facial landmarking for in-the-wild images with local inference based on global appearance", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University of Twente", "source_name": "University of Twente", "street_adddress": "University of Twente, De Achterhorst;Hallenweg, Enschede, Regio Twente, Overijssel, Nederland, 7522NH, Nederland", "lat": "52.23801390", "lng": "6.85667610", "type": 
"edu", "country": "Netherlands"}], "year": "2015", "pdf": ["http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885615000116-main.pdf"], "doi": ["http://doi.org/10.1016/j.imavis.2015.01.004"]}, {"id": "375435fb0da220a65ac9e82275a880e1b9f0a557", "title": "From Pixels to Response Maps: Discriminative Image Filtering for Face Alignment in the Wild", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University of Twente", "source_name": "University of Twente", "street_adddress": "University of Twente, De Achterhorst;Hallenweg, Enschede, Regio Twente, Overijssel, Nederland, 7522NH, Nederland", "lat": "52.23801390", "lng": "6.85667610", "type": "edu", "country": "Netherlands"}], "year": "2015", "pdf": ["http://eprints.lincoln.ac.uk/17528/7/__ddat02_staffhome_jpartridge_tzimiroTPAMI15.pdf", "http://ibug.doc.ic.ac.uk/media/uploads/documents/tpami_alignment.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6919301", "http://doi.org/10.1109/TPAMI.2014.2362142", "https://www.ncbi.nlm.nih.gov/pubmed/26357352", "https://www.wikidata.org/entity/Q50856272"]}, {"id": "37381718559f767fc496cc34ceb98ff18bc7d3e1", "title": "Harnessing Synthesized Abstraction Images to Improve Facial Attribute Recognition", "addresses": [{"name": "Fudan University", "source_name": "Fudan University", "street_adddress": "\u590d\u65e6\u5927\u5b66, 220, \u90af\u90f8\u8def, \u4e94\u89d2\u573a\u8857\u9053, \u6768\u6d66\u533a, \u4e0a\u6d77\u5e02, 200433, \u4e2d\u56fd", "lat": "31.30104395", "lng": "121.50045497", "type": "edu", "country": "China"}, {"name": "Jiaotong University", "source_name": "Jiaotong University, China", "street_adddress": "Jiaotong University, China, 200000", "lat": "31.19884000", "lng": "121.43256700", "type": "edu", "country": "China"}], "year": "2018", "pdf": ["https://pdfs.semanticscholar.org/3738/1718559f767fc496cc34ceb98ff18bc7d3e1.pdf"], "doi": []}, {"id": "8c0a47c61143ceb5bbabef403923e4bf92fb854d", "title": "Improved Strategies for HPE Employing Learning-by-Synthesis Approaches", "addresses": [{"name": "Public University of Navarra", "source_name": "Public University of Navarra, Pamplona, Spain", "street_adddress": "Campus de Arrosadia, s/n, 31006 Pamplona, Navarra, Spain", "lat": "42.79726300", "lng": "-1.63215180", "type": "edu", "country": "Spain"}], "year": "2017", "pdf": ["http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w22/Larumbe_Improved_Strategies_for_ICCV_2017_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8265392", "http://doi.ieeecomputersociety.org/10.1109/ICCVW.2017.182", "http://doi.org/10.1109/ICCVW.2017.182"]}, {"id": "3352426a67eabe3516812cb66a77aeb8b4df4d1b", "title": "Joint Multi-view Face Alignment in the Wild", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1708.06023.pdf"], "doi": []}, {"id": "390f3d7cdf1ce127ecca65afa2e24c563e9db93b", "title": "Learning and Transferring 
Multi-task Deep Representation for Face Alignment", "addresses": [{"name": "Chinese University of Hong Kong", "source_name": "Chinese University of Hong Kong", "street_adddress": "Hong Kong, \u99ac\u6599\u6c34\u6c60\u65c1\u8def", "lat": "22.41626320", "lng": "114.21093180", "type": "edu", "country": "China"}], "year": "2014", "pdf": ["https://pdfs.semanticscholar.org/6e80/a3558f9170f97c103137ea2e18ddd782e8d7.pdf"], "doi": []}, {"id": "df80fed59ffdf751a20af317f265848fe6bfb9c9", "title": "Learning Deep Sharable and Structural Detectors for Face Alignment", "addresses": [{"name": "Tsinghua University", "source_name": "Tsinghua University", "street_adddress": "\u6e05\u534e\u5927\u5b66, 30, \u53cc\u6e05\u8def, \u4e94\u9053\u53e3, \u540e\u516b\u5bb6, \u6d77\u6dc0\u533a, 100084, \u4e2d\u56fd", "lat": "40.00229045", "lng": "116.32098908", "type": "edu", "country": "China"}], "year": "2017", "pdf": ["http://ivg.au.tsinghua.edu.cn/paper/2017_Learning%20deep%20sharable%20and%20structural%20detectors%20for%20face%20alignment.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7829264", "http://doi.org/10.1109/TIP.2017.2657118", "https://www.ncbi.nlm.nih.gov/pubmed/28129155", "https://www.wikidata.org/entity/Q38381655"]}, {"id": "d9deafd9d9e60657a7f34df5f494edff546c4fb8", "title": "Learning the Multilinear Structure of Visual Data", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2017", "pdf": ["http://openaccess.thecvf.com/content_cvpr_2017/papers/Wang_Learning_the_Multilinear_CVPR_2017_paper.pdf", "https://ibug.doc.ic.ac.uk/media/uploads/documents/1914_(1).pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8100124", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2017.641", "http://doi.org/10.1109/CVPR.2017.641"]}, {"id": "4f77a37753c03886ca9c9349723ec3bbfe4ee967", "title": "Localizing Facial Keypoints with Global Descriptor Search, Neighbour Alignment and Locally Linear Models", "addresses": [{"name": "Polytechnique Montr\u00e9al", "source_name": "Laboratoire d\u2019interpr\u00e9tation et de traitement d\u2019images et vid\u00e9o, Polytechnique Montr\u00e9al, Montreal, Canada", "street_adddress": "2900 Boulevard Edouard-Montpetit, Montr\u00e9al, QC H3T 1J4, Canada", "lat": "45.50438400", "lng": "-73.61288290", "type": "edu", "country": "Canada"}, {"name": "University of Toronto", "source_name": "University of Toronto", "street_adddress": "University of Toronto, St. 
George Street, Bloor Street Culture Corridor, Old Toronto, Toronto, Ontario, M5S 1A5, Canada", "lat": "43.66333345", "lng": "-79.39769975", "type": "edu", "country": "Canada"}], "year": "2013", "pdf": ["http://www.cv-foundation.org/openaccess/content_iccv_workshops_2013/W11/papers/Hasan_Localizing_Facial_Keypoints_2013_ICCV_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755920"]}, {"id": "e7265c560b3f10013bf70aacbbf0eb4631b7e2aa", "title": "Look at Boundary: A Boundary-Aware Face Alignment Algorithm", "addresses": [{"name": "Amazon", "source_name": "Amazon, USA", "street_adddress": "Montrose Road, Gardner, KS 66030, USA", "lat": "38.77681060", "lng": "-94.94429820", "type": "company", "country": "United States"}, {"name": "SenseTime", "source_name": "SenseTime", "street_adddress": "China, Beijing Shi, Haidian Qu, WuDaoKou, Zhongguancun E Rd, 1\u53f7-7", "lat": "39.99300800", "lng": "116.32988200", "type": "company", "country": "China"}, {"name": "Tsinghua University", "source_name": "Tsinghua University", "street_adddress": "\u6e05\u534e\u5927\u5b66, 30, \u53cc\u6e05\u8def, \u4e94\u9053\u53e3, \u540e\u516b\u5bb6, \u6d77\u6dc0\u533a, 100084, \u4e2d\u56fd", "lat": "40.00229045", "lng": "116.32098908", "type": "edu", "country": "China"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1805.10483.pdf"], "doi": []}, {"id": "1b0a071450c419138432c033f722027ec88846ea", "title": "Looking at faces in a vehicle: A deep CNN based approach and evaluation", "addresses": [{"name": "University of California, San Diego", "source_name": "University of California, San Diego", "street_adddress": "UCSD, 9500, Gilman Drive, Sixth College, University City, San Diego, San Diego County, California, 92093, USA", "lat": "32.87935255", "lng": "-117.23110049", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["http://cvrr.ucsd.edu/publications/2016/YuenMartinTrivediITSC2016.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7795622", "http://doi.org/10.1109/ITSC.2016.7795622"]}, {"id": "6f5ce5570dc2960b8b0e4a0a50eab84b7f6af5cb", "title": "Low Resolution Face Recognition Using a Two-Branch Deep Convolutional Neural Network Architecture", "addresses": [{"name": "Amirkabir University of Technology", "source_name": "Amirkabir University of Technology", "street_adddress": "\u062f\u0627\u0646\u0634\u06af\u0627\u0647 \u0635\u0646\u0639\u062a\u06cc \u0627\u0645\u06cc\u0631\u06a9\u0628\u06cc\u0631, \u0648\u0644\u06cc \u0639\u0635\u0631, \u0645\u06cc\u062f\u0627\u0646 \u0648\u0644\u06cc\u0639\u0635\u0631, \u0645\u0646\u0637\u0642\u0647 \u06f6 \u0634\u0647\u0631 \u062a\u0647\u0631\u0627\u0646, \u062a\u0647\u0631\u0627\u0646, \u0628\u062e\u0634 \u0645\u0631\u06a9\u0632\u06cc \u0634\u0647\u0631\u0633\u062a\u0627\u0646 \u062a\u0647\u0631\u0627\u0646, \u0634\u0647\u0631\u0633\u062a\u0627\u0646 \u062a\u0647\u0631\u0627\u0646, \u0627\u0633\u062a\u0627\u0646 \u062a\u0647\u0631\u0627\u0646, \u0646\u0628\u0634 \u0628\u0631\u0627\u062f\u0631\u0627\u0646 \u0645\u0638\u0641\u0631, \u200f\u0627\u06cc\u0631\u0627\u0646\u200e", "lat": "35.70451400", "lng": "51.40972058", "type": "edu", "country": "Iran"}, {"name": "MIT", "source_name": "Massachusetts Institute", "street_adddress": "MIT, Amherst Street, Cambridgeport, Cambridge, Middlesex County, Massachusetts, 02238, USA", "lat": "42.35839610", "lng": "-71.09567788", "type": "edu", "country": "United States"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1706.06247.pdf"], "doi": []}, {"id": 
"47e8db3d9adb79a87c8c02b88f432f911eb45dc5", "title": "MAGMA: Multilevel Accelerated Gradient Mirror Descent Algorithm for Large-Scale Convex Composite Minimization", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1509.05715.pdf"], "doi": []}, {"id": "daa4cfde41d37b2ab497458e331556d13dd14d0b", "title": "Multi-view Constrained Local Models for Large Head Angle Facial Tracking", "addresses": [{"name": "University of Manchester", "source_name": "University of Manchester", "street_adddress": "University of Manchester - Main Campus, Brunswick Street, Curry Mile, Ardwick, Manchester, Greater Manchester, North West England, England, M13 9NR, UK", "lat": "53.46600455", "lng": "-2.23300881", "type": "edu", "country": "United Kingdom"}], "year": "2015", "pdf": ["http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Rajamanoharan_Multi-View_Constrained_Local_ICCV_2015_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7406477", "http://doi.ieeecomputersociety.org/10.1109/ICCVW.2015.128", "http://doi.org/10.1109/ICCVW.2015.128"]}, {"id": "d03265ea9200a993af857b473c6bf12a095ca178", "title": "Multiple deep convolutional neural networks averaging for face alignment", "addresses": [{"name": "Huazhong University of Science and Technology", "source_name": "Huazhong University of Science and Technology", "street_adddress": "\u534e\u4e2d\u5927, \u73de\u55bb\u8def, \u4e1c\u6e56\u65b0\u6280\u672f\u5f00\u53d1\u533a, \u5173\u4e1c\u8857\u9053, \u4e1c\u6e56\u65b0\u6280\u672f\u5f00\u53d1\u533a\uff08\u6258\u7ba1\uff09, \u6d2a\u5c71\u533a (Hongshan), \u6b66\u6c49\u5e02, \u6e56\u5317\u7701, 430074, \u4e2d\u56fd", "lat": "30.50975370", "lng": "114.40628810", "type": "edu", "country": "China"}], "year": "2015", "pdf": ["https://pdfs.semanticscholar.org/d032/65ea9200a993af857b473c6bf12a095ca178.pdf"], "doi": []}, {"id": "0a6a25ee84fc0bf7284f41eaa6fefaa58b5b329a", "title": "Neural Networks Regularization Through Representation Learning", "addresses": [{"name": "INSA Rouen, France", "source_name": "Laboratoire d'Informatique, de Traitement de l'Information et des Systemes, INSA Rouen, Avenue de l'Universite, 76800, Saint-Etienne-du-Rouvray, France", "street_adddress": "685 Avenue de l'Universit\u00e9, 76800 Saint-\u00c9tienne-du-Rouvray, France", "lat": "49.38497570", "lng": "1.06832570", "type": "edu", "country": "France"}, {"name": "LITIS, Universit\u00e9 de Rouen, Rouen, France", "source_name": "LITIS, Université de Rouen, Rouen, France", "street_adddress": "1 Rue Thomas Becket, 76130 Mont-Saint-Aignan, France", "lat": "49.45830470", "lng": "1.06888920", "type": "edu", "country": "France"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1807.05292.pdf"], "doi": []}, {"id": "ef52f1e2b52fd84a7e22226ed67132c6ce47b829", "title": "Online Eye Status Detection in the Wild with Convolutional Neural Networks", "addresses": [{"name": "University of Central Lancashire", "source_name": "ADSIP Research Centre, University of Central Lancashire, Preston, PR1 2HE, U.K.", "street_adddress": "Fylde Rd, Preston PR1 2HE, UK", "lat": "53.76413780", "lng": "-2.70924530", "type": "edu", "country": "United Kingdom"}], "year": "2017", "pdf": 
["https://pdfs.semanticscholar.org/ef52/f1e2b52fd84a7e22226ed67132c6ce47b829.pdf"], "doi": []}, {"id": "2fda461869f84a9298a0e93ef280f79b9fb76f94", "title": "OpenFace: An open source facial behavior analysis toolkit", "addresses": [{"name": "Cambridge University", "source_name": "Cambridge University", "street_adddress": "University, Cambridge Road, Old Portsmouth, Portsmouth, South East, England, PO1 2HB, UK", "lat": "50.79440260", "lng": "-1.09717480", "type": "edu", "country": "United Kingdom"}, {"name": "Carnegie Mellon University", "source_name": "Carnegie Mellon University Pittsburgh, PA - 15213, USA", "street_adddress": "Carnegie Mellon University, Forbes Avenue, Squirrel Hill North, PGH, Allegheny County, Pennsylvania, 15213, USA", "lat": "40.44416190", "lng": "-79.94272826", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["http://multicomp.cs.cmu.edu/wp-content/uploads/2017/09/2016_WACV_Baltrusaitis_OpenFace.pdf", "http://www.cl.cam.ac.uk/research/rainbow/projects/openface/wacv2016.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7477553", "http://doi.ieeecomputersociety.org/10.1109/WACV.2016.7477553", "http://doi.org/10.1109/WACV.2016.7477553"]}, {"id": "12d8730da5aab242795bdff17b30b6e0bac82998", "title": "Persistent Evidence of Local Image Properties in Generic ConvNets", "addresses": [{"name": "KTH Royal Institute of Technology, Stockholm", "source_name": "KTH Royal Institute of Technology, Stockholm", "street_adddress": "KTH, Teknikringen, L\u00e4rkstaden, Norra Djurg\u00e5rden, \u00d6stermalms stadsdelsomr\u00e5de, Sthlm, Stockholm, Stockholms l\u00e4n, Svealand, 114 28, Sverige", "lat": "59.34986645", "lng": "18.07063213", "type": "edu", "country": "Sweden"}], "year": "2015", "pdf": ["https://arxiv.org/pdf/1411.6509.pdf"], "doi": []}, {"id": "5711400c59a162112c57e9f899147d457537f701", "title": "Recognizing and Segmenting Objects in the Presence of Occlusion and Clutter", "addresses": [{"name": "UC Irvine", "source_name": "UC Irvine", "street_adddress": "Irvine, CA 92697, USA", "lat": "33.64049520", "lng": "-117.84429620", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/5711/400c59a162112c57e9f899147d457537f701.pdf"], "doi": []}, {"id": "ac5d0705a9ddba29151fd539c668ba2c0d16deb6", "title": "RED-Net: A Recurrent Encoder\u2013Decoder Network for Video-Based Face Alignment", "addresses": [{"name": "IBM Research T. J. Watson Center", "source_name": "IBM Research T. J. 
Watson Center, U.S.A", "street_adddress": "1101 Kitchawan Rd, Yorktown Heights, NY 10598, USA", "lat": "41.20975160", "lng": "-73.80264670", "type": "company", "country": "United States"}, {"name": "Rutgers University", "source_name": "Rutgers University", "street_adddress": "Rutgers Cook Campus - North, Biel Road, New Brunswick, Middlesex County, New Jersey, 08901, USA", "lat": "40.47913175", "lng": "-74.43168868", "type": "edu", "country": "United States"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1801.06066.pdf"], "doi": []}, {"id": "2bfccbf6f4e88a92a7b1f2b5c588b68c5fa45a92", "title": "ReenactGAN: Learning to Reenact Faces via Boundary Transfer", "addresses": [{"name": "Nanyang Technological University", "source_name": "Nanyang Technological University", "street_adddress": "NTU, Faculty Avenue, Jurong West, Southwest, 637460, Singapore", "lat": "1.34841040", "lng": "103.68297965", "type": "edu", "country": "Singapore"}, {"name": "SenseTime", "source_name": "SenseTime", "street_adddress": "China, Beijing Shi, Haidian Qu, WuDaoKou, Zhongguancun E Rd, 1\u53f7-7", "lat": "39.99300800", "lng": "116.32988200", "type": "company", "country": "China"}], "year": "2018", "pdf": ["https://arxiv.org/pdf/1807.11079.pdf"], "doi": []}, {"id": "f61829274cfe64b94361e54351f01a0376cd1253", "title": "Regressing a 3D Face Shape from a Single Image", "addresses": [{"name": "University of Trento", "source_name": "University of Trento", "street_adddress": "University of Trento, Via Giuseppe Verdi, Piedicastello, Trento, Territorio Val d'Adige, TN, TAA, 38122, Italia", "lat": "46.06588360", "lng": "11.11598940", "type": "edu", "country": "Italy"}], "year": "2015", "pdf": ["http://openaccess.thecvf.com/content_iccv_2015/papers/Tulyakov_Regressing_a_3D_ICCV_2015_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7410784", "http://doi.ieeecomputersociety.org/10.1109/ICCV.2015.427", "http://doi.org/10.1109/ICCV.2015.427"]}, {"id": "4d23bb65c6772cb374fc05b1f10dedf9b43e63cf", "title": "Robust face alignment and partial face recognition", "addresses": [{"name": "Nanyang Technological University", "source_name": "Nanyang Technological University", "street_adddress": "NTU, Faculty Avenue, Jurong West, Southwest, 637460, Singapore", "lat": "1.34841040", "lng": "103.68297965", "type": "edu", "country": "Singapore"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/4d23/bb65c6772cb374fc05b1f10dedf9b43e63cf.pdf"], "doi": []}, {"id": "2724ba85ec4a66de18da33925e537f3902f21249", "title": "Robust Face Landmark Estimation under Occlusion", "addresses": [{"name": "California Institute of Technology", "source_name": "California Institute of Technology", "street_adddress": "California Institute of Technology, San Pasqual Walk, Madison Heights, Pasadena, Los Angeles County, California, 91126, USA", "lat": "34.13710185", "lng": "-118.12527487", "type": "edu", "country": "United States"}, {"name": "Microsoft", "source_name": "Microsoft Corporation, Redmond, WA, USA", "street_adddress": "One Microsoft Way, Redmond, WA 98052, USA", "lat": "47.64233180", "lng": "-122.13693020", "type": "company", "country": "United States"}], "year": "2013", "pdf": [], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6751298"]}, {"id": "1c1f957d85b59d23163583c421755869f248ceef", "title": "Robust Facial Landmark Detection Under Significant Head Poses and Occlusion", "addresses": [{"name": "Rensselaer Polytechnic Institute", "source_name": "Rensselaer Polytechnic Institute", "street_adddress": 
"Rensselaer Polytechnic Institute, Sage Avenue, Downtown, City of Troy, Rensselaer County, New York, 12180, USA", "lat": "42.72984590", "lng": "-73.67950216", "type": "edu", "country": "United States"}], "year": "2015", "pdf": ["https://arxiv.org/pdf/1709.08127.pdf"], "doi": []}, {"id": "1121873326ab0c9f324b004aa0970a31d4f83eb8", "title": "Robust Facial Landmark Detection via a Fully-Convolutional Local-Global Context Network", "addresses": [{"name": "Technical University of Munich", "source_name": "Computer Aided Medical Procedures, Technical University of Munich, Garching, Germany", "street_adddress": "Boltzmannstra\u00dfe 3, 85748 Garching bei M\u00fcnchen, Germany", "lat": "48.26301100", "lng": "11.66685700", "type": "edu", "country": "Germany"}], "year": "2018", "pdf": ["http://openaccess.thecvf.com/content_cvpr_2018/papers/Merget_Robust_Facial_Landmark_CVPR_2018_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8578186", "http://doi.ieeecomputersociety.org/10.1109/CVPR.2018.00088", "http://doi.org/10.1109/CVPR.2018.00088"]}, {"id": "c3d3d2229500c555c7a7150a8b126ef874cbee1c", "title": "Shape Augmented Regression Method for Face Alignment", "addresses": [{"name": "Rensselaer Polytechnic Institute", "source_name": "Rensselaer Polytechnic Institute", "street_adddress": "Rensselaer Polytechnic Institute, Sage Avenue, Downtown, City of Troy, Rensselaer County, New York, 12180, USA", "lat": "42.72984590", "lng": "-73.67950216", "type": "edu", "country": "United States"}], "year": "2015", "pdf": ["http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Wu_Shape_Augmented_Regression_ICCV_2015_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7406478", "http://doi.ieeecomputersociety.org/10.1109/ICCVW.2015.129", "http://doi.org/10.1109/ICCVW.2015.129"]}, {"id": "33ae696546eed070717192d393f75a1583cd8e2c", "title": "Subspace selection to suppress confounding source domain information in AAM transfer learning", "addresses": [{"name": "University of Toronto", "source_name": "University of Toronto", "street_adddress": "University of Toronto, St. 
George Street, Bloor Street Culture Corridor, Old Toronto, Toronto, Ontario, M5S 1A5, Canada", "lat": "43.66333345", "lng": "-79.39769975", "type": "edu", "country": "Canada"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1708.08508.pdf"], "doi": []}, {"id": "f3745aa4a723d791d3a04ddf7a5546e411226459", "title": "The Menpo Benchmark for Multi-pose 2D and 3D Facial Landmark Localisation and Tracking", "addresses": [{"name": "University of Oulu", "source_name": "University of Oulu", "street_adddress": "Oulun yliopisto, Biologintie, Linnanmaa, Oulu, Oulun seutukunta, Pohjois-Pohjanmaa, Pohjois-Suomen aluehallintovirasto, Pohjois-Suomi, Manner-Suomi, 90540, Suomi", "lat": "65.05921570", "lng": "25.46632601", "type": "edu", "country": "Finland"}, {"name": "Middlesex University", "source_name": "Middlesex University", "street_adddress": "Middlesex University, Greyhound Hill, Hendon, The Hyde, London Borough of Barnet, London, Greater London, England, NW4 4JP, UK", "lat": "51.59029705", "lng": "-0.22963221", "type": "edu", "country": "United Kingdom"}, {"name": "University of Exeter", "source_name": "University of Exeter", "street_adddress": "University of Exeter, Stocker Road, Exwick, Exeter, Devon, South West England, England, EX4 4QN, UK", "lat": "50.73693020", "lng": "-3.53647672", "type": "edu", "country": "United Kingdom"}, {"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}], "year": "2018", "pdf": [], "doi": ["http://doi.org/10.1007/s11263-018-1134-y"]}, {"id": "50ccc98d9ce06160cdf92aaf470b8f4edbd8b899", "title": "Towards robust cascaded regression for face alignment in the wild", "addresses": [{"name": "Fraunhofer", "source_name": "Fraunhofer IOSB, Fraunhoferstrasse 1, 76131 Karlsruhe, Germany", "street_adddress": "Fraunhoferstra\u00dfe 1, 76131 Karlsruhe, Germany", "lat": "49.01546000", "lng": "8.42579990", "type": "company", "country": "Germany"}, {"name": "Karlsruhe Institute of Technology", "source_name": "Karlsruhe Institute of Technology", "street_adddress": "KIT, Leopoldshafener Allee, Linkenheim, Linkenheim-Hochstetten, Landkreis Karlsruhe, Regierungsbezirk Karlsruhe, Baden-W\u00fcrttemberg, 76351, Deutschland", "lat": "49.10184375", "lng": "8.43312560", "type": "edu", "country": "Germany"}, {"name": "\u00c9cole Polytechnique F\u00e9d\u00e9rale de Lausanne", "source_name": "\u00c9cole Polytechnique F\u00e9d\u00e9rale de Lausanne (EPFL), Switzerland", "street_adddress": "Biblioth\u00e8que de l'EPFL, Route des Noyerettes, Ecublens, District de l'Ouest lausannois, Vaud, 1024, Schweiz/Suisse/Svizzera/Svizra", "lat": "46.51841210", "lng": "6.56846540", "type": "edu", "country": "Switzerland"}], "year": "2015", "pdf": ["http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W08/papers/Qu_Towards_Robust_Cascaded_2015_CVPR_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301348", "http://doi.ieeecomputersociety.org/10.1109/CVPRW.2015.7301348", "http://doi.org/10.1109/CVPRW.2015.7301348"]}, {"id": "e52272f92fa553687f1ac068605f1de929efafc2", "title": "Using a Probabilistic Neural Network for lip-based biometric verification", "addresses": [{"name": "Warsaw University of Technology", "source_name": "Warsaw University of Technology", "street_adddress": "Politechnika Warszawska, 
1, Plac Politechniki, VIII, \u015ar\u00f3dmie\u015bcie, Warszawa, mazowieckie, 00-661, RP", "lat": "52.22165395", "lng": "21.00735776", "type": "edu", "country": "Poland"}], "year": "2017", "pdf": ["https://repo.pw.edu.pl/docstore/download/WUT8aeb20bbb6964b7da1cfefbf2e370139/1-s2.0-S0952197617301227-main.pdf"], "doi": ["http://doi.org/10.1016/j.engappai.2017.06.003"]}, {"id": "397085122a5cade71ef6c19f657c609f0a4f7473", "title": "Using Segmentation to Predict the Absence of Occluded Parts", "addresses": [{"name": "UC Irvine", "source_name": "UC Irvine", "street_adddress": "Irvine, CA 92697, USA", "lat": "33.64049520", "lng": "-117.84429620", "type": "edu", "country": "United States"}], "year": "2015", "pdf": ["https://pdfs.semanticscholar.org/db11/4901d09a07ab66bffa6986bc81303e133ae1.pdf"], "doi": []}, {"id": "708f4787bec9d7563f4bb8b33834de445147133b", "title": "Wavelet-SRNet: A Wavelet-Based CNN for Multi-scale Face Super Resolution", "addresses": [{"name": "CASIA, China", "source_name": "CASIA, China", "street_adddress": "Haidian, China, 100080", "lat": "39.98019600", "lng": "116.33330500", "type": "edu", "country": "China"}, {"name": "Chinese Academy of Sciences", "source_name": "Chinese Academy of Sciences", "street_adddress": "\u4e2d\u56fd\u79d1\u5b66\u9662\u5fc3\u7406\u7814\u7a76\u6240, 16, \u6797\u8403\u8def, \u671d\u9633\u533a / Chaoyang, \u5317\u4eac\u5e02, 100101, \u4e2d\u56fd", "lat": "40.00447950", "lng": "116.37023800", "type": "edu", "country": "China"}], "year": "2017", "pdf": ["http://openaccess.thecvf.com/content_ICCV_2017/papers/Huang_Wavelet-SRNet_A_Wavelet-Based_ICCV_2017_paper.pdf"], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8237449", "http://doi.ieeecomputersociety.org/10.1109/ICCV.2017.187", "http://doi.org/10.1109/ICCV.2017.187"]}, {"id": "044d9a8c61383312cdafbcc44b9d00d650b21c70", "title": "300 Faces in-the-Wild Challenge: The First Facial Landmark Localization Challenge", "addresses": [{"name": "Imperial College London", "source_name": "Imperial College London", "street_adddress": "Imperial College London, Exhibition Road, Brompton, Royal Borough of Kensington and Chelsea, London, Greater London, England, SW7 2AZ, UK", "lat": "51.49887085", "lng": "-0.17560797", "type": "edu", "country": "United Kingdom"}, {"name": "University of Lincoln", "source_name": "University of Lincoln", "street_adddress": "University of Lincoln, Brayford Way, Whitton Park, New Boultham, Lincoln, Lincolnshire, East Midlands, England, LN6 7TS, UK", "lat": "53.22853665", "lng": "-0.54873472", "type": "edu", "country": "United Kingdom"}, {"name": "University of Twente", "source_name": "University of Twente", "street_adddress": "University of Twente, De Achterhorst;Hallenweg, Enschede, Regio Twente, Overijssel, Nederland, 7522NH, Nederland", "lat": "52.23801390", "lng": "6.85667610", "type": "edu", "country": "Netherlands"}], "year": "2013", "pdf": [], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755925"]}, {"id": "b48d3694a8342b6efc18c9c9124c62406e6bf3b3", "title": "Recurrent Convolutional Shape Regression", "addresses": [{"name": "University of Trento", "source_name": "University of Trento", "street_adddress": "University of Trento, Via Giuseppe Verdi, Piedicastello, Trento, Territorio Val d'Adige, TN, TAA, 38122, Italia", "lat": "46.06588360", "lng": "11.11598940", "type": "edu", "country": "Italy"}, {"name": "Snapchat Research, Venice, CA", "source_name": "Snap Research, Venice, CA, United States", "street_adddress": "Venice, Los Angeles, CA, 
USA", "lat": "33.98504690", "lng": "-118.46948320", "type": "company", "country": "United States"}], "year": "2018", "pdf": [], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8305545", "http://doi.ieeecomputersociety.org/10.1109/TPAMI.2018.2810881", "http://doi.org/10.1109/TPAMI.2018.2810881", "https://www.ncbi.nlm.nih.gov/pubmed/29994580"]}, {"id": "523db6dee0e60a2d513759fa04aa96f2fed40ff4", "title": "Study of Mechanisms of Social Interaction Stimulation in Autism Spectrum Disorder by Assisted Humanoid Robot", "addresses": [{"name": "National Research Council of Italy, Institute of Applied Sciences and Intelligent Systems, Lecce, Italy", "source_name": "National Research Council of Italy, Institute of Applied Sciences and Intelligent Systems, Lecce, Italy", "street_adddress": "73100 Lecce, Province of Lecce, Italy", "lat": "40.35151550", "lng": "18.17501610", "type": "edu", "country": "Italy"}, {"name": "National Research Council of Italy, Institute of Applied Sciences and Intelligent Systems, Messina, Italy", "source_name": "National Research Council of Italy, Institute of Applied Sciences and Intelligent Systems, Messina, Italy", "street_adddress": "Messina, Province of Messina, Italy", "lat": "38.19373350", "lng": "15.55420570", "type": "edu", "country": "Italy"}], "year": "2018", "pdf": [], "doi": ["http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8207589", "http://doi.org/10.1109/TCDS.2017.2783684"]}, {"id": "95f12d27c3b4914e0668a268360948bce92f7db3", "title": "Interactive Facial Feature Localization", "addresses": [{"name": "Adobe", "source_name": "Adobe2", "street_adddress": "345 Park Ave, San Jose, CA 95110, USA", "lat": "37.33077030", "lng": "-121.89409510", "type": "company", "country": "United States"}, {"name": "Facebook", "source_name": "Facebook", "street_adddress": "250 Bryant St, Mountain View, CA 94041, USA", "lat": "37.39367170", "lng": "-122.08072620", "type": "company", "country": "United States"}, {"name": "University of Illinois, Urbana-Champaign", "source_name": "University of Illinois, Urbana-Champaign", "street_adddress": "B-3, South Mathews Avenue, Urbana, Champaign County, Illinois, 61801, USA", "lat": "40.11116745", "lng": "-88.22587665", "type": "edu", "country": "United States"}], "year": "2012", "pdf": ["https://pdfs.semanticscholar.org/95f1/2d27c3b4914e0668a268360948bce92f7db3.pdf"], "doi": []}, {"id": "0a34fe39e9938ae8c813a81ae6d2d3a325600e5c", "title": "FacePoseNet: Making a Case for Landmark-Free Face Alignment", "addresses": [{"name": "USC", "source_name": "University of Southern California and USC Institute for Creative Technologies", "street_adddress": "12015 E Waterfront Dr, Los Angeles, CA 90094, USA", "lat": "33.98325260", "lng": "-118.40417000", "type": "edu", "country": "United States"}, {"name": "Open University of Israel", "source_name": "Open University of Israel", "street_adddress": "\u05d4\u05d0\u05d5\u05e0\u05d9\u05d1\u05e8\u05e1\u05d9\u05d8\u05d4 \u05d4\u05e4\u05ea\u05d5\u05d7\u05d4, 15, \u05d0\u05d1\u05d0 \u05d7\u05d5\u05e9\u05d9, \u05d7\u05d9\u05e4\u05d4, \u05d2\u05d1\u05e2\u05ea \u05d3\u05d0\u05d5\u05e0\u05e1, \u05d7\u05d9\u05e4\u05d4, \u05de\u05d7\u05d5\u05d6 \u05d7\u05d9\u05e4\u05d4, NO, \u05d9\u05e9\u05e8\u05d0\u05dc", "lat": "32.77824165", "lng": "34.99565673", "type": "edu", "country": "Israel"}], "year": "2017", "pdf": ["https://arxiv.org/pdf/1708.07517.pdf"], "doi": []}, {"id": "c46a4db7247d26aceafed3e4f38ce52d54361817", "title": "A CNN Cascade for Landmark Guided Semantic Part Segmentation", "addresses": 
[{"name": "University of Nottingham", "source_name": "University of Nottingham", "street_adddress": "University of Nottingham, Lenton Abbey, Wollaton, City of Nottingham, East Midlands, England, UK", "lat": "52.93874280", "lng": "-1.20029569", "type": "edu", "country": "United Kingdom"}], "year": "2016", "pdf": ["https://arxiv.org/pdf/1609.09642.pdf"], "doi": []}, {"id": "59b6e9320a4e1de9216c6fc49b4b0309211b17e8", "title": "Robust Representations for unconstrained Face Recognition and its Applications", "addresses": [{"name": "Maryland Univ., College Park, MD, USA", "source_name": "Maryland Univ., College Park, MD, USA", "street_adddress": "College Park, MD 20742, USA", "lat": "38.98691830", "lng": "-76.94255430", "type": "edu", "country": "United States"}], "year": "2016", "pdf": ["https://pdfs.semanticscholar.org/59b6/e9320a4e1de9216c6fc49b4b0309211b17e8.pdf"], "doi": []}]}
\ No newline at end of file
diff --git a/site/datasets/verified/adience.csv b/site/datasets/verified/adience.csv
index deadc399..f6e229b6 100644
--- a/site/datasets/verified/adience.csv
+++ b/site/datasets/verified/adience.csv
@@ -1,2 +1,140 @@
 id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year
 0,,Adience,adience,0.0,0.0,,,,main,,Age and Gender Estimation of Unfiltered Faces,2014
+1,China,Adience,adience,39.993008,116.329882,SenseTime,company,c72a2ea819df9b0e8cd267eebcc6528b8741e03d,citation,https://arxiv.org/pdf/1708.09687.pdf,Quantifying Facial Age by Posterior of Age Comparisons,2017
+2,China,Adience,adience,22.4162632,114.2109318,Chinese University of Hong Kong,edu,c72a2ea819df9b0e8cd267eebcc6528b8741e03d,citation,https://arxiv.org/pdf/1708.09687.pdf,Quantifying Facial Age by Posterior of Age Comparisons,2017
+3,United States,Adience,adience,38.9869183,-76.9425543,"Maryland Univ., College Park, MD, USA",edu,59b6e9320a4e1de9216c6fc49b4b0309211b17e8,citation,https://pdfs.semanticscholar.org/59b6/e9320a4e1de9216c6fc49b4b0309211b17e8.pdf,Robust Representations for unconstrained Face Recognition and its Applications,2016
+4,United Kingdom,Adience,adience,51.49887085,-0.17560797,Imperial College London,edu,d818568838433a6d6831adde49a58cef05e0c89f,citation,http://eprints.mdx.ac.uk/22044/1/agedb_kotsia.pdf,"AgeDB: The First Manually Collected, In-the-Wild Age Database",2017
+5,China,Adience,adience,28.2290209,112.99483204,"National University of Defense Technology, China",mil,4f37f71517420c93c6841beb33ca0926354fa11d,citation,http://www.cs.newpaltz.edu/~lik/publications/Mingxing-Duan-NC-2017.pdf,A hybrid deep learning CNN-ELM for age and gender classification,2018
+6,China,Adience,adience,26.88111275,112.62850666,Hunan University,edu,4f37f71517420c93c6841beb33ca0926354fa11d,citation,http://www.cs.newpaltz.edu/~lik/publications/Mingxing-Duan-NC-2017.pdf,A hybrid deep learning CNN-ELM for age and gender classification,2018
+7,Italy,Adience,adience,46.0658836,11.1159894,University of Trento,edu,cb43519894258b125624dc0df655ab5357b1e42f,citation,https://arxiv.org/pdf/1802.00237.pdf,Face Aging with Contextual Generative Adversarial Nets,2017
+8,China,Adience,adience,39.9041999,116.4073963,"Qihoo 360 AI Institute, Beijing, China",edu,cb43519894258b125624dc0df655ab5357b1e42f,citation,https://arxiv.org/pdf/1802.00237.pdf,Face Aging with Contextual Generative Adversarial Nets,2017
+9,Singapore,Adience,adience,1.2962018,103.77689944,National University of Singapore,edu,cb43519894258b125624dc0df655ab5357b1e42f,citation,https://arxiv.org/pdf/1802.00237.pdf,Face Aging with Contextual Generative Adversarial Nets,2017
+10,United States,Adience,adience,40.51865195,-74.44099801,State University of New Jersey,edu,d00e9a6339e34c613053d3b2c132fccbde547b56,citation,http://www.rci.rutgers.edu/~vmp93/Conference_pub/btas_age_2016_cameraready.pdf,A cascaded convolutional neural network for age estimation of unconstrained faces,2016
+11,United States,Adience,adience,39.2899685,-76.62196103,University of Maryland,edu,d00e9a6339e34c613053d3b2c132fccbde547b56,citation,http://www.rci.rutgers.edu/~vmp93/Conference_pub/btas_age_2016_cameraready.pdf,A cascaded convolutional neural network for age estimation of unconstrained faces,2016
+12,Bangladesh,Adience,adience,23.7289899,90.3982682,Institute of Information Technology,edu,6e177341d4412f9c9a639e33e6096344ef930202,citation,https://pdfs.semanticscholar.org/2e58/ec57d71b2b2a3e71086234dd7037559cc17e.pdf,A Gender Recognition System from
Facial Image,2018 +13,Bangladesh,Adience,adience,23.7316957,90.3965275,University of Dhaka,edu,6e177341d4412f9c9a639e33e6096344ef930202,citation,https://pdfs.semanticscholar.org/2e58/ec57d71b2b2a3e71086234dd7037559cc17e.pdf,A Gender Recognition System from Facial Image,2018 +14,Canada,Adience,adience,43.7743911,-79.50481085,York University,edu,ffe4bb47ec15f768e1744bdf530d5796ba56cfc1,citation,https://arxiv.org/pdf/1706.04277.pdf,AFIF4: Deep Gender Classification based on AdaBoost-based Fusion of Isolated Facial Features and Foggy Faces,2017 +15,Egypt,Adience,adience,27.18794105,31.17009498,Assiut University,edu,ffe4bb47ec15f768e1744bdf530d5796ba56cfc1,citation,https://arxiv.org/pdf/1706.04277.pdf,AFIF4: Deep Gender Classification based on AdaBoost-based Fusion of Isolated Facial Features and Foggy Faces,2017 +16,Switzerland,Adience,adience,47.376313,8.5476699,ETH Zurich,edu,10195a163ab6348eef37213a46f60a3d87f289c5,citation,http://www.vision.ee.ethz.ch/en/publications/papers/articles/eth_biwi_01299.pdf,Deep Expectation of Real and Apparent Age from a Single Image Without Facial Landmarks,2016 +17,United Kingdom,Adience,adience,51.49887085,-0.17560797,Imperial College London,edu,7f30a36a3faab044c095814d0ce17ea2b6638213,citation,https://arxiv.org/pdf/1802.04636.pdf,Modeling of facial aging and kinship: A survey,2018 +18,United Kingdom,Adience,adience,51.59029705,-0.22963221,Middlesex University,edu,7f30a36a3faab044c095814d0ce17ea2b6638213,citation,https://arxiv.org/pdf/1802.04636.pdf,Modeling of facial aging and kinship: A survey,2018 +19,United States,Adience,adience,40.9153196,-73.1270626,Stony Brook University,edu,25bf288b2d896f3c9dab7e7c3e9f9302e7d6806b,citation,https://arxiv.org/pdf/1608.06557.pdf,Neural Networks with Smooth Adaptive Activation Functions for Regression,2016 +20,United States,Adience,adience,35.93006535,-84.31240032,Oak Ridge National Laboratory,edu,25bf288b2d896f3c9dab7e7c3e9f9302e7d6806b,citation,https://arxiv.org/pdf/1608.06557.pdf,Neural Networks with Smooth Adaptive Activation Functions for Regression,2016 +21,Canada,Adience,adience,49.2767454,-122.91777375,Simon Fraser University,edu,880b4be9afc4d5ef75b5d77f51eadb557acbf251,citation,http://www.cs.umanitoba.ca/~ywang/papers/mmsp18.pdf,Privacy-Preserving Age Estimation for Content Rating,2018 +22,Canada,Adience,adience,49.8091536,-97.13304179,University of Manitoba,edu,880b4be9afc4d5ef75b5d77f51eadb557acbf251,citation,http://www.cs.umanitoba.ca/~ywang/papers/mmsp18.pdf,Privacy-Preserving Age Estimation for Content Rating,2018 +23,United States,Adience,adience,32.9820799,-96.7566278,University of Texas at Dallas,edu,e49d124a3d7eba42b0e3e79c1dd7537e6611602d,citation,https://arxiv.org/pdf/1803.05719.pdf,"SAF- BAGE: Salient Approach for Facial Soft-Biometric Classification - Age, Gender, and Facial Expression",2018 +24,India,Adience,adience,23.0378743,72.55180046,Ahmedabad University,edu,e49d124a3d7eba42b0e3e79c1dd7537e6611602d,citation,https://arxiv.org/pdf/1803.05719.pdf,"SAF- BAGE: Salient Approach for Facial Soft-Biometric Classification - Age, Gender, and Facial Expression",2018 +25,United States,Adience,adience,40.9153196,-73.1270626,Stony Brook University,edu,0ba402af3b8682e2aa89f76bd823ddffdf89fa0a,citation,https://arxiv.org/pdf/1611.05916.pdf,Squared Earth Mover's Distance-based Loss for Training Deep Neural Networks,2016 +26,United States,Adience,adience,42.36782045,-71.12666653,Harvard University,edu,0ba402af3b8682e2aa89f76bd823ddffdf89fa0a,citation,https://arxiv.org/pdf/1611.05916.pdf,Squared Earth Mover's 
Distance-based Loss for Training Deep Neural Networks,2016 +27,United States,Adience,adience,39.2899685,-76.62196103,University of Maryland,edu,31f1e711fcf82c855f27396f181bf5e565a2f58d,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w11/papers/Ranjan_Unconstrained_Age_Estimation_ICCV_2015_paper.pdf,Unconstrained Age Estimation with Deep Convolutional Neural Networks,2015 +28,United States,Adience,adience,40.47913175,-74.43168868,Rutgers University,edu,31f1e711fcf82c855f27396f181bf5e565a2f58d,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w11/papers/Ranjan_Unconstrained_Age_Estimation_ICCV_2015_paper.pdf,Unconstrained Age Estimation with Deep Convolutional Neural Networks,2015 +29,Finland,Adience,adience,65.0592157,25.46632601,University of Oulu,edu,1fe121925668743762ce9f6e157081e087171f4c,citation,http://www.ee.oulu.fi/~jkannala/publications/cvprw2015.pdf,Unsupervised learning of overcomplete face descriptors,2015 +30,United States,Adience,adience,39.65404635,-79.96475355,West Virginia University,edu,7a65fc9e78eff3ab6062707deaadde024d2fad40,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w11/papers/Zhu_A_Study_on_ICCV_2015_paper.pdf,A Study on Apparent Age Estimation,2015 +31,Italy,Adience,adience,45.47567215,9.23336232,Università degli Studi di Milano,edu,717ffde99c0d6b58675d44b4c66acedce0ca86e8,citation,https://air.unimi.it/retrieve/handle/2434/527428/913482/cisda17.pdf,Age estimation based on face images and pre-trained convolutional neural networks,2017 +32,United States,Adience,adience,34.0224149,-118.28634407,University of Southern California,edu,eb6ee56e085ebf473da990d032a4249437a3e462,citation,http://www-scf.usc.edu/~chuntinh/doc/Age_Gender_Classification_APSIPA_2017.pdf,Age/gender classification with whole-component convolutional neural networks (WC-CNN),2017 +33,United States,Adience,adience,47.00646895,-120.5367304,Central Washington University,edu,9d6e60d49e92361f8f558013065dfa67043dd337,citation,https://pdfs.semanticscholar.org/9d6e/60d49e92361f8f558013065dfa67043dd337.pdf,Applications of Computational Geometry and Computer Vision,2016 +34,United States,Adience,adience,41.1664858,-73.1920564,University of Bridgeport,edu,0cece7b8989352e16a2fab8c0a0b1911c286906a,citation,https://pdfs.semanticscholar.org/0cec/e7b8989352e16a2fab8c0a0b1911c286906a.pdf,AUTOMATIC AGE ESTIMATION FROM REAL-WORLD AND WILD FACE IMAGES BY USING DEEP NEURAL NETWORKS,2017 +35,United States,Adience,adience,47.00646895,-120.5367304,Central Washington University,edu,56c2fb2438f32529aec604e6fc3b06a595ddbfcc,citation,https://pdfs.semanticscholar.org/60dc/35a42ac758c5372c44f3791c951374658609.pdf,Comparison of Recent Machine Learning Techniques for Gender Recognition from Facial Images,2016 +36,United States,Adience,adience,37.43131385,-122.16936535,Stanford University,edu,16d6737b50f969247339a6860da2109a8664198a,citation,https://pdfs.semanticscholar.org/16d6/737b50f969247339a6860da2109a8664198a.pdf,Convolutional Neural Networks for Age and Gender Classification,2016 +37,United States,Adience,adience,42.357757,-83.06286711,Wayne State University,edu,4f1249369127cc2e2894f6b2f1052d399794919a,citation,http://www.cs.wayne.edu/~mdong/tmm18.pdf,Deep Age Estimation: From Classification to Ranking,2018 +38,United States,Adience,adience,41.1664858,-73.1920564,University of Bridgeport,edu,ac9a331327cceda4e23f9873f387c9fd161fad76,citation,https://arxiv.org/pdf/1709.01664.pdf,Deep Convolutional Neural Network for Age Estimation based on 
VGG-Face Model,2017 +39,Netherlands,Adience,adience,53.21967825,6.56251482,University of Groningen,edu,4ff4c27e47b0aa80d6383427642bb8ee9d01c0ac,citation,http://vigir.missouri.edu/~gdesouza/Research/Conference_CDs/IEEE_SSCI_2015/data/7560a188.pdf,Deep Convolutional Neural Networks and Support Vector Machines for Gender Recognition,2015 +40,United Kingdom,Adience,adience,51.5217668,-0.13019072,University of London,edu,31ea88f29e7f01a9801648d808f90862e066f9ea,citation,https://arxiv.org/pdf/1605.06391.pdf,Deep Multi-task Representation Learning: A Tensor Factorisation Approach,2016 +41,Netherlands,Adience,adience,53.21967825,6.56251482,University of Groningen,edu,361c9ba853c7d69058ddc0f32cdbe94fbc2166d5,citation,https://pdfs.semanticscholar.org/361c/9ba853c7d69058ddc0f32cdbe94fbc2166d5.pdf,Deep Reinforcement Learning of Video Games,2017 +42,South Korea,Adience,adience,37.26728,126.9841151,Seoul National University,edu,282503fa0285240ef42b5b4c74ae0590fe169211,citation,https://arxiv.org/pdf/1801.07848.pdf,Feeding Hand-Crafted Features for Enhancing the Performance of Convolutional Neural Networks,2018 +43,Italy,Adience,adience,45.518383,9.213452,University of Milano-Bicocca,edu,305346d01298edeb5c6dc8b55679e8f60ba97efb,citation,https://pdfs.semanticscholar.org/3053/46d01298edeb5c6dc8b55679e8f60ba97efb.pdf,Fine-Grained Face Annotation Using Deep Multi-Task CNN,2018 +44,Canada,Adience,adience,45.5039761,-73.5749687,McGill University,edu,a760d33a21d2ab338f59d32ac7f96023bbfaa248,citation,https://arxiv.org/pdf/1803.08134.pdf,Fisher Pruning of Deep Nets for Facial Trait Classification,2018 +45,Taiwan,Adience,adience,25.0410728,121.6147562,Institute of Information Science,edu,0951f42abbf649bb564a21d4ff5dddf9a5ea54d9,citation,https://arxiv.org/pdf/1806.02023.pdf,Joint Estimation of Age and Gender from Unconstrained Face Images Using Lightweight Multi-Task CNN for Mobile Applications,2018 +46,China,Adience,adience,25.055125,102.696888,Yunnan Normal University,edu,99c57ec53f2598d63c010f791adbca386b276919,citation,https://pdfs.semanticscholar.org/99c5/7ec53f2598d63c010f791adbca386b276919.pdf,Landmark-Guided Local Deep Neural Networks for Age and Gender Classification,2018 +47,South Korea,Adience,adience,37.5600406,126.9369248,Yonsei University,edu,ba0d9e1e8bc798656429fe7121afee672dddb380,citation,https://arxiv.org/pdf/1809.01990.pdf,Multi-Expert Gender Classification on Age Group by Integrating Deep Neural Networks,2018 +48,Canada,Adience,adience,45.42580475,-75.68740118,University of Ottawa,edu,16820ccfb626dcdc893cc7735784aed9f63cbb70,citation,http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W12/papers/Azarmehr_Real-Time_Embedded_Age_2015_CVPR_paper.pdf,Real-time embedded age and gender classification in unconstrained video,2015 +49,United States,Adience,adience,40.9153196,-73.1270626,Stony Brook University,edu,1190cba0cae3c8bb81bf80d6a0a83ae8c41240bc,citation,https://pdfs.semanticscholar.org/1190/cba0cae3c8bb81bf80d6a0a83ae8c41240bc.pdf,Squared Earth Mover ’ s Distance Loss for Training Deep Neural Networks on Ordered-Classes,2017 +50,Singapore,Adience,adience,1.340216,103.965089,Singapore University of Technology and Design,edu,00823e6c0b6f1cf22897b8d0b2596743723ec51c,citation,https://arxiv.org/pdf/1708.07689.pdf,Understanding and Comparing Deep Neural Networks for Age and Gender Classification,2017 +51,United States,Adience,adience,42.357757,-83.06286711,Wayne State 
University,edu,28d99dc2d673d62118658f8375b414e5192eac6f,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Chen_Using_Ranking-CNN_for_CVPR_2017_paper.pdf,Using Ranking-CNN for Age Estimation,2017 +52,Israel,Adience,adience,32.77824165,34.99565673,Open University of Israel,edu,2cbb4a2f8fd2ddac86f8804fd7ffacd830a66b58,citation,http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W08/papers/Levi_Age_and_Gender_2015_CVPR_paper.pdf,Age and gender classification using convolutional neural networks,2015 +53,Australia,Adience,adience,-35.2776999,149.118527,Australian National University,edu,ac2e3a889fc46ca72f9a2cdedbdd6f3d4e9e2627,citation,https://pdfs.semanticscholar.org/ac2e/3a889fc46ca72f9a2cdedbdd6f3d4e9e2627.pdf,Age detection from a single image using multitask neural networks : An overview and design proposal,2016 +54,Australia,Adience,adience,-35.2776999,149.118527,CSIRO,edu,ac2e3a889fc46ca72f9a2cdedbdd6f3d4e9e2627,citation,https://pdfs.semanticscholar.org/ac2e/3a889fc46ca72f9a2cdedbdd6f3d4e9e2627.pdf,Age detection from a single image using multitask neural networks : An overview and design proposal,2016 +55,Australia,Adience,adience,-35.2776999,149.118527,"Data61, CSIRO, Canberra, Australia",edu,ac2e3a889fc46ca72f9a2cdedbdd6f3d4e9e2627,citation,https://pdfs.semanticscholar.org/ac2e/3a889fc46ca72f9a2cdedbdd6f3d4e9e2627.pdf,Age detection from a single image using multitask neural networks : An overview and design proposal,2016 +56,China,Adience,adience,38.8760446,115.4973873,North China Electric Power University,edu,50ff21e595e0ebe51ae808a2da3b7940549f4035,citation,https://arxiv.org/pdf/1710.02985.pdf,Age Group and Gender Estimation in the Wild With Deep RoR Architecture,2017 +57,United States,Adience,adience,38.9403808,-92.3277375,University of Missouri Columbia,edu,50ff21e595e0ebe51ae808a2da3b7940549f4035,citation,https://arxiv.org/pdf/1710.02985.pdf,Age Group and Gender Estimation in the Wild With Deep RoR Architecture,2017 +58,China,Adience,adience,26.88111275,112.62850666,Hunan University,edu,ec0104286c96707f57df26b4f0a4f49b774c486b,citation,http://www.cs.newpaltz.edu/~lik/publications/Mingxing-Duan-IEEE-TIFS-2018.pdf,An Ensemble CNN2ELM for Age Estimation,2018 +59,China,Adience,adience,28.2290209,112.99483204,"National University of Defense Technology, China",mil,ec0104286c96707f57df26b4f0a4f49b774c486b,citation,http://www.cs.newpaltz.edu/~lik/publications/Mingxing-Duan-IEEE-TIFS-2018.pdf,An Ensemble CNN2ELM for Age Estimation,2018 +60,United States,Adience,adience,42.6480516,-73.749576,State University of New York,edu,ec0104286c96707f57df26b4f0a4f49b774c486b,citation,http://www.cs.newpaltz.edu/~lik/publications/Mingxing-Duan-IEEE-TIFS-2018.pdf,An Ensemble CNN2ELM for Age Estimation,2018 +61,United States,Adience,adience,37.3706254,-121.9671894,NVIDIA,company,81e628a23e434762b1208045919af48dceb6c4d2,citation,https://arxiv.org/pdf/1807.07320.pdf,Attend and Rectify: A Gated Attention Mechanism for Fine-Grained Recovery,2018 +62,Spain,Adience,adience,41.5019255,2.1048538,"UAB, Barcelona, Spain",edu,81e628a23e434762b1208045919af48dceb6c4d2,citation,https://arxiv.org/pdf/1807.07320.pdf,Attend and Rectify: A Gated Attention Mechanism for Fine-Grained Recovery,2018 +63,Brazil,Adience,adience,-27.5953995,-48.6154218,University of Campinas,edu,bc749f0e81eafe9e32d56336750782f45d82609d,citation,https://pdfs.semanticscholar.org/bc74/9f0e81eafe9e32d56336750782f45d82609d.pdf,Combination of Texture and Geometric Features for Age Estimation in Face Images,2018 
+64,China,Adience,adience,22.4162632,114.2109318,Chinese University of Hong Kong,edu,29db16efc3b378c50511f743e5197a4c0b9e902f,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w11/papers/Kuang_Deeply_Learned_Rich_ICCV_2015_paper.pdf,Deeply Learned Rich Coding for Cross-Dataset Facial Age Estimation,2015 +65,China,Adience,adience,39.993008,116.329882,SenseTime,company,29db16efc3b378c50511f743e5197a4c0b9e902f,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w11/papers/Kuang_Deeply_Learned_Rich_ICCV_2015_paper.pdf,Deeply Learned Rich Coding for Cross-Dataset Facial Age Estimation,2015 +66,Israel,Adience,adience,32.77824165,34.99565673,Open University of Israel,edu,0dccc881cb9b474186a01fd60eb3a3e061fa6546,citation,https://arxiv.org/pdf/1411.7964.pdf,Effective face frontalization in unconstrained images,2015 +67,Russia,Adience,adience,56.3244285,44.0286291,Nizhny Novgorod State Linguistic University,edu,efb56e7488148d52d3b8a2dae9f8880b273f4226,citation,https://arxiv.org/pdf/1807.07718.pdf,"Efficient Facial Representations for Age, Gender and Identity Recognition in Organizing Photo Albums using Multi-output CNN",2018 +68,Russia,Adience,adience,55.694797,37.564332,"Samsung-PDMI Joint AI Center, Steklov Institute of Mathematics",company,efb56e7488148d52d3b8a2dae9f8880b273f4226,citation,https://arxiv.org/pdf/1807.07718.pdf,"Efficient Facial Representations for Age, Gender and Identity Recognition in Organizing Photo Albums using Multi-output CNN",2018 +69,Ireland,Adience,adience,53.308244,-6.2241652,University College Dublin,edu,cc45fb67772898c36519de565c9bd0d1d11f1435,citation,https://forensicsandsecurity.com/papers/EvaluatingFacialAgeEstimation.pdf,Evaluating Automated Facial Age Estimation Techniques for Digital Forensics,2018 +70,China,Adience,adience,37.8956594,114.9042208,"Hebei, China",edu,fca6df7d36f449d48a8d1e48a78c860d52e3baf8,citation,https://arxiv.org/pdf/1805.10445.pdf,Fine-Grained Age Estimation in the wild with Attention LSTM Networks,2018 +71,China,Adience,adience,38.8760446,115.4973873,North China Electric Power University,edu,fca6df7d36f449d48a8d1e48a78c860d52e3baf8,citation,https://arxiv.org/pdf/1805.10445.pdf,Fine-Grained Age Estimation in the wild with Attention LSTM Networks,2018 +72,United States,Adience,adience,38.9403808,-92.3277375,University of Missouri-Columbia,edu,fca6df7d36f449d48a8d1e48a78c860d52e3baf8,citation,https://arxiv.org/pdf/1805.10445.pdf,Fine-Grained Age Estimation in the wild with Attention LSTM Networks,2018 +73,India,Adience,adience,28.5456282,77.2731505,"IIIT Delhi, India",edu,af6e351d58dba0962d6eb1baf4c9a776eb73533f,citation,https://arxiv.org/pdf/1612.07454.pdf,How to Train Your Deep Neural Network with Dictionary Learning,2016 +74,Turkey,Adience,adience,41.10427915,29.02231159,Istanbul Technical University,edu,9755554b13103df634f9b1ef50a147dd02eab02f,citation,https://arxiv.org/pdf/1610.00134.pdf,How Transferable Are CNN-Based Features for Age and Gender Classification?,2016 +75,United States,Adience,adience,42.3619407,-71.0904378,MIT CSAIL,edu,9755554b13103df634f9b1ef50a147dd02eab02f,citation,https://arxiv.org/pdf/1610.00134.pdf,How Transferable Are CNN-Based Features for Age and Gender Classification?,2016 +76,Turkey,Adience,adience,38.029533,32.506051,"Mevlana Universitesi, Konya, Turkey",edu,eb4151eebd0b7451ca990b242cef8357bfa9db92,citation,https://pdfs.semanticscholar.org/eb41/51eebd0b7451ca990b242cef8357bfa9db92.pdf,Human Gender Prediction on Facial Images Taken by Mobile Phone using 
Convolutional Neural Networks,2018 +77,United Kingdom,Adience,adience,53.405936,-2.9655722,Liverpool University,edu,c95d8b9bddd76b8c83c8745747e8a33feedf3941,citation,https://arxiv.org/pdf/1805.02901.pdf,Image Ordinal Classification and Understanding: Grid Dropout with Masking Label,2018 +78,China,Adience,adience,28.874513,105.431827,"Sichuan Police College, Luzhou, China",gov,c95d8b9bddd76b8c83c8745747e8a33feedf3941,citation,https://arxiv.org/pdf/1805.02901.pdf,Image Ordinal Classification and Understanding: Grid Dropout with Masking Label,2018 +79,China,Adience,adience,30.672721,104.098806,University of Electronic Science and Technology of China,edu,c95d8b9bddd76b8c83c8745747e8a33feedf3941,citation,https://arxiv.org/pdf/1805.02901.pdf,Image Ordinal Classification and Understanding: Grid Dropout with Masking Label,2018 +80,India,Adience,adience,19.1334302,72.9132679,"Indian Institute of Technology Bombay, Mumbai, India",edu,bb33376961f6663df848ae9bf055c9afd9182443,citation,https://arxiv.org/pdf/1901.01151.pdf,Learning From Less Data: A Unified Data Subset Selection and Active Learning Framework for Computer Vision,2019 +81,United States,Adience,adience,47.6423318,-122.1369302,Microsoft,company,bb33376961f6663df848ae9bf055c9afd9182443,citation,https://arxiv.org/pdf/1901.01151.pdf,Learning From Less Data: A Unified Data Subset Selection and Active Learning Framework for Computer Vision,2019 +82,United States,Adience,adience,42.3889785,-72.5286987,University of Massachusetts,edu,bb33376961f6663df848ae9bf055c9afd9182443,citation,https://arxiv.org/pdf/1901.01151.pdf,Learning From Less Data: A Unified Data Subset Selection and Active Learning Framework for Computer Vision,2019 +83,India,Adience,adience,19.1334302,72.9132679,"Indian Institute of Technology Bombay, Mumbai, India",edu,d278e020be85a1ccd90aa366b70c43884dd3f798,citation,https://arxiv.org/pdf/1805.11191.pdf,Learning From Less Data: Diversified Subset Selection and Active Learning in Image Classification Tasks,2018 +84,Germany,Adience,adience,53.1013476,8.8611632,"Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Robotics Innovation Center, 28359 Bremen, Germany",edu,0cfca73806f443188632266513bac6aaf6923fa8,citation,https://arxiv.org/pdf/1805.04756.pdf,Predictive Uncertainty in Large Scale Classification using Dropout - Stochastic Gradient Hamiltonian Monte Carlo,2018 +85,Chile,Adience,adience,-33.4411279,-70.6407933,Universidad Catolica de Chile,edu,0cfca73806f443188632266513bac6aaf6923fa8,citation,https://arxiv.org/pdf/1805.04756.pdf,Predictive Uncertainty in Large Scale Classification using Dropout - Stochastic Gradient Hamiltonian Monte Carlo,2018 +86,Italy,Adience,adience,43.7776426,11.259765,University of Florence,edu,e5563a0d6a2312c614834dc784b5cc7594362bff,citation,https://pdfs.semanticscholar.org/e556/3a0d6a2312c614834dc784b5cc7594362bff.pdf,Real-Time Demographic Profiling from Face Imagery with Fisher Vectors,2018 +87,India,Adience,adience,20.1438995,85.6762033,Indian Institute of Technology Bhubaneswar,edu,b46d49cb7aade5ab7be51bd7a0ce3aa6f7c6b9ed,citation,https://arxiv.org/pdf/1712.01661.pdf,Recognizing Gender from Human Facial Regions using Genetic Algorithm,2018 +88,India,Adience,adience,29.8542626,77.8880002,"Indian institute of Technology Roorkee, India",edu,b46d49cb7aade5ab7be51bd7a0ce3aa6f7c6b9ed,citation,https://arxiv.org/pdf/1712.01661.pdf,Recognizing Gender from Human Facial Regions using Genetic Algorithm,2018 +89,India,Adience,adience,22.714846,88.4161884,"Institute of Engineering & Management, 
Kolkata",edu,b46d49cb7aade5ab7be51bd7a0ce3aa6f7c6b9ed,citation,https://arxiv.org/pdf/1712.01661.pdf,Recognizing Gender from Human Facial Regions using Genetic Algorithm,2018 +90,United Kingdom,Adience,adience,51.49887085,-0.17560797,Imperial College London,edu,7173871866fc7e555e9123d1d7133d20577054e8,citation,https://arxiv.org/pdf/1807.08108.pdf,Simultaneous Adversarial Training - Learn from Others Mistakes,2018 +91,India,Adience,adience,28.5456282,77.2731505,"IIIT Delhi, India",edu,5e39deb4bff7b887c8f3a44dfe1352fbcde8a0bd,citation,https://arxiv.org/pdf/1810.06221.pdf,Supervised COSMOS Autoencoder: Learning Beyond the Euclidean Loss!,2018 +92,India,Adience,adience,30.7649646,76.7750066,"Infosys Ltd., Chandigarh, India",company,5e39deb4bff7b887c8f3a44dfe1352fbcde8a0bd,citation,https://arxiv.org/pdf/1810.06221.pdf,Supervised COSMOS Autoencoder: Learning Beyond the Euclidean Loss!,2018 +93,United States,Adience,adience,30.6108365,-96.352128,Texas A&M University,edu,5e39deb4bff7b887c8f3a44dfe1352fbcde8a0bd,citation,https://arxiv.org/pdf/1810.06221.pdf,Supervised COSMOS Autoencoder: Learning Beyond the Euclidean Loss!,2018 +94,China,Adience,adience,45.7413921,126.62552755,Harbin Institute of Technology,edu,c5fff7adc5084d69390918daf09e832ec191144b,citation,,Deep learning application based on embedded GPU,2017 +95,China,Adience,adience,26.085573,119.372442,Fujian University of Technology,edu,c5fff7adc5084d69390918daf09e832ec191144b,citation,,Deep learning application based on embedded GPU,2017 +96,China,Adience,adience,25.28164,110.337304,Guilin University of Electronic Technology,edu,c5fff7adc5084d69390918daf09e832ec191144b,citation,,Deep learning application based on embedded GPU,2017 +97,United States,Adience,adience,32.8536333,-117.2035286,Kyung Hee University,edu,854b1f0581f5d3340f15eb79452363cbf38c04c8,citation,,Directional Age-Primitive Pattern (DAPP) for Human Age Group Recognition and Age Estimation,2017 +98,Saudi Arabia,Adience,adience,24.7246403,46.62335012,King Saud University,edu,854b1f0581f5d3340f15eb79452363cbf38c04c8,citation,,Directional Age-Primitive Pattern (DAPP) for Human Age Group Recognition and Age Estimation,2017 +99,Bangladesh,Adience,adience,23.7289899,90.3982682,Institute of Information Technology,edu,854b1f0581f5d3340f15eb79452363cbf38c04c8,citation,,Directional Age-Primitive Pattern (DAPP) for Human Age Group Recognition and Age Estimation,2017 +100,Turkey,Adience,adience,38.6747649,39.1866925,"Firat Üniversitesi Elaziğ, Türkiye",edu,ecfb93de88394a244896bfe6ee7bf39fb250b820,citation,,Gender recognition from face images with deep learning,2017 +101,Turkey,Adience,adience,38.6812759,39.196083,"Enerji Sistemleri Müh. 
Bölümü Teknoloji Fakültesi, Firat Üniversitesi Elaziğ, Türkiye",edu,ecfb93de88394a244896bfe6ee7bf39fb250b820,citation,,Gender recognition from face images with deep learning,2017 +102,Turkey,Adience,adience,38.6774755,39.2030121,"Enformatik Bölümü Firat Üniversitesi Elaziğ, Türkiye",edu,ecfb93de88394a244896bfe6ee7bf39fb250b820,citation,,Gender recognition from face images with deep learning,2017 +103,Canada,Adience,adience,49.2593879,-122.9151893,"AltumView Systems Inc., Burnaby, BC, Canada",company,b44f03b5fa8c6275238c2d13345652e6ff7e6ea9,citation,,Lapped convolutional neural networks for embedded systems,2017 +104,Korea,Adience,adience,37.2830003,127.04548469,Ajou University,edu,24286ef164f0e12c3e9590ec7f636871ba253026,citation,,Age and gender classification using wide convolutional neural network and Gabor filter,2018 +105,South Korea,Adience,adience,37.26728,126.9841151,Seoul National University,edu,24286ef164f0e12c3e9590ec7f636871ba253026,citation,,Age and gender classification using wide convolutional neural network and Gabor filter,2018 +106,China,Adience,adience,23.143197,113.34009651,South China Normal University,edu,dc6ad30c7a4bc79bb06b4725b16e202d3d7d8935,citation,,Age classification with deep learning face representation,2017 +107,China,Adience,adience,23.0502042,113.39880323,South China University of Technology,edu,dc6ad30c7a4bc79bb06b4725b16e202d3d7d8935,citation,,Age classification with deep learning face representation,2017 +108,China,Adience,adience,30.724051,104.026606,Sichuan Film and Television University,edu,9215d36c501d6ee57d74c1eeb1475efd800d92d3,citation,,An optimization framework of video advertising: using deep learning algorithm based on global image information,2018 +109,China,Adience,adience,30.578908,104.27712,Sichuan Tourism University,edu,9215d36c501d6ee57d74c1eeb1475efd800d92d3,citation,,An optimization framework of video advertising: using deep learning algorithm based on global image information,2018 +110,Romania,Adience,adience,46.7677955,23.5912762,Babes Bolyai University,edu,7aa32e0639e0750e9eee3ce16e51e9f94241ae88,citation,,Automatic gender recognition for “in the wild” facial images using convolutional neural networks,2017 +111,Romania,Adience,adience,46.7723581,23.5852075,Technical University,edu,7aa32e0639e0750e9eee3ce16e51e9f94241ae88,citation,,Automatic gender recognition for “in the wild” facial images using convolutional neural networks,2017 +112,United States,Adience,adience,38.926761,-92.29193783,University of Missouri,edu,0e71d712f771196189b01f0088cc3497d174493b,citation,,Fine-Grained Age Group Classification in the wild,2018 +113,China,Adience,adience,38.8760446,115.4973873,North China Electric Power University,edu,0e71d712f771196189b01f0088cc3497d174493b,citation,,Fine-Grained Age Group Classification in the wild,2018 +114,China,Adience,adience,22.304572,114.17976285,Hong Kong Polytechnic University,edu,dc2f16f967eac710cb9b7553093e9c977e5b761d,citation,,Learning a lightweight deep convolutional network for joint age and gender recognition,2016 +115,China,Adience,adience,23.09461185,113.28788994,Sun Yat-Sen University,edu,dc2f16f967eac710cb9b7553093e9c977e5b761d,citation,,Learning a lightweight deep convolutional network for joint age and gender recognition,2016 +116,South Korea,Adience,adience,36.3721427,127.36039,KAIST,edu,92d051d4680eb41eb172d23cb8c93eed7677af56,citation,,Adversarial Spatial Frequency Domain Critic Learning for Age and Gender Classification,2018 +117,Thailand,Adience,adience,14.0785,100.6140362,"Asian Institute of 
Technology (AIT), Pathum Thani 12120, Thailand",edu,984edce0b961418d81203ec477b9bfa5a8197ba3,citation,,Customer and target individual face analysis for retail analytics,2018 +118,China,Adience,adience,39.98177,116.330086,National Laboratory of Pattern Recognition,edu,d80159bbe1d576d147ca9adbc9339a05fe3bab28,citation,,"Demographic Analysis from Biometric Data: Achievements, Challenges, and New Frontiers",2018 +119,South Korea,Adience,adience,36.383765,127.36694,"Electronics and Telecommunications Research Institute, Daejeon, South Korea",edu,771e27c4b53f58622e6f03788b5102e5e70b1e49,citation,,Facial Attribute Recognition by Recurrent Learning With Visual Fixation,2018 +120,China,Adience,adience,28.874513,105.431827,"Sichuan Police College, Luzhou, China",gov,7587a09d924cab41822a07cd1a988068b74baabb,citation,,Image scoring: Patch based CNN model for small or medium dataset,2017 +121,Brazil,Adience,adience,-22.8148374,-47.0647708,University of Campinas (UNICAMP),edu,b161d261fabb507803a9e5834571d56a3b87d147,citation,http://www.smc2017.org/SMC2017_Papers/media/files/1077.pdf,Gender recognition from face images using a geometric descriptor,2017 +122,China,Adience,adience,40.00229045,116.32098908,Tsinghua University,edu,2149d49c84a83848d6051867290d9c8bfcef0edb,citation,,Label-Sensitive Deep Metric Learning for Facial Age Estimation,2018 +123,United States,Adience,adience,40.4319722,-86.92389368,Purdue University,edu,07a1e6d26028b28185b7a3eee86752c240a24261,citation,,MODE: automated neural network model debugging via state differential analysis and input selection,2018 +124,Germany,Adience,adience,-35.4354218,-71.6199998,"Geospatial Laboratory, Universidad Católica del Maule, Talca, Chile",edu,3a05415356bd574cad1a9f1be21214e428bbc81b,citation,,Multinomial Naive Bayes for real-time gender recognition,2016 +125,Cyprus,Adience,adience,34.67567405,33.04577648,Cyprus University of Technology,edu,9f3c9e41f46df9c94d714b1f080dafad6b4de1de,citation,,On the detection of images containing child-pornographic material,2017 +126,United States,Adience,adience,32.8536333,-117.2035286,Kyung Hee University,edu,9d4692e243e25eb465a0480376beb60a5d2f0f13,citation,,Positional Ternary Pattern (PTP): An edge based image descriptor for human age recognition,2016 +127,Italy,Adience,adience,43.7192522,10.4239948,"Istituto di Informatica e Telematica, Consiglio Nazionale delle Ricerche, Pisa, Italy",edu,17de5a9ce09f4834629cd76b8526071a956c9c6d,citation,,Smart Parental Advisory: A Usage Control and Deep Learning-Based Framework for Dynamic Parental Control on Smart TV,2017 +128,Vietnam,Adience,adience,20.8368539,106.6942087,Vietnam Maritime University,edu,d38b32d91d56b01c77ef4dd7d625ce5217c6950b,citation,,Unconstrained gender classification by multi-resolution LPQ and SIFT,2016 +129,Poland,Adience,adience,50.0657033,19.91895867,AGH University of Science and Technology,edu,cca476114c48871d05537abb303061de5ab010d6,citation,,A compact deep convolutional neural network architecture for video based age and gender estimation,2016 +130,Singapore,Adience,adience,1.3483099,103.6831347,"NTU, Singapore",edu,8a917903b0a1d47f24bc7776ab0bd00aa8ec88f3,citation,,A Constrained Deep Neural Network for Ordinal Regression,2018 +131,Spain,Adience,adience,41.5008957,2.111553,Autonomous University of Barcelona,edu,c9c2de3628be7e249722b12911bebad84b567ce6,citation,,Age and gender recognition in the wild with deep attention,2017 +132,China,Adience,adience,38.8637191,115.5148326,"Hebei Information Engineering School, Baoding, 
China",edu,ea227e47b8a1e8f55983c34a17a81e5d3fa11cfd,citation,,Age group classification in the wild with deep RoR architecture,2017 +133,China,Adience,adience,38.8760446,115.4973873,North China Electric Power University,edu,ea227e47b8a1e8f55983c34a17a81e5d3fa11cfd,citation,,Age group classification in the wild with deep RoR architecture,2017 +134,United States,Adience,adience,38.9403808,-92.3277375,University of Missouri Columbia,edu,ea227e47b8a1e8f55983c34a17a81e5d3fa11cfd,citation,,Age group classification in the wild with deep RoR architecture,2017 +135,India,Adience,adience,29.8542626,77.8880002,"Indian institute of Technology Roorkee, India",edu,f4003cbbff3b3d008aa64c76fed163c10d9c68bd,citation,,Compass local binary patterns for gender recognition of facial photographs and sketches,2016 +136,Malaysia,Adience,adience,3.12267405,101.65356103,"University of Malaya, Kuala Lumpur",edu,d4d1ac1cfb2ca703c4db8cc9a1c7c7531fa940f9,citation,,"Gender estimation based on supervised HOG, Action Units and unsupervised CNN feature extraction",2017 +137,United Kingdom,Adience,adience,51.5247272,-0.03931035,Queen Mary University of London,edu,d7fd3dedb6b260702ed5e4b9175127815286e8da,citation,,Knowledge sharing: From atomic to parametrised context and shallow to deep models,2017 +138,Taiwan,Adience,adience,25.0421852,121.6145477,"Academia Sinica, Taipei, Taiwan",edu,aa6f7c3daed31d331ef626758e990cbc04632852,citation,,Merging Deep Neural Networks for Mobile Devices,2018 diff --git a/site/datasets/verified/brainwash.csv b/site/datasets/verified/brainwash.csv index 628ca090..8b70de6e 100644 --- a/site/datasets/verified/brainwash.csv +++ b/site/datasets/verified/brainwash.csv @@ -3,3 +3,13 @@ id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,t 1,China,Brainwash,brainwash,39.9922379,116.30393816,Peking University,edu,7e915bb8e4ada4f8d261bc855a4f587ea97764ca,citation,,People detection in crowded scenes via regional-based convolutional network,2016 2,China,Brainwash,brainwash,28.2290209,112.99483204,"National University of Defense Technology, China",mil,591a4bfa6380c9fcd5f3ae690e3ac5c09b7bf37b,citation,https://pdfs.semanticscholar.org/591a/4bfa6380c9fcd5f3ae690e3ac5c09b7bf37b.pdf,A Replacement Algorithm of Non-Maximum Suppression Base on Graph Clustering,2017 3,China,Brainwash,brainwash,28.2290209,112.99483204,"National University of Defense Technology, China",mil,b02d31c640b0a31fb18c4f170d841d8e21ffb66c,citation,,Localized region context and object feature fusion for people head detection,2016 +4,United States,Brainwash,brainwash,37.43131385,-122.16936535,Stanford University,edu,81ba5202424906f64b77f68afca063658139fbb2,citation,https://arxiv.org/pdf/1611.09078.pdf,Social Scene Understanding: End-to-End Multi-person Action Localization and Collective Activity Recognition,2017 +5,Switzerland,Brainwash,brainwash,46.109237,7.08453549,IDIAP Research Institute,edu,81ba5202424906f64b77f68afca063658139fbb2,citation,https://arxiv.org/pdf/1611.09078.pdf,Social Scene Understanding: End-to-End Multi-person Action Localization and Collective Activity Recognition,2017 +6,China,Brainwash,brainwash,35.86166,104.195397,"Megvii Inc. 
(Face++), China",company,03a65d274dc6caea94f6ab344e0b4969575327e3,citation,https://arxiv.org/pdf/1805.00123.pdf,CrowdHuman: A Benchmark for Detecting Human in a Crowd,2018 +7,China,Brainwash,brainwash,23.0502042,113.39880323,South China University of Technology,edu,2f02c1d4858d9c0f5d16099eb090560d5fa4f23f,citation,,Detecting Heads using Feature Refine Net and Cascaded Multi-scale Architecture,2018 +8,China,Brainwash,brainwash,23.0502042,113.39880323,South China University of Technology,edu,cab97d4dc67919f965cc884d80e4d6b743a256eb,citation,,Scale Mapping and Dynamic Re-Detecting in Dense Head Detection,2018 +9,China,Brainwash,brainwash,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,d78b190f98f9630cab261eabc399733af052f05c,citation,https://arxiv.org/pdf/1802.03269.pdf,Unsupervised Deep Domain Adaptation for Pedestrian Detection,2016 +10,Netherlands,Brainwash,brainwash,52.2380139,6.8566761,University of Twente,edu,d78b190f98f9630cab261eabc399733af052f05c,citation,https://arxiv.org/pdf/1802.03269.pdf,Unsupervised Deep Domain Adaptation for Pedestrian Detection,2016 +11,India,Brainwash,brainwash,12.9914929,80.2336907,"IIT Madras, India",edu,d488dad9fa81817c85a284b09ebf198bf6b640f9,citation,https://arxiv.org/pdf/1809.08766.pdf,FCHD: A fast and accurate head detector,2018 +12,Canada,Brainwash,brainwash,45.42580475,-75.68740118,University of Ottawa,edu,68ea88440fc48d59c7407e71a193ff1973f9ba7c,citation,https://pdfs.semanticscholar.org/68ea/88440fc48d59c7407e71a193ff1973f9ba7c.pdf,Shoulder Keypoint-Detection from Object Detection,2018 +13,Netherlands,Brainwash,brainwash,51.99882735,4.37396037,Delft University of Technology,edu,9043df1de4f6e181875011c1379d1a7f68a28d6c,citation,https://pdfs.semanticscholar.org/9043/df1de4f6e181875011c1379d1a7f68a28d6c.pdf,People Detection from Overhead Cameras,2018 diff --git a/site/datasets/verified/duke_mtmc.csv b/site/datasets/verified/duke_mtmc.csv index 929b84c1..b85d9458 100644 --- a/site/datasets/verified/duke_mtmc.csv +++ b/site/datasets/verified/duke_mtmc.csv @@ -45,137 +45,181 @@ id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,t 43,Australia,Duke MTMC,duke_mtmc,-35.2776999,149.118527,Australian National University,edu,f8f92624c8794d54e08b3a8f94910952ae03cade,citation,,CamStyle: A Novel Data Augmentation Method for Person Re-Identification,2019 44,China,Duke MTMC,duke_mtmc,22.4162632,114.2109318,Chinese University of Hong Kong,edu,08d2a558ea2deb117dd8066e864612bf2899905b,citation,https://arxiv.org/pdf/1807.09975.pdf,Person Re-identification with Deep Similarity-Guided Graph Neural Network,2018 45,China,Duke MTMC,duke_mtmc,39.993008,116.329882,SenseTime,company,08d2a558ea2deb117dd8066e864612bf2899905b,citation,https://arxiv.org/pdf/1807.09975.pdf,Person Re-identification with Deep Similarity-Guided Graph Neural Network,2018 -46,United States,Duke MTMC,duke_mtmc,37.8718992,-122.2585399,University of California,edu,fefa8f07d998f8f4a6c85a7da781b19bf6b78d7d,citation,https://arxiv.org/pdf/1902.00749.pdf,Online Multi-Object Tracking with Dual Matching Attention Networks,2018 -47,China,Duke MTMC,duke_mtmc,39.9808333,116.34101249,Beihang University,edu,7bfc5bbad852f9e6bea3b86c25179d81e2e7fff6,citation,,Online Inter-Camera Trajectory Association Exploiting Person Re-Identification and Camera Topology,2018 -48,China,Duke MTMC,duke_mtmc,40.00229045,116.32098908,Tsinghua University,edu,be79ad118d0524d9b493f4a14a662c8184e6405a,citation,,Attend and Align: Improving Deep Representations with Feature Alignment Layer for Person 
Retrieval,2018 -49,China,Duke MTMC,duke_mtmc,40.00229045,116.32098908,Tsinghua University,edu,13ea9a2ed134a9e238d33024fba34d3dd6a010e0,citation,https://arxiv.org/pdf/1703.05693.pdf,SVDNet for Pedestrian Retrieval,2017 -50,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,13ea9a2ed134a9e238d33024fba34d3dd6a010e0,citation,https://arxiv.org/pdf/1703.05693.pdf,SVDNet for Pedestrian Retrieval,2017 -51,China,Duke MTMC,duke_mtmc,30.19331415,120.11930822,Zhejiang University,edu,608dede56161fd5f76bcf9228b4dd8c639d65b02,citation,https://arxiv.org/pdf/1807.00537.pdf,SphereReID: Deep Hypersphere Manifold Embedding for Person Re-Identification,2018 -52,United States,Duke MTMC,duke_mtmc,42.7298459,-73.67950216,Rensselaer Polytechnic Institute,edu,24d6d3adf2176516ef0de2e943ce2084e27c4f94,citation,https://arxiv.org/pdf/1811.07487.pdf,Re-Identification with Consistent Attentive Siamese Networks,2018 -53,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,15e1af79939dbf90790b03d8aa02477783fb1d0f,citation,https://arxiv.org/pdf/1701.07717.pdf,Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in Vitro,2017 -54,China,Duke MTMC,duke_mtmc,30.778621,103.961236,XiHua University,edu,ec9c20ed6cce15e9b63ac96bb5a6d55e69661e0b,citation,https://pdfs.semanticscholar.org/ec9c/20ed6cce15e9b63ac96bb5a6d55e69661e0b.pdf,Robust Pedestrian Detection for Semi-automatic Construction of a Crowded Person Re-Identification Dataset,2018 -55,United Kingdom,Duke MTMC,duke_mtmc,51.24303255,-0.59001382,University of Surrey,edu,ec9c20ed6cce15e9b63ac96bb5a6d55e69661e0b,citation,https://pdfs.semanticscholar.org/ec9c/20ed6cce15e9b63ac96bb5a6d55e69661e0b.pdf,Robust Pedestrian Detection for Semi-automatic Construction of a Crowded Person Re-Identification Dataset,2018 -56,China,Duke MTMC,duke_mtmc,31.4854255,120.2739581,Jiangnan University,edu,ec9c20ed6cce15e9b63ac96bb5a6d55e69661e0b,citation,https://pdfs.semanticscholar.org/ec9c/20ed6cce15e9b63ac96bb5a6d55e69661e0b.pdf,Robust Pedestrian Detection for Semi-automatic Construction of a Crowded Person Re-Identification Dataset,2018 -57,United Kingdom,Duke MTMC,duke_mtmc,51.5247272,-0.03931035,Queen Mary University of London,edu,fa3fb32fe0cd392960549b0adb7a535eb3656abd,citation,https://arxiv.org/pdf/1711.08106.pdf,The Devil is in the Middle: Exploiting Mid-level Representations for Cross-Domain Instance Matching,2017 -58,United Kingdom,Duke MTMC,duke_mtmc,55.94951105,-3.19534913,University of Edinburgh,edu,fa3fb32fe0cd392960549b0adb7a535eb3656abd,citation,https://arxiv.org/pdf/1711.08106.pdf,The Devil is in the Middle: Exploiting Mid-level Representations for Cross-Domain Instance Matching,2017 -59,United States,Duke MTMC,duke_mtmc,40.1019523,-88.2271615,UIUC,edu,54c28bf64debbdb21c246795182f97d4f7917b74,citation,https://arxiv.org/pdf/1811.04129.pdf,STA: Spatial-Temporal Attention for Large-Scale Video-based Person Re-Identification,2018 -60,United States,Duke MTMC,duke_mtmc,34.0803829,-118.3909947,Tencent,company,3b311a1ce30f9c0f3dc1d9c0cf25f13127a5e48c,citation,https://arxiv.org/pdf/1810.12193.pdf,A Coarse-to-fine Pyramidal Model for Person Re-identification via Multi-Loss Dynamic Training,2018 -61,United States,Duke MTMC,duke_mtmc,37.3860784,-121.9877807,Google and Hewlett-Packard Labs,edu,4d799f6e09f442bde583a50a0a9f81131ef707bb,citation,,TAR: Enabling Fine-Grained Targeted Advertising in Retail Stores,2018 -62,United States,Duke MTMC,duke_mtmc,37.3860784,-121.9877807,Hewlett-Packard 
Labs,edu,4d799f6e09f442bde583a50a0a9f81131ef707bb,citation,,TAR: Enabling Fine-Grained Targeted Advertising in Retail Stores,2018 -63,United States,Duke MTMC,duke_mtmc,39.6321923,-76.3038146,LinkedIn and Hewlett-Packard Labs,edu,4d799f6e09f442bde583a50a0a9f81131ef707bb,citation,,TAR: Enabling Fine-Grained Targeted Advertising in Retail Stores,2018 -64,United States,Duke MTMC,duke_mtmc,34.0224149,-118.28634407,University of Southern California,edu,4d799f6e09f442bde583a50a0a9f81131ef707bb,citation,,TAR: Enabling Fine-Grained Targeted Advertising in Retail Stores,2018 -65,Canada,Duke MTMC,duke_mtmc,49.2767454,-122.91777375,Simon Fraser University,edu,5137ca9f0a7cf4c61f2254d4a252a0c56e5dcfcc,citation,https://arxiv.org/pdf/1811.07130.pdf,Batch Feature Erasing for Person Re-identification and Beyond,2018 -66,China,Duke MTMC,duke_mtmc,32.0565957,118.77408833,Nanjing University,edu,c37c3853ab428725f13906bb0ff4936ffe15d6af,citation,https://arxiv.org/pdf/1809.02874.pdf,Unsupervised Person Re-identification by Deep Learning Tracklet Association,2018 -67,United Kingdom,Duke MTMC,duke_mtmc,51.5247272,-0.03931035,Queen Mary University of London,edu,c37c3853ab428725f13906bb0ff4936ffe15d6af,citation,https://arxiv.org/pdf/1809.02874.pdf,Unsupervised Person Re-identification by Deep Learning Tracklet Association,2018 -68,United States,Duke MTMC,duke_mtmc,37.8687126,-122.25586815,"University of California, Berkeley",edu,a8d665fa7357f696dcfd188b91fda88da47b964e,citation,https://arxiv.org/pdf/1809.02318.pdf,Scaling Video Analytics Systems to Large Camera Deployments,2018 -69,United States,Duke MTMC,duke_mtmc,47.6423318,-122.1369302,Microsoft,company,a8d665fa7357f696dcfd188b91fda88da47b964e,citation,https://arxiv.org/pdf/1809.02318.pdf,Scaling Video Analytics Systems to Large Camera Deployments,2018 -70,United States,Duke MTMC,duke_mtmc,41.78468745,-87.60074933,University of Chicago,edu,a8d665fa7357f696dcfd188b91fda88da47b964e,citation,https://arxiv.org/pdf/1809.02318.pdf,Scaling Video Analytics Systems to Large Camera Deployments,2018 -71,China,Duke MTMC,duke_mtmc,23.09461185,113.28788994,Sun Yat-Sen University,edu,dda0b381c162695f21b8d1149aab22188b3c2bc0,citation,https://arxiv.org/pdf/1804.02792.pdf,Occluded Person Re-Identification,2018 -72,China,Duke MTMC,duke_mtmc,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,33f358f1d2b54042c524d69b20e80d98dde3dacd,citation,https://arxiv.org/pdf/1811.11405.pdf,Spectral Feature Transformation for Person Re-identification,2018 -73,United States,Duke MTMC,duke_mtmc,32.8734455,-117.2065636,TuSimple,edu,33f358f1d2b54042c524d69b20e80d98dde3dacd,citation,https://arxiv.org/pdf/1811.11405.pdf,Spectral Feature Transformation for Person Re-identification,2018 -74,China,Duke MTMC,duke_mtmc,30.672721,104.098806,University of Electronic Science and Technology of China,edu,8ffc49aead99fdacb0b180468a36984759f2fc1e,citation,https://arxiv.org/pdf/1809.04976.pdf,Sparse Label Smoothing for Semi-supervised Person Re-Identification,2018 -75,Germany,Duke MTMC,duke_mtmc,50.7791703,6.06728733,RWTH Aachen University,edu,10b36c003542545f1e2d73e8897e022c0c260c32,citation,https://arxiv.org/pdf/1705.04608.pdf,Towards a Principled Integration of Multi-camera Re-identification and Tracking Through Optimal Bayes Filters,2017 -76,United Kingdom,Duke MTMC,duke_mtmc,51.7534538,-1.25400997,University of Oxford,edu,94ed6dc44842368b457851b43023c23fd78d5390,citation,https://arxiv.org/pdf/1806.01794.pdf,"Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects",2018 
-77,China,Duke MTMC,duke_mtmc,39.9041999,116.4073963,"Beijing, China",edu,280976bbb41d2948a5c0208f86605977397181cd,citation,https://arxiv.org/pdf/1811.08073.pdf,Factorized Distillation: Training Holistic Person Re-identification Model by Distilling an Ensemble of Partial ReID Models,2018 -78,China,Duke MTMC,duke_mtmc,40.00229045,116.32098908,Tsinghua University,edu,280976bbb41d2948a5c0208f86605977397181cd,citation,https://arxiv.org/pdf/1811.08073.pdf,Factorized Distillation: Training Holistic Person Re-identification Model by Distilling an Ensemble of Partial ReID Models,2018 -79,China,Duke MTMC,duke_mtmc,39.9922379,116.30393816,Peking University,edu,014e249422b6bd6ff32b3f7d385b5a0e8c4c9fcf,citation,https://arxiv.org/pdf/1810.05866.pdf,Attention driven person re-identification,2019 -80,Singapore,Duke MTMC,duke_mtmc,1.3484104,103.68297965,Nanyang Technological University,edu,014e249422b6bd6ff32b3f7d385b5a0e8c4c9fcf,citation,https://arxiv.org/pdf/1810.05866.pdf,Attention driven person re-identification,2019 -81,China,Duke MTMC,duke_mtmc,39.9808333,116.34101249,Beihang University,edu,e9d549989926f36abfa5dc7348ae3d79a567bf30,citation,,Orientation-Guided Similarity Learning for Person Re-identification,2018 -82,China,Duke MTMC,duke_mtmc,23.09461185,113.28788994,Sun Yat-Sen University,edu,95bdd45fed0392418e0e5d3e51d34714917e3c87,citation,https://arxiv.org/pdf/1812.03282.pdf,Spatial-Temporal Person Re-identification,2019 -83,China,Duke MTMC,duke_mtmc,31.30104395,121.50045497,Fudan University,edu,00e3957212517a252258baef833833921dd308d4,citation,,Adaptively Weighted Multi-task Deep Network for Person Attribute Classification,2017 -84,United Kingdom,Duke MTMC,duke_mtmc,51.5247272,-0.03931035,Queen Mary University of London,edu,705073015bb8ae97212532a30488c05d50894bec,citation,https://arxiv.org/pdf/1803.09786.pdf,Transferable Joint Attribute-Identity Deep Learning for Unsupervised Person Re-identification,2018 -85,United States,Duke MTMC,duke_mtmc,35.9990522,-78.9290629,Duke University,edu,9e644b1e33dd9367be167eb9d832174004840400,citation,https://users.cs.duke.edu/~tomasi/papers/ristani/ristaniTCAS16.pdf,Tracking Social Groups Within and Across Cameras,2017 -86,Italy,Duke MTMC,duke_mtmc,44.6451046,10.9279268,University of Modena,edu,9e644b1e33dd9367be167eb9d832174004840400,citation,https://users.cs.duke.edu/~tomasi/papers/ristani/ristaniTCAS16.pdf,Tracking Social Groups Within and Across Cameras,2017 -87,United States,Duke MTMC,duke_mtmc,35.9990522,-78.9290629,Duke University,edu,27a2fad58dd8727e280f97036e0d2bc55ef5424c,citation,https://arxiv.org/pdf/1609.01775.pdf,"Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking",2016 -88,Switzerland,Duke MTMC,duke_mtmc,46.5190557,6.5667576,EPFL,edu,4e4e3ddb55607e127a4abdef45d92adf1ff78de2,citation,http://openaccess.thecvf.com/content_ICCV_2017/papers/Maksai_Non-Markovian_Globally_Consistent_ICCV_2017_paper.pdf,Non-Markovian Globally Consistent Multi-object Tracking,2017 -89,Switzerland,Duke MTMC,duke_mtmc,46.109237,7.08453549,IDIAP Research Institute,edu,4e4e3ddb55607e127a4abdef45d92adf1ff78de2,citation,http://openaccess.thecvf.com/content_ICCV_2017/papers/Maksai_Non-Markovian_Globally_Consistent_ICCV_2017_paper.pdf,Non-Markovian Globally Consistent Multi-object Tracking,2017 -90,United States,Duke MTMC,duke_mtmc,40.11116745,-88.22587665,"University of Illinois, 
Urbana-Champaign",edu,4e4e3ddb55607e127a4abdef45d92adf1ff78de2,citation,http://openaccess.thecvf.com/content_ICCV_2017/papers/Maksai_Non-Markovian_Globally_Consistent_ICCV_2017_paper.pdf,Non-Markovian Globally Consistent Multi-object Tracking,2017 -91,United Kingdom,Duke MTMC,duke_mtmc,51.5247272,-0.03931035,Queen Mary University of London,edu,fc26fc2340a863d6da0b427cd924fb4cb101051b,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w37/Chen_Person_Re-Identification_by_ICCV_2017_paper.pdf,Person Re-identification by Deep Learning Multi-scale Representations,2017 -92,United Kingdom,Duke MTMC,duke_mtmc,55.378051,-3.435973,"Vision Semantics Ltd, UK",edu,fc26fc2340a863d6da0b427cd924fb4cb101051b,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w37/Chen_Person_Re-Identification_by_ICCV_2017_paper.pdf,Person Re-identification by Deep Learning Multi-scale Representations,2017 -93,Canada,Duke MTMC,duke_mtmc,43.4983503,-80.5478382,"Senstar Corporation, Waterloo, Canada",company,8e42568c2b3feaafd1e442e1e861ec50a4ac144f,citation,https://arxiv.org/pdf/1805.06086.pdf,An Evaluation of Deep CNN Baselines for Scene-Independent Person Re-identification,2018 -94,Italy,Duke MTMC,duke_mtmc,45.4377672,12.321807,University Iuav of Venice,edu,eddb1a126eafecad2cead01c6c3bb4b88120d78a,citation,https://arxiv.org/pdf/1802.02181.pdf,Applications of a Graph Theoretic Based Clustering Framework in Computer Vision and Pattern Recognition,2018 -95,China,Duke MTMC,duke_mtmc,40.00229045,116.32098908,Tsinghua University,edu,fc068f7f8a3b2921ec4f3246e9b6c6015165df9a,citation,https://arxiv.org/pdf/1711.09349.pdf,Beyond Part Models: Person Retrieval with Refined Part Pooling (and A Strong Convolutional Baseline),2018 -96,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,fc068f7f8a3b2921ec4f3246e9b6c6015165df9a,citation,https://arxiv.org/pdf/1711.09349.pdf,Beyond Part Models: Person Retrieval with Refined Part Pooling (and A Strong Convolutional Baseline),2018 -97,United States,Duke MTMC,duke_mtmc,29.58333105,-98.61944505,University of Texas at San Antonio,edu,fc068f7f8a3b2921ec4f3246e9b6c6015165df9a,citation,https://arxiv.org/pdf/1711.09349.pdf,Beyond Part Models: Person Retrieval with Refined Part Pooling (and A Strong Convolutional Baseline),2018 -98,United States,Duke MTMC,duke_mtmc,43.0008093,-78.7889697,University at Buffalo,edu,fdd1bde7066c7e9c7515f330546e0b3a8de8a4a6,citation,https://arxiv.org/pdf/1811.06582.pdf,CAN: Composite Appearance Network and a Novel Evaluation Metric for Person Tracking,2018 -99,United States,Duke MTMC,duke_mtmc,43.0008093,-78.7889697,University at Buffalo,edu,3144c9b3bedb6e3895dcd36998bcb0903271841d,citation,https://arxiv.org/pdf/1811.06582.pdf,CAN: Composite Appearance Network and a Novel Evaluation Metric for Person Tracking,2018 -100,China,Duke MTMC,duke_mtmc,29.1416432,119.7889248,"Alibaba Group, Zhejiang, People’s Republic of China",edu,f4e65ab81a0f4ffa50d0c9bc308d7365e012cc75,citation,https://arxiv.org/pdf/1812.05785.pdf,Deep Active Learning for Video-based Person Re-identification,2018 -101,China,Duke MTMC,duke_mtmc,30.19331415,120.11930822,Zhejiang University,edu,f4e65ab81a0f4ffa50d0c9bc308d7365e012cc75,citation,https://arxiv.org/pdf/1812.05785.pdf,Deep Active Learning for Video-based Person Re-identification,2018 -102,China,Duke MTMC,duke_mtmc,38.88140235,121.52281098,Dalian University of Technology,edu,5be74c6fa7f890ea530e427685dadf0d0a371fc1,citation,https://arxiv.org/pdf/1804.11027.pdf,Deep 
Co-attention based Comparators For Relative Representation Learning in Person Re-identification,2018 -103,Australia,Duke MTMC,duke_mtmc,-27.49741805,153.01316956,University of Queensland,edu,5be74c6fa7f890ea530e427685dadf0d0a371fc1,citation,https://arxiv.org/pdf/1804.11027.pdf,Deep Co-attention based Comparators For Relative Representation Learning in Person Re-identification,2018 -104,Australia,Duke MTMC,duke_mtmc,-33.88890695,151.18943366,University of Sydney,edu,5be74c6fa7f890ea530e427685dadf0d0a371fc1,citation,https://arxiv.org/pdf/1804.11027.pdf,Deep Co-attention based Comparators For Relative Representation Learning in Person Re-identification,2018 -105,Switzerland,Duke MTMC,duke_mtmc,46.5184121,6.5684654,École Polytechnique Fédérale de Lausanne,edu,0f3eb3719b6f6f544b766e0bfeb8f962c9bd59f4,citation,https://arxiv.org/pdf/1811.10984.pdf,Eliminating Exposure Bias and Loss-Evaluation Mismatch in Multiple Object Tracking,2018 -106,Italy,Duke MTMC,duke_mtmc,45.434532,12.326197,"DAIS, Università Ca’ Foscari, Venice, Italy",edu,6dce5866ebc46355a35b8667c1e04a4790c2289b,citation,https://pdfs.semanticscholar.org/6dce/5866ebc46355a35b8667c1e04a4790c2289b.pdf,Extensions of dominant sets and their applications in computer vision,2018 -107,United States,Duke MTMC,duke_mtmc,42.3383668,-71.08793524,Northeastern University,edu,8abe89ab85250fd7a8117da32bc339a71c67dc21,citation,https://arxiv.org/pdf/1709.07065.pdf,Multi-camera Multi-Object Tracking,2017 -108,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,b856c0eb039effce7da9ff45c3f5987f18928bef,citation,https://arxiv.org/pdf/1707.00408.pdf,Pedestrian Alignment Network for Large-scale Person Re-identification,2017 -109,Germany,Duke MTMC,duke_mtmc,49.10184375,8.4331256,Karlsruhe Institute of Technology,edu,bab66082d01b393e6b9e841e5e06782a6c61ec88,citation,https://arxiv.org/pdf/1803.08709.pdf,Pose-Driven Deep Models for Person Re-Identification,2018 -110,China,Duke MTMC,duke_mtmc,31.30104395,121.50045497,Fudan University,edu,e6d8f332ae26e9983d5b42af4466ff95b55f2341,citation,https://arxiv.org/pdf/1712.02225.pdf,Pose-Normalized Image Generation for Person Re-identification,2018 -111,Japan,Duke MTMC,duke_mtmc,34.7321121,135.7328585,Nara Institute of Science and Technology,edu,e6d8f332ae26e9983d5b42af4466ff95b55f2341,citation,https://arxiv.org/pdf/1712.02225.pdf,Pose-Normalized Image Generation for Person Re-identification,2018 -112,United Kingdom,Duke MTMC,duke_mtmc,51.5247272,-0.03931035,Queen Mary University of London,edu,e6d8f332ae26e9983d5b42af4466ff95b55f2341,citation,https://arxiv.org/pdf/1712.02225.pdf,Pose-Normalized Image Generation for Person Re-identification,2018 -113,China,Duke MTMC,duke_mtmc,22.8376,108.289839,Guangxi University,edu,4a91be40e6b382c3ddf3385ac44062b2399336a8,citation,https://arxiv.org/pdf/1809.09970.pdf,Random Occlusion-recovery for Person Re-identification,2018 -114,China,Duke MTMC,duke_mtmc,31.28473925,121.49694909,Tongji University,edu,4a91be40e6b382c3ddf3385ac44062b2399336a8,citation,https://arxiv.org/pdf/1809.09970.pdf,Random Occlusion-recovery for Person Re-identification,2018 -115,France,Duke MTMC,duke_mtmc,45.2173989,5.7921349,"Naver Labs Europe, Meylan, France",edu,4d8347a69e77cc02c1e1aba3a8b6646eac1a0b3d,citation,https://arxiv.org/pdf/1801.05339.pdf,Re-ID done right: towards good practices for person re-identification.,2018 -116,United States,Duke MTMC,duke_mtmc,28.59899755,-81.19712501,University of Central 
Florida,edu,a1e97c4043d5cc9896dc60ae7ca135782d89e5fc,citation,https://arxiv.org/pdf/1612.02155.pdf,"Re-identification of Humans in Crowds using Personal, Social and Environmental Constraints",2016 -117,China,Duke MTMC,duke_mtmc,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,0e36bf238d2db6c970ade0b5f68811ed6debc4e8,citation,https://arxiv.org/pdf/1810.07399.pdf,Recognizing Partial Biometric Patterns,2018 -118,United States,Duke MTMC,duke_mtmc,42.4505507,-76.4783513,Cornell University,edu,6d76eefecdcaa130a000d1d6c93cf57166ebd18e,citation,https://arxiv.org/pdf/1805.08805.pdf,Resource Aware Person Re-identification Across Multiple Resolutions,2018 -119,China,Duke MTMC,duke_mtmc,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,6d76eefecdcaa130a000d1d6c93cf57166ebd18e,citation,https://arxiv.org/pdf/1805.08805.pdf,Resource Aware Person Re-identification Across Multiple Resolutions,2018 -120,China,Duke MTMC,duke_mtmc,40.00229045,116.32098908,Tsinghua University,edu,6d76eefecdcaa130a000d1d6c93cf57166ebd18e,citation,https://arxiv.org/pdf/1805.08805.pdf,Resource Aware Person Re-identification Across Multiple Resolutions,2018 -121,China,Duke MTMC,duke_mtmc,31.846918,117.29053367,Hefei University of Technology,edu,42dc432f58adfaa7bf6af07e5faf9e75fea29122,citation,https://arxiv.org/pdf/1811.08115.pdf,Sequence-based Person Attribute Recognition with Joint CTC-Attention Model,2018 -122,China,Duke MTMC,duke_mtmc,31.1675446,121.3974873,"Tencent, Shanghai, China",company,42dc432f58adfaa7bf6af07e5faf9e75fea29122,citation,https://arxiv.org/pdf/1811.08115.pdf,Sequence-based Person Attribute Recognition with Joint CTC-Attention Model,2018 -123,United States,Duke MTMC,duke_mtmc,47.6423318,-122.1369302,Microsoft,company,8a77025bde5479a1366bb93c6f2366b5a6293720,citation,https://arxiv.org/pdf/1805.02336.pdf,Sharp Attention Network via Adaptive Sampling for Person Re-identification,2018 -124,United States,Duke MTMC,duke_mtmc,40.11116745,-88.22587665,"University of Illinois, Urbana-Champaign",edu,8a77025bde5479a1366bb93c6f2366b5a6293720,citation,https://arxiv.org/pdf/1805.02336.pdf,Sharp Attention Network via Adaptive Sampling for Person Re-identification,2018 -125,China,Duke MTMC,duke_mtmc,30.19331415,120.11930822,Zhejiang University,edu,8a77025bde5479a1366bb93c6f2366b5a6293720,citation,https://arxiv.org/pdf/1805.02336.pdf,Sharp Attention Network via Adaptive Sampling for Person Re-identification,2018 -126,Australia,Duke MTMC,duke_mtmc,-35.2776999,149.118527,Australian National University,edu,304196021200067a838c06002d9e96d6a12a1e46,citation,https://arxiv.org/pdf/1811.10551.pdf,Similarity-preserving Image-image Domain Adaptation for Person Re-identification,2018 -127,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,304196021200067a838c06002d9e96d6a12a1e46,citation,https://arxiv.org/pdf/1811.10551.pdf,Similarity-preserving Image-image Domain Adaptation for Person Re-identification,2018 -128,China,Duke MTMC,duke_mtmc,28.2290209,112.99483204,"National University of Defense Technology, China",mil,e90816e1a0e14ea1e7039e0b2782260999aef786,citation,https://arxiv.org/pdf/1809.03137.pdf,Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers,2018 -129,United Kingdom,Duke MTMC,duke_mtmc,51.5231607,-0.1282037,University College London,edu,e90816e1a0e14ea1e7039e0b2782260999aef786,citation,https://arxiv.org/pdf/1809.03137.pdf,Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers,2018 -130,United States,Duke 
MTMC,duke_mtmc,37.2283843,-80.4234167,Virginia Tech,edu,e278218ba1ff1b85d06680e99b08e817d0962dab,citation,https://arxiv.org/pdf/1710.02139.pdf,Tracking Persons-of-Interest via Unsupervised Representation Adaptation,2017 -131,China,Duke MTMC,duke_mtmc,34.250803,108.983693,Xi’an Jiaotong University,edu,e278218ba1ff1b85d06680e99b08e817d0962dab,citation,https://arxiv.org/pdf/1710.02139.pdf,Tracking Persons-of-Interest via Unsupervised Representation Adaptation,2017 -132,China,Duke MTMC,duke_mtmc,30.508964,114.410577,"Huazhong Univ. of Science and Technology, China",edu,42656cf2b75dccc7f8f224f7a86c2ea4de1ae671,citation,https://arxiv.org/pdf/1807.11334.pdf,Unsupervised Domain Adaptive Re-Identification: Theory and Practice,2018 -133,China,Duke MTMC,duke_mtmc,23.09461185,113.28788994,Sun Yat-Sen University,edu,788ab52d4f7fedb4b79347bb81822c4f3c430d80,citation,https://arxiv.org/pdf/1901.10177.pdf,Unsupervised Person Re-identification by Deep Asymmetric Metric Embedding,2018 -134,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,31da1da2d4e7254dd8f2a4578d887c57e0678438,citation,https://arxiv.org/pdf/1705.10444.pdf,Unsupervised Person Re-identification: Clustering and Fine-tuning,2018 -135,United Kingdom,Duke MTMC,duke_mtmc,54.6141723,-5.9002151,Queen's University Belfast,edu,1e146982a7b088e7a3790d2683484944c3b9dcf7,citation,https://pdfs.semanticscholar.org/1e14/6982a7b088e7a3790d2683484944c3b9dcf7.pdf,Video Person Re-Identification for Wide Area Tracking based on Recurrent Neural Networks,2017 -136,Germany,Duke MTMC,duke_mtmc,49.01546,8.4257999,Fraunhofer,company,978716708762dab46e91059e170d43551be74732,citation,,A Pose-Sensitive Embedding for Person Re-identification with Expanded Cross Neighborhood Re-ranking,2018 -137,Germany,Duke MTMC,duke_mtmc,49.10184375,8.4331256,Karlsruhe Institute of Technology,edu,978716708762dab46e91059e170d43551be74732,citation,,A Pose-Sensitive Embedding for Person Re-identification with Expanded Cross Neighborhood Re-ranking,2018 -138,Taiwan,Duke MTMC,duke_mtmc,25.01682835,121.53846924,National Taiwan University,edu,d9216cc2a3c03659cb2392b7cc8509feb7829579,citation,,Adaptation and Re-identification Network: An Unsupervised Deep Transfer Learning Approach to Person Re-identification,2018 -139,China,Duke MTMC,duke_mtmc,39.979203,116.33287,"CRIPAC & NLPR, CASIA",edu,1bfe59be5b42d6b7257da4b35a408239c01ab79d,citation,,Adversarially Occluded Samples for Person Re-identification,2018 -140,China,Duke MTMC,duke_mtmc,40.0044795,116.370238,Chinese Academy of Sciences,edu,1bfe59be5b42d6b7257da4b35a408239c01ab79d,citation,,Adversarially Occluded Samples for Person Re-identification,2018 -141,China,Duke MTMC,duke_mtmc,22.543096,114.057865,"SenseNets Corporation, Shenzhen, China",company,14ce502bc19b225466126b256511f9c05cadcb6e,citation,,Attention-Aware Compositional Network for Person Re-identification,2018 -142,China,Duke MTMC,duke_mtmc,39.993008,116.329882,SenseTime,company,14ce502bc19b225466126b256511f9c05cadcb6e,citation,,Attention-Aware Compositional Network for Person Re-identification,2018 -143,Australia,Duke MTMC,duke_mtmc,-33.88890695,151.18943366,University of Sydney,edu,14ce502bc19b225466126b256511f9c05cadcb6e,citation,,Attention-Aware Compositional Network for Person Re-identification,2018 -144,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,1822ca8db58b0382b0c64f310840f0f875ea02c0,citation,,Camera Style Adaptation for Person Re-identification,2018 -145,China,Duke 
MTMC,duke_mtmc,24.4399419,118.09301781,Xiamen University,edu,1822ca8db58b0382b0c64f310840f0f875ea02c0,citation,,Camera Style Adaptation for Person Re-identification,2018 -146,China,Duke MTMC,duke_mtmc,36.16161795,120.49355276,Ocean University of China,edu,38259235a1c7b2c68ca09f3bc0930987ae99cf00,citation,,Deep Feature Ranking for Person Re-Identification,2019 -147,South Korea,Duke MTMC,duke_mtmc,35.84658875,127.1350133,Chonbuk National University,edu,c635564fe2f7d91b578bd6959904982aaa61234d,citation,,Deep Multi-Task Network for Learning Person Identity and Attributes,2018 -148,China,Duke MTMC,duke_mtmc,22.4162632,114.2109318,Chinese University of Hong Kong,edu,947954cafdefd471b75da8c3bb4c21b9e6d57838,citation,,End-to-End Deep Kronecker-Product Matching for Person Re-identification,2018 -149,China,Duke MTMC,duke_mtmc,39.993008,116.329882,SenseTime,company,947954cafdefd471b75da8c3bb4c21b9e6d57838,citation,,End-to-End Deep Kronecker-Product Matching for Person Re-identification,2018 -150,China,Duke MTMC,duke_mtmc,23.0502042,113.39880323,South China University of Technology,edu,cb68c60ac046a0ec1c7f67487f14b999037313e1,citation,,Exploit the Unknown Gradually: One-Shot Video-Based Person Re-identification by Stepwise Learning,2018 -151,Australia,Duke MTMC,duke_mtmc,-33.88890695,151.18943366,University of Sydney,edu,cb68c60ac046a0ec1c7f67487f14b999037313e1,citation,,Exploit the Unknown Gradually: One-Shot Video-Based Person Re-identification by Stepwise Learning,2018 -152,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,cb68c60ac046a0ec1c7f67487f14b999037313e1,citation,,Exploit the Unknown Gradually: One-Shot Video-Based Person Re-identification by Stepwise Learning,2018 -153,United States,Duke MTMC,duke_mtmc,35.9990522,-78.9290629,Duke University,edu,c0f01b8174a632448c20eb5472cd9d5b2c595e39,citation,,Features for Multi-target Multi-camera Tracking and Re-identification,2018 -154,China,Duke MTMC,duke_mtmc,22.4162632,114.2109318,Chinese University of Hong Kong,edu,308a13fd1d2847d98930a8e5542f773a9651a0ae,citation,,Group Consistent Similarity Learning via Deep CRF for Person Re-identification,2018 -155,Italy,Duke MTMC,duke_mtmc,46.0658836,11.1159894,University of Trento,edu,308a13fd1d2847d98930a8e5542f773a9651a0ae,citation,,Group Consistent Similarity Learning via Deep CRF for Person Re-identification,2018 -156,China,Duke MTMC,duke_mtmc,34.250803,108.983693,Xi’an Jiaotong University,edu,308a13fd1d2847d98930a8e5542f773a9651a0ae,citation,,Group Consistent Similarity Learning via Deep CRF for Person Re-identification,2018 -157,Turkey,Duke MTMC,duke_mtmc,41.10427915,29.02231159,Istanbul Technical University,edu,7ba225a614d77efd9bdf66bf74c80dd2da09229a,citation,,Human Semantic Parsing for Person Re-identification,2018 -158,United States,Duke MTMC,duke_mtmc,28.59899755,-81.19712501,University of Central Florida,edu,7ba225a614d77efd9bdf66bf74c80dd2da09229a,citation,,Human Semantic Parsing for Person Re-identification,2018 -159,Australia,Duke MTMC,duke_mtmc,-32.00686365,115.89691775,Curtin University,edu,292286c0024d6625fe606fb5b8a0df54ea3ffe91,citation,,Identity Adaptation for Person Re-Identification,2018 -160,United Kingdom,Duke MTMC,duke_mtmc,54.00975365,-2.78757491,Lancaster University,edu,292286c0024d6625fe606fb5b8a0df54ea3ffe91,citation,,Identity Adaptation for Person Re-Identification,2018 -161,Australia,Duke MTMC,duke_mtmc,-31.95040445,115.79790037,University of Western Australia,edu,292286c0024d6625fe606fb5b8a0df54ea3ffe91,citation,,Identity Adaptation 
for Person Re-Identification,2018 -162,China,Duke MTMC,duke_mtmc,40.0044795,116.370238,Chinese Academy of Sciences,edu,6cde93a5288e84671a7bee98cf6c94037f42da42,citation,,Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification,2018 -163,Singapore,Duke MTMC,duke_mtmc,1.340216,103.965089,Singapore University of Technology and Design,edu,6cde93a5288e84671a7bee98cf6c94037f42da42,citation,,Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification,2018 -164,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,6cde93a5288e84671a7bee98cf6c94037f42da42,citation,,Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification,2018 -165,China,Duke MTMC,duke_mtmc,39.0607286,117.1256421,Tianjin Normal University,edu,67289bd3b7c9406429c6012eb7292305e50dff0b,citation,,Integration Convolutional Neural Network for Person Re-Identification in Camera Networks,2018 -166,China,Duke MTMC,duke_mtmc,32.05765485,118.7550004,HoHai University,edu,fedb656c45aa332cfc373b413f3000b6228eee08,citation,,Joint Learning of Body and Part Representation for Person Re-Identification,2018 -167,China,Duke MTMC,duke_mtmc,33.5491006,119.035706,"Huaiyin Institute of Technology, Huaian, China",edu,fedb656c45aa332cfc373b413f3000b6228eee08,citation,,Joint Learning of Body and Part Representation for Person Re-Identification,2018 -168,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,fedb656c45aa332cfc373b413f3000b6228eee08,citation,,Joint Learning of Body and Part Representation for Person Re-Identification,2018 -169,China,Duke MTMC,duke_mtmc,23.09461185,113.28788994,Sun Yat-Sen University,edu,b37538f9364252eec4182bdbb80ef1e4614c3acd,citation,,Learning a Semantically Discriminative Joint Space for Attribute Based Person Re-identification,2017 -170,United Kingdom,Duke MTMC,duke_mtmc,51.5247272,-0.03931035,Queen Mary University of London,edu,004acfec16c36649408c561faa102dd9de76f085,citation,,Multi-level Factorisation Net for Person Re-identification,2018 -171,United Kingdom,Duke MTMC,duke_mtmc,55.94951105,-3.19534913,University of Edinburgh,edu,004acfec16c36649408c561faa102dd9de76f085,citation,,Multi-level Factorisation Net for Person Re-identification,2018 -172,China,Duke MTMC,duke_mtmc,39.0607286,117.1256421,Tianjin Normal University,edu,a80d8506fa28334c947989ca153b70aafc63ac7f,citation,,Pedestrian Retrieval via Part-Based Gradation Regularization in Sensor Networks,2018 -173,United States,Duke MTMC,duke_mtmc,35.9990522,-78.9290629,Duke University,edu,96e77135e745385e87fdd0f7ced951bf1fe9a756,citation,,People Tracking and Re-Identification from Multiple Cameras,2018 -174,China,Duke MTMC,duke_mtmc,30.274084,120.15507,Alibaba,company,90c18409b7a3be2cd6da599d02accba4c769e94e,citation,,Person Re-identification with Cascaded Pairwise Convolutions,2018 -175,China,Duke MTMC,duke_mtmc,31.83907195,117.26420748,University of Science and Technology of China,edu,90c18409b7a3be2cd6da599d02accba4c769e94e,citation,,Person Re-identification with Cascaded Pairwise Convolutions,2018 -176,China,Duke MTMC,duke_mtmc,30.5360485,114.3643219,"Wuhan Univeristy, Wuhan, China",edu,90c18409b7a3be2cd6da599d02accba4c769e94e,citation,,Person Re-identification with Cascaded Pairwise Convolutions,2018 -177,China,Duke MTMC,duke_mtmc,31.20081505,121.42840681,Shanghai Jiao Tong 
University,edu,df4ed9983f7114ca4f0ab71f1476c0bf7521e317,citation,,Pose Transferrable Person Re-identification,2018 -178,United States,Duke MTMC,duke_mtmc,40.4441619,-79.94272826,Carnegie Mellon University,edu,e307c6635472d3d1e512af6e20f2e56c95937bb7,citation,,Semi-Supervised Bayesian Attribute Learning for Person Re-Identification,2018 -179,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,e307c6635472d3d1e512af6e20f2e56c95937bb7,citation,,Semi-Supervised Bayesian Attribute Learning for Person Re-Identification,2018 +46,China,Duke MTMC,duke_mtmc,39.9808333,116.34101249,Beihang University,edu,7bfc5bbad852f9e6bea3b86c25179d81e2e7fff6,citation,,Online Inter-Camera Trajectory Association Exploiting Person Re-Identification and Camera Topology,2018 +47,China,Duke MTMC,duke_mtmc,40.00229045,116.32098908,Tsinghua University,edu,be79ad118d0524d9b493f4a14a662c8184e6405a,citation,,Attend and Align: Improving Deep Representations with Feature Alignment Layer for Person Retrieval,2018 +48,China,Duke MTMC,duke_mtmc,40.00229045,116.32098908,Tsinghua University,edu,13ea9a2ed134a9e238d33024fba34d3dd6a010e0,citation,https://arxiv.org/pdf/1703.05693.pdf,SVDNet for Pedestrian Retrieval,2017 +49,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,13ea9a2ed134a9e238d33024fba34d3dd6a010e0,citation,https://arxiv.org/pdf/1703.05693.pdf,SVDNet for Pedestrian Retrieval,2017 +50,China,Duke MTMC,duke_mtmc,30.19331415,120.11930822,Zhejiang University,edu,608dede56161fd5f76bcf9228b4dd8c639d65b02,citation,https://arxiv.org/pdf/1807.00537.pdf,SphereReID: Deep Hypersphere Manifold Embedding for Person Re-Identification,2018 +51,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,15e1af79939dbf90790b03d8aa02477783fb1d0f,citation,https://arxiv.org/pdf/1701.07717.pdf,Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in Vitro,2017 +52,China,Duke MTMC,duke_mtmc,30.778621,103.961236,XiHua University,edu,ec9c20ed6cce15e9b63ac96bb5a6d55e69661e0b,citation,https://pdfs.semanticscholar.org/ec9c/20ed6cce15e9b63ac96bb5a6d55e69661e0b.pdf,Robust Pedestrian Detection for Semi-automatic Construction of a Crowded Person Re-Identification Dataset,2018 +53,United Kingdom,Duke MTMC,duke_mtmc,51.24303255,-0.59001382,University of Surrey,edu,ec9c20ed6cce15e9b63ac96bb5a6d55e69661e0b,citation,https://pdfs.semanticscholar.org/ec9c/20ed6cce15e9b63ac96bb5a6d55e69661e0b.pdf,Robust Pedestrian Detection for Semi-automatic Construction of a Crowded Person Re-Identification Dataset,2018 +54,China,Duke MTMC,duke_mtmc,31.4854255,120.2739581,Jiangnan University,edu,ec9c20ed6cce15e9b63ac96bb5a6d55e69661e0b,citation,https://pdfs.semanticscholar.org/ec9c/20ed6cce15e9b63ac96bb5a6d55e69661e0b.pdf,Robust Pedestrian Detection for Semi-automatic Construction of a Crowded Person Re-Identification Dataset,2018 +55,United Kingdom,Duke MTMC,duke_mtmc,51.5247272,-0.03931035,Queen Mary University of London,edu,fa3fb32fe0cd392960549b0adb7a535eb3656abd,citation,https://arxiv.org/pdf/1711.08106.pdf,The Devil is in the Middle: Exploiting Mid-level Representations for Cross-Domain Instance Matching,2017 +56,United Kingdom,Duke MTMC,duke_mtmc,55.94951105,-3.19534913,University of Edinburgh,edu,fa3fb32fe0cd392960549b0adb7a535eb3656abd,citation,https://arxiv.org/pdf/1711.08106.pdf,The Devil is in the Middle: Exploiting Mid-level Representations for Cross-Domain Instance Matching,2017 +57,United States,Duke 
MTMC,duke_mtmc,40.1019523,-88.2271615,UIUC,edu,54c28bf64debbdb21c246795182f97d4f7917b74,citation,https://arxiv.org/pdf/1811.04129.pdf,STA: Spatial-Temporal Attention for Large-Scale Video-based Person Re-Identification,2018 +58,United States,Duke MTMC,duke_mtmc,22.5447154,113.9357164,Tencent,company,3b311a1ce30f9c0f3dc1d9c0cf25f13127a5e48c,citation,https://arxiv.org/pdf/1810.12193.pdf,A Coarse-to-fine Pyramidal Model for Person Re-identification via Multi-Loss Dynamic Training,2018 +59,United States,Duke MTMC,duke_mtmc,37.3860784,-121.9877807,Google and Hewlett-Packard Labs,company,4d799f6e09f442bde583a50a0a9f81131ef707bb,citation,,TAR: Enabling Fine-Grained Targeted Advertising in Retail Stores,2018 +60,United States,Duke MTMC,duke_mtmc,37.3860784,-121.9877807,Hewlett-Packard Labs,edu,4d799f6e09f442bde583a50a0a9f81131ef707bb,citation,,TAR: Enabling Fine-Grained Targeted Advertising in Retail Stores,2018 +61,United States,Duke MTMC,duke_mtmc,39.6321923,-76.3038146,LinkedIn and Hewlett-Packard Labs,edu,4d799f6e09f442bde583a50a0a9f81131ef707bb,citation,,TAR: Enabling Fine-Grained Targeted Advertising in Retail Stores,2018 +62,United States,Duke MTMC,duke_mtmc,34.0224149,-118.28634407,University of Southern California,edu,4d799f6e09f442bde583a50a0a9f81131ef707bb,citation,,TAR: Enabling Fine-Grained Targeted Advertising in Retail Stores,2018 +63,Canada,Duke MTMC,duke_mtmc,49.2767454,-122.91777375,Simon Fraser University,edu,5137ca9f0a7cf4c61f2254d4a252a0c56e5dcfcc,citation,https://arxiv.org/pdf/1811.07130.pdf,Batch Feature Erasing for Person Re-identification and Beyond,2018 +64,China,Duke MTMC,duke_mtmc,32.0565957,118.77408833,Nanjing University,edu,c37c3853ab428725f13906bb0ff4936ffe15d6af,citation,https://arxiv.org/pdf/1809.02874.pdf,Unsupervised Person Re-identification by Deep Learning Tracklet Association,2018 +65,United Kingdom,Duke MTMC,duke_mtmc,51.5247272,-0.03931035,Queen Mary University of London,edu,c37c3853ab428725f13906bb0ff4936ffe15d6af,citation,https://arxiv.org/pdf/1809.02874.pdf,Unsupervised Person Re-identification by Deep Learning Tracklet Association,2018 +66,United States,Duke MTMC,duke_mtmc,37.8687126,-122.25586815,"University of California, Berkeley",edu,a8d665fa7357f696dcfd188b91fda88da47b964e,citation,https://arxiv.org/pdf/1809.02318.pdf,Scaling Video Analytics Systems to Large Camera Deployments,2018 +67,United States,Duke MTMC,duke_mtmc,47.6423318,-122.1369302,Microsoft,company,a8d665fa7357f696dcfd188b91fda88da47b964e,citation,https://arxiv.org/pdf/1809.02318.pdf,Scaling Video Analytics Systems to Large Camera Deployments,2018 +68,United States,Duke MTMC,duke_mtmc,41.78468745,-87.60074933,University of Chicago,edu,a8d665fa7357f696dcfd188b91fda88da47b964e,citation,https://arxiv.org/pdf/1809.02318.pdf,Scaling Video Analytics Systems to Large Camera Deployments,2018 +69,China,Duke MTMC,duke_mtmc,23.09461185,113.28788994,Sun Yat-Sen University,edu,dda0b381c162695f21b8d1149aab22188b3c2bc0,citation,https://arxiv.org/pdf/1804.02792.pdf,Occluded Person Re-Identification,2018 +70,China,Duke MTMC,duke_mtmc,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,33f358f1d2b54042c524d69b20e80d98dde3dacd,citation,https://arxiv.org/pdf/1811.11405.pdf,Spectral Feature Transformation for Person Re-identification,2018 +71,United States,Duke MTMC,duke_mtmc,32.8734455,-117.2065636,TuSimple,edu,33f358f1d2b54042c524d69b20e80d98dde3dacd,citation,https://arxiv.org/pdf/1811.11405.pdf,Spectral Feature Transformation for Person Re-identification,2018 +72,China,Duke 
MTMC,duke_mtmc,30.672721,104.098806,University of Electronic Science and Technology of China,edu,8ffc49aead99fdacb0b180468a36984759f2fc1e,citation,https://arxiv.org/pdf/1809.04976.pdf,Sparse Label Smoothing for Semi-supervised Person Re-Identification,2018 +73,Germany,Duke MTMC,duke_mtmc,50.7791703,6.06728733,RWTH Aachen University,edu,10b36c003542545f1e2d73e8897e022c0c260c32,citation,https://arxiv.org/pdf/1705.04608.pdf,Towards a Principled Integration of Multi-camera Re-identification and Tracking Through Optimal Bayes Filters,2017 +74,United Kingdom,Duke MTMC,duke_mtmc,51.7534538,-1.25400997,University of Oxford,edu,94ed6dc44842368b457851b43023c23fd78d5390,citation,https://arxiv.org/pdf/1806.01794.pdf,"Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects",2018 +75,China,Duke MTMC,duke_mtmc,39.9041999,116.4073963,"Beijing, China",edu,280976bbb41d2948a5c0208f86605977397181cd,citation,https://arxiv.org/pdf/1811.08073.pdf,Factorized Distillation: Training Holistic Person Re-identification Model by Distilling an Ensemble of Partial ReID Models,2018 +76,China,Duke MTMC,duke_mtmc,40.00229045,116.32098908,Tsinghua University,edu,280976bbb41d2948a5c0208f86605977397181cd,citation,https://arxiv.org/pdf/1811.08073.pdf,Factorized Distillation: Training Holistic Person Re-identification Model by Distilling an Ensemble of Partial ReID Models,2018 +77,China,Duke MTMC,duke_mtmc,39.9922379,116.30393816,Peking University,edu,014e249422b6bd6ff32b3f7d385b5a0e8c4c9fcf,citation,https://arxiv.org/pdf/1810.05866.pdf,Attention driven person re-identification,2019 +78,Singapore,Duke MTMC,duke_mtmc,1.3484104,103.68297965,Nanyang Technological University,edu,014e249422b6bd6ff32b3f7d385b5a0e8c4c9fcf,citation,https://arxiv.org/pdf/1810.05866.pdf,Attention driven person re-identification,2019 +79,China,Duke MTMC,duke_mtmc,39.9808333,116.34101249,Beihang University,edu,e9d549989926f36abfa5dc7348ae3d79a567bf30,citation,,Orientation-Guided Similarity Learning for Person Re-identification,2018 +80,China,Duke MTMC,duke_mtmc,23.09461185,113.28788994,Sun Yat-Sen University,edu,95bdd45fed0392418e0e5d3e51d34714917e3c87,citation,https://arxiv.org/pdf/1812.03282.pdf,Spatial-Temporal Person Re-identification,2019 +81,China,Duke MTMC,duke_mtmc,31.30104395,121.50045497,Fudan University,edu,00e3957212517a252258baef833833921dd308d4,citation,,Adaptively Weighted Multi-task Deep Network for Person Attribute Classification,2017 +82,United Kingdom,Duke MTMC,duke_mtmc,51.5247272,-0.03931035,Queen Mary University of London,edu,705073015bb8ae97212532a30488c05d50894bec,citation,https://arxiv.org/pdf/1803.09786.pdf,Transferable Joint Attribute-Identity Deep Learning for Unsupervised Person Re-identification,2018 +83,United States,Duke MTMC,duke_mtmc,35.9990522,-78.9290629,Duke University,edu,9e644b1e33dd9367be167eb9d832174004840400,citation,https://users.cs.duke.edu/~tomasi/papers/ristani/ristaniTCAS16.pdf,Tracking Social Groups Within and Across Cameras,2017 +84,Italy,Duke MTMC,duke_mtmc,44.6451046,10.9279268,University of Modena,edu,9e644b1e33dd9367be167eb9d832174004840400,citation,https://users.cs.duke.edu/~tomasi/papers/ristani/ristaniTCAS16.pdf,Tracking Social Groups Within and Across Cameras,2017 +85,United States,Duke MTMC,duke_mtmc,35.9990522,-78.9290629,Duke University,edu,27a2fad58dd8727e280f97036e0d2bc55ef5424c,citation,https://arxiv.org/pdf/1609.01775.pdf,"Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking",2016 +86,Switzerland,Duke 
MTMC,duke_mtmc,46.5190557,6.5667576,EPFL,edu,4e4e3ddb55607e127a4abdef45d92adf1ff78de2,citation,http://openaccess.thecvf.com/content_ICCV_2017/papers/Maksai_Non-Markovian_Globally_Consistent_ICCV_2017_paper.pdf,Non-Markovian Globally Consistent Multi-object Tracking,2017 +87,Switzerland,Duke MTMC,duke_mtmc,46.109237,7.08453549,IDIAP Research Institute,edu,4e4e3ddb55607e127a4abdef45d92adf1ff78de2,citation,http://openaccess.thecvf.com/content_ICCV_2017/papers/Maksai_Non-Markovian_Globally_Consistent_ICCV_2017_paper.pdf,Non-Markovian Globally Consistent Multi-object Tracking,2017 +88,United States,Duke MTMC,duke_mtmc,40.11116745,-88.22587665,"University of Illinois, Urbana-Champaign",edu,4e4e3ddb55607e127a4abdef45d92adf1ff78de2,citation,http://openaccess.thecvf.com/content_ICCV_2017/papers/Maksai_Non-Markovian_Globally_Consistent_ICCV_2017_paper.pdf,Non-Markovian Globally Consistent Multi-object Tracking,2017 +89,United Kingdom,Duke MTMC,duke_mtmc,51.5247272,-0.03931035,Queen Mary University of London,edu,fc26fc2340a863d6da0b427cd924fb4cb101051b,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w37/Chen_Person_Re-Identification_by_ICCV_2017_paper.pdf,Person Re-identification by Deep Learning Multi-scale Representations,2017 +90,United Kingdom,Duke MTMC,duke_mtmc,55.378051,-3.435973,"Vision Semantics Ltd, UK",edu,fc26fc2340a863d6da0b427cd924fb4cb101051b,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w37/Chen_Person_Re-Identification_by_ICCV_2017_paper.pdf,Person Re-identification by Deep Learning Multi-scale Representations,2017 +91,Canada,Duke MTMC,duke_mtmc,43.4983503,-80.5478382,"Senstar Corporation, Waterloo, Canada",company,8e42568c2b3feaafd1e442e1e861ec50a4ac144f,citation,https://arxiv.org/pdf/1805.06086.pdf,An Evaluation of Deep CNN Baselines for Scene-Independent Person Re-identification,2018 +92,Italy,Duke MTMC,duke_mtmc,45.4377672,12.321807,University Iuav of Venice,edu,eddb1a126eafecad2cead01c6c3bb4b88120d78a,citation,https://arxiv.org/pdf/1802.02181.pdf,Applications of a Graph Theoretic Based Clustering Framework in Computer Vision and Pattern Recognition,2018 +93,China,Duke MTMC,duke_mtmc,40.00229045,116.32098908,Tsinghua University,edu,fc068f7f8a3b2921ec4f3246e9b6c6015165df9a,citation,https://arxiv.org/pdf/1711.09349.pdf,Beyond Part Models: Person Retrieval with Refined Part Pooling (and A Strong Convolutional Baseline),2018 +94,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,fc068f7f8a3b2921ec4f3246e9b6c6015165df9a,citation,https://arxiv.org/pdf/1711.09349.pdf,Beyond Part Models: Person Retrieval with Refined Part Pooling (and A Strong Convolutional Baseline),2018 +95,United States,Duke MTMC,duke_mtmc,29.58333105,-98.61944505,University of Texas at San Antonio,edu,fc068f7f8a3b2921ec4f3246e9b6c6015165df9a,citation,https://arxiv.org/pdf/1711.09349.pdf,Beyond Part Models: Person Retrieval with Refined Part Pooling (and A Strong Convolutional Baseline),2018 +96,United States,Duke MTMC,duke_mtmc,43.0008093,-78.7889697,University at Buffalo,edu,fdd1bde7066c7e9c7515f330546e0b3a8de8a4a6,citation,https://arxiv.org/pdf/1811.06582.pdf,CAN: Composite Appearance Network and a Novel Evaluation Metric for Person Tracking,2018 +97,United States,Duke MTMC,duke_mtmc,43.0008093,-78.7889697,University at Buffalo,edu,3144c9b3bedb6e3895dcd36998bcb0903271841d,citation,https://arxiv.org/pdf/1811.06582.pdf,CAN: Composite Appearance Network and a Novel Evaluation Metric for Person Tracking,2018 +98,China,Duke 
MTMC,duke_mtmc,29.1416432,119.7889248,"Alibaba Group, Zhejiang, People’s Republic of China",edu,f4e65ab81a0f4ffa50d0c9bc308d7365e012cc75,citation,https://arxiv.org/pdf/1812.05785.pdf,Deep Active Learning for Video-based Person Re-identification,2018 +99,China,Duke MTMC,duke_mtmc,30.19331415,120.11930822,Zhejiang University,edu,f4e65ab81a0f4ffa50d0c9bc308d7365e012cc75,citation,https://arxiv.org/pdf/1812.05785.pdf,Deep Active Learning for Video-based Person Re-identification,2018 +100,China,Duke MTMC,duke_mtmc,38.88140235,121.52281098,Dalian University of Technology,edu,5be74c6fa7f890ea530e427685dadf0d0a371fc1,citation,https://arxiv.org/pdf/1804.11027.pdf,Deep Co-attention based Comparators For Relative Representation Learning in Person Re-identification,2018 +101,Australia,Duke MTMC,duke_mtmc,-27.49741805,153.01316956,University of Queensland,edu,5be74c6fa7f890ea530e427685dadf0d0a371fc1,citation,https://arxiv.org/pdf/1804.11027.pdf,Deep Co-attention based Comparators For Relative Representation Learning in Person Re-identification,2018 +102,Australia,Duke MTMC,duke_mtmc,-33.88890695,151.18943366,University of Sydney,edu,5be74c6fa7f890ea530e427685dadf0d0a371fc1,citation,https://arxiv.org/pdf/1804.11027.pdf,Deep Co-attention based Comparators For Relative Representation Learning in Person Re-identification,2018 +103,Switzerland,Duke MTMC,duke_mtmc,46.5184121,6.5684654,École Polytechnique Fédérale de Lausanne,edu,0f3eb3719b6f6f544b766e0bfeb8f962c9bd59f4,citation,https://arxiv.org/pdf/1811.10984.pdf,Eliminating Exposure Bias and Loss-Evaluation Mismatch in Multiple Object Tracking,2018 +104,Italy,Duke MTMC,duke_mtmc,45.434532,12.326197,"DAIS, Università Ca’ Foscari, Venice, Italy",edu,6dce5866ebc46355a35b8667c1e04a4790c2289b,citation,https://pdfs.semanticscholar.org/6dce/5866ebc46355a35b8667c1e04a4790c2289b.pdf,Extensions of dominant sets and their applications in computer vision,2018 +105,United States,Duke MTMC,duke_mtmc,42.3383668,-71.08793524,Northeastern University,edu,8abe89ab85250fd7a8117da32bc339a71c67dc21,citation,https://arxiv.org/pdf/1709.07065.pdf,Multi-camera Multi-Object Tracking,2017 +106,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,b856c0eb039effce7da9ff45c3f5987f18928bef,citation,https://arxiv.org/pdf/1707.00408.pdf,Pedestrian Alignment Network for Large-scale Person Re-identification,2017 +107,Germany,Duke MTMC,duke_mtmc,49.10184375,8.4331256,Karlsruhe Institute of Technology,edu,bab66082d01b393e6b9e841e5e06782a6c61ec88,citation,https://arxiv.org/pdf/1803.08709.pdf,Pose-Driven Deep Models for Person Re-Identification,2018 +108,China,Duke MTMC,duke_mtmc,31.30104395,121.50045497,Fudan University,edu,e6d8f332ae26e9983d5b42af4466ff95b55f2341,citation,https://arxiv.org/pdf/1712.02225.pdf,Pose-Normalized Image Generation for Person Re-identification,2018 +109,Japan,Duke MTMC,duke_mtmc,34.7321121,135.7328585,Nara Institute of Science and Technology,edu,e6d8f332ae26e9983d5b42af4466ff95b55f2341,citation,https://arxiv.org/pdf/1712.02225.pdf,Pose-Normalized Image Generation for Person Re-identification,2018 +110,United Kingdom,Duke MTMC,duke_mtmc,51.5247272,-0.03931035,Queen Mary University of London,edu,e6d8f332ae26e9983d5b42af4466ff95b55f2341,citation,https://arxiv.org/pdf/1712.02225.pdf,Pose-Normalized Image Generation for Person Re-identification,2018 +111,China,Duke MTMC,duke_mtmc,22.8376,108.289839,Guangxi University,edu,4a91be40e6b382c3ddf3385ac44062b2399336a8,citation,https://arxiv.org/pdf/1809.09970.pdf,Random Occlusion-recovery for 
Person Re-identification,2018 +112,China,Duke MTMC,duke_mtmc,31.28473925,121.49694909,Tongji University,edu,4a91be40e6b382c3ddf3385ac44062b2399336a8,citation,https://arxiv.org/pdf/1809.09970.pdf,Random Occlusion-recovery for Person Re-identification,2018 +113,France,Duke MTMC,duke_mtmc,45.2173989,5.7921349,"Naver Labs Europe, Meylan, France",edu,4d8347a69e77cc02c1e1aba3a8b6646eac1a0b3d,citation,https://arxiv.org/pdf/1801.05339.pdf,Re-ID done right: towards good practices for person re-identification.,2018 +114,China,Duke MTMC,duke_mtmc,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,0e36bf238d2db6c970ade0b5f68811ed6debc4e8,citation,https://arxiv.org/pdf/1810.07399.pdf,Recognizing Partial Biometric Patterns,2018 +115,United States,Duke MTMC,duke_mtmc,42.4505507,-76.4783513,Cornell University,edu,6d76eefecdcaa130a000d1d6c93cf57166ebd18e,citation,https://arxiv.org/pdf/1805.08805.pdf,Resource Aware Person Re-identification Across Multiple Resolutions,2018 +116,China,Duke MTMC,duke_mtmc,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,6d76eefecdcaa130a000d1d6c93cf57166ebd18e,citation,https://arxiv.org/pdf/1805.08805.pdf,Resource Aware Person Re-identification Across Multiple Resolutions,2018 +117,China,Duke MTMC,duke_mtmc,40.00229045,116.32098908,Tsinghua University,edu,6d76eefecdcaa130a000d1d6c93cf57166ebd18e,citation,https://arxiv.org/pdf/1805.08805.pdf,Resource Aware Person Re-identification Across Multiple Resolutions,2018 +118,China,Duke MTMC,duke_mtmc,31.846918,117.29053367,Hefei University of Technology,edu,42dc432f58adfaa7bf6af07e5faf9e75fea29122,citation,https://arxiv.org/pdf/1811.08115.pdf,Sequence-based Person Attribute Recognition with Joint CTC-Attention Model,2018 +119,China,Duke MTMC,duke_mtmc,22.5447154,113.9357164,"Tencent, Shanghai, China",company,42dc432f58adfaa7bf6af07e5faf9e75fea29122,citation,https://arxiv.org/pdf/1811.08115.pdf,Sequence-based Person Attribute Recognition with Joint CTC-Attention Model,2018 +120,United States,Duke MTMC,duke_mtmc,47.6423318,-122.1369302,Microsoft,company,8a77025bde5479a1366bb93c6f2366b5a6293720,citation,https://arxiv.org/pdf/1805.02336.pdf,Sharp Attention Network via Adaptive Sampling for Person Re-identification,2018 +121,United States,Duke MTMC,duke_mtmc,40.11116745,-88.22587665,"University of Illinois, Urbana-Champaign",edu,8a77025bde5479a1366bb93c6f2366b5a6293720,citation,https://arxiv.org/pdf/1805.02336.pdf,Sharp Attention Network via Adaptive Sampling for Person Re-identification,2018 +122,China,Duke MTMC,duke_mtmc,30.19331415,120.11930822,Zhejiang University,edu,8a77025bde5479a1366bb93c6f2366b5a6293720,citation,https://arxiv.org/pdf/1805.02336.pdf,Sharp Attention Network via Adaptive Sampling for Person Re-identification,2018 +123,Australia,Duke MTMC,duke_mtmc,-35.2776999,149.118527,Australian National University,edu,304196021200067a838c06002d9e96d6a12a1e46,citation,https://arxiv.org/pdf/1811.10551.pdf,Similarity-preserving Image-image Domain Adaptation for Person Re-identification,2018 +124,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,304196021200067a838c06002d9e96d6a12a1e46,citation,https://arxiv.org/pdf/1811.10551.pdf,Similarity-preserving Image-image Domain Adaptation for Person Re-identification,2018 +125,China,Duke MTMC,duke_mtmc,28.2290209,112.99483204,"National University of Defense Technology, China",mil,e90816e1a0e14ea1e7039e0b2782260999aef786,citation,https://arxiv.org/pdf/1809.03137.pdf,Tracking by Animation: Unsupervised Learning of Multi-Object 
Attentive Trackers,2018 +126,United Kingdom,Duke MTMC,duke_mtmc,51.5231607,-0.1282037,University College London,edu,e90816e1a0e14ea1e7039e0b2782260999aef786,citation,https://arxiv.org/pdf/1809.03137.pdf,Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers,2018 +127,United States,Duke MTMC,duke_mtmc,37.2283843,-80.4234167,Virginia Tech,edu,e278218ba1ff1b85d06680e99b08e817d0962dab,citation,https://arxiv.org/pdf/1710.02139.pdf,Tracking Persons-of-Interest via Unsupervised Representation Adaptation,2017 +128,China,Duke MTMC,duke_mtmc,34.250803,108.983693,Xi’an Jiaotong University,edu,e278218ba1ff1b85d06680e99b08e817d0962dab,citation,https://arxiv.org/pdf/1710.02139.pdf,Tracking Persons-of-Interest via Unsupervised Representation Adaptation,2017 +129,China,Duke MTMC,duke_mtmc,30.508964,114.410577,"Huazhong Univ. of Science and Technology, China",edu,42656cf2b75dccc7f8f224f7a86c2ea4de1ae671,citation,https://arxiv.org/pdf/1807.11334.pdf,Unsupervised Domain Adaptive Re-Identification: Theory and Practice,2018 +130,China,Duke MTMC,duke_mtmc,23.09461185,113.28788994,Sun Yat-Sen University,edu,788ab52d4f7fedb4b79347bb81822c4f3c430d80,citation,https://arxiv.org/pdf/1901.10177.pdf,Unsupervised Person Re-identification by Deep Asymmetric Metric Embedding,2018 +131,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,31da1da2d4e7254dd8f2a4578d887c57e0678438,citation,https://arxiv.org/pdf/1705.10444.pdf,Unsupervised Person Re-identification: Clustering and Fine-tuning,2018 +132,United Kingdom,Duke MTMC,duke_mtmc,54.6141723,-5.9002151,Queen's University Belfast,edu,1e146982a7b088e7a3790d2683484944c3b9dcf7,citation,https://pdfs.semanticscholar.org/1e14/6982a7b088e7a3790d2683484944c3b9dcf7.pdf,Video Person Re-Identification for Wide Area Tracking based on Recurrent Neural Networks,2017 +133,Germany,Duke MTMC,duke_mtmc,49.01546,8.4257999,Fraunhofer,company,978716708762dab46e91059e170d43551be74732,citation,,A Pose-Sensitive Embedding for Person Re-identification with Expanded Cross Neighborhood Re-ranking,2018 +134,Germany,Duke MTMC,duke_mtmc,49.10184375,8.4331256,Karlsruhe Institute of Technology,edu,978716708762dab46e91059e170d43551be74732,citation,,A Pose-Sensitive Embedding for Person Re-identification with Expanded Cross Neighborhood Re-ranking,2018 +135,Taiwan,Duke MTMC,duke_mtmc,25.01682835,121.53846924,National Taiwan University,edu,d9216cc2a3c03659cb2392b7cc8509feb7829579,citation,,Adaptation and Re-identification Network: An Unsupervised Deep Transfer Learning Approach to Person Re-identification,2018 +136,China,Duke MTMC,duke_mtmc,39.979203,116.33287,"CRIPAC & NLPR, CASIA",edu,1bfe59be5b42d6b7257da4b35a408239c01ab79d,citation,,Adversarially Occluded Samples for Person Re-identification,2018 +137,China,Duke MTMC,duke_mtmc,40.0044795,116.370238,Chinese Academy of Sciences,edu,1bfe59be5b42d6b7257da4b35a408239c01ab79d,citation,,Adversarially Occluded Samples for Person Re-identification,2018 +138,China,Duke MTMC,duke_mtmc,22.543096,114.057865,"SenseNets Corporation, Shenzhen, China",company,14ce502bc19b225466126b256511f9c05cadcb6e,citation,,Attention-Aware Compositional Network for Person Re-identification,2018 +139,China,Duke MTMC,duke_mtmc,39.993008,116.329882,SenseTime,company,14ce502bc19b225466126b256511f9c05cadcb6e,citation,,Attention-Aware Compositional Network for Person Re-identification,2018 +140,Australia,Duke MTMC,duke_mtmc,-33.88890695,151.18943366,University of 
Sydney,edu,14ce502bc19b225466126b256511f9c05cadcb6e,citation,,Attention-Aware Compositional Network for Person Re-identification,2018 +141,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,1822ca8db58b0382b0c64f310840f0f875ea02c0,citation,,Camera Style Adaptation for Person Re-identification,2018 +142,China,Duke MTMC,duke_mtmc,24.4399419,118.09301781,Xiamen University,edu,1822ca8db58b0382b0c64f310840f0f875ea02c0,citation,,Camera Style Adaptation for Person Re-identification,2018 +143,China,Duke MTMC,duke_mtmc,36.16161795,120.49355276,Ocean University of China,edu,38259235a1c7b2c68ca09f3bc0930987ae99cf00,citation,,Deep Feature Ranking for Person Re-Identification,2019 +144,South Korea,Duke MTMC,duke_mtmc,35.84658875,127.1350133,Chonbuk National University,edu,c635564fe2f7d91b578bd6959904982aaa61234d,citation,,Deep Multi-Task Network for Learning Person Identity and Attributes,2018 +145,China,Duke MTMC,duke_mtmc,22.4162632,114.2109318,Chinese University of Hong Kong,edu,947954cafdefd471b75da8c3bb4c21b9e6d57838,citation,,End-to-End Deep Kronecker-Product Matching for Person Re-identification,2018 +146,China,Duke MTMC,duke_mtmc,39.993008,116.329882,SenseTime,company,947954cafdefd471b75da8c3bb4c21b9e6d57838,citation,,End-to-End Deep Kronecker-Product Matching for Person Re-identification,2018 +147,China,Duke MTMC,duke_mtmc,23.0502042,113.39880323,South China University of Technology,edu,cb68c60ac046a0ec1c7f67487f14b999037313e1,citation,,Exploit the Unknown Gradually: One-Shot Video-Based Person Re-identification by Stepwise Learning,2018 +148,Australia,Duke MTMC,duke_mtmc,-33.88890695,151.18943366,University of Sydney,edu,cb68c60ac046a0ec1c7f67487f14b999037313e1,citation,,Exploit the Unknown Gradually: One-Shot Video-Based Person Re-identification by Stepwise Learning,2018 +149,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,cb68c60ac046a0ec1c7f67487f14b999037313e1,citation,,Exploit the Unknown Gradually: One-Shot Video-Based Person Re-identification by Stepwise Learning,2018 +150,United States,Duke MTMC,duke_mtmc,35.9990522,-78.9290629,Duke University,edu,c0f01b8174a632448c20eb5472cd9d5b2c595e39,citation,,Features for Multi-target Multi-camera Tracking and Re-identification,2018 +151,China,Duke MTMC,duke_mtmc,22.4162632,114.2109318,Chinese University of Hong Kong,edu,308a13fd1d2847d98930a8e5542f773a9651a0ae,citation,,Group Consistent Similarity Learning via Deep CRF for Person Re-identification,2018 +152,Italy,Duke MTMC,duke_mtmc,46.0658836,11.1159894,University of Trento,edu,308a13fd1d2847d98930a8e5542f773a9651a0ae,citation,,Group Consistent Similarity Learning via Deep CRF for Person Re-identification,2018 +153,China,Duke MTMC,duke_mtmc,34.250803,108.983693,Xi’an Jiaotong University,edu,308a13fd1d2847d98930a8e5542f773a9651a0ae,citation,,Group Consistent Similarity Learning via Deep CRF for Person Re-identification,2018 +154,Turkey,Duke MTMC,duke_mtmc,41.10427915,29.02231159,Istanbul Technical University,edu,7ba225a614d77efd9bdf66bf74c80dd2da09229a,citation,,Human Semantic Parsing for Person Re-identification,2018 +155,United States,Duke MTMC,duke_mtmc,28.59899755,-81.19712501,University of Central Florida,edu,7ba225a614d77efd9bdf66bf74c80dd2da09229a,citation,,Human Semantic Parsing for Person Re-identification,2018 +156,Australia,Duke MTMC,duke_mtmc,-32.00686365,115.89691775,Curtin University,edu,292286c0024d6625fe606fb5b8a0df54ea3ffe91,citation,,Identity Adaptation for Person Re-Identification,2018 
+157,United Kingdom,Duke MTMC,duke_mtmc,54.00975365,-2.78757491,Lancaster University,edu,292286c0024d6625fe606fb5b8a0df54ea3ffe91,citation,,Identity Adaptation for Person Re-Identification,2018 +158,Australia,Duke MTMC,duke_mtmc,-31.95040445,115.79790037,University of Western Australia,edu,292286c0024d6625fe606fb5b8a0df54ea3ffe91,citation,,Identity Adaptation for Person Re-Identification,2018 +159,China,Duke MTMC,duke_mtmc,40.0044795,116.370238,Chinese Academy of Sciences,edu,6cde93a5288e84671a7bee98cf6c94037f42da42,citation,,Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification,2018 +160,Singapore,Duke MTMC,duke_mtmc,1.340216,103.965089,Singapore University of Technology and Design,edu,6cde93a5288e84671a7bee98cf6c94037f42da42,citation,,Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification,2018 +161,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,6cde93a5288e84671a7bee98cf6c94037f42da42,citation,,Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification,2018 +162,China,Duke MTMC,duke_mtmc,39.0607286,117.1256421,Tianjin Normal University,edu,67289bd3b7c9406429c6012eb7292305e50dff0b,citation,,Integration Convolutional Neural Network for Person Re-Identification in Camera Networks,2018 +163,China,Duke MTMC,duke_mtmc,32.05765485,118.7550004,HoHai University,edu,fedb656c45aa332cfc373b413f3000b6228eee08,citation,,Joint Learning of Body and Part Representation for Person Re-Identification,2018 +164,China,Duke MTMC,duke_mtmc,33.5491006,119.035706,"Huaiyin Institute of Technology, Huaian, China",edu,fedb656c45aa332cfc373b413f3000b6228eee08,citation,,Joint Learning of Body and Part Representation for Person Re-Identification,2018 +165,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,fedb656c45aa332cfc373b413f3000b6228eee08,citation,,Joint Learning of Body and Part Representation for Person Re-Identification,2018 +166,China,Duke MTMC,duke_mtmc,23.09461185,113.28788994,Sun Yat-Sen University,edu,b37538f9364252eec4182bdbb80ef1e4614c3acd,citation,,Learning a Semantically Discriminative Joint Space for Attribute Based Person Re-identification,2017 +167,United Kingdom,Duke MTMC,duke_mtmc,51.5247272,-0.03931035,Queen Mary University of London,edu,004acfec16c36649408c561faa102dd9de76f085,citation,,Multi-level Factorisation Net for Person Re-identification,2018 +168,United Kingdom,Duke MTMC,duke_mtmc,55.94951105,-3.19534913,University of Edinburgh,edu,004acfec16c36649408c561faa102dd9de76f085,citation,,Multi-level Factorisation Net for Person Re-identification,2018 +169,China,Duke MTMC,duke_mtmc,39.0607286,117.1256421,Tianjin Normal University,edu,a80d8506fa28334c947989ca153b70aafc63ac7f,citation,,Pedestrian Retrieval via Part-Based Gradation Regularization in Sensor Networks,2018 +170,United States,Duke MTMC,duke_mtmc,35.9990522,-78.9290629,Duke University,edu,96e77135e745385e87fdd0f7ced951bf1fe9a756,citation,,People Tracking and Re-Identification from Multiple Cameras,2018 +171,China,Duke MTMC,duke_mtmc,30.274084,120.15507,Alibaba,company,90c18409b7a3be2cd6da599d02accba4c769e94e,citation,,Person Re-identification with Cascaded Pairwise Convolutions,2018 +172,China,Duke MTMC,duke_mtmc,31.83907195,117.26420748,University of Science and Technology of China,edu,90c18409b7a3be2cd6da599d02accba4c769e94e,citation,,Person Re-identification with Cascaded 
Pairwise Convolutions,2018 +173,China,Duke MTMC,duke_mtmc,30.5360485,114.3643219,"Wuhan Univeristy, Wuhan, China",edu,90c18409b7a3be2cd6da599d02accba4c769e94e,citation,,Person Re-identification with Cascaded Pairwise Convolutions,2018 +174,China,Duke MTMC,duke_mtmc,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,df4ed9983f7114ca4f0ab71f1476c0bf7521e317,citation,,Pose Transferrable Person Re-identification,2018 +175,United States,Duke MTMC,duke_mtmc,40.4441619,-79.94272826,Carnegie Mellon University,edu,e307c6635472d3d1e512af6e20f2e56c95937bb7,citation,,Semi-Supervised Bayesian Attribute Learning for Person Re-Identification,2018 +176,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,e307c6635472d3d1e512af6e20f2e56c95937bb7,citation,,Semi-Supervised Bayesian Attribute Learning for Person Re-Identification,2018 +177,China,Duke MTMC,duke_mtmc,31.83907195,117.26420748,University of Science and Technology of China,edu,5b309f6d98c503efb679eda51bd898543fb746f9,citation,https://arxiv.org/pdf/1809.05864.pdf,In Defense of the Classification Loss for Person Re-Identification,2018 +178,United States,Duke MTMC,duke_mtmc,42.3614256,-71.0812092,Microsoft Research Asia,company,5b309f6d98c503efb679eda51bd898543fb746f9,citation,https://arxiv.org/pdf/1809.05864.pdf,In Defense of the Classification Loss for Person Re-Identification,2018 +179,United States,Duke MTMC,duke_mtmc,39.2899685,-76.62196103,University of Maryland,edu,fe3f8826f615cc5ada33b01777b9f9dc93e0023c,citation,https://arxiv.org/pdf/1901.07702.pdf,Exploring Uncertainty in Conditional Multi-Modal Retrieval Systems,2019 +180,China,Duke MTMC,duke_mtmc,24.4399419,118.09301781,Xiamen University,edu,d95ce873ed42b7c7facaa4c1e9c72b57b4e279f6,citation,https://pdfs.semanticscholar.org/d95c/e873ed42b7c7facaa4c1e9c72b57b4e279f6.pdf,Generalizing a Person Retrieval Model Hetero- and Homogeneously,2018 +181,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,d95ce873ed42b7c7facaa4c1e9c72b57b4e279f6,citation,https://pdfs.semanticscholar.org/d95c/e873ed42b7c7facaa4c1e9c72b57b4e279f6.pdf,Generalizing a Person Retrieval Model Hetero- and Homogeneously,2018 +182,Australia,Duke MTMC,duke_mtmc,-35.2776999,149.118527,Australian National University,edu,d95ce873ed42b7c7facaa4c1e9c72b57b4e279f6,citation,https://pdfs.semanticscholar.org/d95c/e873ed42b7c7facaa4c1e9c72b57b4e279f6.pdf,Generalizing a Person Retrieval Model Hetero- and Homogeneously,2018 +183,China,Duke MTMC,duke_mtmc,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,927ec8dde9eb0e3bc5bf0b1a0ae57f9cf745fd9c,citation,https://arxiv.org/pdf/1804.01438.pdf,Learning Discriminative Features with Multiple Granularities for Person Re-Identification,2018 +184,China,Duke MTMC,duke_mtmc,31.83907195,117.26420748,University of Science and Technology of China,edu,04ca65f1454f1014ef5af5bfafb7aee576ee1be6,citation,https://arxiv.org/pdf/1812.08967.pdf,Densely Semantically Aligned Person Re-Identification,2018 +185,United States,Duke MTMC,duke_mtmc,42.3614256,-71.0812092,Microsoft Research Asia,company,04ca65f1454f1014ef5af5bfafb7aee576ee1be6,citation,https://arxiv.org/pdf/1812.08967.pdf,Densely Semantically Aligned Person Re-Identification,2018 +186,China,Duke MTMC,duke_mtmc,39.9601488,116.35193921,Beijing University of Posts and Telecommunications,edu,7daa2c0f76fd3bfc7feadf313d6ac7504d4ecd20,citation,https://arxiv.org/pdf/1803.09937.pdf,Dual Attention Matching Network for Context-Aware Feature Sequence Based Person 
Re-identification,2018 +187,Singapore,Duke MTMC,duke_mtmc,1.3484104,103.68297965,Nanyang Technological University,edu,7daa2c0f76fd3bfc7feadf313d6ac7504d4ecd20,citation,https://arxiv.org/pdf/1803.09937.pdf,Dual Attention Matching Network for Context-Aware Feature Sequence Based Person Re-identification,2018 +188,China,Duke MTMC,duke_mtmc,32.0565957,118.77408833,Nanjing University,edu,08b28a8f2699501d46d87956cbaa37255000daa3,citation,https://arxiv.org/pdf/1804.03864.pdf,MaskReID: A Mask Based Deep Ranking Neural Network for Person Re-identification,2018 +189,Australia,Duke MTMC,duke_mtmc,-34.40505545,150.87834655,University of Wollongong,edu,08b28a8f2699501d46d87956cbaa37255000daa3,citation,https://arxiv.org/pdf/1804.03864.pdf,MaskReID: A Mask Based Deep Ranking Neural Network for Person Re-identification,2018 +190,United Kingdom,Duke MTMC,duke_mtmc,51.5247272,-0.03931035,Queen Mary University of London,edu,baf5ab5e8972e9366951b7e66951e05e2a4b3e36,citation,https://arxiv.org/pdf/1802.08122.pdf,Harmonious Attention Network for Person Re-identification,2018 +191,United Kingdom,Duke MTMC,duke_mtmc,52.3793131,-1.5604252,University of Warwick,edu,124d60fae338b1f87455d1fc4ede5fcfd806da1a,citation,https://arxiv.org/pdf/1807.01440.pdf,Multi-task Mid-level Feature Alignment Network for Unsupervised Cross-Dataset Person Re-Identification,2018 +192,Singapore,Duke MTMC,duke_mtmc,1.3484104,103.68297965,Nanyang Technological University,edu,124d60fae338b1f87455d1fc4ede5fcfd806da1a,citation,https://arxiv.org/pdf/1807.01440.pdf,Multi-task Mid-level Feature Alignment Network for Unsupervised Cross-Dataset Person Re-Identification,2018 +193,Australia,Duke MTMC,duke_mtmc,-35.0636071,147.3552234,Charles Sturt University,edu,124d60fae338b1f87455d1fc4ede5fcfd806da1a,citation,https://arxiv.org/pdf/1807.01440.pdf,Multi-task Mid-level Feature Alignment Network for Unsupervised Cross-Dataset Person Re-Identification,2018 +194,China,Duke MTMC,duke_mtmc,34.1235825,108.83546,Xidian University,edu,55355b0317f6e0c5218887441de71f05da4b42f6,citation,https://arxiv.org/pdf/1811.12150.pdf,Parameter-Free Spatial Attention Network for Person Re-Identification,2018 +195,Germany,Duke MTMC,duke_mtmc,49.2579566,7.04577417,Max Planck Institute for Informatics,edu,55355b0317f6e0c5218887441de71f05da4b42f6,citation,https://arxiv.org/pdf/1811.12150.pdf,Parameter-Free Spatial Attention Network for Person Re-Identification,2018 +196,China,Duke MTMC,duke_mtmc,31.2284923,121.40211389,East China Normal University,edu,e1af55ad7bb26e5e1acde3ec6c5c43cffe884b04,citation,https://pdfs.semanticscholar.org/e1af/55ad7bb26e5e1acde3ec6c5c43cffe884b04.pdf,Person Re-identification by Mid-level Attribute and Part-based Identity Learning,2018 +197,Brazil,Duke MTMC,duke_mtmc,-27.5953995,-48.6154218,University of Campinas,edu,b986a535e45751cef684a30631a74476e911a749,citation,https://arxiv.org/pdf/1807.05618.pdf,Improved Person Re-Identification Based on Saliency and Semantic Parsing with Deep Neural Network Models,2018 +198,South Korea,Duke MTMC,duke_mtmc,37.26728,126.9841151,Seoul National University,edu,315df9b7dd354ae78ddf1049fb428b086eee632c,citation,https://arxiv.org/pdf/1804.07094.pdf,Part-Aligned Bilinear Representations for Person Re-identification,2018 +199,Germany,Duke MTMC,duke_mtmc,48.7468939,9.0805141,Max Planck Institute for Intelligent Systems,edu,315df9b7dd354ae78ddf1049fb428b086eee632c,citation,https://arxiv.org/pdf/1804.07094.pdf,Part-Aligned Bilinear Representations for Person Re-identification,2018 +200,United States,Duke 
MTMC,duke_mtmc,47.6423318,-122.1369302,Microsoft,company,315df9b7dd354ae78ddf1049fb428b086eee632c,citation,https://arxiv.org/pdf/1804.07094.pdf,Part-Aligned Bilinear Representations for Person Re-identification,2018 +201,United States,Duke MTMC,duke_mtmc,40.1019523,-88.2271615,UIUC,edu,cc78e3f1e531342f639e4a1fc8107a7a778ae1cf,citation,https://arxiv.org/pdf/1811.10144.pdf,One Shot Domain Adaptation for Person Re-Identification,2018 +202,China,Duke MTMC,duke_mtmc,22.053565,113.39913285,Jilin University,edu,4abf902cefca527f707e4f76dd4e14fcd5d47361,citation,https://arxiv.org/pdf/1811.11510.pdf,Identity Preserving Generative Adversarial Network for Cross-Domain Person Re-identification,2018 +203,China,Duke MTMC,duke_mtmc,32.0565957,118.77408833,Nanjing University,edu,088e7b24bd1cf6e5922ae6c80d37439e05fadce9,citation,https://arxiv.org/pdf/1711.07155.pdf,Let Features Decide for Themselves: Feature Mask Network for Person Re-identification,2017 +204,China,Duke MTMC,duke_mtmc,22.4162632,114.2109318,Chinese University of Hong Kong,edu,4f8e06ac894e9cc1eb1617a293e43448930c7d4f,citation,https://arxiv.org/pdf/1810.02936.pdf,FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification,2018 +205,China,Duke MTMC,duke_mtmc,39.993008,116.329882,SenseTime,company,4f8e06ac894e9cc1eb1617a293e43448930c7d4f,citation,https://arxiv.org/pdf/1810.02936.pdf,FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification,2018 +206,United States,Duke MTMC,duke_mtmc,39.3299013,-76.6205177,Johns Hopkins University,edu,4f8e06ac894e9cc1eb1617a293e43448930c7d4f,citation,https://arxiv.org/pdf/1810.02936.pdf,FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification,2018 +207,China,Duke MTMC,duke_mtmc,31.83907195,117.26420748,University of Science and Technology of China,edu,4f8e06ac894e9cc1eb1617a293e43448930c7d4f,citation,https://arxiv.org/pdf/1810.02936.pdf,FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification,2018 +208,China,Duke MTMC,duke_mtmc,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,c753521ba6fb06c12369d6fff814bb704c682ef5,citation,https://pdfs.semanticscholar.org/c753/521ba6fb06c12369d6fff814bb704c682ef5.pdf,Mancs: A Multi-task Attentional Network with Curriculum Sampling for Person Re-Identification,2018 +209,Canada,Duke MTMC,duke_mtmc,46.7817463,-71.2747424,Université Laval,edu,a743127b44397b7a017a65a7ad52d0d7ccb4db93,citation,https://arxiv.org/pdf/1804.10094.pdf,Domain Adaptation Through Synthesis for Unsupervised Person Re-identification,2018 +210,Australia,Duke MTMC,duke_mtmc,-35.2776999,149.118527,Australian National University,edu,12d62f1360587fdecee728e6c509acc378f38dc9,citation,https://arxiv.org/pdf/1805.06118.pdf,Feature Affinity based Pseudo Labeling for Semi-supervised Person Re-identification,2018 +211,China,Duke MTMC,duke_mtmc,32.20541,118.726956,Nanjing University of Information Science & Technology,edu,12d62f1360587fdecee728e6c509acc378f38dc9,citation,https://arxiv.org/pdf/1805.06118.pdf,Feature Affinity based Pseudo Labeling for Semi-supervised Person Re-identification,2018 +212,Australia,Duke MTMC,duke_mtmc,-33.8809651,151.20107299,University of Technology Sydney,edu,12d62f1360587fdecee728e6c509acc378f38dc9,citation,https://arxiv.org/pdf/1805.06118.pdf,Feature Affinity based Pseudo Labeling for Semi-supervised Person Re-identification,2018 +213,China,Duke MTMC,duke_mtmc,40.0044795,116.370238,Chinese Academy of 
Sciences,edu,14b3a7aa61c15fd9cab0a4d8bc2a205a89fb572e,citation,https://arxiv.org/pdf/1807.11206.pdf,Hard-Aware Point-to-Set Deep Metric for Person Re-identification,2018 +214,China,Duke MTMC,duke_mtmc,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,14b3a7aa61c15fd9cab0a4d8bc2a205a89fb572e,citation,https://arxiv.org/pdf/1807.11206.pdf,Hard-Aware Point-to-Set Deep Metric for Person Re-identification,2018 +215,China,Duke MTMC,duke_mtmc,22.304572,114.17976285,Hong Kong Polytechnic University,edu,fea0895326b663bf72be89151a751362db8ae881,citation,https://arxiv.org/pdf/1804.08866.pdf,Homocentric Hypersphere Feature Embedding for Person Re-identification,2018 +216,China,Duke MTMC,duke_mtmc,30.209484,120.220912,"Hikvision Digital Technology Co., Ltd.",company,ed3991046e6dfba0c5cebdbbe914cc3aa06d0235,citation,https://arxiv.org/pdf/1812.06576.pdf,Learning Incremental Triplet Margin for Person Re-identification,2019 +217,China,Duke MTMC,duke_mtmc,24.4399419,118.09301781,Xiamen University,edu,e746447afc4898713a0bcf2bb560286eb4d20019,citation,https://arxiv.org/pdf/1811.02074.pdf,Leveraging Virtual and Real Person for Unsupervised Person Re-identification,2018 +218,Italy,Duke MTMC,duke_mtmc,45.434532,12.326197,"DAIS, Università Ca’ Foscari, Venice, Italy",edu,bee609ea6e71aba9b449731242efdb136d556222,citation,https://arxiv.org/pdf/1706.06196.pdf,Multi-Target Tracking in Multiple Non-Overlapping Cameras using Constrained Dominant Sets,2017 +219,Italy,Duke MTMC,duke_mtmc,45.4377672,12.321807,University Iuav of Venice,edu,bee609ea6e71aba9b449731242efdb136d556222,citation,https://arxiv.org/pdf/1706.06196.pdf,Multi-Target Tracking in Multiple Non-Overlapping Cameras using Constrained Dominant Sets,2017 +220,India,Duke MTMC,duke_mtmc,13.0222347,77.56718325,Indian Institute of Science Bangalore,edu,317f5a56519df95884cce81cfba180ee3adaf5a5,citation,https://arxiv.org/pdf/1807.07295.pdf,Operator-In-The-Loop Deep Sequential Multi-camera Feature Fusion for Person Re-identification,2018 +221,China,Duke MTMC,duke_mtmc,31.2284923,121.40211389,East China Normal University,edu,0353fe24ecd237f4d9ae4dbc277a6a67a69ce8ed,citation,https://pdfs.semanticscholar.org/0353/fe24ecd237f4d9ae4dbc277a6a67a69ce8ed.pdf,Discriminative Feature Representation for Person Re-identification by Batch-contrastive Loss,2018 +222,China,Duke MTMC,duke_mtmc,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,fd2bc4833c19a60d3646368952dcf35dbda007f3,citation,,Improving Person Re-Identification by Adaptive Hard Sample Mining,2018 +223,China,Duke MTMC,duke_mtmc,30.60903415,114.3514284,Wuhan University of Technology,edu,fd2bc4833c19a60d3646368952dcf35dbda007f3,citation,,Improving Person Re-Identification by Adaptive Hard Sample Mining,2018 diff --git a/site/datasets/verified/fiw_300.csv b/site/datasets/verified/fiw_300.csv index afcd74c1..c87a054d 100644 --- a/site/datasets/verified/fiw_300.csv +++ b/site/datasets/verified/fiw_300.csv @@ -1,2 +1,11 @@ id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year 0,,300-W,fiw_300,0.0,0.0,,,,main,,A Semi-automatic Methodology for Facial Landmark Annotation,2013 +1,United Kingdom,300-W,fiw_300,51.0267513,-1.3972576,"IBM Hursley Labs, UK",company,c335a560a315de7cfeadf5b0b0febca837116988,citation,http://eprints.mdx.ac.uk/23779/1/C26.pdf,Back to the future: A fully automatic method for robust age progression,2016 +2,United States,300-W,fiw_300,35.9042272,-78.85565763,"IBM Research, North 
Carolina",company,c335a560a315de7cfeadf5b0b0febca837116988,citation,http://eprints.mdx.ac.uk/23779/1/C26.pdf,Back to the future: A fully automatic method for robust age progression,2016 +3,United Kingdom,300-W,fiw_300,51.49887085,-0.17560797,Imperial College London,edu,c335a560a315de7cfeadf5b0b0febca837116988,citation,http://eprints.mdx.ac.uk/23779/1/C26.pdf,Back to the future: A fully automatic method for robust age progression,2016 +4,United States,300-W,fiw_300,37.3936717,-122.0807262,Facebook,company,dcd2ac544a8336d73e4d3d80b158477c783e1e50,citation,https://arxiv.org/pdf/1709.01591.pdf,Improving Landmark Localization with Semi-Supervised Learning,2018 +5,United States,300-W,fiw_300,37.3706254,-121.9671894,NVIDIA,company,dcd2ac544a8336d73e4d3d80b158477c783e1e50,citation,https://arxiv.org/pdf/1709.01591.pdf,Improving Landmark Localization with Semi-Supervised Learning,2018 +6,Canada,300-W,fiw_300,45.5010087,-73.6157778,University of Montreal,edu,dcd2ac544a8336d73e4d3d80b158477c783e1e50,citation,https://arxiv.org/pdf/1709.01591.pdf,Improving Landmark Localization with Semi-Supervised Learning,2018 +7,United States,300-W,fiw_300,38.7768106,-94.9442982,Amazon,company,e7265c560b3f10013bf70aacbbf0eb4631b7e2aa,citation,https://arxiv.org/pdf/1805.10483.pdf,Look at Boundary: A Boundary-Aware Face Alignment Algorithm,2018 +8,China,300-W,fiw_300,39.993008,116.329882,SenseTime,company,e7265c560b3f10013bf70aacbbf0eb4631b7e2aa,citation,https://arxiv.org/pdf/1805.10483.pdf,Look at Boundary: A Boundary-Aware Face Alignment Algorithm,2018 +9,China,300-W,fiw_300,40.00229045,116.32098908,Tsinghua University,edu,e7265c560b3f10013bf70aacbbf0eb4631b7e2aa,citation,https://arxiv.org/pdf/1805.10483.pdf,Look at Boundary: A Boundary-Aware Face Alignment Algorithm,2018 diff --git a/site/datasets/verified/geofaces.csv b/site/datasets/verified/geofaces.csv index 9331c186..02570e4c 100644 --- a/site/datasets/verified/geofaces.csv +++ b/site/datasets/verified/geofaces.csv @@ -1,2 +1,5 @@ id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year 0,,GeoFaces,geofaces,0.0,0.0,,,,main,,Exploring the geo-dependence of human face appearance,2014 +1,United States,GeoFaces,geofaces,38.0333742,-84.5017758,University of Kentucky,edu,68eb46d2920d2e7568d543de9fa2fc42cb8f5cbb,citation,http://cs.uky.edu/~jacobs/papers/face2gps.pdf,FACE2GPS: Estimating geographic location from facial features,2015 +2,United States,GeoFaces,geofaces,38.0333742,-84.5017758,University of Kentucky,edu,17b46e2dad927836c689d6787ddb3387c6159ece,citation,,GeoFaceExplorer: exploring the geo-dependence of facial attributes,2014 +3,United States,GeoFaces,geofaces,38.0333742,-84.5017758,University of Kentucky,edu,9b9bf5e623cb8af7407d2d2d857bc3f1b531c182,citation,,Who goes there?: approaches to mapping facial appearance diversity,2016 diff --git a/site/datasets/verified/helen.csv b/site/datasets/verified/helen.csv index a9f9a846..19fb12fb 100644 --- a/site/datasets/verified/helen.csv +++ b/site/datasets/verified/helen.csv @@ -1,2 +1,324 @@ id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year 0,,Helen,helen,0.0,0.0,,,,main,,Interactive Facial Feature Localization,2012 +1,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,bae86526b3b0197210b64cdd95cb5aca4209c98a,citation,https://arxiv.org/pdf/1802.01777.pdf,"Brute-Force Facial Landmark Analysis With a 140, 000-Way Classifier",2018 +2,China,Helen,helen,28.2290209,112.99483204,"National University of Defense 
Technology, China",mil,1b8541ec28564db66a08185510c8b300fa4dc793,citation,,Affine-Transformation Parameters Regression for Face Alignment,2016 +3,China,Helen,helen,31.83907195,117.26420748,University of Science and Technology of China,edu,084bd02d171e36458f108f07265386f22b34a1ae,citation,http://7xrqgw.com1.z0.glb.clouddn.com/3000fps.pdf,Face Alignment at 3000 FPS via Regressing Local Binary Features,2014 +4,United States,Helen,helen,47.6423318,-122.1369302,Microsoft,company,084bd02d171e36458f108f07265386f22b34a1ae,citation,http://7xrqgw.com1.z0.glb.clouddn.com/3000fps.pdf,Face Alignment at 3000 FPS via Regressing Local Binary Features,2014 +5,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,5bd3d08335bb4e444a86200c5e9f57fd9d719e14,citation,https://pdfs.semanticscholar.org/5bd3/d08335bb4e444a86200c5e9f57fd9d719e14.pdf,3 D Face Morphable Models “ Inthe-Wild ”,0 +6,United States,Helen,helen,38.7768106,-94.9442982,Amazon,company,5bd3d08335bb4e444a86200c5e9f57fd9d719e14,citation,https://pdfs.semanticscholar.org/5bd3/d08335bb4e444a86200c5e9f57fd9d719e14.pdf,3 D Face Morphable Models “ Inthe-Wild ”,0 +7,Finland,Helen,helen,65.0592157,25.46632601,University of Oulu,edu,5bd3d08335bb4e444a86200c5e9f57fd9d719e14,citation,https://pdfs.semanticscholar.org/5bd3/d08335bb4e444a86200c5e9f57fd9d719e14.pdf,3 D Face Morphable Models “ Inthe-Wild ”,0 +8,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,12095f9b35ee88272dd5abc2d942a4f55804b31e,citation,https://pdfs.semanticscholar.org/1209/5f9b35ee88272dd5abc2d942a4f55804b31e.pdf,DenseReg : Fully Convolutional Dense Shape Regression Inthe-Wild Rıza,0 +9,United States,Helen,helen,38.7768106,-94.9442982,Amazon,company,12095f9b35ee88272dd5abc2d942a4f55804b31e,citation,https://pdfs.semanticscholar.org/1209/5f9b35ee88272dd5abc2d942a4f55804b31e.pdf,DenseReg : Fully Convolutional Dense Shape Regression Inthe-Wild Rıza,0 +10,United Kingdom,Helen,helen,51.5231607,-0.1282037,University College London,edu,12095f9b35ee88272dd5abc2d942a4f55804b31e,citation,https://pdfs.semanticscholar.org/1209/5f9b35ee88272dd5abc2d942a4f55804b31e.pdf,DenseReg : Fully Convolutional Dense Shape Regression Inthe-Wild Rıza,0 +11,United Kingdom,Helen,helen,51.24303255,-0.59001382,University of Surrey,edu,2d2e1d1f50645fe20c051339e9a0fca7b176422a,citation,https://arxiv.org/pdf/1803.05536.pdf,Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild,2018 +12,United Kingdom,Helen,helen,56.1454119,-3.9205713,University of Stirling,edu,2d2e1d1f50645fe20c051339e9a0fca7b176422a,citation,https://arxiv.org/pdf/1803.05536.pdf,Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild,2018 +13,China,Helen,helen,31.4854255,120.2739581,Jiangnan University,edu,2d2e1d1f50645fe20c051339e9a0fca7b176422a,citation,https://arxiv.org/pdf/1803.05536.pdf,Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild,2018 +14,China,Helen,helen,30.642769,104.06751175,"Sichuan University, Chengdu",edu,2d2e1d1f50645fe20c051339e9a0fca7b176422a,citation,https://arxiv.org/pdf/1803.05536.pdf,Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild,2018 +15,Germany,Helen,helen,48.48187645,9.18682404,Reutlingen University,edu,2d2e1d1f50645fe20c051339e9a0fca7b176422a,citation,https://arxiv.org/pdf/1803.05536.pdf,Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild,2018 +16,United States,Helen,helen,45.57022705,-122.63709346,Concordia 
University,edu,266ed43dcea2e7db9f968b164ca08897539ca8dd,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/3B_037.pdf,Beyond Principal Components: Deep Boltzmann Machines for face modeling,2015 +17,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,266ed43dcea2e7db9f968b164ca08897539ca8dd,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/3B_037.pdf,Beyond Principal Components: Deep Boltzmann Machines for face modeling,2015 +18,Germany,Helen,helen,52.5098686,13.3984513,"Amazon Research, Berlin",company,ba1c0600d3bdb8ed9d439e8aa736a96214156284,citation,http://www.eurasip.org/Proceedings/Eusipco/Eusipco2017/papers/1570347043.pdf,Complex representations for learning statistical shape priors,2017 +19,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,ba1c0600d3bdb8ed9d439e8aa736a96214156284,citation,http://www.eurasip.org/Proceedings/Eusipco/Eusipco2017/papers/1570347043.pdf,Complex representations for learning statistical shape priors,2017 +20,United States,Helen,helen,40.47913175,-74.43168868,Rutgers University,edu,3b470b76045745c0ef5321e0f1e0e6a4b1821339,citation,https://pdfs.semanticscholar.org/8e72/fa02f2d90ba31f31e0a7aa96a6d3e10a66fc.pdf,Consensus of Regression for Occlusion-Robust Facial Feature Localization,2014 +21,United States,Helen,helen,37.3309307,-121.8940485,"Adobe Research, San Jose, CA",company,3b470b76045745c0ef5321e0f1e0e6a4b1821339,citation,https://pdfs.semanticscholar.org/8e72/fa02f2d90ba31f31e0a7aa96a6d3e10a66fc.pdf,Consensus of Regression for Occlusion-Robust Facial Feature Localization,2014 +22,Spain,Helen,helen,41.40657415,2.1945341,Universitat Oberta de Catalunya,edu,cc4fc9a309f300e711e09712701b1509045a8e04,citation,https://pdfs.semanticscholar.org/cea6/9010a2f75f7a057d56770e776dec206ed705.pdf,Continuous Supervised Descent Method for Facial Landmark Localisation,2016 +23,Spain,Helen,helen,41.386608,2.16402,Universitat de Barcelona,edu,cc4fc9a309f300e711e09712701b1509045a8e04,citation,https://pdfs.semanticscholar.org/cea6/9010a2f75f7a057d56770e776dec206ed705.pdf,Continuous Supervised Descent Method for Facial Landmark Localisation,2016 +24,Thailand,Helen,helen,13.65450525,100.49423171,Robotics Institute,edu,cc4fc9a309f300e711e09712701b1509045a8e04,citation,https://pdfs.semanticscholar.org/cea6/9010a2f75f7a057d56770e776dec206ed705.pdf,Continuous Supervised Descent Method for Facial Landmark Localisation,2016 +25,United States,Helen,helen,40.44415295,-79.96243993,University of Pittsburgh,edu,cc4fc9a309f300e711e09712701b1509045a8e04,citation,https://pdfs.semanticscholar.org/cea6/9010a2f75f7a057d56770e776dec206ed705.pdf,Continuous Supervised Descent Method for Facial Landmark Localisation,2016 +26,Canada,Helen,helen,43.0095971,-81.2737336,University of Western Ontario,edu,f7ae38a073be7c9cd1b92359131b9c8374579b13,citation,http://www.digitalimaginggroup.ca/members/Shuo/07487053.pdf,Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression,2017 +27,Canada,Helen,helen,42.960348,-81.226628,"London Healthcare Sciences Centre, Ontario, Canada",edu,f7ae38a073be7c9cd1b92359131b9c8374579b13,citation,http://www.digitalimaginggroup.ca/members/Shuo/07487053.pdf,Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression,2017 +28,United Kingdom,Helen,helen,55.0030632,-1.57463231,Northumbria University,edu,f7ae38a073be7c9cd1b92359131b9c8374579b13,citation,http://www.digitalimaginggroup.ca/members/Shuo/07487053.pdf,Descriptor Learning via 
Supervised Manifold Regularization for Multioutput Regression,2017 +29,Canada,Helen,helen,43.0012953,-81.2550455,"St. Joseph's Health Care, Ontario, Canada",edu,f7ae38a073be7c9cd1b92359131b9c8374579b13,citation,http://www.digitalimaginggroup.ca/members/Shuo/07487053.pdf,Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression,2017 +30,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,2a4153655ad1169d482e22c468d67f3bc2c49f12,citation,http://cseweb.ucsd.edu/~mkchandraker/classes/CSE291/Winter2018/Lectures/FaceAlignment.pdf,Face Alignment Across Large Poses: A 3D Solution,2016 +31,United States,Helen,helen,42.718568,-84.47791571,Michigan State University,edu,2a4153655ad1169d482e22c468d67f3bc2c49f12,citation,http://cseweb.ucsd.edu/~mkchandraker/classes/CSE291/Winter2018/Lectures/FaceAlignment.pdf,Face Alignment Across Large Poses: A 3D Solution,2016 +32,China,Helen,helen,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,655ad6ed99277b3bba1f2ea7e5da4709d6e6cf44,citation,https://arxiv.org/pdf/1803.06598.pdf,Facial Landmarks Detection by Self-Iterative Regression Based Landmarks-Attention Network,2018 +33,United States,Helen,helen,42.3614256,-71.0812092,Microsoft Research Asia,company,655ad6ed99277b3bba1f2ea7e5da4709d6e6cf44,citation,https://arxiv.org/pdf/1803.06598.pdf,Facial Landmarks Detection by Self-Iterative Regression Based Landmarks-Attention Network,2018 +34,United Kingdom,Helen,helen,53.22853665,-0.54873472,University of Lincoln,edu,232b6e2391c064d483546b9ee3aafe0ba48ca519,citation,http://doc.utwente.nl/89696/1/Pantic_Optimization_problems_for_fast_AAM_fitting.pdf,Optimization Problems for Fast AAM Fitting in-the-Wild,2013 +35,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,232b6e2391c064d483546b9ee3aafe0ba48ca519,citation,http://doc.utwente.nl/89696/1/Pantic_Optimization_problems_for_fast_AAM_fitting.pdf,Optimization Problems for Fast AAM Fitting in-the-Wild,2013 +36,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,75fd9acf5e5b7ed17c658cc84090c4659e5de01d,citation,http://eprints.nottingham.ac.uk/31442/1/tzimiro_CVPR15.pdf,Project-Out Cascaded Regression with an application to face alignment,2015 +37,Denmark,Helen,helen,57.01590275,9.97532827,Aalborg University,edu,087002ab569e35432cdeb8e63b2c94f1abc53ea9,citation,http://openaccess.thecvf.com/content_cvpr_workshops_2015/W09/papers/Irani_Spatiotemporal_Analysis_of_2015_CVPR_paper.pdf,Spatiotemporal analysis of RGB-D-T facial images for multimodal pain level recognition,2015 +38,Spain,Helen,helen,41.5008957,2.111553,"Computer Vision Center, UAB, Barcelona, Spain",edu,087002ab569e35432cdeb8e63b2c94f1abc53ea9,citation,http://openaccess.thecvf.com/content_cvpr_workshops_2015/W09/papers/Irani_Spatiotemporal_Analysis_of_2015_CVPR_paper.pdf,Spatiotemporal analysis of RGB-D-T facial images for multimodal pain level recognition,2015 +39,China,Helen,helen,39.9041999,116.4073963,Key Lab of Intelligent Information Processing of Chinese Academy of Sciences,edu,090ff8f992dc71a1125636c1adffc0634155b450,citation,https://pdfs.semanticscholar.org/090f/f8f992dc71a1125636c1adffc0634155b450.pdf,Topic-Aware Deep Auto-Encoders (TDA) for Face Alignment,2014 +40,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,090ff8f992dc71a1125636c1adffc0634155b450,citation,https://pdfs.semanticscholar.org/090f/f8f992dc71a1125636c1adffc0634155b450.pdf,Topic-Aware Deep Auto-Encoders (TDA) for Face Alignment,2014 
+41,China,Helen,helen,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,090ff8f992dc71a1125636c1adffc0634155b450,citation,https://pdfs.semanticscholar.org/090f/f8f992dc71a1125636c1adffc0634155b450.pdf,Topic-Aware Deep Auto-Encoders (TDA) for Face Alignment,2014 +42,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,090ff8f992dc71a1125636c1adffc0634155b450,citation,https://pdfs.semanticscholar.org/090f/f8f992dc71a1125636c1adffc0634155b450.pdf,Topic-Aware Deep Auto-Encoders (TDA) for Face Alignment,2014 +43,Israel,Helen,helen,32.77824165,34.99565673,Open University of Israel,edu,62e913431bcef5983955e9ca160b91bb19d9de42,citation,https://arxiv.org/pdf/1511.04031.pdf,Facial Landmark Detection with Tweaked Convolutional Neural Networks,2018 +44,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,034b3f3bac663fb814336a69a9fd3514ca0082b9,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/alabort_cvpr2015.pdf,Unifying holistic and Parts-Based Deformable Model fitting,2015 +45,China,Helen,helen,39.9808333,116.34101249,Beihang University,edu,86b6afc667bb14ff4d69e7a5e8bb2454a6bbd2cd,citation,https://pdfs.semanticscholar.org/86b6/afc667bb14ff4d69e7a5e8bb2454a6bbd2cd.pdf,Attentional Alignment Networks,2018 +46,United States,Helen,helen,32.7283683,-97.11201835,University of Texas at Arlington,edu,86b6afc667bb14ff4d69e7a5e8bb2454a6bbd2cd,citation,https://pdfs.semanticscholar.org/86b6/afc667bb14ff4d69e7a5e8bb2454a6bbd2cd.pdf,Attentional Alignment Networks,2018 +47,China,Helen,helen,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,86b6afc667bb14ff4d69e7a5e8bb2454a6bbd2cd,citation,https://pdfs.semanticscholar.org/86b6/afc667bb14ff4d69e7a5e8bb2454a6bbd2cd.pdf,Attentional Alignment Networks,2018 +48,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,4068574b8678a117d9a434360e9c12fe6232dae0,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos_automatic_2014.pdf,Automatic Construction of Deformable Models In-the-Wild,2014 +49,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,1d0128b9f96f4c11c034d41581f23eb4b4dd7780,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/robust_spherical_harmonics.pdf,Automatic construction Of robust spherical harmonic subspaces,2015 +50,China,Helen,helen,39.9041999,116.4073963,Key Lab of Intelligent Information Processing of Chinese Academy of Sciences,edu,22e2066acfb795ac4db3f97d2ac176d6ca41836c,citation,https://pdfs.semanticscholar.org/26f5/3a1abb47b1f0ea1f213dc7811257775dc6e6.pdf,Coarse-to-Fine Auto-Encoder Networks (CFAN) for Real-Time Face Alignment,2014 +51,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,22e2066acfb795ac4db3f97d2ac176d6ca41836c,citation,https://pdfs.semanticscholar.org/26f5/3a1abb47b1f0ea1f213dc7811257775dc6e6.pdf,Coarse-to-Fine Auto-Encoder Networks (CFAN) for Real-Time Face Alignment,2014 +52,China,Helen,helen,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,22e2066acfb795ac4db3f97d2ac176d6ca41836c,citation,https://pdfs.semanticscholar.org/26f5/3a1abb47b1f0ea1f213dc7811257775dc6e6.pdf,Coarse-to-Fine Auto-Encoder Networks (CFAN) for Real-Time Face Alignment,2014 +53,China,Helen,helen,22.4162632,114.2109318,Chinese University of Hong Kong,edu,ac6c3b3e92ff5fbcd8f7967696c7aae134bea209,citation,https://arxiv.org/pdf/1607.05046.pdf,Deep Cascaded Bi-Network for Face Hallucination,2016 +54,China,Helen,helen,22.59805605,113.98533784,Shenzhen Institutes of 
Advanced Technology,edu,ac6c3b3e92ff5fbcd8f7967696c7aae134bea209,citation,https://arxiv.org/pdf/1607.05046.pdf,Deep Cascaded Bi-Network for Face Hallucination,2016 +55,United States,Helen,helen,37.36566745,-120.42158888,"University of California, Merced",edu,ac6c3b3e92ff5fbcd8f7967696c7aae134bea209,citation,https://arxiv.org/pdf/1607.05046.pdf,Deep Cascaded Bi-Network for Face Hallucination,2016 +56,United States,Helen,helen,42.3614256,-71.0812092,Microsoft Research Asia,company,63d865c66faaba68018defee0daf201db8ca79ed,citation,https://arxiv.org/pdf/1409.5230.pdf,Deep Regression for Face Alignment,2014 +57,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,35f921def890210dda4b72247849ad7ba7d35250,citation,http://www.cv-foundation.org/openaccess/content_iccv_2013/papers/Zhou_Exemplar-Based_Graph_Matching_2013_ICCV_paper.pdf,Exemplar-Based Graph Matching for Robust Facial Landmark Localization,2013 +58,United States,Helen,helen,42.3614256,-71.0812092,Microsoft Research Asia,company,898ff1bafee2a6fb3c848ad07f6f292416b5f07d,citation,,Face Alignment via Regressing Local Binary Features,2016 +59,China,Helen,helen,31.83907195,117.26420748,University of Science and Technology of China,edu,898ff1bafee2a6fb3c848ad07f6f292416b5f07d,citation,,Face Alignment via Regressing Local Binary Features,2016 +60,United States,Helen,helen,47.6423318,-122.1369302,Microsoft,company,898ff1bafee2a6fb3c848ad07f6f292416b5f07d,citation,,Face Alignment via Regressing Local Binary Features,2016 +61,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,71b07c537a9e188b850192131bfe31ef206a39a0,citation,https://pdfs.semanticscholar.org/71b0/7c537a9e188b850192131bfe31ef206a39a0.pdf,Faces InThe-Wild Challenge : database and results,2016 +62,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,71b07c537a9e188b850192131bfe31ef206a39a0,citation,https://pdfs.semanticscholar.org/71b0/7c537a9e188b850192131bfe31ef206a39a0.pdf,Faces InThe-Wild Challenge : database and results,2016 +63,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,71b07c537a9e188b850192131bfe31ef206a39a0,citation,https://pdfs.semanticscholar.org/71b0/7c537a9e188b850192131bfe31ef206a39a0.pdf,Faces InThe-Wild Challenge : database and results,2016 +64,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,f095b5770f0ff13ba9670e3d480743c5e9ad1036,citation,http://doc.utwente.nl/103789/1/Pantic_Fast_Algorithms_for_Fitting_Active_Appearance_Models.pdf,Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images,2016 +65,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,f095b5770f0ff13ba9670e3d480743c5e9ad1036,citation,http://doc.utwente.nl/103789/1/Pantic_Fast_Algorithms_for_Fitting_Active_Appearance_Models.pdf,Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images,2016 +66,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,f095b5770f0ff13ba9670e3d480743c5e9ad1036,citation,http://doc.utwente.nl/103789/1/Pantic_Fast_Algorithms_for_Fitting_Active_Appearance_Models.pdf,Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images,2016 +67,United Kingdom,Helen,helen,53.22853665,-0.54873472,University of Lincoln,edu,624496296af19243d5f05e7505fd927db02fd0ce,citation,http://eprints.eemcs.utwente.nl/25815/01/Pantic_Gauss-Newton_Deformable_Part_Models.pdf,Gauss-Newton Deformable Part Models for Face Alignment In-the-Wild,2014 +68,United 
Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,624496296af19243d5f05e7505fd927db02fd0ce,citation,http://eprints.eemcs.utwente.nl/25815/01/Pantic_Gauss-Newton_Deformable_Part_Models.pdf,Gauss-Newton Deformable Part Models for Face Alignment In-the-Wild,2014 +69,United Kingdom,Helen,helen,53.22853665,-0.54873472,University of Lincoln,edu,6a4ebd91c4d380e21da0efb2dee276897f56467a,citation,http://eprints.nottingham.ac.uk/31441/1/tzimiroICIP14b.pdf,HOG active appearance models,2014 +70,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,696236fb6f986f6d5565abb01f402d09db68e5fa,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Wei_Learning_Adaptive_Receptive_CVPR_2017_paper.pdf,Learning adaptive receptive fields for deep image parsing networks,2017 +71,China,Helen,helen,32.0565957,118.77408833,Nanjing University,edu,696236fb6f986f6d5565abb01f402d09db68e5fa,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Wei_Learning_Adaptive_Receptive_CVPR_2017_paper.pdf,Learning adaptive receptive fields for deep image parsing networks,2017 +72,China,Helen,helen,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,696236fb6f986f6d5565abb01f402d09db68e5fa,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Wei_Learning_Adaptive_Receptive_CVPR_2017_paper.pdf,Learning adaptive receptive fields for deep image parsing networks,2017 +73,United Kingdom,Helen,helen,52.17638955,0.14308882,University of Cambridge,edu,c17a332e59f03b77921942d487b4b102b1ee73b6,citation,https://pdfs.semanticscholar.org/c17a/332e59f03b77921942d487b4b102b1ee73b6.pdf,Learning an appearance-based gaze estimator from one million synthesised images,2016 +74,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,c17a332e59f03b77921942d487b4b102b1ee73b6,citation,https://pdfs.semanticscholar.org/c17a/332e59f03b77921942d487b4b102b1ee73b6.pdf,Learning an appearance-based gaze estimator from one million synthesised images,2016 +75,Germany,Helen,helen,49.2579566,7.04577417,Max Planck Institute for Informatics,edu,c17a332e59f03b77921942d487b4b102b1ee73b6,citation,https://pdfs.semanticscholar.org/c17a/332e59f03b77921942d487b4b102b1ee73b6.pdf,Learning an appearance-based gaze estimator from one million synthesised images,2016 +76,United States,Helen,helen,45.55236,-122.9142988,Intel Corporation,company,9ef2b2db11ed117521424c275c3ce1b5c696b9b3,citation,https://arxiv.org/pdf/1511.04404.pdf,Robust Face Alignment Using a Mixture of Invariant Experts,2016 +77,Germany,Helen,helen,48.7863462,9.2380718,Daimler AG,company,3a8846ca16df5dfb2daadc189ed40c13d2ddc0c5,citation,https://arxiv.org/pdf/1901.10143.pdf,Validation loss for landmark detection,2019 +78,South Africa,Helen,helen,-33.95828745,18.45997349,University of Cape Town,edu,3bc376f29bc169279105d33f59642568de36f17f,citation,http://www.dip.ee.uct.ac.za/~nicolls/publish/sm14-visapp.pdf,Active shape models with SIFT descriptors and MARS,2014 +79,United States,Helen,helen,33.9832526,-118.40417,USC Institute for Creative Technologies,edu,0a6d344112b5af7d1abbd712f83c0d70105211d0,citation,http://ict.usc.edu/pubs/Constrained%20local%20neural%20fields%20for%20robust%20facial%20landmark%20detection%20in%20the%20wild.pdf,Constrained Local Neural Fields for Robust Facial Landmark Detection in the Wild,2013 +80,China,Helen,helen,23.09461185,113.28788994,Sun Yat-Sen 
University,edu,3be8f1f7501978287af8d7ebfac5963216698249,citation,https://pdfs.semanticscholar.org/3be8/f1f7501978287af8d7ebfac5963216698249.pdf,Deep Cascaded Regression for Face Alignment,2015 +81,Singapore,Helen,helen,1.2962018,103.77689944,National University of Singapore,edu,3be8f1f7501978287af8d7ebfac5963216698249,citation,https://pdfs.semanticscholar.org/3be8/f1f7501978287af8d7ebfac5963216698249.pdf,Deep Cascaded Regression for Face Alignment,2015 +82,China,Helen,helen,40.00229045,116.32098908,Tsinghua University,edu,329d58e8fb30f1bf09acb2f556c9c2f3e768b15c,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/Wu_Leveraging_Intra_and_CVPR_2017_paper.pdf,Leveraging Intra and Inter-Dataset Variations for Robust Face Alignment,2017 +83,China,Helen,helen,22.4162632,114.2109318,Chinese University of Hong Kong,edu,329d58e8fb30f1bf09acb2f556c9c2f3e768b15c,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/Wu_Leveraging_Intra_and_CVPR_2017_paper.pdf,Leveraging Intra and Inter-Dataset Variations for Robust Face Alignment,2017 +84,France,Helen,helen,48.8407791,2.5873259,University of Paris-Est,edu,0293721d276856f0425d4417e22381de3350ac32,citation,https://hal-upec-upem.archives-ouvertes.fr/hal-01790317/file/RK_SSD_2018.pdf,Customer Satisfaction Measuring Based on the Most Significant Facial Emotion,2018 +85,Tunisia,Helen,helen,34.7361066,10.7427275,"University of Sfax, Tunisia",edu,0293721d276856f0425d4417e22381de3350ac32,citation,https://hal-upec-upem.archives-ouvertes.fr/hal-01790317/file/RK_SSD_2018.pdf,Customer Satisfaction Measuring Based on the Most Significant Facial Emotion,2018 +86,United States,Helen,helen,42.4505507,-76.4783513,Cornell University,edu,ce9e1dfa7705623bb67df3a91052062a0a0ca456,citation,https://arxiv.org/pdf/1611.05507.pdf,Deep Feature Interpolation for Image Content Changes,2017 +87,United States,Helen,helen,38.8997145,-77.0485992,George Washington University,edu,ce9e1dfa7705623bb67df3a91052062a0a0ca456,citation,https://arxiv.org/pdf/1611.05507.pdf,Deep Feature Interpolation for Image Content Changes,2017 +88,China,Helen,helen,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,2d294bde112b892068636f3a48300b3c033d98da,citation,https://arxiv.org/pdf/1808.01558.pdf,Deep Multi-Center Learning for Face Alignment,2018 +89,China,Helen,helen,31.2284923,121.40211389,East China Normal University,edu,2d294bde112b892068636f3a48300b3c033d98da,citation,https://arxiv.org/pdf/1808.01558.pdf,Deep Multi-Center Learning for Face Alignment,2018 +90,China,Helen,helen,23.09461185,113.28788994,Sun Yat-Sen University,edu,30cd39388b5c1aae7d8153c0ab9d54b61b474ffe,citation,https://arxiv.org/pdf/1510.09083.pdf,Deep Recurrent Regression for Facial Landmark Detection,2018 +91,Singapore,Helen,helen,1.2962018,103.77689944,National University of Singapore,edu,30cd39388b5c1aae7d8153c0ab9d54b61b474ffe,citation,https://arxiv.org/pdf/1510.09083.pdf,Deep Recurrent Regression for Facial Landmark Detection,2018 +92,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,0209389b8369aaa2a08830ac3b2036d4901ba1f1,citation,https://arxiv.org/pdf/1612.01202.pdf,DenseReg: Fully Convolutional Dense Shape Regression In-the-Wild,2017 +93,United Kingdom,Helen,helen,51.5231607,-0.1282037,University College London,edu,0209389b8369aaa2a08830ac3b2036d4901ba1f1,citation,https://arxiv.org/pdf/1612.01202.pdf,DenseReg: Fully Convolutional Dense Shape Regression In-the-Wild,2017 +94,United States,Helen,helen,42.7298459,-73.67950216,Rensselaer 
Polytechnic Institute,edu,191d30e7e7360d565b0c1e2814b5bcbd86a11d41,citation,http://homepages.rpi.edu/~wuy9/DiscriminativeDeepFaceShape/DiscriminativeDeepFaceShape_IJCV.pdf,Discriminative Deep Face Shape Model for Facial Point Detection,2014 +95,United States,Helen,helen,39.2899685,-76.62196103,University of Maryland,edu,ceeb67bf53ffab1395c36f1141b516f893bada27,citation,https://arxiv.org/pdf/1601.07950.pdf,Face Alignment by Local Deep Descriptor Regression,2016 +96,United States,Helen,helen,40.47913175,-74.43168868,Rutgers University,edu,ceeb67bf53ffab1395c36f1141b516f893bada27,citation,https://arxiv.org/pdf/1601.07950.pdf,Face Alignment by Local Deep Descriptor Regression,2016 +97,United States,Helen,helen,43.1576969,-77.58829158,University of Rochester,edu,beb8d7c128ccbdc6b63959a763ebc505a5313c06,citation,https://arxiv.org/pdf/1812.03252.pdf,Face Completion with Semantic Knowledge and Collaborative Adversarial Learning,2018 +98,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,beb8d7c128ccbdc6b63959a763ebc505a5313c06,citation,https://arxiv.org/pdf/1812.03252.pdf,Face Completion with Semantic Knowledge and Collaborative Adversarial Learning,2018 +99,United Kingdom,Helen,helen,51.24303255,-0.59001382,University of Surrey,edu,438e7999c937b94f0f6384dbeaa3febff6d283b6,citation,https://arxiv.org/pdf/1705.02402.pdf,"Face Detection, Bounding Box Aggregation and Pose Estimation for Robust Facial Landmark Localisation in the Wild",2017 +100,China,Helen,helen,31.4854255,120.2739581,Jiangnan University,edu,438e7999c937b94f0f6384dbeaa3febff6d283b6,citation,https://arxiv.org/pdf/1705.02402.pdf,"Face Detection, Bounding Box Aggregation and Pose Estimation for Robust Facial Landmark Localisation in the Wild",2017 +101,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,84e6669b47670f9f4f49c0085311dce0e178b685,citation,https://arxiv.org/pdf/1502.00852.pdf,Face frontalization for Alignment and Recognition,2015 +102,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,84e6669b47670f9f4f49c0085311dce0e178b685,citation,https://arxiv.org/pdf/1502.00852.pdf,Face frontalization for Alignment and Recognition,2015 +103,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,2f7aa942313b1eb12ebfab791af71d0a3830b24c,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2015feature.pdf,Feature-Based Lucas–Kanade and Active Appearance Models,2015 +104,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,2f7aa942313b1eb12ebfab791af71d0a3830b24c,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2015feature.pdf,Feature-Based Lucas–Kanade and Active Appearance Models,2015 +105,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,1c1a98df3d0d5e2034ea723994bdc85af45934db,citation,http://www.cs.nott.ac.uk/~pszmv/Documents/ICCV-300w_cameraready.pdf,Guided Unsupervised Learning of Mode Specific Models for Facial Point Detection in the Wild,2013 +106,China,Helen,helen,22.4162632,114.2109318,Chinese University of Hong Kong,edu,f070d739fb812d38571ec77490ccd8777e95ce7a,citation,https://zhzhanp.github.io/papers/PR2015.pdf,Hierarchical facial landmark localization via cascaded random binary patterns,2015 +107,China,Helen,helen,22.53521465,113.9315911,Shenzhen University,edu,f070d739fb812d38571ec77490ccd8777e95ce7a,citation,https://zhzhanp.github.io/papers/PR2015.pdf,Hierarchical facial landmark localization via cascaded random binary patterns,2015 +108,United 
States,Helen,helen,34.0224149,-118.28634407,University of Southern California,edu,87e6cb090aecfc6f03a3b00650a5c5f475dfebe1,citation,https://pdfs.semanticscholar.org/87e6/cb090aecfc6f03a3b00650a5c5f475dfebe1.pdf,Holistically Constrained Local Model: Going Beyond Frontal Poses for Facial Landmark Detection,2016 +109,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,87e6cb090aecfc6f03a3b00650a5c5f475dfebe1,citation,https://pdfs.semanticscholar.org/87e6/cb090aecfc6f03a3b00650a5c5f475dfebe1.pdf,Holistically Constrained Local Model: Going Beyond Frontal Poses for Facial Landmark Detection,2016 +110,Singapore,Helen,helen,1.2962018,103.77689944,National University of Singapore,edu,0ea7b7fff090c707684fd4dc13e0a8f39b300a97,citation,https://arxiv.org/pdf/1711.06055.pdf,Integrated Face Analytics Networks through Cross-Dataset Hybrid Training,2017 +111,China,Helen,helen,39.9586652,116.30971281,Beijing Institute of Technology,edu,0ea7b7fff090c707684fd4dc13e0a8f39b300a97,citation,https://arxiv.org/pdf/1711.06055.pdf,Integrated Face Analytics Networks through Cross-Dataset Hybrid Training,2017 +112,China,Helen,helen,23.0490047,113.3971571,South China University of China,edu,7d7be6172fc2884e1da22d1e96d5899a29831ad2,citation,https://arxiv.org/pdf/1703.01605.pdf,L2GSCI: Local to Global Seam Cutting and Integrating for Accurate Face Contour Extraction,2017 +113,China,Helen,helen,22.46935655,114.19474194,Education University of Hong Kong,edu,7d7be6172fc2884e1da22d1e96d5899a29831ad2,citation,https://arxiv.org/pdf/1703.01605.pdf,L2GSCI: Local to Global Seam Cutting and Integrating for Accurate Face Contour Extraction,2017 +114,United States,Helen,helen,34.0224149,-118.28634407,University of Southern California,edu,d28d32af7ef9889ef9cb877345a90ea85e70f7f1,citation,http://multicomp.cs.cmu.edu/wp-content/uploads/2017/10/2017_FG_Kim_Local.pdf,Local-Global Landmark Confidences for Face Recognition,2017 +115,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,d28d32af7ef9889ef9cb877345a90ea85e70f7f1,citation,http://multicomp.cs.cmu.edu/wp-content/uploads/2017/10/2017_FG_Kim_Local.pdf,Local-Global Landmark Confidences for Face Recognition,2017 +116,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,303a7099c01530fa0beb197eb1305b574168b653,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Zhang_Occlusion-Free_Face_Alignment_CVPR_2016_paper.pdf,Occlusion-Free Face Alignment: Deep Regression Networks Coupled with De-Corrupt AutoEncoders,2016 +117,China,Helen,helen,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,303a7099c01530fa0beb197eb1305b574168b653,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Zhang_Occlusion-Free_Face_Alignment_CVPR_2016_paper.pdf,Occlusion-Free Face Alignment: Deep Regression Networks Coupled with De-Corrupt AutoEncoders,2016 +118,Sweden,Helen,helen,59.34986645,18.07063213,"KTH Royal Institute of Technology, Stockholm",edu,1824b1ccace464ba275ccc86619feaa89018c0ad,citation,http://www.csc.kth.se/~vahidk/face/KazemiCVPR14.pdf,One millisecond face alignment with an ensemble of regression trees,2014 +119,United States,Helen,helen,35.3103441,-80.73261617,University of North Carolina at Charlotte,edu,89002a64e96a82486220b1d5c3f060654b24ef2a,citation,http://research.rutgers.edu/~shaoting/paper/ICCV15_face.pdf,PIEFA: Personalized Incremental and Ensemble Face Alignment,2015 +120,United States,Helen,helen,45.57022705,-122.63709346,Concordia 
University,edu,6d0fe30444c6f4e4db3ad8b02fb2c87e2b33c58d,citation,https://arxiv.org/pdf/1607.00659.pdf,Robust Deep Appearance Models,2016 +121,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,6d0fe30444c6f4e4db3ad8b02fb2c87e2b33c58d,citation,https://arxiv.org/pdf/1607.00659.pdf,Robust Deep Appearance Models,2016 +122,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,7fcfd72ba6bc14bbb90b31fe14c2c77a8b220ab2,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/He_Robust_FEC-CNN_A_CVPR_2017_paper.pdf,Robust FEC-CNN: A High Accuracy Facial Landmark Detection System,2017 +123,China,Helen,helen,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,7fcfd72ba6bc14bbb90b31fe14c2c77a8b220ab2,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/He_Robust_FEC-CNN_A_CVPR_2017_paper.pdf,Robust FEC-CNN: A High Accuracy Facial Landmark Detection System,2017 +124,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,788a7b59ea72e23ef4f86dc9abb4450efefeca41,citation,http://eprints.eemcs.utwente.nl/26840/01/Pantic_Robust_Statistical_Face_Frontalization.pdf,Robust Statistical Face Frontalization,2015 +125,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,788a7b59ea72e23ef4f86dc9abb4450efefeca41,citation,http://eprints.eemcs.utwente.nl/26840/01/Pantic_Robust_Statistical_Face_Frontalization.pdf,Robust Statistical Face Frontalization,2015 +126,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,7cdf3bc1de6c7948763c0c2dfa4384dcbd3677a0,citation,http://eprints.eemcs.utwente.nl/27129/01/sagonas2016robust.pdf,Robust Statistical Frontalization of Human and Animal Faces,2016 +127,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,7cdf3bc1de6c7948763c0c2dfa4384dcbd3677a0,citation,http://eprints.eemcs.utwente.nl/27129/01/sagonas2016robust.pdf,Robust Statistical Frontalization of Human and Animal Faces,2016 +128,United States,Helen,helen,40.47913175,-74.43168868,Rutgers University,edu,04ff69aa20da4eeccdabbe127e3641b8e6502ec0,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2016_workshops/w28/papers/Peng_Sequential_Face_Alignment_CVPR_2016_paper.pdf,Sequential Face Alignment via Person-Specific Modeling in the Wild,2016 +129,United States,Helen,helen,32.7283683,-97.11201835,University of Texas at Arlington,edu,04ff69aa20da4eeccdabbe127e3641b8e6502ec0,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2016_workshops/w28/papers/Peng_Sequential_Face_Alignment_CVPR_2016_paper.pdf,Sequential Face Alignment via Person-Specific Modeling in the Wild,2016 +130,United States,Helen,helen,40.47913175,-74.43168868,Rutgers University,edu,c8ca6a2dc41516c16ea0747e9b3b7b1db788dbdd,citation,https://arxiv.org/pdf/1609.02825.pdf,Track Facial Points in Unconstrained Videos,2016 +131,United States,Helen,helen,32.7298718,-97.1140116,The University of Texas at Arlington,edu,c8ca6a2dc41516c16ea0747e9b3b7b1db788dbdd,citation,https://arxiv.org/pdf/1609.02825.pdf,Track Facial Points in Unconstrained Videos,2016 +132,China,Helen,helen,22.4162632,114.2109318,Chinese University of Hong Kong,edu,433a6d6d2a3ed8a6502982dccc992f91d665b9b3,citation,https://arxiv.org/pdf/1409.0602.pdf,Transferring Landmark Annotations for Cross-Dataset Face Alignment.,2014 +133,China,Helen,helen,40.00229045,116.32098908,Tsinghua University,edu,433a6d6d2a3ed8a6502982dccc992f91d665b9b3,citation,https://arxiv.org/pdf/1409.0602.pdf,Transferring 
Landmark Annotations for Cross-Dataset Face Alignment.,2014 +134,Canada,Helen,helen,49.8091536,-97.13304179,University of Manitoba,edu,3bf249f716a384065443abc6172f4bdef88738d9,citation,https://arxiv.org/pdf/1812.01063.pdf,A Hybrid Instance-based Transfer Learning Method,2018 +135,United States,Helen,helen,40.47913175,-74.43168868,Rutgers University,edu,afdf9a3464c3b015f040982750f6b41c048706f5,citation,https://arxiv.org/pdf/1608.05477.pdf,A Recurrent Encoder-Decoder Network for Sequential Face Alignment,2016 +136,South Korea,Helen,helen,37.26728,126.9841151,Seoul National University,edu,b4362cd87ad219790800127ddd366cc465606a78,citation,https://pdfs.semanticscholar.org/b436/2cd87ad219790800127ddd366cc465606a78.pdf,A Smartphone-Based Automatic Diagnosis System for Facial Nerve Palsy,2015 +137,Canada,Helen,helen,43.66333345,-79.39769975,University of Toronto,edu,3a54b23cdbd159bb32c39c3adcba8229e3237e56,citation,https://arxiv.org/pdf/1805.12302.pdf,Adversarial Attacks on Face Detectors Using Neural Net Based Constrained Optimization,2018 +138,United States,Helen,helen,32.8800604,-117.2340135,University of California San Diego,edu,3ac0aefb379dedae4a6054e649e98698b3e5fb82,citation,https://arxiv.org/pdf/1802.02137.pdf,An Occluded Stacked Hourglass Approach to Facial Landmark Localization and Occlusion Estimation,2017 +139,United Kingdom,Helen,helen,53.8066815,-1.5550328,The University of Leeds,edu,c5ea084531212284ce3f1ca86a6209f0001de9d1,citation,https://pdfs.semanticscholar.org/c5ea/084531212284ce3f1ca86a6209f0001de9d1.pdf,Audio-visual speech processing for multimedia localisation,2016 +140,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,06c2dfe1568266ad99368fc75edf79585e29095f,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/joan_cvpr2014.pdf,Bayesian Active Appearance Models,2014 +141,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,ccf16bcf458e4d7a37643b8364594656287f5bfc,citation,https://pdfs.semanticscholar.org/ccf1/6bcf458e4d7a37643b8364594656287f5bfc.pdf,Cascade for Landmark Guided Semantic Part Segmentation,2016 +142,China,Helen,helen,31.4854255,120.2739581,Jiangnan University,edu,60824ee635777b4ee30fcc2485ef1e103b8e7af9,citation,http://epubs.surrey.ac.uk/808177/1/Feng-TIP-2015.pdf,Cascaded Collaborative Regression for Robust Facial Landmark Detection Trained Using a Mixture of Synthetic and Real Images With Dynamic Weighting,2015 +143,United Kingdom,Helen,helen,51.2421839,-0.5905421,University of Surrey Guildford,edu,60824ee635777b4ee30fcc2485ef1e103b8e7af9,citation,http://epubs.surrey.ac.uk/808177/1/Feng-TIP-2015.pdf,Cascaded Collaborative Regression for Robust Facial Landmark Detection Trained Using a Mixture of Synthetic and Real Images With Dynamic Weighting,2015 +144,China,Helen,helen,22.304572,114.17976285,Hong Kong Polytechnic University,edu,4836b084a583d2e794eb6a94982ea30d7990f663,citation,https://arxiv.org/pdf/1611.06642.pdf,Cascaded Face Alignment via Intimacy Definition Feature,2017 +145,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,72a1852c78b5e95a57efa21c92bdc54219975d8f,citation,http://eprints.nottingham.ac.uk/31303/1/prl_blockwise_SDM.pdf,Cascaded regression with sparsified feature covariance matrix for facial landmark detection,2016 +146,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,4140498e96a5ff3ba816d13daf148fffb9a2be3f,citation,http://multicomp.cs.cmu.edu/wp-content/uploads/2017/10/2017_FG_Li_Constrained.pdf,Constrained Ensemble 
Initialization for Facial Landmark Tracking in Video,2017 +147,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,963d0d40de8780161b70d28d2b125b5222e75596,citation,https://arxiv.org/pdf/1611.08657.pdf,Convolutional Experts Constrained Local Model for Facial Landmark Detection,2017 +148,United States,Helen,helen,32.87935255,-117.23110049,"University of California, San Diego",edu,ee418372b0038bd3b8ae82bd1518d5c01a33a7ec,citation,https://pdfs.semanticscholar.org/ee41/8372b0038bd3b8ae82bd1518d5c01a33a7ec.pdf,CSE 255 Winter 2015 Assignment 1 : Eye Detection using Histogram of Oriented Gradients and Adaboost Classifier,2015 +149,Poland,Helen,helen,52.22165395,21.00735776,Warsaw University of Technology,edu,f27b8b8f2059248f77258cf8595e9434cf0b0228,citation,https://arxiv.org/pdf/1706.01789.pdf,Deep Alignment Network: A Convolutional Neural Network for Robust Face Alignment,2017 +150,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,a0b1990dd2b4cd87e4fd60912cc1552c34792770,citation,https://pdfs.semanticscholar.org/a0b1/990dd2b4cd87e4fd60912cc1552c34792770.pdf,Deep Constrained Local Models for Facial Landmark Detection,2016 +151,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,38cbb500823057613494bacd0078aa0e57b30af8,citation,https://arxiv.org/pdf/1704.08772.pdf,Deep Face Deblurring,2017 +152,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,9b8f7a6850d991586b7186f0bb7e424924a9fd74,citation,https://ibug.doc.ic.ac.uk/media/uploads/documents/disentangling-modes-variation.pdf,Disentangling the Modes of Variation in Unlabelled Data,2018 +153,China,Helen,helen,30.642769,104.06751175,"Sichuan University, Chengdu",edu,b29b42f7ab8d25d244bfc1413a8d608cbdc51855,citation,https://arxiv.org/pdf/1702.02719.pdf,Effective face landmark localization via single deep network,2017 +154,China,Helen,helen,22.304572,114.17976285,Hong Kong Polytechnic University,edu,4cfa8755fe23a8a0b19909fa4dec54ce6c1bd2f7,citation,https://arxiv.org/pdf/1611.09956.pdf,Efficient likelihood Bayesian constrained local model,2017 +155,China,Helen,helen,39.9601488,116.35193921,Beijing University of Posts and Telecommunications,edu,5c820e47981d21c9dddde8d2f8020146e600368f,citation,https://pdfs.semanticscholar.org/5c82/0e47981d21c9dddde8d2f8020146e600368f.pdf,Extended Supervised Descent Method for Robust Face Alignment,2014 +156,China,Helen,helen,32.0565957,118.77408833,Nanjing University,edu,f633d6dc02b2e55eb24b89f2b8c6df94a2de86dd,citation,http://parnec.nuaa.edu.cn/pubs/xiaoyang%20tan/journal/2016/JXPR-2016.pdf,Face alignment by robust discriminative Hough voting,2016 +157,Romania,Helen,helen,46.7723581,23.5852075,Technical University,edu,f0ae807627f81acb63eb5837c75a1e895a92c376,citation,https://pdfs.semanticscholar.org/f0ae/807627f81acb63eb5837c75a1e895a92c376.pdf,Facial Landmark Detection using Ensemble of Cascaded Regressions,2016 +158,Czech Republic,Helen,helen,50.0764296,14.41802312,Czech Technical University,edu,37c8514df89337f34421dc27b86d0eb45b660a5e,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Uricar_Facial_Landmark_Tracking_ICCV_2015_paper.pdf,Facial Landmark Tracking by Tree-Based Deformable Part Model Based Detector,2015 +159,China,Helen,helen,32.0565957,118.77408833,Nanjing University,edu,5b0bf1063b694e4b1575bb428edb4f3451d9bf04,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Yang_Facial_Shape_Tracking_ICCV_2015_paper.pdf,Facial 
Shape Tracking via Spatio-Temporal Cascade Shape Regression,2015 +160,Switzerland,Helen,helen,47.376313,8.5476699,ETH Zurich,edu,a66d89357ada66d98d242c124e1e8d96ac9b37a0,citation,https://arxiv.org/pdf/1608.06451.pdf,Failure Detection for Facial Landmark Detectors,2016 +161,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,f1b4583c576d6d8c661b4b2c82bdebf3ba3d7e53,citation,https://arxiv.org/pdf/1707.05653.pdf,Faster than Real-Time Facial Alignment: A 3D Spatial Transformer Network Approach in Unconstrained Poses,2017 +162,United Kingdom,Helen,helen,51.24303255,-0.59001382,University of Surrey,edu,70a69569ba61f3585cd90c70ca5832e838fa1584,citation,https://pdfs.semanticscholar.org/70a6/9569ba61f3585cd90c70ca5832e838fa1584.pdf,Friendly Faces: Weakly Supervised Character Identification,2014 +163,United States,Helen,helen,37.36566745,-120.42158888,"University of California, Merced",edu,f0a4a3fb6997334511d7b8fc090f9ce894679faf,citation,https://arxiv.org/pdf/1704.05838.pdf,Generative Face Completion,2017 +164,United States,Helen,helen,28.0599999,-82.41383619,University of South Florida,edu,ba21fd28003994480f713b0a1276160fea2e89b5,citation,https://pdfs.semanticscholar.org/ba21/fd28003994480f713b0a1276160fea2e89b5.pdf,Identification of Individuals from Ears in Real World Conditions,2018 +165,United States,Helen,helen,28.59899755,-81.19712501,University of Central Florida,edu,a40edf6eb979d1ddfe5894fac7f2cf199519669f,citation,https://arxiv.org/pdf/1704.08740.pdf,Improving Facial Attribute Prediction Using Semantic Segmentation,2017 +166,Germany,Helen,helen,48.263011,11.666857,Technical University of Munich,edu,e6178de1ef15a6a973aad2791ce5fbabc2cb8ae5,citation,https://pdfs.semanticscholar.org/e617/8de1ef15a6a973aad2791ce5fbabc2cb8ae5.pdf,Improving Facial Landmark Detection via a Super-Resolution Inception Network,2017 +167,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,9ca0626366e136dac6bfd628cec158e26ed959c7,citation,https://arxiv.org/pdf/1811.02194.pdf,In-the-wild Facial Expression Recognition in Extreme Poses,2017 +168,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,500b92578e4deff98ce20e6017124e6d2053b451,citation,http://eprints.eemcs.utwente.nl/25818/01/Pantic_Incremental_Face_Alignment_in_the_Wild.pdf,Incremental Face Alignment in the Wild,2014 +169,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,500b92578e4deff98ce20e6017124e6d2053b451,citation,http://eprints.eemcs.utwente.nl/25818/01/Pantic_Incremental_Face_Alignment_in_the_Wild.pdf,Incremental Face Alignment in the Wild,2014 +170,China,Helen,helen,40.00229045,116.32098908,Tsinghua University,edu,8dd162c9419d29564e9777dd523382a20c683d89,citation,https://arxiv.org/pdf/1806.02479.pdf,Interlinked Convolutional Neural Networks for Face Parsing,2015 +171,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,2c14c3bb46275da5706c466f9f51f4424ffda914,citation,http://braismartinez.com/media/documents/2015ivc_-_l21-based_regression_and_prediction_accumulation_across_views_for_robust_facial_landmark_detection.pdf,"L2, 1-based regression and prediction accumulation across views for robust facial landmark detection",2016 +172,China,Helen,helen,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,c00f402b9cfc3f8dd2c74d6b3552acbd1f358301,citation,https://arxiv.org/pdf/1608.00207.pdf,Learning deep representation from coarse to fine for face alignment,2016 +173,China,Helen,helen,31.83907195,117.26420748,University of 
Science and Technology of China,edu,b5f79df712ad535d88ae784a617a30c02e0551ca,citation,http://staff.ustc.edu.cn/~juyong/Papers/FaceAlignment-2015.pdf,Locating Facial Landmarks Using Probabilistic Random Forest,2015 +174,United Kingdom,Helen,helen,52.3793131,-1.5604252,University of Warwick,edu,0bc53b338c52fc635687b7a6c1e7c2b7191f42e5,citation,https://pdfs.semanticscholar.org/a32a/8d6d4c3b4d69544763be48ffa7cb0d7f2f23.pdf,Loglet SIFT for Part Description in Deformable Part Models: Application to Face Alignment,2016 +175,United Kingdom,Helen,helen,53.4717306,-2.2399239,Manchester Metropolitan University,edu,6fd4048bfe3123e94c2648e53a56bc6bf8ff4cdd,citation,https://pdfs.semanticscholar.org/6fd4/048bfe3123e94c2648e53a56bc6bf8ff4cdd.pdf,Micro-facial movement detection using spatio-temporal features,2016 +176,United Kingdom,Helen,helen,51.5247272,-0.03931035,Queen Mary University of London,edu,0f81b0fa8df5bf3fcfa10f20120540342a0c92e5,citation,https://arxiv.org/pdf/1501.05152.pdf,"Mirror, mirror on the wall, tell me, is the error small?",2015 +177,South Africa,Helen,helen,-33.95828745,18.45997349,University of Cape Town,edu,36e8ef2e5d52a78dddf0002e03918b101dcdb326,citation,http://www.milbo.org/stasm-files/multiview-active-shape-models-with-sift-for-300w.pdf,Multiview Active Shape Models with SIFT Descriptors for the 300-W Face Landmark Challenge,2013 +178,United States,Helen,helen,40.51865195,-74.44099801,State University of New Jersey,edu,bbc5f4052674278c96abe7ff9dc2d75071b6e3f3,citation,https://pdfs.semanticscholar.org/287b/7baff99d6995fd5852002488eb44659be6c1.pdf,Nonlinear Hierarchical Part-Based Regression for Unconstrained Face Alignment,2016 +179,United States,Helen,helen,33.6404952,-117.8442962,University of California at Irvine,edu,bd13f50b8997d0733169ceba39b6eb1bda3eb1aa,citation,https://arxiv.org/pdf/1506.08347.pdf,Occlusion Coherence: Detecting and Localizing Occluded Faces,2015 +180,United States,Helen,helen,33.6404952,-117.8442962,University of California Irvine,edu,65126e0b1161fc8212643b8ff39c1d71d262fbc1,citation,http://vision.ics.uci.edu/papers/GhiasiF_CVPR_2014/GhiasiF_CVPR_2014.pdf,Occlusion Coherence: Localizing Occluded Faces with a Hierarchical Deformable Part Model,2014 +181,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,4a8480d58c30dc484bda08969e754cd13a64faa1,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/paper_offline.pdf,Offline Deformable Face Tracking in Arbitrary Videos,2015 +182,Germany,Helen,helen,52.14005065,11.64471248,Otto von Guericke University,edu,7d1688ce0b48096e05a66ead80e9270260cb8082,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w44/Saxen_Real_vs._Fake_ICCV_2017_paper.pdf,Real vs. 
Fake Emotion Challenge: Learning to Rank Authenticity from Facial Activity Descriptors,2017 +183,United Kingdom,Helen,helen,51.24303255,-0.59001382,University of Surrey,edu,3c6cac7ecf546556d7c6050f7b693a99cc8a57b3,citation,https://pdfs.semanticscholar.org/3c6c/ac7ecf546556d7c6050f7b693a99cc8a57b3.pdf,Robust facial landmark detection in the wild,2016 +184,Germany,Helen,helen,53.8338371,10.7035939,Institute of Systems and Robotics,edu,4a04d4176f231683fd68ccf0c76fcc0c44d05281,citation,http://home.isr.uc.pt/~pedromartins/Publications/pmartins_icip2018.pdf,Simultaneous Cascaded Regression,2018 +185,United States,Helen,helen,34.0224149,-118.28634407,University of Southern California,edu,11fc332bdcc843aad7475bb4566e73a957dffda5,citation,https://arxiv.org/pdf/1805.03356.pdf,SPG-Net: Segmentation Prediction and Guidance Network for Image Inpainting,2018 +186,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,d140c5add2cddd4a572f07358d666fe00e8f4fe1,citation,https://pdfs.semanticscholar.org/d140/c5add2cddd4a572f07358d666fe00e8f4fe1.pdf,Statistically Learned Deformable Eye Models,2014 +187,Australia,Helen,helen,-33.8809651,151.20107299,University of Technology Sydney,edu,77875d6e4d8c7ed3baeb259fd5696e921f59d7ad,citation,https://arxiv.org/pdf/1803.04108.pdf,Style Aggregated Network for Facial Landmark Detection,2018 +188,Germany,Helen,helen,50.7791703,6.06728733,RWTH Aachen University,edu,d32b155138dafd0a9099980eceec6081ab51b861,citation,https://arxiv.org/pdf/1902.03459.pdf,Super-realtime facial landmark detection and shape fitting by deep regression of shape model parameters,2019 +189,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,59d8fa6fd91cdb72cd0fa74c04016d79ef5a752b,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/Zafeiriou_The_Menpo_Facial_CVPR_2017_paper.pdf,The Menpo Facial Landmark Localisation Challenge: A Step Towards the Solution,2017 +190,Sweden,Helen,helen,55.7039571,13.1902011,Lund University,edu,995d55fdf5b6fe7fb630c93a424700d4bc566104,citation,http://openaccess.thecvf.com/content_iccv_2015/papers/Nilsson_The_One_Triangle_ICCV_2015_paper.pdf,The One Triangle Three Parallelograms Sampling Strategy and Its Application in Shape Regression,2015 +191,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,671bfefb22d2044ab3e4402703bb88a10a7da78a,citation,https://arxiv.org/pdf/1811.03492.pdf,Triple consistency loss for pairing distributions in GAN-based face synthesis.,2018 +192,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,5c124b57699be19cd4eb4e1da285b4a8c84fc80d,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Zhao_Unified_Face_Analysis_2014_CVPR_paper.pdf,Unified Face Analysis by Iterative Multi-output Random Forests,2014 +193,France,Helen,helen,49.3849757,1.0683257,"INSA Rouen, France",edu,891b10c4b3b92ca30c9b93170ec9abd71f6099c4,citation,https://pdfs.semanticscholar.org/891b/10c4b3b92ca30c9b93170ec9abd71f6099c4.pdf,2 New Statement for Structured Output Regression Problems,2015 +194,France,Helen,helen,49.4583047,1.0688892,Rouen University,edu,891b10c4b3b92ca30c9b93170ec9abd71f6099c4,citation,https://pdfs.semanticscholar.org/891b/10c4b3b92ca30c9b93170ec9abd71f6099c4.pdf,2 New Statement for Structured Output Regression Problems,2015 +195,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College 
London,edu,e4754afaa15b1b53e70743880484b8d0736990ff,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885616000147-main.pdf,300 Faces In-The-Wild Challenge: database and results,2016 +196,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,e4754afaa15b1b53e70743880484b8d0736990ff,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885616000147-main.pdf,300 Faces In-The-Wild Challenge: database and results,2016 +197,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,e4754afaa15b1b53e70743880484b8d0736990ff,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885616000147-main.pdf,300 Faces In-The-Wild Challenge: database and results,2016 +198,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,303065c44cf847849d04da16b8b1d9a120cef73a,citation,https://arxiv.org/pdf/1701.05360.pdf,"3D Face Morphable Models ""In-the-Wild""",2017 +199,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,2e3d081c8f0e10f138314c4d2c11064a981c1327,citation,https://arxiv.org/pdf/1603.06015.pdf,A Comprehensive Performance Evaluation of Deformable Face Tracking “In-the-Wild”,2017 +200,Italy,Helen,helen,40.3515155,18.1750161,"National Research Council of Italy, Institute of Applied Sciences and Intelligent Systems, Lecce, Italy",edu,6e38011e38a1c893b90a48e8f8eae0e22d2008e8,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w22/Del_Coco_A_Computer_Vision_ICCV_2017_paper.pdf,A Computer Vision Based Approach for Understanding Emotional Involvements in Children with Autism Spectrum Disorders,2017 +201,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,131e395c94999c55c53afead65d81be61cd349a4,citation,https://arxiv.org/pdf/1612.02203.pdf,A Functional Regression Approach to Facial Landmark Tracking,2018 +202,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,131e395c94999c55c53afead65d81be61cd349a4,citation,https://arxiv.org/pdf/1612.02203.pdf,A Functional Regression Approach to Facial Landmark Tracking,2018 +203,France,Helen,helen,49.4583047,1.0688892,Normandie University,edu,2df4d05119fe3fbf1f8112b3ad901c33728b498a,citation,https://pdfs.semanticscholar.org/2df4/d05119fe3fbf1f8112b3ad901c33728b498a.pdf,A regularization scheme for structured output problems : an application to facial landmark detection,2016 +204,United States,Helen,helen,40.00471095,-83.02859368,Ohio State University,edu,9993f1a7cfb5b0078f339b9a6bfa341da76a3168,citation,https://arxiv.org/pdf/1609.09058.pdf,"A Simple, Fast and Highly-Accurate Algorithm to Recover 3D Shape from 2D Landmarks on a Single Image",2018 +205,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,5f5906168235613c81ad2129e2431a0e5ef2b6e4,citation,https://arxiv.org/pdf/1601.00199.pdf,A Unified Framework for Compositional Fitting of Active Appearance Models,2016 +206,France,Helen,helen,49.4583047,1.0688892,Rouen University,edu,0b0958493e43ca9c131315bcfb9a171d52ecbb8a,citation,https://pdfs.semanticscholar.org/0b09/58493e43ca9c131315bcfb9a171d52ecbb8a.pdf,A Unified Neural Based Model for Structured Output Problems,2015 +207,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,b730908bc1f80b711c031f3ea459e4de09a3d324,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/tifs_aoms.pdf,Active Orientation Models for Face Alignment In-the-Wild,2014 +208,United 
Kingdom,Helen,helen,53.22853665,-0.54873472,University of Lincoln,edu,b730908bc1f80b711c031f3ea459e4de09a3d324,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/tifs_aoms.pdf,Active Orientation Models for Face Alignment In-the-Wild,2014 +209,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,293ade202109c7f23637589a637bdaed06dc37c9,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2016adaptive.pdf,Adaptive cascaded regression,2016 +210,Finland,Helen,helen,65.0592157,25.46632601,University of Oulu,edu,293ade202109c7f23637589a637bdaed06dc37c9,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2016adaptive.pdf,Adaptive cascaded regression,2016 +211,Australia,Helen,helen,-34.920603,138.6062277,Adelaide University,edu,45e7ddd5248977ba8ec61be111db912a4387d62f,citation,https://arxiv.org/pdf/1711.00253.pdf,Adversarial Learning of Structure-Aware Fully Convolutional Networks for Landmark Localization,2017 +212,China,Helen,helen,32.0565957,118.77408833,Nanjing University,edu,45e7ddd5248977ba8ec61be111db912a4387d62f,citation,https://arxiv.org/pdf/1711.00253.pdf,Adversarial Learning of Structure-Aware Fully Convolutional Networks for Landmark Localization,2017 +213,China,Helen,helen,32.035225,118.855317,Nanjing University of Science & Technology,edu,45e7ddd5248977ba8ec61be111db912a4387d62f,citation,https://arxiv.org/pdf/1711.00253.pdf,Adversarial Learning of Structure-Aware Fully Convolutional Networks for Landmark Localization,2017 +214,United States,Helen,helen,38.99203005,-76.9461029,University of Maryland College Park,edu,3504907a2e3c81d78e9dfe71c93ac145b1318f9c,citation,https://arxiv.org/pdf/1605.02686.pdf,An End-to-End System for Unconstrained Face Verification with Deep Convolutional Neural Networks,2015 +215,United States,Helen,helen,39.738444,-84.17918747,University of Dayton,edu,1f9ae272bb4151817866511bd970bffb22981a49,citation,https://arxiv.org/pdf/1709.03170.pdf,An Iterative Regression Approach for Face Pose Estimation from RGB Images,2017 +216,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,86c053c162c08bc3fe093cc10398b9e64367a100,citation,https://pdfs.semanticscholar.org/86c0/53c162c08bc3fe093cc10398b9e64367a100.pdf,Cascade of forests for face alignment,2015 +217,United Kingdom,Helen,helen,51.5247272,-0.03931035,Queen Mary University of London,edu,86c053c162c08bc3fe093cc10398b9e64367a100,citation,https://pdfs.semanticscholar.org/86c0/53c162c08bc3fe093cc10398b9e64367a100.pdf,Cascade of forests for face alignment,2015 +218,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,056ba488898a1a1b32daec7a45e0d550e0c51ae4,citation,https://arxiv.org/pdf/1608.01137.pdf,Cascaded Continuous Regression for Real-Time Incremental Face Tracking,2016 +219,United States,Helen,helen,43.07982815,-89.43066425,University of Wisconsin Madison,edu,2e091b311ac48c18aaedbb5117e94213f1dbb529,citation,http://pages.cs.wisc.edu/~lizhang/projects/collab-face-landmarks/SmithECCV2014.pdf,Collaborative Facial Landmark Localization for Transferring Annotations Across Datasets,2014 +220,China,Helen,helen,40.0044795,116.370238,Chinese Academy of Sciences,edu,faead8f2eb54c7bc33bc7d0569adc7a4c2ec4c3b,citation,https://arxiv.org/pdf/1611.10152.pdf,Combining Data-Driven and Model-Driven Methods for Robust Facial Landmark Detection,2018 +221,Canada,Helen,helen,45.3290959,-75.6619858,"National Research Council, 
Italy",edu,08ecc281cdf954e405524287ee5920e7c4fb597e,citation,https://pdfs.semanticscholar.org/08ec/c281cdf954e405524287ee5920e7c4fb597e.pdf,Computational Assessment of Facial Expression Production in ASD Children,2018 +222,United Kingdom,Helen,helen,51.5247272,-0.03931035,Queen Mary University of London,edu,dee406a7aaa0f4c9d64b7550e633d81bc66ff451,citation,https://arxiv.org/pdf/1710.01453.pdf,Content-Adaptive Sketch Portrait Generation by Decompositional Representation Learning,2017 +223,China,Helen,helen,23.09461185,113.28788994,Sun Yat-Sen University,edu,dee406a7aaa0f4c9d64b7550e633d81bc66ff451,citation,https://arxiv.org/pdf/1710.01453.pdf,Content-Adaptive Sketch Portrait Generation by Decompositional Representation Learning,2017 +224,United Kingdom,Helen,helen,52.17638955,0.14308882,University of Cambridge,edu,029b53f32079063047097fa59cfc788b2b550c4b,citation,https://pdfs.semanticscholar.org/f4e3/c42df13aeed9196647d4e3fe0f84fa725252.pdf,Continuous Conditional Neural Fields for Structured Regression,2014 +225,United States,Helen,helen,34.0224149,-118.28634407,University of Southern California,edu,029b53f32079063047097fa59cfc788b2b550c4b,citation,https://pdfs.semanticscholar.org/f4e3/c42df13aeed9196647d4e3fe0f84fa725252.pdf,Continuous Conditional Neural Fields for Structured Regression,2014 +226,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,88e2efab01e883e037a416c63a03075d66625c26,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w36/Zadeh_Convolutional_Experts_Constrained_ICCV_2017_paper.pdf,Convolutional Experts Constrained Local Model for 3D Facial Landmark Detection,2017 +227,Sweden,Helen,helen,59.34986645,18.07063213,"KTH Royal Institute of Technology, Stockholm",edu,656a59954de3c9fcf82ffcef926af6ade2f3fdb5,citation,https://pdfs.semanticscholar.org/656a/59954de3c9fcf82ffcef926af6ade2f3fdb5.pdf,Convolutional Network Representation for Visual Recognition,2017 +228,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,7360a2adcd6e3fe744b7d7aec5c08ee31094dfd4,citation,https://ibug.doc.ic.ac.uk/media/uploads/documents/deep-deformable-convolutional.pdf,Deep and Deformable: Convolutional Mixtures of Deformable Part-Based Models,2018 +229,Finland,Helen,helen,65.0592157,25.46632601,University of Oulu,edu,7360a2adcd6e3fe744b7d7aec5c08ee31094dfd4,citation,https://ibug.doc.ic.ac.uk/media/uploads/documents/deep-deformable-convolutional.pdf,Deep and Deformable: Convolutional Mixtures of Deformable Part-Based Models,2018 +230,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,5239001571bc64de3e61be0be8985860f08d7e7e,citation,https://arxiv.org/pdf/1607.06871.pdf,Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling,2016 +231,United States,Helen,helen,45.57022705,-122.63709346,Concordia University,edu,5239001571bc64de3e61be0be8985860f08d7e7e,citation,https://arxiv.org/pdf/1607.06871.pdf,Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling,2016 +232,United States,Helen,helen,37.3239177,-122.0129693,"NEC Labs, Cupertino, CA",company,61f04606528ecf4a42b49e8ac2add2e9f92c0def,citation,https://arxiv.org/pdf/1605.01014.pdf,Deep Deformation Network for Object Landmark Localization,2016 +233,France,Helen,helen,49.4583047,1.0688892,Normandie University,edu,9ca7899338129f4ba6744f801e722d53a44e4622,citation,https://arxiv.org/pdf/1504.07550.pdf,Deep neural networks regularization for structured output prediction,2018 
+234,China,Helen,helen,39.9808333,116.34101249,Beihang University,edu,5a7e62fdea39a4372e25cbbadc01d9b2204af95a,citation,http://openaccess.thecvf.com/content_cvpr_2018/papers/Miao_Direct_Shape_Regression_CVPR_2018_paper.pdf,Direct Shape Regression Networks for End-to-End Face Alignment,2018 +235,United States,Helen,helen,32.7283683,-97.11201835,University of Texas at Arlington,edu,5a7e62fdea39a4372e25cbbadc01d9b2204af95a,citation,http://openaccess.thecvf.com/content_cvpr_2018/papers/Miao_Direct_Shape_Regression_CVPR_2018_paper.pdf,Direct Shape Regression Networks for End-to-End Face Alignment,2018 +236,China,Helen,helen,34.1235825,108.83546,Xidian University,edu,5a7e62fdea39a4372e25cbbadc01d9b2204af95a,citation,http://openaccess.thecvf.com/content_cvpr_2018/papers/Miao_Direct_Shape_Regression_CVPR_2018_paper.pdf,Direct Shape Regression Networks for End-to-End Face Alignment,2018 +237,United States,Helen,helen,43.07982815,-89.43066425,University of Wisconsin Madison,edu,0eac652139f7ab44ff1051584b59f2dc1757f53b,citation,https://arxiv.org/pdf/1611.01584.pdf,Efficient Branching Cascaded Regression for Face Alignment under Significant Head Rotation,2016 +238,Brazil,Helen,helen,-13.0024602,-38.5089752,Federal University of Bahia,edu,b07582d1a59a9c6f029d0d8328414c7bef64dca0,citation,https://arxiv.org/pdf/1710.07662.pdf,Employing Fusion of Learned and Handcrafted Features for Unconstrained Ear Recognition,2018 +239,United States,Helen,helen,28.0599999,-82.41383619,University of South Florida,edu,b07582d1a59a9c6f029d0d8328414c7bef64dca0,citation,https://arxiv.org/pdf/1710.07662.pdf,Employing Fusion of Learned and Handcrafted Features for Unconstrained Ear Recognition,2018 +240,Spain,Helen,helen,41.5008957,2.111553,Autonomous University of Barcelona,edu,a40f8881a36bc01f3ae356b3e57eac84e989eef0,citation,https://arxiv.org/pdf/1703.03305.pdf,"End-to-end semantic face segmentation with conditional random fields as convolutional, recurrent and adversarial networks",2017 +241,Netherlands,Helen,helen,51.816701,5.865272,Radboud University Nijmegen,edu,a40f8881a36bc01f3ae356b3e57eac84e989eef0,citation,https://arxiv.org/pdf/1703.03305.pdf,"End-to-end semantic face segmentation with conditional random fields as convolutional, recurrent and adversarial networks",2017 +242,Spain,Helen,helen,41.40657415,2.1945341,Universitat Oberta de Catalunya,edu,a40f8881a36bc01f3ae356b3e57eac84e989eef0,citation,https://arxiv.org/pdf/1703.03305.pdf,"End-to-end semantic face segmentation with conditional random fields as convolutional, recurrent and adversarial networks",2017 +243,United States,Helen,helen,34.0224149,-118.28634407,University of Southern California,edu,49258cc3979103681848284470056956b77caf80,citation,https://5443dcab-a-62cb3a1a-s-sites.googlegroups.com/site/tuftsyuewu/epat-euclidean-perturbation.pdf?attachauth=ANoY7crlk9caZscfn0KRjed81DVoV-Ec6ZHI7txQrJiM_NBic36WKIg-ODwefcBtfgfKdS1iX28MlSXNyB7pE0D7opPjlGqxBVVa1UuIiydhFOgkXlXGfrYqSPS6749JeYWDkfvwWraRfB_CK8bu77jAEA2sIVNgaVRa_7zvmzwnstLwSUowbYC1LRc5yDt8ieT_jdEb_TuhMgR2j03BdHgyUkVjl0TXRukYHWglDOxzHAKwj0vsb4U%3D&attredirects=0,EPAT: Euclidean Perturbation Analysis and Transform - An Agnostic Data Adaptation Framework for Improving Facial Landmark Detectors,2017 +244,United States,Helen,helen,37.3307703,-121.8940951,Adobe,company,992ebd81eb448d1eef846bfc416fc929beb7d28b,citation,https://pdfs.semanticscholar.org/992e/bd81eb448d1eef846bfc416fc929beb7d28b.pdf,Exemplar-Based Face Parsing Supplementary Material,2013 +245,United 
States,Helen,helen,43.07982815,-89.43066425,University of Wisconsin Madison,edu,992ebd81eb448d1eef846bfc416fc929beb7d28b,citation,https://pdfs.semanticscholar.org/992e/bd81eb448d1eef846bfc416fc929beb7d28b.pdf,Exemplar-Based Face Parsing Supplementary Material,2013 +246,China,Helen,helen,35.86166,104.195397,"Megvii Inc. (Face++), China",company,1a8ccc23ed73db64748e31c61c69fe23c48a2bb1,citation,http://www.cv-foundation.org/openaccess/content_iccv_workshops_2013/W11/papers/Zhou_Extensive_Facial_Landmark_2013_ICCV_paper.pdf,Extensive Facial Landmark Localization with Coarse-to-Fine Convolutional Network Cascade,2013 +247,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,6d8c9a1759e7204eacb4eeb06567ad0ef4229f93,citation,https://arxiv.org/pdf/1707.05938.pdf,"Face Alignment Robust to Pose, Expressions and Occlusions",2016 +248,United States,Helen,helen,42.718568,-84.47791571,Michigan State University,edu,6d8c9a1759e7204eacb4eeb06567ad0ef4229f93,citation,https://arxiv.org/pdf/1707.05938.pdf,"Face Alignment Robust to Pose, Expressions and Occlusions",2016 +249,Poland,Helen,helen,52.22165395,21.00735776,Warsaw University of Technology,edu,eb48a58b873295d719827e746d51b110f5716d6c,citation,https://arxiv.org/pdf/1706.01820.pdf,Face Alignment Using K-Cluster Regression Forests With Weighted Splitting,2016 +250,France,Helen,helen,48.8507603,2.3412757,"Sorbonne Universités, Paris, France",edu,31e57fa83ac60c03d884774d2b515813493977b9,citation,https://arxiv.org/pdf/1703.01597.pdf,Face Alignment with Cascaded Semi-Parametric Deep Greedy Neural Forests,2018 +251,United States,Helen,helen,30.44235995,-84.29747867,Florida State University,edu,9207671d9e2b668c065e06d9f58f597601039e5e,citation,https://pdfs.semanticscholar.org/9207/671d9e2b668c065e06d9f58f597601039e5e.pdf,Face Detection Using a 3D Model on Face Keypoints,2014 +252,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,bc704680b5032eadf78c4e49f548ba14040965bf,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Trigeorgis_Face_Normals_In-The-Wild_CVPR_2017_paper.pdf,"Face Normals ""In-the-Wild"" Using Fully Convolutional Networks",2017 +253,United Kingdom,Helen,helen,51.5231607,-0.1282037,University College London,edu,bc704680b5032eadf78c4e49f548ba14040965bf,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Trigeorgis_Face_Normals_In-The-Wild_CVPR_2017_paper.pdf,"Face Normals ""In-the-Wild"" Using Fully Convolutional Networks",2017 +254,China,Helen,helen,23.09461185,113.28788994,Sun Yat-Sen University,edu,a4ce0f8cfa7d9aa343cb30b0792bb379e20ef41b,citation,https://arxiv.org/pdf/1812.03887.pdf,Facial Landmark Machines: A Backbone-Branches Architecture with Progressive Representation Learning,2018 +255,China,Helen,helen,22.2081469,114.25964115,University of Hong Kong,edu,a4ce0f8cfa7d9aa343cb30b0792bb379e20ef41b,citation,https://arxiv.org/pdf/1812.03887.pdf,Facial Landmark Machines: A Backbone-Branches Architecture with Progressive Representation Learning,2018 +256,Israel,Helen,helen,32.06932925,34.84334339,Bar-Ilan University,edu,e4f032ee301d4a4b3d598e6fa6cffbcdb9cdfdd1,citation,https://arxiv.org/pdf/1805.01760.pdf,Facial Landmark Point Localization using Coarse-to-Fine Deep Recurrent Neural Network,2018 +257,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,ebedc841a2c1b3a9ab7357de833101648281ff0e,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885615000116-main.pdf,Facial landmarking for in-the-wild images with local 
inference based on global appearance,2015 +258,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,ebedc841a2c1b3a9ab7357de833101648281ff0e,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885615000116-main.pdf,Facial landmarking for in-the-wild images with local inference based on global appearance,2015 +259,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,375435fb0da220a65ac9e82275a880e1b9f0a557,citation,http://eprints.lincoln.ac.uk/17528/7/__ddat02_staffhome_jpartridge_tzimiroTPAMI15.pdf,From Pixels to Response Maps: Discriminative Image Filtering for Face Alignment in the Wild,2015 +260,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,375435fb0da220a65ac9e82275a880e1b9f0a557,citation,http://eprints.lincoln.ac.uk/17528/7/__ddat02_staffhome_jpartridge_tzimiroTPAMI15.pdf,From Pixels to Response Maps: Discriminative Image Filtering for Face Alignment in the Wild,2015 +261,China,Helen,helen,31.30104395,121.50045497,Fudan University,edu,37381718559f767fc496cc34ceb98ff18bc7d3e1,citation,https://pdfs.semanticscholar.org/3738/1718559f767fc496cc34ceb98ff18bc7d3e1.pdf,Harnessing Synthesized Abstraction Images to Improve Facial Attribute Recognition,2018 +262,China,Helen,helen,31.19884,121.432567,Jiaotong University,edu,37381718559f767fc496cc34ceb98ff18bc7d3e1,citation,https://pdfs.semanticscholar.org/3738/1718559f767fc496cc34ceb98ff18bc7d3e1.pdf,Harnessing Synthesized Abstraction Images to Improve Facial Attribute Recognition,2018 +263,Spain,Helen,helen,42.797263,-1.6321518,Public University of Navarra,edu,8c0a47c61143ceb5bbabef403923e4bf92fb854d,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w22/Larumbe_Improved_Strategies_for_ICCV_2017_paper.pdf,Improved Strategies for HPE Employing Learning-by-Synthesis Approaches,2017 +264,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,3352426a67eabe3516812cb66a77aeb8b4df4d1b,citation,https://arxiv.org/pdf/1708.06023.pdf,Joint Multi-view Face Alignment in the Wild,2017 +265,China,Helen,helen,22.4162632,114.2109318,Chinese University of Hong Kong,edu,390f3d7cdf1ce127ecca65afa2e24c563e9db93b,citation,https://pdfs.semanticscholar.org/6e80/a3558f9170f97c103137ea2e18ddd782e8d7.pdf,Learning and Transferring Multi-task Deep Representation for Face Alignment,2014 +266,China,Helen,helen,40.00229045,116.32098908,Tsinghua University,edu,df80fed59ffdf751a20af317f265848fe6bfb9c9,citation,http://ivg.au.tsinghua.edu.cn/paper/2017_Learning%20deep%20sharable%20and%20structural%20detectors%20for%20face%20alignment.pdf,Learning Deep Sharable and Structural Detectors for Face Alignment,2017 +267,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,d9deafd9d9e60657a7f34df5f494edff546c4fb8,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Wang_Learning_the_Multilinear_CVPR_2017_paper.pdf,Learning the Multilinear Structure of Visual Data,2017 +268,Canada,Helen,helen,45.504384,-73.6128829,Polytechnique Montréal,edu,4f77a37753c03886ca9c9349723ec3bbfe4ee967,citation,http://www.cv-foundation.org/openaccess/content_iccv_workshops_2013/W11/papers/Hasan_Localizing_Facial_Keypoints_2013_ICCV_paper.pdf,"Localizing Facial Keypoints with Global Descriptor Search, Neighbour Alignment and Locally Linear Models",2013 +269,Canada,Helen,helen,43.66333345,-79.39769975,University of 
Toronto,edu,4f77a37753c03886ca9c9349723ec3bbfe4ee967,citation,http://www.cv-foundation.org/openaccess/content_iccv_workshops_2013/W11/papers/Hasan_Localizing_Facial_Keypoints_2013_ICCV_paper.pdf,"Localizing Facial Keypoints with Global Descriptor Search, Neighbour Alignment and Locally Linear Models",2013 +270,United States,Helen,helen,38.7768106,-94.9442982,Amazon,company,e7265c560b3f10013bf70aacbbf0eb4631b7e2aa,citation,https://arxiv.org/pdf/1805.10483.pdf,Look at Boundary: A Boundary-Aware Face Alignment Algorithm,2018 +271,China,Helen,helen,39.993008,116.329882,SenseTime,company,e7265c560b3f10013bf70aacbbf0eb4631b7e2aa,citation,https://arxiv.org/pdf/1805.10483.pdf,Look at Boundary: A Boundary-Aware Face Alignment Algorithm,2018 +272,China,Helen,helen,40.00229045,116.32098908,Tsinghua University,edu,e7265c560b3f10013bf70aacbbf0eb4631b7e2aa,citation,https://arxiv.org/pdf/1805.10483.pdf,Look at Boundary: A Boundary-Aware Face Alignment Algorithm,2018 +273,United States,Helen,helen,32.87935255,-117.23110049,"University of California, San Diego",edu,1b0a071450c419138432c033f722027ec88846ea,citation,http://cvrr.ucsd.edu/publications/2016/YuenMartinTrivediITSC2016.pdf,Looking at faces in a vehicle: A deep CNN based approach and evaluation,2016 +274,Iran,Helen,helen,35.704514,51.40972058,Amirkabir University of Technology,edu,6f5ce5570dc2960b8b0e4a0a50eab84b7f6af5cb,citation,https://arxiv.org/pdf/1706.06247.pdf,Low Resolution Face Recognition Using a Two-Branch Deep Convolutional Neural Network Architecture,2017 +275,United States,Helen,helen,42.3583961,-71.09567788,MIT,edu,6f5ce5570dc2960b8b0e4a0a50eab84b7f6af5cb,citation,https://arxiv.org/pdf/1706.06247.pdf,Low Resolution Face Recognition Using a Two-Branch Deep Convolutional Neural Network Architecture,2017 +276,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,47e8db3d9adb79a87c8c02b88f432f911eb45dc5,citation,https://arxiv.org/pdf/1509.05715.pdf,MAGMA: Multilevel Accelerated Gradient Mirror Descent Algorithm for Large-Scale Convex Composite Minimization,2016 +277,United Kingdom,Helen,helen,53.46600455,-2.23300881,University of Manchester,edu,daa4cfde41d37b2ab497458e331556d13dd14d0b,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Rajamanoharan_Multi-View_Constrained_Local_ICCV_2015_paper.pdf,Multi-view Constrained Local Models for Large Head Angle Facial Tracking,2015 +278,China,Helen,helen,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,d03265ea9200a993af857b473c6bf12a095ca178,citation,https://pdfs.semanticscholar.org/d032/65ea9200a993af857b473c6bf12a095ca178.pdf,Multiple deep convolutional neural networks averaging for face alignment,2015 +279,France,Helen,helen,49.3849757,1.0683257,"INSA Rouen, France",edu,0a6a25ee84fc0bf7284f41eaa6fefaa58b5b329a,citation,https://arxiv.org/pdf/1807.05292.pdf,Neural Networks Regularization Through Representation Learning,2018 +280,France,Helen,helen,49.4583047,1.0688892,"LITIS, Université de Rouen, Rouen, France",edu,0a6a25ee84fc0bf7284f41eaa6fefaa58b5b329a,citation,https://arxiv.org/pdf/1807.05292.pdf,Neural Networks Regularization Through Representation Learning,2018 +281,United Kingdom,Helen,helen,53.7641378,-2.7092453,University of Central Lancashire,edu,ef52f1e2b52fd84a7e22226ed67132c6ce47b829,citation,https://pdfs.semanticscholar.org/ef52/f1e2b52fd84a7e22226ed67132c6ce47b829.pdf,Online Eye Status Detection in the Wild with Convolutional Neural Networks,2017 +282,United 
Kingdom,Helen,helen,50.7944026,-1.0971748,Cambridge University,edu,2fda461869f84a9298a0e93ef280f79b9fb76f94,citation,http://multicomp.cs.cmu.edu/wp-content/uploads/2017/09/2016_WACV_Baltrusaitis_OpenFace.pdf,OpenFace: An open source facial behavior analysis toolkit,2016 +283,United States,Helen,helen,40.4441619,-79.94272826,Carnegie Mellon University,edu,2fda461869f84a9298a0e93ef280f79b9fb76f94,citation,http://multicomp.cs.cmu.edu/wp-content/uploads/2017/09/2016_WACV_Baltrusaitis_OpenFace.pdf,OpenFace: An open source facial behavior analysis toolkit,2016 +284,Sweden,Helen,helen,59.34986645,18.07063213,"KTH Royal Institute of Technology, Stockholm",edu,12d8730da5aab242795bdff17b30b6e0bac82998,citation,https://arxiv.org/pdf/1411.6509.pdf,Persistent Evidence of Local Image Properties in Generic ConvNets,2015 +285,United States,Helen,helen,33.6404952,-117.8442962,UC Irvine,edu,5711400c59a162112c57e9f899147d457537f701,citation,https://pdfs.semanticscholar.org/5711/400c59a162112c57e9f899147d457537f701.pdf,Recognizing and Segmenting Objects in the Presence of Occlusion and Clutter,2016 +286,United States,Helen,helen,41.2097516,-73.8026467,IBM Research T. J. Watson Center,company,ac5d0705a9ddba29151fd539c668ba2c0d16deb6,citation,https://arxiv.org/pdf/1801.06066.pdf,RED-Net: A Recurrent Encoder–Decoder Network for Video-Based Face Alignment,2018 +287,United States,Helen,helen,40.47913175,-74.43168868,Rutgers University,edu,ac5d0705a9ddba29151fd539c668ba2c0d16deb6,citation,https://arxiv.org/pdf/1801.06066.pdf,RED-Net: A Recurrent Encoder–Decoder Network for Video-Based Face Alignment,2018 +288,Singapore,Helen,helen,1.3484104,103.68297965,Nanyang Technological University,edu,2bfccbf6f4e88a92a7b1f2b5c588b68c5fa45a92,citation,https://arxiv.org/pdf/1807.11079.pdf,ReenactGAN: Learning to Reenact Faces via Boundary Transfer,2018 +289,China,Helen,helen,39.993008,116.329882,SenseTime,company,2bfccbf6f4e88a92a7b1f2b5c588b68c5fa45a92,citation,https://arxiv.org/pdf/1807.11079.pdf,ReenactGAN: Learning to Reenact Faces via Boundary Transfer,2018 +290,Italy,Helen,helen,46.0658836,11.1159894,University of Trento,edu,f61829274cfe64b94361e54351f01a0376cd1253,citation,http://openaccess.thecvf.com/content_iccv_2015/papers/Tulyakov_Regressing_a_3D_ICCV_2015_paper.pdf,Regressing a 3D Face Shape from a Single Image,2015 +291,Singapore,Helen,helen,1.3484104,103.68297965,Nanyang Technological University,edu,4d23bb65c6772cb374fc05b1f10dedf9b43e63cf,citation,https://pdfs.semanticscholar.org/4d23/bb65c6772cb374fc05b1f10dedf9b43e63cf.pdf,Robust face alignment and partial face recognition,2016 +292,United States,Helen,helen,34.13710185,-118.12527487,California Institute of Technology,edu,2724ba85ec4a66de18da33925e537f3902f21249,citation,,Robust Face Landmark Estimation under Occlusion,2013 +293,United States,Helen,helen,47.6423318,-122.1369302,Microsoft,company,2724ba85ec4a66de18da33925e537f3902f21249,citation,,Robust Face Landmark Estimation under Occlusion,2013 +294,United States,Helen,helen,42.7298459,-73.67950216,Rensselaer Polytechnic Institute,edu,1c1f957d85b59d23163583c421755869f248ceef,citation,https://arxiv.org/pdf/1709.08127.pdf,Robust Facial Landmark Detection Under Significant Head Poses and Occlusion,2015 +295,Germany,Helen,helen,48.263011,11.666857,Technical University of Munich,edu,1121873326ab0c9f324b004aa0970a31d4f83eb8,citation,http://openaccess.thecvf.com/content_cvpr_2018/papers/Merget_Robust_Facial_Landmark_CVPR_2018_paper.pdf,Robust Facial Landmark Detection via a Fully-Convolutional Local-Global Context 
Network,2018 +296,United States,Helen,helen,42.7298459,-73.67950216,Rensselaer Polytechnic Institute,edu,c3d3d2229500c555c7a7150a8b126ef874cbee1c,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Wu_Shape_Augmented_Regression_ICCV_2015_paper.pdf,Shape Augmented Regression Method for Face Alignment,2015 +297,Canada,Helen,helen,43.66333345,-79.39769975,University of Toronto,edu,33ae696546eed070717192d393f75a1583cd8e2c,citation,https://arxiv.org/pdf/1708.08508.pdf,Subspace selection to suppress confounding source domain information in AAM transfer learning,2017 +298,Finland,Helen,helen,65.0592157,25.46632601,University of Oulu,edu,f3745aa4a723d791d3a04ddf7a5546e411226459,citation,,The Menpo Benchmark for Multi-pose 2D and 3D Facial Landmark Localisation and Tracking,2018 +299,United Kingdom,Helen,helen,51.59029705,-0.22963221,Middlesex University,edu,f3745aa4a723d791d3a04ddf7a5546e411226459,citation,,The Menpo Benchmark for Multi-pose 2D and 3D Facial Landmark Localisation and Tracking,2018 +300,United Kingdom,Helen,helen,50.7369302,-3.53647672,University of Exeter,edu,f3745aa4a723d791d3a04ddf7a5546e411226459,citation,,The Menpo Benchmark for Multi-pose 2D and 3D Facial Landmark Localisation and Tracking,2018 +301,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,f3745aa4a723d791d3a04ddf7a5546e411226459,citation,,The Menpo Benchmark for Multi-pose 2D and 3D Facial Landmark Localisation and Tracking,2018 +302,Germany,Helen,helen,49.01546,8.4257999,Fraunhofer,company,50ccc98d9ce06160cdf92aaf470b8f4edbd8b899,citation,http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W08/papers/Qu_Towards_Robust_Cascaded_2015_CVPR_paper.pdf,Towards robust cascaded regression for face alignment in the wild,2015 +303,Germany,Helen,helen,49.10184375,8.4331256,Karlsruhe Institute of Technology,edu,50ccc98d9ce06160cdf92aaf470b8f4edbd8b899,citation,http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W08/papers/Qu_Towards_Robust_Cascaded_2015_CVPR_paper.pdf,Towards robust cascaded regression for face alignment in the wild,2015 +304,Switzerland,Helen,helen,46.5184121,6.5684654,École Polytechnique Fédérale de Lausanne,edu,50ccc98d9ce06160cdf92aaf470b8f4edbd8b899,citation,http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W08/papers/Qu_Towards_Robust_Cascaded_2015_CVPR_paper.pdf,Towards robust cascaded regression for face alignment in the wild,2015 +305,Poland,Helen,helen,52.22165395,21.00735776,Warsaw University of Technology,edu,e52272f92fa553687f1ac068605f1de929efafc2,citation,https://repo.pw.edu.pl/docstore/download/WUT8aeb20bbb6964b7da1cfefbf2e370139/1-s2.0-S0952197617301227-main.pdf,Using a Probabilistic Neural Network for lip-based biometric verification,2017 +306,United States,Helen,helen,33.6404952,-117.8442962,UC Irvine,edu,397085122a5cade71ef6c19f657c609f0a4f7473,citation,https://pdfs.semanticscholar.org/db11/4901d09a07ab66bffa6986bc81303e133ae1.pdf,Using Segmentation to Predict the Absence of Occluded Parts,2015 +307,China,Helen,helen,39.980196,116.333305,"CASIA, China",edu,708f4787bec9d7563f4bb8b33834de445147133b,citation,http://openaccess.thecvf.com/content_ICCV_2017/papers/Huang_Wavelet-SRNet_A_Wavelet-Based_ICCV_2017_paper.pdf,Wavelet-SRNet: A Wavelet-Based CNN for Multi-scale Face Super Resolution,2017 +308,China,Helen,helen,40.0044795,116.370238,Chinese Academy of 
Sciences,edu,708f4787bec9d7563f4bb8b33834de445147133b,citation,http://openaccess.thecvf.com/content_ICCV_2017/papers/Huang_Wavelet-SRNet_A_Wavelet-Based_ICCV_2017_paper.pdf,Wavelet-SRNet: A Wavelet-Based CNN for Multi-scale Face Super Resolution,2017 +309,United Kingdom,Helen,helen,51.49887085,-0.17560797,Imperial College London,edu,044d9a8c61383312cdafbcc44b9d00d650b21c70,citation,,300 Faces in-the-Wild Challenge: The First Facial Landmark Localization Challenge,2013 +310,United Kingdom,Helen,helen,53.22853665,-0.54873472,University of Lincoln,edu,044d9a8c61383312cdafbcc44b9d00d650b21c70,citation,,300 Faces in-the-Wild Challenge: The First Facial Landmark Localization Challenge,2013 +311,Netherlands,Helen,helen,52.2380139,6.8566761,University of Twente,edu,044d9a8c61383312cdafbcc44b9d00d650b21c70,citation,,300 Faces in-the-Wild Challenge: The First Facial Landmark Localization Challenge,2013 +312,Italy,Helen,helen,46.0658836,11.1159894,University of Trento,edu,b48d3694a8342b6efc18c9c9124c62406e6bf3b3,citation,,Recurrent Convolutional Shape Regression,2018 +313,United States,Helen,helen,33.9850469,-118.4694832,"Snapchat Research, Venice, CA",company,b48d3694a8342b6efc18c9c9124c62406e6bf3b3,citation,,Recurrent Convolutional Shape Regression,2018 +314,Italy,Helen,helen,40.3515155,18.1750161,"National Research Council of Italy, Institute of Applied Sciences and Intelligent Systems, Lecce, Italy",edu,523db6dee0e60a2d513759fa04aa96f2fed40ff4,citation,,Study of Mechanisms of Social Interaction Stimulation in Autism Spectrum Disorder by Assisted Humanoid Robot,2018 +315,Italy,Helen,helen,38.1937335,15.5542057,"National Research Council of Italy, Institute of Applied Sciences and Intelligent Systems, Messina, Italy",edu,523db6dee0e60a2d513759fa04aa96f2fed40ff4,citation,,Study of Mechanisms of Social Interaction Stimulation in Autism Spectrum Disorder by Assisted Humanoid Robot,2018 +316,United States,Helen,helen,37.3307703,-121.8940951,Adobe,company,95f12d27c3b4914e0668a268360948bce92f7db3,citation,https://pdfs.semanticscholar.org/95f1/2d27c3b4914e0668a268360948bce92f7db3.pdf,Interactive Facial Feature Localization,2012 +317,United States,Helen,helen,37.3936717,-122.0807262,Facebook,company,95f12d27c3b4914e0668a268360948bce92f7db3,citation,https://pdfs.semanticscholar.org/95f1/2d27c3b4914e0668a268360948bce92f7db3.pdf,Interactive Facial Feature Localization,2012 +318,United States,Helen,helen,40.11116745,-88.22587665,"University of Illinois, Urbana-Champaign",edu,95f12d27c3b4914e0668a268360948bce92f7db3,citation,https://pdfs.semanticscholar.org/95f1/2d27c3b4914e0668a268360948bce92f7db3.pdf,Interactive Facial Feature Localization,2012 +319,United States,Helen,helen,33.9832526,-118.40417,USC,edu,0a34fe39e9938ae8c813a81ae6d2d3a325600e5c,citation,https://arxiv.org/pdf/1708.07517.pdf,FacePoseNet: Making a Case for Landmark-Free Face Alignment,2017 +320,Israel,Helen,helen,32.77824165,34.99565673,Open University of Israel,edu,0a34fe39e9938ae8c813a81ae6d2d3a325600e5c,citation,https://arxiv.org/pdf/1708.07517.pdf,FacePoseNet: Making a Case for Landmark-Free Face Alignment,2017 +321,United Kingdom,Helen,helen,52.9387428,-1.20029569,University of Nottingham,edu,c46a4db7247d26aceafed3e4f38ce52d54361817,citation,https://arxiv.org/pdf/1609.09642.pdf,A CNN Cascade for Landmark Guided Semantic Part Segmentation,2016 +322,United States,Helen,helen,38.9869183,-76.9425543,"Maryland Univ., College Park, MD, 
USA",edu,59b6e9320a4e1de9216c6fc49b4b0309211b17e8,citation,https://pdfs.semanticscholar.org/59b6/e9320a4e1de9216c6fc49b4b0309211b17e8.pdf,Robust Representations for unconstrained Face Recognition and its Applications,2016 diff --git a/site/datasets/verified/ijb_c.csv b/site/datasets/verified/ijb_c.csv index 4b8c251d..a728f73d 100644 --- a/site/datasets/verified/ijb_c.csv +++ b/site/datasets/verified/ijb_c.csv @@ -1,6 +1,5 @@ id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year 0,,IJB-C,ijb_c,0.0,0.0,,,,main,,IARPA Janus Benchmark - C: Face Dataset and Protocol,2018 -1,United Kingdom,IJB-C,ijb_c,51.7520849,-1.2516646,Oxford University,edu,9286eab328444401a848cd2e13186840be8f0409,citation,https://arxiv.org/pdf/1807.09192.pdf,Multicolumn Networks for Face Recognition,2018 -2,United Kingdom,IJB-C,ijb_c,51.7520849,-1.2516646,Oxford University,edu,ac5ab8f71edde6d1a2129da12d051ed03a8446a1,citation,https://arxiv.org/pdf/1807.11440.pdf,Comparator Networks,2018 -3,United States,IJB-C,ijb_c,29.7207902,-95.34406271,University of Houston,edu,3b3941524d97e7f778367a1250ba1efb9205d5fc,citation,https://arxiv.org/pdf/1901.09447.pdf,Open Source Face Recognition Performance Evaluation Package,2019 -4,United States,IJB-C,ijb_c,42.718568,-84.47791571,Michigan State University,edu,fa03cac5aa5192822a85273852090ca20a6c47aa,citation,https://arxiv.org/pdf/1805.00611.pdf,Towards Interpretable Face Recognition,2018 +1,United Kingdom,IJB-C,ijb_c,51.7520849,-1.2516646,Oxford University,edu,ac5ab8f71edde6d1a2129da12d051ed03a8446a1,citation,https://arxiv.org/pdf/1807.11440.pdf,Comparator Networks,2018 +2,United States,IJB-C,ijb_c,29.7207902,-95.34406271,University of Houston,edu,3b3941524d97e7f778367a1250ba1efb9205d5fc,citation,https://arxiv.org/pdf/1901.09447.pdf,Open Source Face Recognition Performance Evaluation Package,2019 +3,United States,IJB-C,ijb_c,42.718568,-84.47791571,Michigan State University,edu,fa03cac5aa5192822a85273852090ca20a6c47aa,citation,https://arxiv.org/pdf/1805.00611.pdf,Towards Interpretable Face Recognition,2018 diff --git a/site/datasets/verified/imdb_wiki.csv b/site/datasets/verified/imdb_wiki.csv index 913f9f8d..5e7d3af6 100644 --- a/site/datasets/verified/imdb_wiki.csv +++ b/site/datasets/verified/imdb_wiki.csv @@ -1,2 +1,7 @@ id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year -0,,IMDB,imdb_wiki,0.0,0.0,,,,main,,Deep Expectation of Real and Apparent Age from a Single Image Without Facial Landmarks,2016 +0,,IMDB-Wiki,imdb_wiki,0.0,0.0,,,,main,,Deep Expectation of Real and Apparent Age from a Single Image Without Facial Landmarks,2016 +1,Denmark,IMDB-Wiki,imdb_wiki,56.1681384,10.2030118,Aarhus University,edu,1277b1b8b609a18b94e4907d76a117c9783a5373,citation,https://arxiv.org/pdf/1808.10151.pdf,VirtualIdentity: Privacy preserving user profiling,2016 +2,United States,IMDB-Wiki,imdb_wiki,36.9915847,-122.0582771,University of California Santa Cruz,edu,1277b1b8b609a18b94e4907d76a117c9783a5373,citation,https://arxiv.org/pdf/1808.10151.pdf,VirtualIdentity: Privacy preserving user profiling,2016 +3,United States,IMDB-Wiki,imdb_wiki,47.6543238,-122.30800894,University of Washington,edu,1277b1b8b609a18b94e4907d76a117c9783a5373,citation,https://arxiv.org/pdf/1808.10151.pdf,VirtualIdentity: Privacy preserving user profiling,2016 +4,Germany,IMDB-Wiki,imdb_wiki,49.4295181,7.7513891,"German Institute of Artificial Intelligence (DFKI), Kaiserslautern, 
Germany",edu,775c15a5dfca426d53c634668e58dd5d3314ea89,citation,https://pdfs.semanticscholar.org/775c/15a5dfca426d53c634668e58dd5d3314ea89.pdf,Image Quality-aware Deep Networks Ensemble for Efficient Gender Recognition in the Wild,2018 +5,Germany,IMDB-Wiki,imdb_wiki,49.4253891,7.7553196,"TU Kaiserslautern, Germany",edu,775c15a5dfca426d53c634668e58dd5d3314ea89,citation,https://pdfs.semanticscholar.org/775c/15a5dfca426d53c634668e58dd5d3314ea89.pdf,Image Quality-aware Deep Networks Ensemble for Efficient Gender Recognition in the Wild,2018 diff --git a/site/datasets/verified/lfpw.csv b/site/datasets/verified/lfpw.csv index a2b6a265..ac34778e 100644 --- a/site/datasets/verified/lfpw.csv +++ b/site/datasets/verified/lfpw.csv @@ -1,2 +1,232 @@ id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year -0,,LFWP,lfpw,0.0,0.0,,,,main,,Localizing Parts of Faces Using a Consensus of Exemplars,2011 +0,,LFPW,lfpw,0.0,0.0,,,,main,,Localizing Parts of Faces Using a Consensus of Exemplars,2011 +1,China,LFPW,lfpw,28.2290209,112.99483204,"National University of Defense Technology, China",mil,ac51d9ddbd462d023ec60818bac6cdae83b66992,citation,http://downloads.hindawi.com/journals/cin/2015/709072.pdf,An Efficient Robust Eye Localization by Learning the Convolution Distribution Using Eye Template,2015 +2,United Kingdom,LFPW,lfpw,52.9387428,-1.20029569,University of Nottingham,edu,529b1f33aed49dbe025a99ac1d211c777ad881ec,citation,http://eprints.eemcs.utwente.nl/26776/01/Pantic_Fast_and_Exact_Bi-Directional_Fitting.pdf,Fast and exact bi-directional fitting of active appearance models,2015 +3,Netherlands,LFPW,lfpw,52.2380139,6.8566761,University of Twente,edu,529b1f33aed49dbe025a99ac1d211c777ad881ec,citation,http://eprints.eemcs.utwente.nl/26776/01/Pantic_Fast_and_Exact_Bi-Directional_Fitting.pdf,Fast and exact bi-directional fitting of active appearance models,2015 +4,China,LFPW,lfpw,31.32235655,121.38400941,Shanghai University,edu,63fd7a159e58add133b9c71c4b1b37b899dd646f,citation,http://wei-shen.weebly.com/uploads/2/3/8/2/23825939/posecorrection.pdf,Exemplar-Based Human Action Pose Correction,2014 +5,China,LFPW,lfpw,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,63fd7a159e58add133b9c71c4b1b37b899dd646f,citation,http://wei-shen.weebly.com/uploads/2/3/8/2/23825939/posecorrection.pdf,Exemplar-Based Human Action Pose Correction,2014 +6,United States,LFPW,lfpw,47.6423318,-122.1369302,Microsoft,company,63fd7a159e58add133b9c71c4b1b37b899dd646f,citation,http://wei-shen.weebly.com/uploads/2/3/8/2/23825939/posecorrection.pdf,Exemplar-Based Human Action Pose Correction,2014 +7,United States,LFPW,lfpw,42.3614256,-71.0812092,Microsoft Research Asia,company,63fd7a159e58add133b9c71c4b1b37b899dd646f,citation,http://wei-shen.weebly.com/uploads/2/3/8/2/23825939/posecorrection.pdf,Exemplar-Based Human Action Pose Correction,2014 +8,China,LFPW,lfpw,22.4162632,114.2109318,Chinese University of Hong Kong,edu,57ebeff9273dea933e2a75c306849baf43081a8c,citation,http://mmlab.ie.cuhk.edu.hk/archive/CNN/data/CNN_FacePoint.pdf,Deep Convolutional Network Cascade for Facial Point Detection,2013 +9,China,LFPW,lfpw,22.59805605,113.98533784,Shenzhen Institutes of Advanced Technology,edu,57ebeff9273dea933e2a75c306849baf43081a8c,citation,http://mmlab.ie.cuhk.edu.hk/archive/CNN/data/CNN_FacePoint.pdf,Deep Convolutional Network Cascade for Facial Point Detection,2013 +10,Canada,LFPW,lfpw,43.0095971,-81.2737336,University of Western 
Ontario,edu,f7ae38a073be7c9cd1b92359131b9c8374579b13,citation,http://www.digitalimaginggroup.ca/members/Shuo/07487053.pdf,Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression,2017 +11,Canada,LFPW,lfpw,42.960348,-81.226628,"London Healthcare Sciences Centre, Ontario, Canada",edu,f7ae38a073be7c9cd1b92359131b9c8374579b13,citation,http://www.digitalimaginggroup.ca/members/Shuo/07487053.pdf,Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression,2017 +12,United Kingdom,LFPW,lfpw,55.0030632,-1.57463231,Northumbria University,edu,f7ae38a073be7c9cd1b92359131b9c8374579b13,citation,http://www.digitalimaginggroup.ca/members/Shuo/07487053.pdf,Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression,2017 +13,Canada,LFPW,lfpw,43.0012953,-81.2550455,"St. Joseph's Health Care, Ontario, Canada",edu,f7ae38a073be7c9cd1b92359131b9c8374579b13,citation,http://www.digitalimaginggroup.ca/members/Shuo/07487053.pdf,Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression,2017 +14,United States,LFPW,lfpw,37.3936717,-122.0807262,Facebook,company,dcd2ac544a8336d73e4d3d80b158477c783e1e50,citation,https://arxiv.org/pdf/1709.01591.pdf,Improving Landmark Localization with Semi-Supervised Learning,2018 +15,United States,LFPW,lfpw,37.3706254,-121.9671894,NVIDIA,company,dcd2ac544a8336d73e4d3d80b158477c783e1e50,citation,https://arxiv.org/pdf/1709.01591.pdf,Improving Landmark Localization with Semi-Supervised Learning,2018 +16,Canada,LFPW,lfpw,45.5010087,-73.6157778,University of Montreal,edu,dcd2ac544a8336d73e4d3d80b158477c783e1e50,citation,https://arxiv.org/pdf/1709.01591.pdf,Improving Landmark Localization with Semi-Supervised Learning,2018 +17,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,034b3f3bac663fb814336a69a9fd3514ca0082b9,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/alabort_cvpr2015.pdf,Unifying holistic and Parts-Based Deformable Model fitting,2015 +18,China,LFPW,lfpw,31.83907195,117.26420748,University of Science and Technology of China,edu,084bd02d171e36458f108f07265386f22b34a1ae,citation,http://7xrqgw.com1.z0.glb.clouddn.com/3000fps.pdf,Face Alignment at 3000 FPS via Regressing Local Binary Features,2014 +19,United States,LFPW,lfpw,47.6423318,-122.1369302,Microsoft,company,084bd02d171e36458f108f07265386f22b34a1ae,citation,http://7xrqgw.com1.z0.glb.clouddn.com/3000fps.pdf,Face Alignment at 3000 FPS via Regressing Local Binary Features,2014 +20,United Kingdom,LFPW,lfpw,51.24303255,-0.59001382,University of Surrey,edu,2d2e1d1f50645fe20c051339e9a0fca7b176422a,citation,https://arxiv.org/pdf/1803.05536.pdf,Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild,2018 +21,United Kingdom,LFPW,lfpw,56.1454119,-3.9205713,University of Stirling,edu,2d2e1d1f50645fe20c051339e9a0fca7b176422a,citation,https://arxiv.org/pdf/1803.05536.pdf,Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild,2018 +22,China,LFPW,lfpw,31.4854255,120.2739581,Jiangnan University,edu,2d2e1d1f50645fe20c051339e9a0fca7b176422a,citation,https://arxiv.org/pdf/1803.05536.pdf,Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild,2018 +23,China,LFPW,lfpw,30.642769,104.06751175,"Sichuan University, Chengdu",edu,2d2e1d1f50645fe20c051339e9a0fca7b176422a,citation,https://arxiv.org/pdf/1803.05536.pdf,Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild,2018 +24,Germany,LFPW,lfpw,48.48187645,9.18682404,Reutlingen 
University,edu,2d2e1d1f50645fe20c051339e9a0fca7b176422a,citation,https://arxiv.org/pdf/1803.05536.pdf,Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild,2018 +25,United States,LFPW,lfpw,45.57022705,-122.63709346,Concordia University,edu,266ed43dcea2e7db9f968b164ca08897539ca8dd,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/3B_037.pdf,Beyond Principal Components: Deep Boltzmann Machines for face modeling,2015 +26,United States,LFPW,lfpw,40.4441619,-79.94272826,Carnegie Mellon University,edu,266ed43dcea2e7db9f968b164ca08897539ca8dd,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/3B_037.pdf,Beyond Principal Components: Deep Boltzmann Machines for face modeling,2015 +27,United States,LFPW,lfpw,40.47913175,-74.43168868,Rutgers University,edu,3b470b76045745c0ef5321e0f1e0e6a4b1821339,citation,https://pdfs.semanticscholar.org/8e72/fa02f2d90ba31f31e0a7aa96a6d3e10a66fc.pdf,Consensus of Regression for Occlusion-Robust Facial Feature Localization,2014 +28,United States,LFPW,lfpw,37.3309307,-121.8940485,"Adobe Research, San Jose, CA",company,3b470b76045745c0ef5321e0f1e0e6a4b1821339,citation,https://pdfs.semanticscholar.org/8e72/fa02f2d90ba31f31e0a7aa96a6d3e10a66fc.pdf,Consensus of Regression for Occlusion-Robust Facial Feature Localization,2014 +29,China,LFPW,lfpw,40.0044795,116.370238,Chinese Academy of Sciences,edu,2a4153655ad1169d482e22c468d67f3bc2c49f12,citation,http://cseweb.ucsd.edu/~mkchandraker/classes/CSE291/Winter2018/Lectures/FaceAlignment.pdf,Face Alignment Across Large Poses: A 3D Solution,2016 +30,United States,LFPW,lfpw,42.718568,-84.47791571,Michigan State University,edu,2a4153655ad1169d482e22c468d67f3bc2c49f12,citation,http://cseweb.ucsd.edu/~mkchandraker/classes/CSE291/Winter2018/Lectures/FaceAlignment.pdf,Face Alignment Across Large Poses: A 3D Solution,2016 +31,United Kingdom,LFPW,lfpw,53.22853665,-0.54873472,University of Lincoln,edu,232b6e2391c064d483546b9ee3aafe0ba48ca519,citation,http://doc.utwente.nl/89696/1/Pantic_Optimization_problems_for_fast_AAM_fitting.pdf,Optimization Problems for Fast AAM Fitting in-the-Wild,2013 +32,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,232b6e2391c064d483546b9ee3aafe0ba48ca519,citation,http://doc.utwente.nl/89696/1/Pantic_Optimization_problems_for_fast_AAM_fitting.pdf,Optimization Problems for Fast AAM Fitting in-the-Wild,2013 +33,United Kingdom,LFPW,lfpw,52.9387428,-1.20029569,University of Nottingham,edu,75fd9acf5e5b7ed17c658cc84090c4659e5de01d,citation,http://eprints.nottingham.ac.uk/31442/1/tzimiro_CVPR15.pdf,Project-Out Cascaded Regression with an application to face alignment,2015 +34,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,788a7b59ea72e23ef4f86dc9abb4450efefeca41,citation,http://eprints.eemcs.utwente.nl/26840/01/Pantic_Robust_Statistical_Face_Frontalization.pdf,Robust Statistical Face Frontalization,2015 +35,Netherlands,LFPW,lfpw,52.2380139,6.8566761,University of Twente,edu,788a7b59ea72e23ef4f86dc9abb4450efefeca41,citation,http://eprints.eemcs.utwente.nl/26840/01/Pantic_Robust_Statistical_Face_Frontalization.pdf,Robust Statistical Face Frontalization,2015 +36,China,LFPW,lfpw,39.9041999,116.4073963,Key Lab of Intelligent Information Processing of Chinese Academy of Sciences,edu,090ff8f992dc71a1125636c1adffc0634155b450,citation,https://pdfs.semanticscholar.org/090f/f8f992dc71a1125636c1adffc0634155b450.pdf,Topic-Aware Deep Auto-Encoders (TDA) for Face Alignment,2014 
+37,China,LFPW,lfpw,40.0044795,116.370238,Chinese Academy of Sciences,edu,090ff8f992dc71a1125636c1adffc0634155b450,citation,https://pdfs.semanticscholar.org/090f/f8f992dc71a1125636c1adffc0634155b450.pdf,Topic-Aware Deep Auto-Encoders (TDA) for Face Alignment,2014 +38,China,LFPW,lfpw,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,090ff8f992dc71a1125636c1adffc0634155b450,citation,https://pdfs.semanticscholar.org/090f/f8f992dc71a1125636c1adffc0634155b450.pdf,Topic-Aware Deep Auto-Encoders (TDA) for Face Alignment,2014 +39,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,090ff8f992dc71a1125636c1adffc0634155b450,citation,https://pdfs.semanticscholar.org/090f/f8f992dc71a1125636c1adffc0634155b450.pdf,Topic-Aware Deep Auto-Encoders (TDA) for Face Alignment,2014 +40,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,e4754afaa15b1b53e70743880484b8d0736990ff,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885616000147-main.pdf,300 Faces In-The-Wild Challenge: database and results,2016 +41,United Kingdom,LFPW,lfpw,52.9387428,-1.20029569,University of Nottingham,edu,e4754afaa15b1b53e70743880484b8d0736990ff,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885616000147-main.pdf,300 Faces In-The-Wild Challenge: database and results,2016 +42,Netherlands,LFPW,lfpw,52.2380139,6.8566761,University of Twente,edu,e4754afaa15b1b53e70743880484b8d0736990ff,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885616000147-main.pdf,300 Faces In-The-Wild Challenge: database and results,2016 +43,United States,LFPW,lfpw,38.2167565,-85.75725023,University of Louisville,edu,9a4c45e5c6e4f616771a7325629d167a38508691,citation,http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W02/papers/Mostafa_A_Facial_Features_2015_CVPR_paper.pdf,A facial features detector integrating holistic facial information and part-based model,2015 +44,Egypt,LFPW,lfpw,31.21051105,29.91314562,Alexandria University,edu,9a4c45e5c6e4f616771a7325629d167a38508691,citation,http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W02/papers/Mostafa_A_Facial_Features_2015_CVPR_paper.pdf,A facial features detector integrating holistic facial information and part-based model,2015 +45,Egypt,LFPW,lfpw,27.18794105,31.17009498,Assiut University,edu,9a4c45e5c6e4f616771a7325629d167a38508691,citation,http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W02/papers/Mostafa_A_Facial_Features_2015_CVPR_paper.pdf,A facial features detector integrating holistic facial information and part-based model,2015 +46,China,LFPW,lfpw,31.4854255,120.2739581,Jiangnan University,edu,60824ee635777b4ee30fcc2485ef1e103b8e7af9,citation,http://epubs.surrey.ac.uk/808177/1/Feng-TIP-2015.pdf,Cascaded Collaborative Regression for Robust Facial Landmark Detection Trained Using a Mixture of Synthetic and Real Images With Dynamic Weighting,2015 +47,United Kingdom,LFPW,lfpw,51.2421839,-0.5905421,University of Surrey Guildford,edu,60824ee635777b4ee30fcc2485ef1e103b8e7af9,citation,http://epubs.surrey.ac.uk/808177/1/Feng-TIP-2015.pdf,Cascaded Collaborative Regression for Robust Facial Landmark Detection Trained Using a Mixture of Synthetic and Real Images With Dynamic Weighting,2015 +48,China,LFPW,lfpw,39.9041999,116.4073963,Key Lab of Intelligent Information Processing of Chinese Academy of 
Sciences,edu,22e2066acfb795ac4db3f97d2ac176d6ca41836c,citation,https://pdfs.semanticscholar.org/26f5/3a1abb47b1f0ea1f213dc7811257775dc6e6.pdf,Coarse-to-Fine Auto-Encoder Networks (CFAN) for Real-Time Face Alignment,2014 +49,China,LFPW,lfpw,40.0044795,116.370238,Chinese Academy of Sciences,edu,22e2066acfb795ac4db3f97d2ac176d6ca41836c,citation,https://pdfs.semanticscholar.org/26f5/3a1abb47b1f0ea1f213dc7811257775dc6e6.pdf,Coarse-to-Fine Auto-Encoder Networks (CFAN) for Real-Time Face Alignment,2014 +50,China,LFPW,lfpw,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,22e2066acfb795ac4db3f97d2ac176d6ca41836c,citation,https://pdfs.semanticscholar.org/26f5/3a1abb47b1f0ea1f213dc7811257775dc6e6.pdf,Coarse-to-Fine Auto-Encoder Networks (CFAN) for Real-Time Face Alignment,2014 +51,United States,LFPW,lfpw,42.3614256,-71.0812092,Microsoft Research Asia,company,63d865c66faaba68018defee0daf201db8ca79ed,citation,https://arxiv.org/pdf/1409.5230.pdf,Deep Regression for Face Alignment,2014 +52,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,5e9ec3b8daa95d45138e30c07321e386590f8ec7,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/eleftheriadis_tip.pdf,Discriminative Shared Gaussian Processes for Multiview and View-Invariant Facial Expression Recognition,2015 +53,United States,LFPW,lfpw,40.4441619,-79.94272826,Carnegie Mellon University,edu,35f921def890210dda4b72247849ad7ba7d35250,citation,http://www.cv-foundation.org/openaccess/content_iccv_2013/papers/Zhou_Exemplar-Based_Graph_Matching_2013_ICCV_paper.pdf,Exemplar-Based Graph Matching for Robust Facial Landmark Localization,2013 +54,China,LFPW,lfpw,35.86166,104.195397,"Megvii Inc. (Face++), China",company,1a8ccc23ed73db64748e31c61c69fe23c48a2bb1,citation,http://www.cv-foundation.org/openaccess/content_iccv_workshops_2013/W11/papers/Zhou_Extensive_Facial_Landmark_2013_ICCV_paper.pdf,Extensive Facial Landmark Localization with Coarse-to-Fine Convolutional Network Cascade,2013 +55,United Kingdom,LFPW,lfpw,52.17638955,0.14308882,University of Cambridge,edu,023be757b1769ecb0db810c95c010310d7daf00b,citation,https://arxiv.org/pdf/1507.03148.pdf,Face Alignment Assisted by Head Pose Estimation,2015 +56,United Kingdom,LFPW,lfpw,51.5247272,-0.03931035,Queen Mary University of London,edu,023be757b1769ecb0db810c95c010310d7daf00b,citation,https://arxiv.org/pdf/1507.03148.pdf,Face Alignment Assisted by Head Pose Estimation,2015 +57,United States,LFPW,lfpw,42.36782045,-71.12666653,Harvard University,edu,023be757b1769ecb0db810c95c010310d7daf00b,citation,https://arxiv.org/pdf/1507.03148.pdf,Face Alignment Assisted by Head Pose Estimation,2015 +58,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,71b07c537a9e188b850192131bfe31ef206a39a0,citation,https://pdfs.semanticscholar.org/71b0/7c537a9e188b850192131bfe31ef206a39a0.pdf,Faces InThe-Wild Challenge : database and results,2016 +59,United Kingdom,LFPW,lfpw,52.9387428,-1.20029569,University of Nottingham,edu,71b07c537a9e188b850192131bfe31ef206a39a0,citation,https://pdfs.semanticscholar.org/71b0/7c537a9e188b850192131bfe31ef206a39a0.pdf,Faces InThe-Wild Challenge : database and results,2016 +60,Netherlands,LFPW,lfpw,52.2380139,6.8566761,University of Twente,edu,71b07c537a9e188b850192131bfe31ef206a39a0,citation,https://pdfs.semanticscholar.org/71b0/7c537a9e188b850192131bfe31ef206a39a0.pdf,Faces InThe-Wild Challenge : database and results,2016 +61,United Kingdom,LFPW,lfpw,53.22853665,-0.54873472,University of 
Lincoln,edu,624496296af19243d5f05e7505fd927db02fd0ce,citation,http://eprints.eemcs.utwente.nl/25815/01/Pantic_Gauss-Newton_Deformable_Part_Models.pdf,Gauss-Newton Deformable Part Models for Face Alignment In-the-Wild,2014 +62,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,624496296af19243d5f05e7505fd927db02fd0ce,citation,http://eprints.eemcs.utwente.nl/25815/01/Pantic_Gauss-Newton_Deformable_Part_Models.pdf,Gauss-Newton Deformable Part Models for Face Alignment In-the-Wild,2014 +63,United Kingdom,LFPW,lfpw,53.22853665,-0.54873472,University of Lincoln,edu,6a4ebd91c4d380e21da0efb2dee276897f56467a,citation,http://eprints.nottingham.ac.uk/31441/1/tzimiroICIP14b.pdf,HOG active appearance models,2014 +64,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,500b92578e4deff98ce20e6017124e6d2053b451,citation,http://eprints.eemcs.utwente.nl/25818/01/Pantic_Incremental_Face_Alignment_in_the_Wild.pdf,Incremental Face Alignment in the Wild,2014 +65,Netherlands,LFPW,lfpw,52.2380139,6.8566761,University of Twente,edu,500b92578e4deff98ce20e6017124e6d2053b451,citation,http://eprints.eemcs.utwente.nl/25818/01/Pantic_Incremental_Face_Alignment_in_the_Wild.pdf,Incremental Face Alignment in the Wild,2014 +66,United Kingdom,LFPW,lfpw,52.17638955,0.14308882,University of Cambridge,edu,c17a332e59f03b77921942d487b4b102b1ee73b6,citation,https://pdfs.semanticscholar.org/c17a/332e59f03b77921942d487b4b102b1ee73b6.pdf,Learning an appearance-based gaze estimator from one million synthesised images,2016 +67,United States,LFPW,lfpw,40.4441619,-79.94272826,Carnegie Mellon University,edu,c17a332e59f03b77921942d487b4b102b1ee73b6,citation,https://pdfs.semanticscholar.org/c17a/332e59f03b77921942d487b4b102b1ee73b6.pdf,Learning an appearance-based gaze estimator from one million synthesised images,2016 +68,Germany,LFPW,lfpw,49.2579566,7.04577417,Max Planck Institute for Informatics,edu,c17a332e59f03b77921942d487b4b102b1ee73b6,citation,https://pdfs.semanticscholar.org/c17a/332e59f03b77921942d487b4b102b1ee73b6.pdf,Learning an appearance-based gaze estimator from one million synthesised images,2016 +69,China,LFPW,lfpw,40.0044795,116.370238,Chinese Academy of Sciences,edu,a820941eaf03077d68536732a4d5f28d94b5864a,citation,http://openaccess.thecvf.com/content_iccv_2015/papers/Zhang_Leveraging_Datasets_With_ICCV_2015_paper.pdf,Leveraging Datasets with Varying Annotations for Face Alignment via Deep Regression Network,2015 +70,China,LFPW,lfpw,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,a820941eaf03077d68536732a4d5f28d94b5864a,citation,http://openaccess.thecvf.com/content_iccv_2015/papers/Zhang_Leveraging_Datasets_With_ICCV_2015_paper.pdf,Leveraging Datasets with Varying Annotations for Face Alignment via Deep Regression Network,2015 +71,Sweden,LFPW,lfpw,59.34986645,18.07063213,"KTH Royal Institute of Technology, Stockholm",edu,1824b1ccace464ba275ccc86619feaa89018c0ad,citation,http://www.csc.kth.se/~vahidk/face/KazemiCVPR14.pdf,One millisecond face alignment with an ensemble of regression trees,2014 +72,United States,LFPW,lfpw,35.3070929,-80.735164,"North Carolina Univ., Charlotte, NC, USA",edu,3fb3c7dd12561e9443ac301f5527d539b1f4574e,citation,http://research.cs.rutgers.edu/~xiangyu/paper/iccv13_face_final.pdf,Pose-Free Facial Landmark Fitting via Optimized Part Mixtures and Cascaded Deformable Shape Model,2013 +73,United States,LFPW,lfpw,40.47913175,-74.43168868,Rutgers 
University,edu,3fb3c7dd12561e9443ac301f5527d539b1f4574e,citation,http://research.cs.rutgers.edu/~xiangyu/paper/iccv13_face_final.pdf,Pose-Free Facial Landmark Fitting via Optimized Part Mixtures and Cascaded Deformable Shape Model,2013 +74,United States,LFPW,lfpw,32.7283683,-97.11201835,University of Texas at Arlington,edu,3fb3c7dd12561e9443ac301f5527d539b1f4574e,citation,http://research.cs.rutgers.edu/~xiangyu/paper/iccv13_face_final.pdf,Pose-Free Facial Landmark Fitting via Optimized Part Mixtures and Cascaded Deformable Shape Model,2013 +75,United States,LFPW,lfpw,45.55236,-122.9142988,Intel Corporation,company,9ef2b2db11ed117521424c275c3ce1b5c696b9b3,citation,https://arxiv.org/pdf/1511.04404.pdf,Robust Face Alignment Using a Mixture of Invariant Experts,2016 +76,United States,LFPW,lfpw,40.4441619,-79.94272826,Carnegie Mellon University,edu,03f98c175b4230960ac347b1100fbfc10c100d0c,citation,http://courses.cs.washington.edu/courses/cse590v/13au/intraface.pdf,Supervised Descent Method and Its Applications to Face Alignment,2013 +77,United States,LFPW,lfpw,40.4441619,-79.94272826,Carnegie Mellon University,edu,131e395c94999c55c53afead65d81be61cd349a4,citation,https://arxiv.org/pdf/1612.02203.pdf,A Functional Regression Approach to Facial Landmark Tracking,2018 +78,United Kingdom,LFPW,lfpw,52.9387428,-1.20029569,University of Nottingham,edu,131e395c94999c55c53afead65d81be61cd349a4,citation,https://arxiv.org/pdf/1612.02203.pdf,A Functional Regression Approach to Facial Landmark Tracking,2018 +79,United Kingdom,LFPW,lfpw,51.24303255,-0.59001382,University of Surrey,edu,7a0b78879a13bd42c63cd947f583129137b16830,citation,https://pdfs.semanticscholar.org/7a0b/78879a13bd42c63cd947f583129137b16830.pdf,A Multiresolution 3D Morphable Face Model and Fitting Framework,2016 +80,Germany,LFPW,lfpw,48.48187645,9.18682404,Reutlingen University,edu,7a0b78879a13bd42c63cd947f583129137b16830,citation,https://pdfs.semanticscholar.org/7a0b/78879a13bd42c63cd947f583129137b16830.pdf,A Multiresolution 3D Morphable Face Model and Fitting Framework,2016 +81,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,b730908bc1f80b711c031f3ea459e4de09a3d324,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/tifs_aoms.pdf,Active Orientation Models for Face Alignment In-the-Wild,2014 +82,United Kingdom,LFPW,lfpw,53.22853665,-0.54873472,University of Lincoln,edu,b730908bc1f80b711c031f3ea459e4de09a3d324,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/tifs_aoms.pdf,Active Orientation Models for Face Alignment In-the-Wild,2014 +83,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,1a85956154c170daf7f15f32f29281269028ff69,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/active_pictorial_structures.pdf,Active Pictorial Structures,2015 +84,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,293ade202109c7f23637589a637bdaed06dc37c9,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2016adaptive.pdf,Adaptive cascaded regression,2016 +85,Finland,LFPW,lfpw,65.0592157,25.46632601,University of Oulu,edu,293ade202109c7f23637589a637bdaed06dc37c9,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2016adaptive.pdf,Adaptive cascaded regression,2016 +86,China,LFPW,lfpw,40.0044795,116.370238,Chinese Academy of Sciences,edu,86c053c162c08bc3fe093cc10398b9e64367a100,citation,https://pdfs.semanticscholar.org/86c0/53c162c08bc3fe093cc10398b9e64367a100.pdf,Cascade of forests for face alignment,2015 +87,United 
Kingdom,LFPW,lfpw,51.5247272,-0.03931035,Queen Mary University of London,edu,86c053c162c08bc3fe093cc10398b9e64367a100,citation,https://pdfs.semanticscholar.org/86c0/53c162c08bc3fe093cc10398b9e64367a100.pdf,Cascade of forests for face alignment,2015 +88,United States,LFPW,lfpw,33.9832526,-118.40417,USC Institute for Creative Technologies,edu,0a6d344112b5af7d1abbd712f83c0d70105211d0,citation,http://ict.usc.edu/pubs/Constrained%20local%20neural%20fields%20for%20robust%20facial%20landmark%20detection%20in%20the%20wild.pdf,Constrained Local Neural Fields for Robust Facial Landmark Detection in the Wild,2013 +89,United Kingdom,LFPW,lfpw,52.17638955,0.14308882,University of Cambridge,edu,029b53f32079063047097fa59cfc788b2b550c4b,citation,https://pdfs.semanticscholar.org/f4e3/c42df13aeed9196647d4e3fe0f84fa725252.pdf,Continuous Conditional Neural Fields for Structured Regression,2014 +90,United States,LFPW,lfpw,34.0224149,-118.28634407,University of Southern California,edu,029b53f32079063047097fa59cfc788b2b550c4b,citation,https://pdfs.semanticscholar.org/f4e3/c42df13aeed9196647d4e3fe0f84fa725252.pdf,Continuous Conditional Neural Fields for Structured Regression,2014 +91,Italy,LFPW,lfpw,44.4056499,8.946256,"Istituto Italiano di Tecnologia, Genova, Italy",edu,14ff9c89f00dacc8e0c13c94f9fadcd90e4e604d,citation,http://www.hamedkiani.com/uploads/5/1/8/8/51882963/wacv_presentation.pdf,Correlation filter cascade for facial landmark localization,2016 +92,Singapore,LFPW,lfpw,1.2962018,103.77689944,National University of Singapore,edu,14ff9c89f00dacc8e0c13c94f9fadcd90e4e604d,citation,http://www.hamedkiani.com/uploads/5/1/8/8/51882963/wacv_presentation.pdf,Correlation filter cascade for facial landmark localization,2016 +93,United States,LFPW,lfpw,40.4441619,-79.94272826,Carnegie Mellon University,edu,5239001571bc64de3e61be0be8985860f08d7e7e,citation,https://arxiv.org/pdf/1607.06871.pdf,Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling,2016 +94,United States,LFPW,lfpw,45.57022705,-122.63709346,Concordia University,edu,5239001571bc64de3e61be0be8985860f08d7e7e,citation,https://arxiv.org/pdf/1607.06871.pdf,Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling,2016 +95,China,LFPW,lfpw,23.09461185,113.28788994,Sun Yat-Sen University,edu,3be8f1f7501978287af8d7ebfac5963216698249,citation,https://pdfs.semanticscholar.org/3be8/f1f7501978287af8d7ebfac5963216698249.pdf,Deep Cascaded Regression for Face Alignment,2015 +96,Singapore,LFPW,lfpw,1.2962018,103.77689944,National University of Singapore,edu,3be8f1f7501978287af8d7ebfac5963216698249,citation,https://pdfs.semanticscholar.org/3be8/f1f7501978287af8d7ebfac5963216698249.pdf,Deep Cascaded Regression for Face Alignment,2015 +97,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,0209389b8369aaa2a08830ac3b2036d4901ba1f1,citation,https://arxiv.org/pdf/1612.01202.pdf,DenseReg: Fully Convolutional Dense Shape Regression In-the-Wild,2017 +98,United Kingdom,LFPW,lfpw,51.5231607,-0.1282037,University College London,edu,0209389b8369aaa2a08830ac3b2036d4901ba1f1,citation,https://arxiv.org/pdf/1612.01202.pdf,DenseReg: Fully Convolutional Dense Shape Regression In-the-Wild,2017 +99,United States,LFPW,lfpw,42.7298459,-73.67950216,Rensselaer Polytechnic Institute,edu,191d30e7e7360d565b0c1e2814b5bcbd86a11d41,citation,http://homepages.rpi.edu/~wuy9/DiscriminativeDeepFaceShape/DiscriminativeDeepFaceShape_IJCV.pdf,Discriminative Deep Face Shape Model for Facial Point Detection,2014 +100,United 
Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,2fb8d7601fc3ad637781127620104aaab5122acd,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/zhou2016estimating.pdf,Estimating Correspondences of Deformable Objects “In-the-Wild”,2016 +101,Finland,LFPW,lfpw,65.0592157,25.46632601,University of Oulu,edu,2fb8d7601fc3ad637781127620104aaab5122acd,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/zhou2016estimating.pdf,Estimating Correspondences of Deformable Objects “In-the-Wild”,2016 +102,United States,LFPW,lfpw,39.2899685,-76.62196103,University of Maryland,edu,ceeb67bf53ffab1395c36f1141b516f893bada27,citation,https://arxiv.org/pdf/1601.07950.pdf,Face Alignment by Local Deep Descriptor Regression,2016 +103,United States,LFPW,lfpw,40.47913175,-74.43168868,Rutgers University,edu,ceeb67bf53ffab1395c36f1141b516f893bada27,citation,https://arxiv.org/pdf/1601.07950.pdf,Face Alignment by Local Deep Descriptor Regression,2016 +104,United States,LFPW,lfpw,40.4441619,-79.94272826,Carnegie Mellon University,edu,6d8c9a1759e7204eacb4eeb06567ad0ef4229f93,citation,https://arxiv.org/pdf/1707.05938.pdf,"Face Alignment Robust to Pose, Expressions and Occlusions",2016 +105,United States,LFPW,lfpw,42.718568,-84.47791571,Michigan State University,edu,6d8c9a1759e7204eacb4eeb06567ad0ef4229f93,citation,https://arxiv.org/pdf/1707.05938.pdf,"Face Alignment Robust to Pose, Expressions and Occlusions",2016 +106,South Korea,LFPW,lfpw,36.3697191,127.362537,Korea Advanced Institute of Science and Technology,edu,72e10a2a7a65db7ecdc7d9bd3b95a4160fab4114,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/2B_094.pdf,Face alignment using cascade Gaussian process regression trees,2015 +107,United Kingdom,LFPW,lfpw,51.5247272,-0.03931035,Queen Mary University of London,edu,4b6387e608afa83ac8d855de2c9b0ae3b86f31cc,citation,http://www.researchgate.net/profile/Heng_Yang3/publication/263813517_Face_Sketch_Landmarks_Localization_in_the_Wild/links/53d3dd3b0cf220632f3ce8b3.pdf,Face Sketch Landmarks Localization in the Wild,2014 +108,China,LFPW,lfpw,22.59805605,113.98533784,Shenzhen Institutes of Advanced Technology,edu,4b6387e608afa83ac8d855de2c9b0ae3b86f31cc,citation,http://www.researchgate.net/profile/Heng_Yang3/publication/263813517_Face_Sketch_Landmarks_Localization_in_the_Wild/links/53d3dd3b0cf220632f3ce8b3.pdf,Face Sketch Landmarks Localization in the Wild,2014 +109,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,ebedc841a2c1b3a9ab7357de833101648281ff0e,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885615000116-main.pdf,Facial landmarking for in-the-wild images with local inference based on global appearance,2015 +110,Netherlands,LFPW,lfpw,52.2380139,6.8566761,University of Twente,edu,ebedc841a2c1b3a9ab7357de833101648281ff0e,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/1-s2.0-s0262885615000116-main.pdf,Facial landmarking for in-the-wild images with local inference based on global appearance,2015 +111,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,2f7aa942313b1eb12ebfab791af71d0a3830b24c,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2015feature.pdf,Feature-Based Lucas–Kanade and Active Appearance Models,2015 +112,United Kingdom,LFPW,lfpw,52.9387428,-1.20029569,University of Nottingham,edu,2f7aa942313b1eb12ebfab791af71d0a3830b24c,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/antonakos2015feature.pdf,Feature-Based Lucas–Kanade and Active Appearance 
Models,2015 +113,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,375435fb0da220a65ac9e82275a880e1b9f0a557,citation,http://eprints.lincoln.ac.uk/17528/7/__ddat02_staffhome_jpartridge_tzimiroTPAMI15.pdf,From Pixels to Response Maps: Discriminative Image Filtering for Face Alignment in the Wild,2015 +114,Netherlands,LFPW,lfpw,52.2380139,6.8566761,University of Twente,edu,375435fb0da220a65ac9e82275a880e1b9f0a557,citation,http://eprints.lincoln.ac.uk/17528/7/__ddat02_staffhome_jpartridge_tzimiroTPAMI15.pdf,From Pixels to Response Maps: Discriminative Image Filtering for Face Alignment in the Wild,2015 +115,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,e42998bbebddeeb4b2bedf5da23fa5c4efc976fa,citation,https://pdfs.semanticscholar.org/e429/98bbebddeeb4b2bedf5da23fa5c4efc976fa.pdf,Generic Active Appearance Models Revisited,2012 +116,United Kingdom,LFPW,lfpw,53.22853665,-0.54873472,University of Lincoln,edu,e42998bbebddeeb4b2bedf5da23fa5c4efc976fa,citation,https://pdfs.semanticscholar.org/e429/98bbebddeeb4b2bedf5da23fa5c4efc976fa.pdf,Generic Active Appearance Models Revisited,2012 +117,United Kingdom,LFPW,lfpw,52.9387428,-1.20029569,University of Nottingham,edu,1c1a98df3d0d5e2034ea723994bdc85af45934db,citation,http://www.cs.nott.ac.uk/~pszmv/Documents/ICCV-300w_cameraready.pdf,Guided Unsupervised Learning of Mode Specific Models for Facial Point Detection in the Wild,2013 +118,United States,LFPW,lfpw,34.0224149,-118.28634407,University of Southern California,edu,87e6cb090aecfc6f03a3b00650a5c5f475dfebe1,citation,https://pdfs.semanticscholar.org/87e6/cb090aecfc6f03a3b00650a5c5f475dfebe1.pdf,Holistically Constrained Local Model: Going Beyond Frontal Poses for Facial Landmark Detection,2016 +119,United States,LFPW,lfpw,40.4441619,-79.94272826,Carnegie Mellon University,edu,87e6cb090aecfc6f03a3b00650a5c5f475dfebe1,citation,https://pdfs.semanticscholar.org/87e6/cb090aecfc6f03a3b00650a5c5f475dfebe1.pdf,Holistically Constrained Local Model: Going Beyond Frontal Poses for Facial Landmark Detection,2016 +120,China,LFPW,lfpw,31.4854255,120.2739581,Jiangnan University,edu,9d57c4036a0e5f1349cd11bc342ac515307b6720,citation,https://arxiv.org/pdf/1808.05399.pdf,Landmark Weighting for 3DMM Shape Fitting,2018 +121,United Kingdom,LFPW,lfpw,51.24303255,-0.59001382,University of Surrey,edu,9d57c4036a0e5f1349cd11bc342ac515307b6720,citation,https://arxiv.org/pdf/1808.05399.pdf,Landmark Weighting for 3DMM Shape Fitting,2018 +122,China,LFPW,lfpw,40.0044795,116.370238,Chinese Academy of Sciences,edu,321c8ba38db118d8b02c0ba209be709e6792a2c7,citation,http://www.cbsr.ia.ac.cn/users/jjyan/ICCVW2013.pdf,Learn to Combine Multiple Hypotheses for Accurate Face Alignment,2013 +123,China,LFPW,lfpw,40.00229045,116.32098908,Tsinghua University,edu,329d58e8fb30f1bf09acb2f556c9c2f3e768b15c,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/Wu_Leveraging_Intra_and_CVPR_2017_paper.pdf,Leveraging Intra and Inter-Dataset Variations for Robust Face Alignment,2017 +124,China,LFPW,lfpw,22.4162632,114.2109318,Chinese University of Hong Kong,edu,329d58e8fb30f1bf09acb2f556c9c2f3e768b15c,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/Wu_Leveraging_Intra_and_CVPR_2017_paper.pdf,Leveraging Intra and Inter-Dataset Variations for Robust Face Alignment,2017 +125,United States,LFPW,lfpw,33.6404952,-117.8442962,University of California 
Irvine,edu,65126e0b1161fc8212643b8ff39c1d71d262fbc1,citation,http://vision.ics.uci.edu/papers/GhiasiF_CVPR_2014/GhiasiF_CVPR_2014.pdf,Occlusion Coherence: Localizing Occluded Faces with a Hierarchical Deformable Part Model,2014 +126,China,LFPW,lfpw,40.0044795,116.370238,Chinese Academy of Sciences,edu,303a7099c01530fa0beb197eb1305b574168b653,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Zhang_Occlusion-Free_Face_Alignment_CVPR_2016_paper.pdf,Occlusion-Free Face Alignment: Deep Regression Networks Coupled with De-Corrupt AutoEncoders,2016 +127,China,LFPW,lfpw,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,303a7099c01530fa0beb197eb1305b574168b653,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Zhang_Occlusion-Free_Face_Alignment_CVPR_2016_paper.pdf,Occlusion-Free Face Alignment: Deep Regression Networks Coupled with De-Corrupt AutoEncoders,2016 +128,United Kingdom,LFPW,lfpw,50.7944026,-1.0971748,Cambridge University,edu,2fda461869f84a9298a0e93ef280f79b9fb76f94,citation,http://multicomp.cs.cmu.edu/wp-content/uploads/2017/09/2016_WACV_Baltrusaitis_OpenFace.pdf,OpenFace: An open source facial behavior analysis toolkit,2016 +129,United States,LFPW,lfpw,40.4441619,-79.94272826,Carnegie Mellon University,edu,2fda461869f84a9298a0e93ef280f79b9fb76f94,citation,http://multicomp.cs.cmu.edu/wp-content/uploads/2017/09/2016_WACV_Baltrusaitis_OpenFace.pdf,OpenFace: An open source facial behavior analysis toolkit,2016 +130,United States,LFPW,lfpw,35.3103441,-80.73261617,University of North Carolina at Charlotte,edu,89002a64e96a82486220b1d5c3f060654b24ef2a,citation,http://research.rutgers.edu/~shaoting/paper/ICCV15_face.pdf,PIEFA: Personalized Incremental and Ensemble Face Alignment,2015 +131,China,LFPW,lfpw,31.28473925,121.49694909,Tongji University,edu,7aafeb9aab48fb2c34bed4b86755ac71e3f00338,citation,https://pdfs.semanticscholar.org/7aaf/eb9aab48fb2c34bed4b86755ac71e3f00338.pdf,Real Time 3D Facial Movement Tracking Using a Monocular Camera,2016 +132,Japan,LFPW,lfpw,32.8164178,130.72703969,Kumamoto University,edu,7aafeb9aab48fb2c34bed4b86755ac71e3f00338,citation,https://pdfs.semanticscholar.org/7aaf/eb9aab48fb2c34bed4b86755ac71e3f00338.pdf,Real Time 3D Facial Movement Tracking Using a Monocular Camera,2016 +133,United States,LFPW,lfpw,45.57022705,-122.63709346,Concordia University,edu,6d0fe30444c6f4e4db3ad8b02fb2c87e2b33c58d,citation,https://arxiv.org/pdf/1607.00659.pdf,Robust Deep Appearance Models,2016 +134,United States,LFPW,lfpw,40.4441619,-79.94272826,Carnegie Mellon University,edu,6d0fe30444c6f4e4db3ad8b02fb2c87e2b33c58d,citation,https://arxiv.org/pdf/1607.00659.pdf,Robust Deep Appearance Models,2016 +135,China,LFPW,lfpw,40.0044795,116.370238,Chinese Academy of Sciences,edu,7fcfd72ba6bc14bbb90b31fe14c2c77a8b220ab2,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/He_Robust_FEC-CNN_A_CVPR_2017_paper.pdf,Robust FEC-CNN: A High Accuracy Facial Landmark Detection System,2017 +136,China,LFPW,lfpw,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,7fcfd72ba6bc14bbb90b31fe14c2c77a8b220ab2,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/He_Robust_FEC-CNN_A_CVPR_2017_paper.pdf,Robust FEC-CNN: A High Accuracy Facial Landmark Detection System,2017 +137,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,7cdf3bc1de6c7948763c0c2dfa4384dcbd3677a0,citation,http://eprints.eemcs.utwente.nl/27129/01/sagonas2016robust.pdf,Robust Statistical Frontalization of Human and 
Animal Faces,2016 +138,Netherlands,LFPW,lfpw,52.2380139,6.8566761,University of Twente,edu,7cdf3bc1de6c7948763c0c2dfa4384dcbd3677a0,citation,http://eprints.eemcs.utwente.nl/27129/01/sagonas2016robust.pdf,Robust Statistical Frontalization of Human and Animal Faces,2016 +139,United States,LFPW,lfpw,40.47913175,-74.43168868,Rutgers University,edu,04ff69aa20da4eeccdabbe127e3641b8e6502ec0,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2016_workshops/w28/papers/Peng_Sequential_Face_Alignment_CVPR_2016_paper.pdf,Sequential Face Alignment via Person-Specific Modeling in the Wild,2016 +140,United States,LFPW,lfpw,32.7283683,-97.11201835,University of Texas at Arlington,edu,04ff69aa20da4eeccdabbe127e3641b8e6502ec0,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2016_workshops/w28/papers/Peng_Sequential_Face_Alignment_CVPR_2016_paper.pdf,Sequential Face Alignment via Person-Specific Modeling in the Wild,2016 +141,China,LFPW,lfpw,22.304572,114.17976285,Hong Kong Polytechnic University,edu,3c88ffb74573c87c994106b3ae164f316182fc2c,citation,https://opus.lib.uts.edu.au/bitstream/10453/43334/1/SAC-AAM_v10_Huiling_20151023_modifiedVersion.pdf,Shape-appearance-correlated active appearance model,2016 +142,Australia,LFPW,lfpw,-33.8840504,151.1992254,University of Technology,edu,3c88ffb74573c87c994106b3ae164f316182fc2c,citation,https://opus.lib.uts.edu.au/bitstream/10453/43334/1/SAC-AAM_v10_Huiling_20151023_modifiedVersion.pdf,Shape-appearance-correlated active appearance model,2016 +143,China,LFPW,lfpw,39.98177,116.330086,National Laboratory of Pattern Recognition,edu,4a1d640f5e25bb60bb2347d36009718249ce9230,citation,http://ir.ia.ac.cn/bitstream/173211/4555/1/CVPR14FaceAlignmentCameraReady.pdf,Towards Multi-view and Partially-Occluded Face Alignment,2014 +144,Singapore,LFPW,lfpw,1.2962018,103.77689944,National University of Singapore,edu,4a1d640f5e25bb60bb2347d36009718249ce9230,citation,http://ir.ia.ac.cn/bitstream/173211/4555/1/CVPR14FaceAlignmentCameraReady.pdf,Towards Multi-view and Partially-Occluded Face Alignment,2014 +145,China,LFPW,lfpw,22.4162632,114.2109318,Chinese University of Hong Kong,edu,433a6d6d2a3ed8a6502982dccc992f91d665b9b3,citation,https://arxiv.org/pdf/1409.0602.pdf,Transferring Landmark Annotations for Cross-Dataset Face Alignment.,2014 +146,China,LFPW,lfpw,40.00229045,116.32098908,Tsinghua University,edu,433a6d6d2a3ed8a6502982dccc992f91d665b9b3,citation,https://arxiv.org/pdf/1409.0602.pdf,Transferring Landmark Annotations for Cross-Dataset Face Alignment.,2014 +147,United States,LFPW,lfpw,40.47913175,-74.43168868,Rutgers University,edu,3d78c144672c4ee76d92d21dad012bdf3c3aa1a0,citation,http://www.rci.rutgers.edu/~vmp93/Journal_pub/IJCV_20170517_v4.pdf,Unconstrained Still/Video-Based Face Verification with Deep Convolutional Neural Networks,2017 +148,United States,LFPW,lfpw,39.2899685,-76.62196103,University of Maryland,edu,3d78c144672c4ee76d92d21dad012bdf3c3aa1a0,citation,http://www.rci.rutgers.edu/~vmp93/Journal_pub/IJCV_20170517_v4.pdf,Unconstrained Still/Video-Based Face Verification with Deep Convolutional Neural Networks,2017 +149,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,5c124b57699be19cd4eb4e1da285b4a8c84fc80d,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Zhao_Unified_Face_Analysis_2014_CVPR_paper.pdf,Unified Face Analysis by Iterative Multi-output Random Forests,2014 +150,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College 
London,edu,4c87aafa779747828054cffee3125fcea332364d,citation,https://pdfs.semanticscholar.org/4c87/aafa779747828054cffee3125fcea332364d.pdf,View-Constrained Latent Variable Model for Multi-view Facial Expression Classification,2014 +151,Netherlands,LFPW,lfpw,52.2380139,6.8566761,University of Twente,edu,4c87aafa779747828054cffee3125fcea332364d,citation,https://pdfs.semanticscholar.org/4c87/aafa779747828054cffee3125fcea332364d.pdf,View-Constrained Latent Variable Model for Multi-view Facial Expression Classification,2014 +152,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,303065c44cf847849d04da16b8b1d9a120cef73a,citation,https://arxiv.org/pdf/1701.05360.pdf,"3D Face Morphable Models ""In-the-Wild""",2017 +153,United States,LFPW,lfpw,40.47913175,-74.43168868,Rutgers University,edu,afdf9a3464c3b015f040982750f6b41c048706f5,citation,https://arxiv.org/pdf/1608.05477.pdf,A Recurrent Encoder-Decoder Network for Sequential Face Alignment,2016 +154,China,LFPW,lfpw,30.672721,104.098806,University of Electronic Science and Technology of China,edu,88e2574af83db7281c2064e5194c7d5dfa649846,citation,http://downloads.hindawi.com/journals/cin/2017/4579398.pdf,A Robust Shape Reconstruction Method for Facial Feature Point Detection,2017 +155,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,5f5906168235613c81ad2129e2431a0e5ef2b6e4,citation,https://arxiv.org/pdf/1601.00199.pdf,A Unified Framework for Compositional Fitting of Active Appearance Models,2016 +156,France,LFPW,lfpw,49.4583047,1.0688892,Rouen University,edu,0b0958493e43ca9c131315bcfb9a171d52ecbb8a,citation,https://pdfs.semanticscholar.org/0b09/58493e43ca9c131315bcfb9a171d52ecbb8a.pdf,A Unified Neural Based Model for Structured Output Problems,2015 +157,China,LFPW,lfpw,39.9601488,116.35193921,Beijing University of Posts and Telecommunications,edu,7343f0b7bcdaf909c5e37937e295bf0ac7b69499,citation,http://wuyuebupt.github.io/files/csi.pdf,Adaptive Cascade Deep Convolutional Neural Networks for face alignment,2015 +158,United States,LFPW,lfpw,38.99203005,-76.9461029,University of Maryland College Park,edu,3504907a2e3c81d78e9dfe71c93ac145b1318f9c,citation,https://arxiv.org/pdf/1605.02686.pdf,An End-to-End System for Unconstrained Face Verification with Deep Convolutional Neural Networks,2015 +159,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,06c2dfe1568266ad99368fc75edf79585e29095f,citation,http://ibug.doc.ic.ac.uk/media/uploads/documents/joan_cvpr2014.pdf,Bayesian Active Appearance Models,2014 +160,United Kingdom,LFPW,lfpw,52.9387428,-1.20029569,University of Nottingham,edu,056ba488898a1a1b32daec7a45e0d550e0c51ae4,citation,https://arxiv.org/pdf/1608.01137.pdf,Cascaded Continuous Regression for Real-Time Incremental Face Tracking,2016 +161,United Kingdom,LFPW,lfpw,52.9387428,-1.20029569,University of Nottingham,edu,72a1852c78b5e95a57efa21c92bdc54219975d8f,citation,http://eprints.nottingham.ac.uk/31303/1/prl_blockwise_SDM.pdf,Cascaded regression with sparsified feature covariance matrix for facial landmark detection,2016 +162,United States,LFPW,lfpw,43.07982815,-89.43066425,University of Wisconsin Madison,edu,2e091b311ac48c18aaedbb5117e94213f1dbb529,citation,http://pages.cs.wisc.edu/~lizhang/projects/collab-face-landmarks/SmithECCV2014.pdf,Collaborative Facial Landmark Localization for Transferring Annotations Across Datasets,2014 +163,United States,LFPW,lfpw,40.4441619,-79.94272826,Carnegie Mellon 
University,edu,88e2efab01e883e037a416c63a03075d66625c26,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w36/Zadeh_Convolutional_Experts_Constrained_ICCV_2017_paper.pdf,Convolutional Experts Constrained Local Model for 3D Facial Landmark Detection,2017 +164,United States,LFPW,lfpw,40.4441619,-79.94272826,Carnegie Mellon University,edu,963d0d40de8780161b70d28d2b125b5222e75596,citation,https://arxiv.org/pdf/1611.08657.pdf,Convolutional Experts Constrained Local Model for Facial Landmark Detection,2017 +165,Poland,LFPW,lfpw,52.22165395,21.00735776,Warsaw University of Technology,edu,f27b8b8f2059248f77258cf8595e9434cf0b0228,citation,https://arxiv.org/pdf/1706.01789.pdf,Deep Alignment Network: A Convolutional Neural Network for Robust Face Alignment,2017 +166,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,38cbb500823057613494bacd0078aa0e57b30af8,citation,https://arxiv.org/pdf/1704.08772.pdf,Deep Face Deblurring,2017 +167,France,LFPW,lfpw,49.4583047,1.0688892,Normandie University,edu,9ca7899338129f4ba6744f801e722d53a44e4622,citation,https://arxiv.org/pdf/1504.07550.pdf,Deep neural networks regularization for structured output prediction,2018 +168,United States,LFPW,lfpw,43.07982815,-89.43066425,University of Wisconsin Madison,edu,0eac652139f7ab44ff1051584b59f2dc1757f53b,citation,https://arxiv.org/pdf/1611.01584.pdf,Efficient Branching Cascaded Regression for Face Alignment under Significant Head Rotation,2016 +169,China,LFPW,lfpw,39.9601488,116.35193921,Beijing University of Posts and Telecommunications,edu,5c820e47981d21c9dddde8d2f8020146e600368f,citation,https://pdfs.semanticscholar.org/5c82/0e47981d21c9dddde8d2f8020146e600368f.pdf,Extended Supervised Descent Method for Robust Face Alignment,2014 +170,China,LFPW,lfpw,32.0565957,118.77408833,Nanjing University,edu,f633d6dc02b2e55eb24b89f2b8c6df94a2de86dd,citation,http://parnec.nuaa.edu.cn/pubs/xiaoyang%20tan/journal/2016/JXPR-2016.pdf,Face alignment by robust discriminative Hough voting,2016 +171,Poland,LFPW,lfpw,52.22165395,21.00735776,Warsaw University of Technology,edu,eb48a58b873295d719827e746d51b110f5716d6c,citation,https://arxiv.org/pdf/1706.01820.pdf,Face Alignment Using K-Cluster Regression Forests With Weighted Splitting,2016 +172,United States,LFPW,lfpw,30.44235995,-84.29747867,Florida State University,edu,9207671d9e2b668c065e06d9f58f597601039e5e,citation,https://pdfs.semanticscholar.org/9207/671d9e2b668c065e06d9f58f597601039e5e.pdf,Face Detection Using a 3D Model on Face Keypoints,2014 +173,United Kingdom,LFPW,lfpw,51.5247272,-0.03931035,Queen Mary University of London,edu,1a140d9265df8cf50a3cd69074db7e20dc060d14,citation,https://pdfs.semanticscholar.org/1a14/0d9265df8cf50a3cd69074db7e20dc060d14.pdf,Face Parts Localization Using Structured-Output Regression Forests,2012 +174,United States,LFPW,lfpw,35.9542493,-83.9307395,University of Tennessee,edu,5e97a1095f2811e0bc188f52380ea7c9c460c896,citation,http://web.eecs.utk.edu/~rguo1/FacialParsing.pdf,Facial feature parsing and landmark detection via low-rank matrix decomposition,2015 +175,China,LFPW,lfpw,32.0565957,118.77408833,Nanjing University,edu,5b0bf1063b694e4b1575bb428edb4f3451d9bf04,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Yang_Facial_Shape_Tracking_ICCV_2015_paper.pdf,Facial Shape Tracking via Spatio-Temporal Cascade Shape Regression,2015 +176,United Kingdom,LFPW,lfpw,52.9387428,-1.20029569,University of 
Nottingham,edu,e5533c70706109ee8d0b2a4360fbe73fd3b0f35d,citation,https://arxiv.org/pdf/1703.07332.pdf,"How Far are We from Solving the 2D & 3D Face Alignment Problem? (and a Dataset of 230,000 3D Facial Landmarks)",2017 +177,United Kingdom,LFPW,lfpw,52.17638955,0.14308882,University of Cambridge,edu,9901f473aeea177a55e58bac8fd4f1b086e575a4,citation,https://arxiv.org/pdf/1509.04954.pdf,Human and sheep facial landmarks localisation by triplet interpolated features,2016 +178,United Kingdom,LFPW,lfpw,52.9387428,-1.20029569,University of Nottingham,edu,9ca0626366e136dac6bfd628cec158e26ed959c7,citation,https://arxiv.org/pdf/1811.02194.pdf,In-the-wild Facial Expression Recognition in Extreme Poses,2017 +179,United States,LFPW,lfpw,29.7207902,-95.34406271,University of Houston,edu,466f80b066215e85da63e6f30e276f1a9d7c843b,citation,http://cbl.uh.edu/pub_files/07961802.pdf,Joint Head Pose Estimation and Face Alignment Framework Using Global and Local CNN Features,2017 +180,United Kingdom,LFPW,lfpw,52.9387428,-1.20029569,University of Nottingham,edu,2c14c3bb46275da5706c466f9f51f4424ffda914,citation,http://braismartinez.com/media/documents/2015ivc_-_l21-based_regression_and_prediction_accumulation_across_views_for_robust_facial_landmark_detection.pdf,"L2, 1-based regression and prediction accumulation across views for robust facial landmark detection",2016 +181,China,LFPW,lfpw,22.4162632,114.2109318,Chinese University of Hong Kong,edu,390f3d7cdf1ce127ecca65afa2e24c563e9db93b,citation,https://pdfs.semanticscholar.org/6e80/a3558f9170f97c103137ea2e18ddd782e8d7.pdf,Learning and Transferring Multi-task Deep Representation for Face Alignment,2014 +182,China,LFPW,lfpw,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,c00f402b9cfc3f8dd2c74d6b3552acbd1f358301,citation,https://arxiv.org/pdf/1608.00207.pdf,Learning deep representation from coarse to fine for face alignment,2016 +183,China,LFPW,lfpw,40.00229045,116.32098908,Tsinghua University,edu,df80fed59ffdf751a20af317f265848fe6bfb9c9,citation,http://ivg.au.tsinghua.edu.cn/paper/2017_Learning%20deep%20sharable%20and%20structural%20detectors%20for%20face%20alignment.pdf,Learning Deep Sharable and Structural Detectors for Face Alignment,2017 +184,United Kingdom,LFPW,lfpw,52.3793131,-1.5604252,University of Warwick,edu,0bc53b338c52fc635687b7a6c1e7c2b7191f42e5,citation,https://pdfs.semanticscholar.org/a32a/8d6d4c3b4d69544763be48ffa7cb0d7f2f23.pdf,Loglet SIFT for Part Description in Deformable Part Models: Application to Face Alignment,2016 +185,United Kingdom,LFPW,lfpw,51.5247272,-0.03931035,Queen Mary University of London,edu,0f81b0fa8df5bf3fcfa10f20120540342a0c92e5,citation,https://arxiv.org/pdf/1501.05152.pdf,"Mirror, mirror on the wall, tell me, is the error small?",2015 +186,United Kingdom,LFPW,lfpw,53.46600455,-2.23300881,University of Manchester,edu,daa4cfde41d37b2ab497458e331556d13dd14d0b,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Rajamanoharan_Multi-View_Constrained_Local_ICCV_2015_paper.pdf,Multi-view Constrained Local Models for Large Head Angle Facial Tracking,2015 +187,South Africa,LFPW,lfpw,-33.95828745,18.45997349,University of Cape Town,edu,36e8ef2e5d52a78dddf0002e03918b101dcdb326,citation,http://www.milbo.org/stasm-files/multiview-active-shape-models-with-sift-for-300w.pdf,Multiview Active Shape Models with SIFT Descriptors for the 300-W Face Landmark Challenge,2013 +188,United States,LFPW,lfpw,33.6404952,-117.8442962,University of California at 
Irvine,edu,bd13f50b8997d0733169ceba39b6eb1bda3eb1aa,citation,https://arxiv.org/pdf/1506.08347.pdf,Occlusion Coherence: Detecting and Localizing Occluded Faces,2015 +189,United States,LFPW,lfpw,42.718568,-84.47791571,Michigan State University,edu,b53485dbdd2dc5e4f3c7cff26bd8707964bb0503,citation,http://cvlab.cse.msu.edu/pdfs/Jourabloo_Liu_IJCV_2017.pdf,Pose-Invariant Face Alignment via CNN-Based Dense 3D Model Fitting,2017 +190,Canada,LFPW,lfpw,45.5010087,-73.6157778,University of Montreal,edu,3176ee88d1bb137d0b561ee63edf10876f805cf0,citation,https://arxiv.org/pdf/1511.07356.pdf,Recombinator Networks: Learning Coarse-to-Fine Feature Aggregation,2016 +191,Taiwan,LFPW,lfpw,25.01353105,121.54173736,National Taiwan University of Science and Technology,edu,27c6cd568d0623d549439edc98f6b92528d39bfe,citation,http://openaccess.thecvf.com/content_iccv_2015/papers/Hsu_Regressive_Tree_Structured_ICCV_2015_paper.pdf,Regressive Tree Structured Model for Facial Landmark Localization,2015 +192,United States,LFPW,lfpw,38.2167565,-85.75725023,University of Louisville,edu,84bc3ca61fc63b47ec3a1a6566ab8dcefb3d0015,citation,http://www.cvip.louisville.edu/wwwcvip/research/publications/Pub_Pdf/2012/BTAS%20144.pdf,Rejecting pseudo-faces using the likelihood of facial features and skin,2012 +193,Australia,LFPW,lfpw,-35.28121335,149.11665331,"Australian National University, Canberra",edu,24e099e77ae7bae3df2bebdc0ee4e00acca71250,citation,http://users.cecs.anu.edu.au/~hexm/papers/heng_tip.pdf,Robust Face Alignment Under Occlusion via Regional Predictive Power Estimation,2015 +194,China,LFPW,lfpw,22.4162632,114.2109318,Chinese University of Hong Kong,edu,24e099e77ae7bae3df2bebdc0ee4e00acca71250,citation,http://users.cecs.anu.edu.au/~hexm/papers/heng_tip.pdf,Robust Face Alignment Under Occlusion via Regional Predictive Power Estimation,2015 +195,United Kingdom,LFPW,lfpw,51.5247272,-0.03931035,Queen Mary University of London,edu,24e099e77ae7bae3df2bebdc0ee4e00acca71250,citation,http://users.cecs.anu.edu.au/~hexm/papers/heng_tip.pdf,Robust Face Alignment Under Occlusion via Regional Predictive Power Estimation,2015 +196,United States,LFPW,lfpw,42.7298459,-73.67950216,Rensselaer Polytechnic Institute,edu,1c1f957d85b59d23163583c421755869f248ceef,citation,https://arxiv.org/pdf/1709.08127.pdf,Robust Facial Landmark Detection Under Significant Head Poses and Occlusion,2015 +197,United States,LFPW,lfpw,42.7298459,-73.67950216,Rensselaer Polytechnic Institute,edu,c3d3d2229500c555c7a7150a8b126ef874cbee1c,citation,http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w25/papers/Wu_Shape_Augmented_Regression_ICCV_2015_paper.pdf,Shape Augmented Regression Method for Face Alignment,2015 +198,Australia,LFPW,lfpw,-33.8809651,151.20107299,University of Technology Sydney,edu,77875d6e4d8c7ed3baeb259fd5696e921f59d7ad,citation,https://arxiv.org/pdf/1803.04108.pdf,Style Aggregated Network for Facial Landmark Detection,2018 +199,China,LFPW,lfpw,40.00229045,116.32098908,Tsinghua University,edu,e8523c4ac9d7aa21f3eb4062e09f2a3bc1eedcf7,citation,https://arxiv.org/pdf/1701.07174.pdf,Toward End-to-End Face Recognition Through Alignment Learning,2017 +200,United States,LFPW,lfpw,40.4441619,-79.94272826,Carnegie Mellon University,edu,7cfbf90368553333b47731729e0e358479c25340,citation,http://www.andrew.cmu.edu/user/kseshadr/TPAMI_2016_Paper_Final_Submission.pdf,"Towards a Unified Framework for Pose, Expression, and Occlusion Tolerant Automatic Facial Alignment",2016 +201,Poland,LFPW,lfpw,52.22165395,21.00735776,Warsaw University of 
Technology,edu,e52272f92fa553687f1ac068605f1de929efafc2,citation,https://repo.pw.edu.pl/docstore/download/WUT8aeb20bbb6964b7da1cfefbf2e370139/1-s2.0-S0952197617301227-main.pdf,Using a Probabilistic Neural Network for lip-based biometric verification,2017 +202,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,2e3d081c8f0e10f138314c4d2c11064a981c1327,citation,https://arxiv.org/pdf/1603.06015.pdf,A Comprehensive Performance Evaluation of Deformable Face Tracking “In-the-Wild”,2017 +203,United Kingdom,LFPW,lfpw,50.7944026,-1.0971748,Cambridge University,edu,cc96eab1e55e771e417b758119ce5d7ef1722b43,citation,https://arxiv.org/pdf/1511.05049.pdf,An Empirical Study of Recent Face Alignment Methods,2015 +204,China,LFPW,lfpw,22.4162632,114.2109318,Chinese University of Hong Kong,edu,cc96eab1e55e771e417b758119ce5d7ef1722b43,citation,https://arxiv.org/pdf/1511.05049.pdf,An Empirical Study of Recent Face Alignment Methods,2015 +205,China,LFPW,lfpw,35.86166,104.195397,"Megvii Inc. (Face++), China",company,064b797aa1da2000640e437cacb97256444dee82,citation,https://arxiv.org/pdf/1511.04901.pdf,Coarse-to-fine Face Alignment with Multi-Scale Local Patch Regression,2015 +206,Germany,LFPW,lfpw,49.10184375,8.4331256,Karlsruhe Institute of Technology,edu,9b9ccd4954cf9dd605d49e9c3504224d06725ab7,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w13/papers/Schwarz_DriveAHead_-_A_CVPR_2017_paper.pdf,DriveAHead — A Large-Scale Driver Head Pose Dataset,2017 +207,China,LFPW,lfpw,32.0565957,118.77408833,Nanjing University,edu,91883dabc11245e393786d85941fb99a6248c1fb,citation,https://arxiv.org/pdf/1608.04188.pdf,Face Alignment In-the-Wild: A Survey,2017 +208,United Kingdom,LFPW,lfpw,51.521975,-0.130462,"Birkbeck College, London, UK",edu,38192a0f9261d9727b119e294a65f2e25f72d7e6,citation,https://arxiv.org/pdf/1410.1037.pdf,Facial feature point detection: A comprehensive survey,2018 +209,Australia,LFPW,lfpw,-33.8809651,151.20107299,University of Technology Sydney,edu,38192a0f9261d9727b119e294a65f2e25f72d7e6,citation,https://arxiv.org/pdf/1410.1037.pdf,Facial feature point detection: A comprehensive survey,2018 +210,China,LFPW,lfpw,34.1235825,108.83546,Xidian University,edu,38192a0f9261d9727b119e294a65f2e25f72d7e6,citation,https://arxiv.org/pdf/1410.1037.pdf,Facial feature point detection: A comprehensive survey,2018 +211,China,LFPW,lfpw,30.19331415,120.11930822,Zhejiang University,edu,bd8e2d27987be9e13af2aef378754f89ab20ce10,citation,http://bksy.zju.edu.cn/attachments/tlxjxj/2016-10/99999-1477633998-1097578.pdf,Facial feature points detecting based on Gaussian Mixture Models,2015 +212,Japan,LFPW,lfpw,35.2742655,137.01327841,Chubu University,edu,62f0d8446adee6a5e8102053a63a61af07ac4098,citation,http://www.vision.cs.chubu.ac.jp/MPRG/C_group/C072_yamashita2015.pdf,Facial point detection using convolutional neural network transferred from a heterogeneous task,2015 +213,Sweden,LFPW,lfpw,58.3978364,15.5760072,Linköping University,edu,ebd5df2b4105ba04cef4ca334fcb9bfd6ea0430c,citation,https://arxiv.org/pdf/1403.6888.pdf,Fast Localization of Facial Landmark Points,2014 +214,Croatia,LFPW,lfpw,45.801121,15.9708409,University of Zagreb,edu,ebd5df2b4105ba04cef4ca334fcb9bfd6ea0430c,citation,https://arxiv.org/pdf/1403.6888.pdf,Fast Localization of Facial Landmark Points,2014 +215,United States,LFPW,lfpw,29.736724,-95.3931825,Houston University,edu,5b2cfee6e81ef36507ebf3c305e84e9e0473575a,citation,https://arxiv.org/pdf/1704.02402.pdf,GoDP: Globally Optimized Dual Pathway deep network architecture 
for facial landmark localization in-the-wild,2018 +216,United States,LFPW,lfpw,43.07982815,-89.43066425,University of Wisconsin Madison,edu,fd615118fb290a8e3883e1f75390de8a6c68bfde,citation,https://pdfs.semanticscholar.org/fd61/5118fb290a8e3883e1f75390de8a6c68bfde.pdf,Joint Face Alignment with Non-parametric Shape Models,2012 +217,United Kingdom,LFPW,lfpw,51.49887085,-0.17560797,Imperial College London,edu,47471105d9ee2276e14ab4a3a4d66ef58612188f,citation,https://arxiv.org/pdf/1708.06023.pdf,Joint Multi-view Face Alignment in the Wild,2019 +218,United Kingdom,LFPW,lfpw,51.5247272,-0.03931035,Queen Mary University of London,edu,d511e903a882658c9f6f930d6dd183007f508eda,citation,https://www.computer.org/csdl/proceedings/fg/2013/5545/00/06553766.pdf,Privileged information-based conditional regression forest for facial feature detection,2013 +219,China,LFPW,lfpw,31.4854255,120.2739581,Jiangnan University,edu,2d072cd43de8d17ce3198fae4469c498f97c6277,citation,http://www.ee.surrey.ac.uk/CVSSP/Publications/papers/Feng-IEEE-SPL-2015.pdf,Random Cascaded-Regression Copse for Robust Facial Landmark Detection,2015 +220,United Kingdom,LFPW,lfpw,51.24303255,-0.59001382,University of Surrey,edu,2d072cd43de8d17ce3198fae4469c498f97c6277,citation,http://www.ee.surrey.ac.uk/CVSSP/Publications/papers/Feng-IEEE-SPL-2015.pdf,Random Cascaded-Regression Copse for Robust Facial Landmark Detection,2015 +221,Italy,LFPW,lfpw,46.0658836,11.1159894,University of Trento,edu,b48d3694a8342b6efc18c9c9124c62406e6bf3b3,citation,,Recurrent Convolutional Shape Regression,2018 +222,United States,LFPW,lfpw,33.9850469,-118.4694832,"Snapchat Research, Venice, CA",company,b48d3694a8342b6efc18c9c9124c62406e6bf3b3,citation,,Recurrent Convolutional Shape Regression,2018 +223,United States,LFPW,lfpw,34.13710185,-118.12527487,California Institute of Technology,edu,2724ba85ec4a66de18da33925e537f3902f21249,citation,,Robust Face Landmark Estimation under Occlusion,2013 +224,United States,LFPW,lfpw,47.6423318,-122.1369302,Microsoft,company,2724ba85ec4a66de18da33925e537f3902f21249,citation,,Robust Face Landmark Estimation under Occlusion,2013 +225,United States,LFPW,lfpw,40.4441619,-79.94272826,Carnegie Mellon University,edu,1035b073455165a31de875390977c8c09a672f2d,citation,https://pdfs.semanticscholar.org/1035/b073455165a31de875390977c8c09a672f2d.pdf,Robust Facial Landmark Localization Under Simultaneous Real-World Degradations,2015 +226,China,LFPW,lfpw,22.4162632,114.2109318,Chinese University of Hong Kong,edu,2f489bd9bfb61a7d7165a2f05c03377a00072477,citation,https://pdfs.semanticscholar.org/2f48/9bd9bfb61a7d7165a2f05c03377a00072477.pdf,Structured Semi-supervised Forest for Facial Landmarks Localization with Face Mask Reasoning,2014 +227,United Kingdom,LFPW,lfpw,51.5247272,-0.03931035,Queen Mary University of London,edu,2f489bd9bfb61a7d7165a2f05c03377a00072477,citation,https://pdfs.semanticscholar.org/2f48/9bd9bfb61a7d7165a2f05c03377a00072477.pdf,Structured Semi-supervised Forest for Facial Landmarks Localization with Face Mask Reasoning,2014 +228,United States,LFPW,lfpw,40.4441619,-79.94272826,Carnegie Mellon University,edu,fd4ac1da699885f71970588f84316589b7d8317b,citation,https://arxiv.org/pdf/1405.0601.pdf,Supervised Descent Method for Solving Nonlinear Least Squares Problems in Computer Vision,2014 +229,China,LFPW,lfpw,40.0044795,116.370238,Chinese Academy of 
Sciences,edu,e0162dea3746d58083dd1d061fb276015d875b2e,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/Shao_Unconstrained_Face_Alignment_CVPR_2017_paper.pdf,Unconstrained Face Alignment Without Face Detection,2017 +230,United Kingdom,LFPW,lfpw,51.7534538,-1.25400997,University of Oxford,edu,73c9cbbf3f9cea1bc7dce98fce429bf0616a1a8c,citation,https://arxiv.org/pdf/1705.02193.pdf,Unsupervised Learning of Object Landmarks by Factorized Spatial Embeddings,2017 diff --git a/site/datasets/verified/market_1501.csv b/site/datasets/verified/market_1501.csv index 8561b33f..404b7423 100644 --- a/site/datasets/verified/market_1501.csv +++ b/site/datasets/verified/market_1501.csv @@ -1,177 +1,114 @@ id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year 0,,Market 1501,market_1501,0.0,0.0,,,,main,,Scalable Person Re-identification: A Benchmark,2015 -1,China,Market 1501,market_1501,31.83907195,117.26420748,University of Science and Technology of China,edu,5b309f6d98c503efb679eda51bd898543fb746f9,citation,https://arxiv.org/pdf/1809.05864.pdf,In Defense of the Classification Loss for Person Re-Identification,2018 -2,United States,Market 1501,market_1501,42.3614256,-71.0812092,Microsoft Research Asia,company,5b309f6d98c503efb679eda51bd898543fb746f9,citation,https://arxiv.org/pdf/1809.05864.pdf,In Defense of the Classification Loss for Person Re-Identification,2018 -3,United States,Market 1501,market_1501,39.2899685,-76.62196103,University of Maryland,edu,fe3f8826f615cc5ada33b01777b9f9dc93e0023c,citation,https://arxiv.org/pdf/1901.07702.pdf,Exploring Uncertainty in Conditional Multi-Modal Retrieval Systems,2019 -4,China,Market 1501,market_1501,24.4399419,118.09301781,Xiamen University,edu,d95ce873ed42b7c7facaa4c1e9c72b57b4e279f6,citation,https://pdfs.semanticscholar.org/d95c/e873ed42b7c7facaa4c1e9c72b57b4e279f6.pdf,Generalizing a Person Retrieval Model Hetero- and Homogeneously,2018 -5,Australia,Market 1501,market_1501,-33.8809651,151.20107299,University of Technology Sydney,edu,d95ce873ed42b7c7facaa4c1e9c72b57b4e279f6,citation,https://pdfs.semanticscholar.org/d95c/e873ed42b7c7facaa4c1e9c72b57b4e279f6.pdf,Generalizing a Person Retrieval Model Hetero- and Homogeneously,2018 -6,Australia,Market 1501,market_1501,-35.2776999,149.118527,Australian National University,edu,d95ce873ed42b7c7facaa4c1e9c72b57b4e279f6,citation,https://pdfs.semanticscholar.org/d95c/e873ed42b7c7facaa4c1e9c72b57b4e279f6.pdf,Generalizing a Person Retrieval Model Hetero- and Homogeneously,2018 -7,China,Market 1501,market_1501,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,927ec8dde9eb0e3bc5bf0b1a0ae57f9cf745fd9c,citation,https://arxiv.org/pdf/1804.01438.pdf,Learning Discriminative Features with Multiple Granularities for Person Re-Identification,2018 -8,China,Market 1501,market_1501,31.83907195,117.26420748,University of Science and Technology of China,edu,04ca65f1454f1014ef5af5bfafb7aee576ee1be6,citation,https://arxiv.org/pdf/1812.08967.pdf,Densely Semantically Aligned Person Re-Identification,2018 -9,United States,Market 1501,market_1501,42.3614256,-71.0812092,Microsoft Research Asia,company,04ca65f1454f1014ef5af5bfafb7aee576ee1be6,citation,https://arxiv.org/pdf/1812.08967.pdf,Densely Semantically Aligned Person Re-Identification,2018 -10,China,Market 1501,market_1501,39.9601488,116.35193921,Beijing University of Posts and Telecommunications,edu,7daa2c0f76fd3bfc7feadf313d6ac7504d4ecd20,citation,https://arxiv.org/pdf/1803.09937.pdf,Dual Attention Matching Network for 
Context-Aware Feature Sequence Based Person Re-identification,2018 -11,Singapore,Market 1501,market_1501,1.3484104,103.68297965,Nanyang Technological University,edu,7daa2c0f76fd3bfc7feadf313d6ac7504d4ecd20,citation,https://arxiv.org/pdf/1803.09937.pdf,Dual Attention Matching Network for Context-Aware Feature Sequence Based Person Re-identification,2018 -12,China,Market 1501,market_1501,32.0565957,118.77408833,Nanjing University,edu,08b28a8f2699501d46d87956cbaa37255000daa3,citation,https://arxiv.org/pdf/1804.03864.pdf,MaskReID: A Mask Based Deep Ranking Neural Network for Person Re-identification,2018 -13,Australia,Market 1501,market_1501,-34.40505545,150.87834655,University of Wollongong,edu,08b28a8f2699501d46d87956cbaa37255000daa3,citation,https://arxiv.org/pdf/1804.03864.pdf,MaskReID: A Mask Based Deep Ranking Neural Network for Person Re-identification,2018 -14,United Kingdom,Market 1501,market_1501,51.5247272,-0.03931035,Queen Mary University of London,edu,baf5ab5e8972e9366951b7e66951e05e2a4b3e36,citation,https://arxiv.org/pdf/1802.08122.pdf,Harmonious Attention Network for Person Re-identification,2018 -15,United Kingdom,Market 1501,market_1501,52.3793131,-1.5604252,University of Warwick,edu,124d60fae338b1f87455d1fc4ede5fcfd806da1a,citation,https://arxiv.org/pdf/1807.01440.pdf,Multi-task Mid-level Feature Alignment Network for Unsupervised Cross-Dataset Person Re-Identification,2018 -16,Singapore,Market 1501,market_1501,1.3484104,103.68297965,Nanyang Technological University,edu,124d60fae338b1f87455d1fc4ede5fcfd806da1a,citation,https://arxiv.org/pdf/1807.01440.pdf,Multi-task Mid-level Feature Alignment Network for Unsupervised Cross-Dataset Person Re-Identification,2018 -17,Australia,Market 1501,market_1501,-35.0636071,147.3552234,Charles Sturt University,edu,124d60fae338b1f87455d1fc4ede5fcfd806da1a,citation,https://arxiv.org/pdf/1807.01440.pdf,Multi-task Mid-level Feature Alignment Network for Unsupervised Cross-Dataset Person Re-Identification,2018 -18,United States,Market 1501,market_1501,33.776033,-84.39884086,Georgia Institute of Technology,edu,45a44e61236f7c144d9ec11561e236b2960c7cf6,citation,https://pdfs.semanticscholar.org/4eb8/4fd65703fc92863f9f589e3a07e6c841f7c4.pdf,Multi-object Tracking with Neural Gating Using Bilinear LSTM,2018 -19,United States,Market 1501,market_1501,45.5198289,-122.67797964,Oregon State University,edu,45a44e61236f7c144d9ec11561e236b2960c7cf6,citation,https://pdfs.semanticscholar.org/4eb8/4fd65703fc92863f9f589e3a07e6c841f7c4.pdf,Multi-object Tracking with Neural Gating Using Bilinear LSTM,2018 -20,China,Market 1501,market_1501,34.1235825,108.83546,Xidian University,edu,55355b0317f6e0c5218887441de71f05da4b42f6,citation,https://arxiv.org/pdf/1811.12150.pdf,Parameter-Free Spatial Attention Network for Person Re-Identification,2018 -21,Germany,Market 1501,market_1501,49.2579566,7.04577417,Max Planck Institute for Informatics,edu,55355b0317f6e0c5218887441de71f05da4b42f6,citation,https://arxiv.org/pdf/1811.12150.pdf,Parameter-Free Spatial Attention Network for Person Re-Identification,2018 -22,China,Market 1501,market_1501,31.2284923,121.40211389,East China Normal University,edu,e1af55ad7bb26e5e1acde3ec6c5c43cffe884b04,citation,https://pdfs.semanticscholar.org/e1af/55ad7bb26e5e1acde3ec6c5c43cffe884b04.pdf,Person Re-identification by Mid-level Attribute and Part-based Identity Learning,2018 -23,Australia,Market 1501,market_1501,-35.2776999,149.118527,Australian National 
University,edu,c66350cbdee8c6873cc99807d342e932594aa0b9,citation,https://arxiv.org/pdf/1812.02162.pdf,Dissecting Person Re-identification from the Viewpoint of Viewpoint,2018 -24,Brazil,Market 1501,market_1501,-27.5953995,-48.6154218,University of Campinas,edu,b986a535e45751cef684a30631a74476e911a749,citation,https://arxiv.org/pdf/1807.05618.pdf,Improved Person Re-Identification Based on Saliency and Semantic Parsing with Deep Neural Network Models,2018 -25,South Korea,Market 1501,market_1501,37.26728,126.9841151,Seoul National University,edu,315df9b7dd354ae78ddf1049fb428b086eee632c,citation,https://arxiv.org/pdf/1804.07094.pdf,Part-Aligned Bilinear Representations for Person Re-identification,2018 -26,Germany,Market 1501,market_1501,48.7468939,9.0805141,Max Planck Institute for Intelligent Systems,edu,315df9b7dd354ae78ddf1049fb428b086eee632c,citation,https://arxiv.org/pdf/1804.07094.pdf,Part-Aligned Bilinear Representations for Person Re-identification,2018 -27,United States,Market 1501,market_1501,47.6423318,-122.1369302,Microsoft,company,315df9b7dd354ae78ddf1049fb428b086eee632c,citation,https://arxiv.org/pdf/1804.07094.pdf,Part-Aligned Bilinear Representations for Person Re-identification,2018 -28,Australia,Market 1501,market_1501,-33.8809651,151.20107299,University of Technology Sydney,edu,7f23a4bb0c777dd72cca7665a5f370ac7980217e,citation,https://arxiv.org/pdf/1703.07220.pdf,Improving Person Re-identification by Attribute and Identity Learning,2017 -29,United States,Market 1501,market_1501,40.1019523,-88.2271615,UIUC,edu,cc78e3f1e531342f639e4a1fc8107a7a778ae1cf,citation,https://arxiv.org/pdf/1811.10144.pdf,One Shot Domain Adaptation for Person Re-Identification,2018 -30,China,Market 1501,market_1501,22.053565,113.39913285,Jilin University,edu,4abf902cefca527f707e4f76dd4e14fcd5d47361,citation,https://arxiv.org/pdf/1811.11510.pdf,Identity Preserving Generative Adversarial Network for Cross-Domain Person Re-identification,2018 -31,China,Market 1501,market_1501,32.0565957,118.77408833,Nanjing University,edu,088e7b24bd1cf6e5922ae6c80d37439e05fadce9,citation,https://arxiv.org/pdf/1711.07155.pdf,Let Features Decide for Themselves: Feature Mask Network for Person Re-identification,2017 -32,China,Market 1501,market_1501,22.4162632,114.2109318,Chinese University of Hong Kong,edu,4f8e06ac894e9cc1eb1617a293e43448930c7d4f,citation,https://arxiv.org/pdf/1810.02936.pdf,FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification,2018 -33,China,Market 1501,market_1501,39.993008,116.329882,SenseTime,company,4f8e06ac894e9cc1eb1617a293e43448930c7d4f,citation,https://arxiv.org/pdf/1810.02936.pdf,FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification,2018 -34,United States,Market 1501,market_1501,39.3299013,-76.6205177,Johns Hopkins University,edu,4f8e06ac894e9cc1eb1617a293e43448930c7d4f,citation,https://arxiv.org/pdf/1810.02936.pdf,FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification,2018 -35,China,Market 1501,market_1501,31.83907195,117.26420748,University of Science and Technology of China,edu,4f8e06ac894e9cc1eb1617a293e43448930c7d4f,citation,https://arxiv.org/pdf/1810.02936.pdf,FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification,2018 -36,China,Market 1501,market_1501,30.19331415,120.11930822,Zhejiang University,edu,84984c7201a7e5bc8ef4c01f0a7cfbe08c2c523b,citation,https://arxiv.org/pdf/1804.06964.pdf,GNAS: A Greedy Neural Architecture Search Method for Multi-Attribute Learning,2018 -37,China,Market 
1501,market_1501,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,c753521ba6fb06c12369d6fff814bb704c682ef5,citation,https://pdfs.semanticscholar.org/c753/521ba6fb06c12369d6fff814bb704c682ef5.pdf,Mancs: A Multi-task Attentional Network with Curriculum Sampling for Person Re-Identification,2018 -38,China,Market 1501,market_1501,22.4162632,114.2109318,Chinese University of Hong Kong,edu,0a808a17f5c86413bd552a324ee6ba180a12f46d,citation,https://arxiv.org/pdf/1808.01571.pdf,Improving Deep Visual Representation for Person Re-identification by Global and Local Image-language Association,2018 -39,China,Market 1501,market_1501,39.993008,116.329882,SenseTime,company,0a808a17f5c86413bd552a324ee6ba180a12f46d,citation,https://arxiv.org/pdf/1808.01571.pdf,Improving Deep Visual Representation for Person Re-identification by Global and Local Image-language Association,2018 -40,China,Market 1501,market_1501,34.250803,108.983693,Xi’an Jiaotong University,edu,0a808a17f5c86413bd552a324ee6ba180a12f46d,citation,https://arxiv.org/pdf/1808.01571.pdf,Improving Deep Visual Representation for Person Re-identification by Global and Local Image-language Association,2018 -41,Germany,Market 1501,market_1501,48.7468939,9.0805141,"Max Planck Instutite for Intelligent Systems, Tüebingen",edu,9db841848aa96f60e765299de4cce7abe5ccb47d,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Tang_Multiple_People_Tracking_CVPR_2017_paper.pdf,Multiple People Tracking by Lifted Multicut and Person Re-identification,2017 -42,Germany,Market 1501,market_1501,49.2578657,7.0457956,"Max-Planck-Institut für Informatik, Saarbrücken, Germany",edu,9db841848aa96f60e765299de4cce7abe5ccb47d,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Tang_Multiple_People_Tracking_CVPR_2017_paper.pdf,Multiple People Tracking by Lifted Multicut and Person Re-identification,2017 -43,France,Market 1501,market_1501,48.8457981,2.3567236,Pierre and Marie Curie University,edu,231a12de5dedddf1184ae9caafbc4a954ce584c3,citation,https://pdfs.semanticscholar.org/231a/12de5dedddf1184ae9caafbc4a954ce584c3.pdf,Closed and Open World Multi-shot Person Re-identification. 
(Ré-identification de personnes à partir de multiples images dans le cadre de bases d'identités fermées et ouvertes),2017 -44,China,Market 1501,market_1501,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,07dead6b98379faac1cf0b2cb34a5db842ab9de9,citation,https://arxiv.org/pdf/1711.10658.pdf,Deep-Person: Learning Discriminative Deep Features for Person Re-Identification,2017 -45,Canada,Market 1501,market_1501,46.7817463,-71.2747424,Université Laval,edu,a743127b44397b7a017a65a7ad52d0d7ccb4db93,citation,https://arxiv.org/pdf/1804.10094.pdf,Domain Adaptation Through Synthesis for Unsupervised Person Re-identification,2018 -46,Australia,Market 1501,market_1501,-35.2776999,149.118527,Australian National University,edu,12d62f1360587fdecee728e6c509acc378f38dc9,citation,https://arxiv.org/pdf/1805.06118.pdf,Feature Affinity based Pseudo Labeling for Semi-supervised Person Re-identification,2018 -47,China,Market 1501,market_1501,32.20541,118.726956,Nanjing University of Information Science & Technology,edu,12d62f1360587fdecee728e6c509acc378f38dc9,citation,https://arxiv.org/pdf/1805.06118.pdf,Feature Affinity based Pseudo Labeling for Semi-supervised Person Re-identification,2018 -48,Australia,Market 1501,market_1501,-33.8809651,151.20107299,University of Technology Sydney,edu,12d62f1360587fdecee728e6c509acc378f38dc9,citation,https://arxiv.org/pdf/1805.06118.pdf,Feature Affinity based Pseudo Labeling for Semi-supervised Person Re-identification,2018 -49,China,Market 1501,market_1501,40.0044795,116.370238,Chinese Academy of Sciences,edu,14b3a7aa61c15fd9cab0a4d8bc2a205a89fb572e,citation,https://arxiv.org/pdf/1807.11206.pdf,Hard-Aware Point-to-Set Deep Metric for Person Re-identification,2018 -50,China,Market 1501,market_1501,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,14b3a7aa61c15fd9cab0a4d8bc2a205a89fb572e,citation,https://arxiv.org/pdf/1807.11206.pdf,Hard-Aware Point-to-Set Deep Metric for Person Re-identification,2018 -51,China,Market 1501,market_1501,22.304572,114.17976285,Hong Kong Polytechnic University,edu,fea0895326b663bf72be89151a751362db8ae881,citation,https://arxiv.org/pdf/1804.08866.pdf,Homocentric Hypersphere Feature Embedding for Person Re-identification,2018 -52,China,Market 1501,market_1501,22.4162632,114.2109318,Chinese University of Hong Kong,edu,0c769c19d894e0dbd6eb314781dc1db3c626df57,citation,https://arxiv.org/pdf/1604.01850.pdf,Joint Detection and Identification Feature Learning for Person Search,2017 -53,China,Market 1501,market_1501,39.993008,116.329882,SenseTime,company,0c769c19d894e0dbd6eb314781dc1db3c626df57,citation,https://arxiv.org/pdf/1604.01850.pdf,Joint Detection and Identification Feature Learning for Person Search,2017 -54,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,0c769c19d894e0dbd6eb314781dc1db3c626df57,citation,https://arxiv.org/pdf/1604.01850.pdf,Joint Detection and Identification Feature Learning for Person Search,2017 -55,China,Market 1501,market_1501,30.209484,120.220912,"Hikvision Digital Technology Co., Ltd.",company,ed3991046e6dfba0c5cebdbbe914cc3aa06d0235,citation,https://arxiv.org/pdf/1812.06576.pdf,Learning Incremental Triplet Margin for Person Re-identification,2019 -56,China,Market 1501,market_1501,24.4399419,118.09301781,Xiamen University,edu,e746447afc4898713a0bcf2bb560286eb4d20019,citation,https://arxiv.org/pdf/1811.02074.pdf,Leveraging Virtual and Real Person for Unsupervised Person Re-identification,2018 -57,United States,Market 
1501,market_1501,40.4441619,-79.94272826,Carnegie Mellon University,edu,76fb9e2963928bf8e940944d45c13d52db947702,citation,https://arxiv.org/pdf/1710.00478.pdf,Margin Sample Mining Loss: A Deep Learning Based Method for Person Re-identification,2017 -58,China,Market 1501,market_1501,30.19331415,120.11930822,Zhejiang University,edu,76fb9e2963928bf8e940944d45c13d52db947702,citation,https://arxiv.org/pdf/1710.00478.pdf,Margin Sample Mining Loss: A Deep Learning Based Method for Person Re-identification,2017 -59,Italy,Market 1501,market_1501,45.434532,12.326197,"DAIS, Università Ca’ Foscari, Venice, Italy",edu,bee609ea6e71aba9b449731242efdb136d556222,citation,https://arxiv.org/pdf/1706.06196.pdf,Multi-Target Tracking in Multiple Non-Overlapping Cameras using Constrained Dominant Sets,2017 -60,Italy,Market 1501,market_1501,45.4377672,12.321807,University Iuav of Venice,edu,bee609ea6e71aba9b449731242efdb136d556222,citation,https://arxiv.org/pdf/1706.06196.pdf,Multi-Target Tracking in Multiple Non-Overlapping Cameras using Constrained Dominant Sets,2017 -61,India,Market 1501,market_1501,13.0222347,77.56718325,Indian Institute of Science Bangalore,edu,317f5a56519df95884cce81cfba180ee3adaf5a5,citation,https://arxiv.org/pdf/1807.07295.pdf,Operator-In-The-Loop Deep Sequential Multi-camera Feature Fusion for Person Re-identification,2018 -62,Spain,Market 1501,market_1501,41.5007811,2.11143663,Universitat Autònoma de Barcelona,edu,388b03244e7cdf28c750d7f6d4b4eb64219c3e7a,citation,https://arxiv.org/pdf/1812.02937.pdf,Optimizing Speed/Accuracy Trade-Off for Person Re-identification via Knowledge Distillation,2018 -63,China,Market 1501,market_1501,39.10041,121.821932,Dalian University,edu,ae5983048e59a339c77fee89e9279a4a787ba985,citation,https://arxiv.org/pdf/1705.02145.pdf,Part-Based Deep Hashing for Large-Scale Person Re-Identification,2017 -64,Australia,Market 1501,market_1501,-33.8809651,151.20107299,University of Technology Sydney,edu,ae5983048e59a339c77fee89e9279a4a787ba985,citation,https://arxiv.org/pdf/1705.02145.pdf,Part-Based Deep Hashing for Large-Scale Person Re-Identification,2017 -65,United States,Market 1501,market_1501,29.58333105,-98.61944505,University of Texas at San Antonio,edu,ae5983048e59a339c77fee89e9279a4a787ba985,citation,https://arxiv.org/pdf/1705.02145.pdf,Part-Based Deep Hashing for Large-Scale Person Re-Identification,2017 -66,Germany,Market 1501,market_1501,49.10184375,8.4331256,Karlsruhe Institute of Technology,edu,9812542cae5a470ea601e7c3a871331694105093,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w17/papers/Schumann_Person_Re-Identification_by_CVPR_2017_paper.pdf,Person Re-identification by Deep Learning Attribute-Complementary Information,2017 -67,China,Market 1501,market_1501,34.250803,108.983693,Xi’an Jiaotong University,edu,e1dcc3946fa750da4bc05b1154b6321db163ad62,citation,http://gr.xjtu.edu.cn/c/document_library/get_file?folderId=1540809&name=DLFE-80365.pdf,Similarity Learning with Spatial Constraints for Person Re-identification,2016 -68,United States,Market 1501,market_1501,42.366183,-71.092455,Mitsubishi Electric Research Laboratories,company,bb4f83458976755e9310b241a689c8d21b481238,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w23/Jones_Improving_Face_Verification_ICCV_2017_paper.pdf,Improving Face Verification and Person Re-Identification Accuracy Using Hyperplane Similarity,2017 -69,United States,Market 1501,market_1501,42.3383668,-71.08793524,Northeastern 
University,edu,32dc3e04dea2306ec34ca3f39db27a2b0a49e0a1,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w21/Gou_moM_Mean_of_ICCV_2017_paper.pdf,moM: Mean of Moments Feature for Person Re-identification,2017 -70,United States,Market 1501,market_1501,42.3383668,-71.08793524,Northeastern University,edu,0deca8c53adcc13d8da72050d9a4b638da52264b,citation,https://pdfs.semanticscholar.org/0dec/a8c53adcc13d8da72050d9a4b638da52264b.pdf,"A Comprehensive Evaluation and Benchmark for Person Re-Identification: Features, Metrics, and Datasets",2016 -71,Australia,Market 1501,market_1501,-33.8809651,151.20107299,University of Technology Sydney,edu,193089d56758ab88391d846edd08d359b1f9a863,citation,https://arxiv.org/pdf/1611.05666.pdf,A Discriminatively Learned CNN Embedding for Person Reidentification,2017 -72,China,Market 1501,market_1501,31.821994,117.28059,"USTC, Hefei, China",edu,83c19722450e8f7dcb89dabb38265f19efafba27,citation,https://arxiv.org/pdf/1803.02983.pdf,A framework with updateable joint images re-ranking for Person Re-identification.,2018 -73,Singapore,Market 1501,market_1501,1.3484104,103.68297965,Nanyang Technological University,edu,6bb8a5f9e2ddf1bdcd42aa7212eb0499992c1e9e,citation,https://arxiv.org/pdf/1607.08381.pdf,A Siamese Long Short-Term Memory Architecture for Human Re-Identification,2016 -74,China,Market 1501,market_1501,40.00229045,116.32098908,Tsinghua University,edu,6bb8a5f9e2ddf1bdcd42aa7212eb0499992c1e9e,citation,https://arxiv.org/pdf/1607.08381.pdf,A Siamese Long Short-Term Memory Architecture for Human Re-Identification,2016 -75,Australia,Market 1501,market_1501,-33.88890695,151.18943366,University of Sydney,edu,6bb8a5f9e2ddf1bdcd42aa7212eb0499992c1e9e,citation,https://arxiv.org/pdf/1607.08381.pdf,A Siamese Long Short-Term Memory Architecture for Human Re-Identification,2016 -76,Germany,Market 1501,market_1501,49.4109266,8.6979529,Heidelberg University,edu,5fdb3533152f9862e3e4c2282cd5f1400af18956,citation,https://arxiv.org/pdf/1804.04694.pdf,A Variational U-Net for Conditional Appearance and Shape Generation,2018 -77,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,635efc8bddec1cf94b1ee4951e4d216331758422,citation,https://arxiv.org/pdf/1812.00914.pdf,Accelerating Large Scale Knowledge Distillation via Dynamic Importance Sampling,2018 -78,Canada,Market 1501,market_1501,53.5238572,-113.52282665,University of Alberta,edu,635efc8bddec1cf94b1ee4951e4d216331758422,citation,https://arxiv.org/pdf/1812.00914.pdf,Accelerating Large Scale Knowledge Distillation via Dynamic Importance Sampling,2018 -79,China,Market 1501,market_1501,39.9808333,116.34101249,Beihang University,edu,19be4580df2e76b70a39af6e749bf189e1ca3975,citation,https://arxiv.org/pdf/1803.10914.pdf,Adversarial Binary Coding for Efficient Person Re-identification,2018 -80,United Kingdom,Market 1501,market_1501,51.7534538,-1.25400997,University of Oxford,edu,47f4dec5f733e933c8b9a8fdcda9419741f2bf62,citation,https://arxiv.org/pdf/1901.10650.pdf,Adversarial Metric Attack for Person Re-identification,2019 -81,United States,Market 1501,market_1501,39.3299013,-76.6205177,Johns Hopkins University,edu,47f4dec5f733e933c8b9a8fdcda9419741f2bf62,citation,https://arxiv.org/pdf/1901.10650.pdf,Adversarial Metric Attack for Person Re-identification,2019 -82,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,eee4cc389ca85d23700cba9627fa11e5ee65d740,citation,https://arxiv.org/pdf/1807.10482.pdf,Adversarial Open-World Person Re-Identification,2018 
-83,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,7969cc315bbafcd38a637eb8cd5d45ba897be319,citation,https://arxiv.org/pdf/1604.07807.pdf,An enhanced deep feature representation for person re-identification,2016 -84,China,Market 1501,market_1501,22.3874201,114.2082222,Hong Kong Baptist University,edu,c0e9d06383442d89426808d723ca04586db91747,citation,https://pdfs.semanticscholar.org/c0e9/d06383442d89426808d723ca04586db91747.pdf,Cascaded SR-GAN for Scale-Adaptive Low Resolution Person Re-identification,2018 -85,China,Market 1501,market_1501,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,c0e9d06383442d89426808d723ca04586db91747,citation,https://pdfs.semanticscholar.org/c0e9/d06383442d89426808d723ca04586db91747.pdf,Cascaded SR-GAN for Scale-Adaptive Low Resolution Person Re-identification,2018 -86,Japan,Market 1501,market_1501,35.6924853,139.7582533,"National Institute of Informatics, Japan",edu,c0e9d06383442d89426808d723ca04586db91747,citation,https://pdfs.semanticscholar.org/c0e9/d06383442d89426808d723ca04586db91747.pdf,Cascaded SR-GAN for Scale-Adaptive Low Resolution Person Re-identification,2018 -87,China,Market 1501,market_1501,40.00229045,116.32098908,Tsinghua University,edu,5e1514de6d20d3b1d148d6925edc89a6c891ce47,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Lin_Consistent-Aware_Deep_Learning_CVPR_2017_paper.pdf,Consistent-Aware Deep Learning for Person Re-identification in a Camera Network,2017 -88,China,Market 1501,market_1501,40.0044795,116.370238,Chinese Academy of Sciences,edu,bff1e1ecf00c37ec91edc7c5c85c1390726c3687,citation,https://arxiv.org/pdf/1511.07545.pdf,Constrained Deep Metric Learning for Person Re-identification,2015 -89,China,Market 1501,market_1501,40.00229045,116.32098908,Tsinghua University,edu,6ce6da7a6b2d55fac604d986595ba6979580393b,citation,https://arxiv.org/pdf/1611.06026.pdf,Cross Domain Knowledge Transfer for Person Re-identification,2016 -90,China,Market 1501,market_1501,23.0502042,113.39880323,South China University of Technology,edu,c249f0aa1416c51bf82be5bb47cbeb8aac6dee35,citation,https://arxiv.org/pdf/1806.04533.pdf,Cross-Dataset Person Re-identification Using Similarity Preserved Generative Adversarial Networks,2018 -91,China,Market 1501,market_1501,40.00229045,116.32098908,Tsinghua University,edu,4f83ef534c164bd7fbd1e71fe6a3d09a30326b26,citation,https://arxiv.org/pdf/1810.10221.pdf,Cross-Resolution Person Re-identification with Deep Antithetical Learning,2018 -92,China,Market 1501,market_1501,28.16437,112.93251,Central South University,edu,a6bc69831dea3efc5804b8ab65cf5a06688ddae0,citation,https://arxiv.org/pdf/1801.01760.pdf,Crossing Generative Adversarial Networks for Cross-View Person Re-identification,2018 -93,Australia,Market 1501,market_1501,-27.49741805,153.01316956,University of Queensland,edu,a6bc69831dea3efc5804b8ab65cf5a06688ddae0,citation,https://arxiv.org/pdf/1801.01760.pdf,Crossing Generative Adversarial Networks for Cross-View Person Re-identification,2018 -94,Australia,Market 1501,market_1501,-33.91758275,151.23124025,University of New South Wales,edu,a6bc69831dea3efc5804b8ab65cf5a06688ddae0,citation,https://arxiv.org/pdf/1801.01760.pdf,Crossing Generative Adversarial Networks for Cross-View Person Re-identification,2018 -95,China,Market 1501,market_1501,39.98177,116.330086,National Laboratory of Pattern Recognition,edu,34b8e675d4651db45e484da34f3c415c60ef3ea2,citation,https://arxiv.org/pdf/1707.01220.pdf,DarkRank: Accelerating Deep Metric Learning via Cross 
Sample Similarities Transfer,2018 -96,China,Market 1501,market_1501,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,34b8e675d4651db45e484da34f3c415c60ef3ea2,citation,https://arxiv.org/pdf/1707.01220.pdf,DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer,2018 -97,Australia,Market 1501,market_1501,-27.49741805,153.01316956,University of Queensland,edu,d1ba33106567c880bf99daba2bd31fe88df4ecba,citation,https://arxiv.org/pdf/1706.03160.pdf,Deep Adaptive Feature Embedding with Local Sample Distributions for Person Re-identification,2018 -98,Australia,Market 1501,market_1501,-33.91758275,151.23124025,University of New South Wales,edu,d1ba33106567c880bf99daba2bd31fe88df4ecba,citation,https://arxiv.org/pdf/1706.03160.pdf,Deep Adaptive Feature Embedding with Local Sample Distributions for Person Re-identification,2018 -99,Australia,Market 1501,market_1501,-33.88890695,151.18943366,University of Sydney,edu,d1ba33106567c880bf99daba2bd31fe88df4ecba,citation,https://arxiv.org/pdf/1706.03160.pdf,Deep Adaptive Feature Embedding with Local Sample Distributions for Person Re-identification,2018 -100,China,Market 1501,market_1501,39.9922379,116.30393816,Peking University,edu,2788f382e4396290acfc8b21df45cc811586e66e,citation,https://arxiv.org/pdf/1605.03259.pdf,Deep Attributes Driven Multi-Camera Person Re-identification,2016 -101,China,Market 1501,market_1501,40.0044795,116.370238,Chinese Academy of Sciences,edu,2788f382e4396290acfc8b21df45cc811586e66e,citation,https://arxiv.org/pdf/1605.03259.pdf,Deep Attributes Driven Multi-Camera Person Re-identification,2016 -102,United States,Market 1501,market_1501,29.58333105,-98.61944505,University of Texas at San Antonio,edu,2788f382e4396290acfc8b21df45cc811586e66e,citation,https://arxiv.org/pdf/1605.03259.pdf,Deep Attributes Driven Multi-Camera Person Re-identification,2016 -103,United States,Market 1501,market_1501,40.4441619,-79.94272826,Carnegie Mellon University,edu,63e1ce7de0fdbce6e03d25b5001c670c30139aa8,citation,https://arxiv.org/pdf/1707.07791.pdf,Deep Feature Learning via Structured Graph Laplacian Embedding for Person Re-Identification,2018 -104,China,Market 1501,market_1501,34.250803,108.983693,Xi’an Jiaotong University,edu,63e1ce7de0fdbce6e03d25b5001c670c30139aa8,citation,https://arxiv.org/pdf/1707.07791.pdf,Deep Feature Learning via Structured Graph Laplacian Embedding for Person Re-Identification,2018 -105,United Kingdom,Market 1501,market_1501,51.5247272,-0.03931035,Queen Mary University of London,edu,e3e36ccd836458d51676789fb133b092d42dac16,citation,https://arxiv.org/pdf/1610.05047.pdf,Deep learning prototype domains for person re-identification,2017 -106,Australia,Market 1501,market_1501,-34.9189226,138.60423668,University of Adelaide,edu,63ac85ec1bff6009bb36f0b24ef189438836bc91,citation,https://arxiv.org/pdf/1606.01595.pdf,Deep linear discriminant analysis on fisher networks: A hybrid architecture for person re-identification,2017 -107,China,Market 1501,market_1501,40.0044795,116.370238,Chinese Academy of Sciences,edu,9a81f46fcf8c6c0efbe34649552b5056ce419a3d,citation,https://arxiv.org/pdf/1705.03332.pdf,Deep person re-identification with improved embedding and efficient training,2017 -108,China,Market 1501,market_1501,34.250803,108.983693,Xi’an Jiaotong University,edu,6562c40932ea734f46e5068555fbf3a185a345de,citation,https://arxiv.org/pdf/1707.00409.pdf,Deep Ranking Model by Large Adaptive Margin Learning for Person Re-identification,2018 -109,United Kingdom,Market 
1501,market_1501,51.5247272,-0.03931035,Queen Mary University of London,edu,35b9af6057801fb2f28881840c8427c9cf648757,citation,https://arxiv.org/pdf/1707.02785.pdf,Deep Reinforcement Learning Attention Selection For Person Re-Identification,2017 -110,China,Market 1501,market_1501,40.0044795,116.370238,Chinese Academy of Sciences,edu,8961677300a9ee30ca51e1a3cf9815b4a162265b,citation,https://arxiv.org/pdf/1707.00798.pdf,Deep Representation Learning with Part Loss for Person Re-Identification,2017 -111,China,Market 1501,market_1501,39.9922379,116.30393816,Peking University,edu,8961677300a9ee30ca51e1a3cf9815b4a162265b,citation,https://arxiv.org/pdf/1707.00798.pdf,Deep Representation Learning with Part Loss for Person Re-Identification,2017 -112,United States,Market 1501,market_1501,29.58333105,-98.61944505,University of Texas at San Antonio,edu,8961677300a9ee30ca51e1a3cf9815b4a162265b,citation,https://arxiv.org/pdf/1707.00798.pdf,Deep Representation Learning with Part Loss for Person Re-Identification,2017 -113,China,Market 1501,market_1501,34.250803,108.983693,Xi’an Jiaotong University,edu,123286df95d93600f4281c60a60c69121c6440c7,citation,https://arxiv.org/pdf/1710.05711.pdf,Deep Self-Paced Learning for Person Re-Identification,2018 -114,China,Market 1501,market_1501,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,d8949f4f4085b15978e20ed7c5c34a080dd637f2,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w17/papers/Chen_Deep_Spatial-Temporal_Fusion_CVPR_2017_paper.pdf,Deep Spatial-Temporal Fusion Network for Video-Based Person Re-identification,2017 -115,China,Market 1501,market_1501,39.9922379,116.30393816,Peking University,edu,31c0968fb5f587918f1c49bf7fa51453b3e89cf7,citation,https://arxiv.org/pdf/1611.05244.pdf,Deep Transfer Learning for Person Re-Identification,2018 -116,China,Market 1501,market_1501,30.19331415,120.11930822,Zhejiang University,edu,50bf4f77d8b66ec838ad59a869630eace7e0e4a7,citation,https://arxiv.org/pdf/1707.07256.pdf,Deeply-Learned Part-Aligned Representations for Person Re-identification,2017 -117,United States,Market 1501,market_1501,47.6423318,-122.1369302,Microsoft,company,50bf4f77d8b66ec838ad59a869630eace7e0e4a7,citation,https://arxiv.org/pdf/1707.07256.pdf,Deeply-Learned Part-Aligned Representations for Person Re-identification,2017 -118,China,Market 1501,market_1501,39.9601488,116.35193921,Beijing University of Posts and Telecommunications,edu,d497543834f23f72f4092252b613bf3adaefc606,citation,https://arxiv.org/pdf/1805.07698.pdf,Density-Adaptive Kernel based Re-Ranking for Person Re-Identification,2018 -119,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,19a0f34440c25323544b90d9d822a212bfed0eb5,citation,https://arxiv.org/pdf/1901.10100.pdf,Discovering Underlying Person Structure Pattern with Relative Local Distance for Person Re-identification,2019 -120,China,Market 1501,market_1501,34.250803,108.983693,Xi’an Jiaotong University,edu,7b2e0c87aece7ff1404ef2034d4c5674770301b2,citation,https://arxiv.org/pdf/1807.01455.pdf,Discriminative Feature Learning with Foreground Attention for Person Re-Identification,2018 -121,China,Market 1501,market_1501,31.2284923,121.40211389,East China Normal University,edu,0353fe24ecd237f4d9ae4dbc277a6a67a69ce8ed,citation,https://pdfs.semanticscholar.org/0353/fe24ecd237f4d9ae4dbc277a6a67a69ce8ed.pdf,Discriminative Feature Representation for Person Re-identification by Batch-contrastive Loss,2018 -122,United Kingdom,Market 1501,market_1501,55.94951105,-3.19534913,University of 
Edinburgh,edu,68621721705e3115355268450b4b447362e455c6,citation,https://arxiv.org/pdf/1812.02605.pdf,Disjoint Label Space Transfer Learning with Common Factorised Space,2019 -123,China,Market 1501,market_1501,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,d950af49c44bc5d9f4a5cc1634e606004790b1e5,citation,https://arxiv.org/pdf/1708.04169.pdf,Divide and Fuse: A Re-ranking Approach for Person Re-identification,2017 -124,United Arab Emirates,Market 1501,market_1501,24.453884,54.3773438,New York University Abu Dhabi,edu,a94b832facb57ea37b18927b13d2dd4c5fa3a9ea,citation,https://arxiv.org/pdf/1803.09733.pdf,Domain transfer convolutional attribute embedding,2018 -125,China,Market 1501,market_1501,39.9106327,116.3356321,Chinese Academy of Science,edu,7f8d4494aba2a2b11a88bf7de4b8879b047dd69b,citation,http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhou_Easy_Identification_From_CVPR_2018_paper.pdf,Easy Identification from Better Constraints: Multi-shot Person Re-identification from Reference Constraints,2018 -126,United States,Market 1501,market_1501,42.0551164,-87.67581113,Northwestern University,edu,7f8d4494aba2a2b11a88bf7de4b8879b047dd69b,citation,http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhou_Easy_Identification_From_CVPR_2018_paper.pdf,Easy Identification from Better Constraints: Multi-shot Person Re-identification from Reference Constraints,2018 -127,China,Market 1501,market_1501,40.0044795,116.370238,Chinese Academy of Sciences,edu,ca1db9dc493a045e3fadf8d8209eaa4311bbdc70,citation,https://arxiv.org/pdf/1709.09304.pdf,Effective Image Retrieval via Multilinear Multi-index Fusion,2017 -128,United States,Market 1501,market_1501,29.58333105,-98.61944505,University of Texas at San Antonio,edu,ca1db9dc493a045e3fadf8d8209eaa4311bbdc70,citation,https://arxiv.org/pdf/1709.09304.pdf,Effective Image Retrieval via Multilinear Multi-index Fusion,2017 -129,United States,Market 1501,market_1501,42.0551164,-87.67581113,Northwestern University,edu,00bf7bcf31ee71f5f325ca5307883157ba3d580f,citation,http://openaccess.thecvf.com/content_ICCV_2017/papers/Zhou_Efficient_Online_Local_ICCV_2017_paper.pdf,Efficient Online Local Metric Adaptation via Negative Samples for Person Re-identification,2017 -130,China,Market 1501,market_1501,40.0044795,116.370238,Chinese Academy of Sciences,edu,febff0f6faa8dde77848845e4b3e6f6c91180d33,citation,https://arxiv.org/pdf/1611.00137.pdf,Embedding Deep Metric for Person Re-identication A Study Against Large Variations,2016 -131,China,Market 1501,market_1501,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,febff0f6faa8dde77848845e4b3e6f6c91180d33,citation,https://arxiv.org/pdf/1611.00137.pdf,Embedding Deep Metric for Person Re-identication A Study Against Large Variations,2016 -132,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,febff0f6faa8dde77848845e4b3e6f6c91180d33,citation,https://arxiv.org/pdf/1611.00137.pdf,Embedding Deep Metric for Person Re-identication A Study Against Large Variations,2016 -133,China,Market 1501,market_1501,31.846918,117.29053367,Hefei University of Technology,edu,fd0e1fecf7e72318a4c53463fd5650720df40281,citation,https://arxiv.org/pdf/1606.04404.pdf,End-to-End Comparative Attention Networks for Person Re-Identification,2017 -134,China,Market 1501,market_1501,39.9041999,116.4073963,"Qihoo 360 AI Institute, Beijing, China",edu,fd0e1fecf7e72318a4c53463fd5650720df40281,citation,https://arxiv.org/pdf/1606.04404.pdf,End-to-End Comparative Attention Networks for Person 
Re-Identification,2017 -135,Singapore,Market 1501,market_1501,1.2966426,103.7763939,Singapore / National University of Singapore,edu,fd0e1fecf7e72318a4c53463fd5650720df40281,citation,https://arxiv.org/pdf/1606.04404.pdf,End-to-End Comparative Attention Networks for Person Re-Identification,2017 -136,China,Market 1501,market_1501,31.970907,118.8128989,PLA Army Engineering University,edu,c8ac121e9c4eb9964be9c5713f22a95c1c3b57e9,citation,https://arxiv.org/pdf/1901.05798.pdf,Ensemble Feature for Person Re-Identification,2019 -137,Spain,Market 1501,market_1501,41.5008957,2.111553,Autonomous University of Barcelona,edu,fe54a5a10288648f3bd0a71b053cdb896716b552,citation,https://arxiv.org/pdf/1804.04419.pdf,"Exploiting feature representations through similarity learning, post-ranking and ranking aggregation for person re-identification",2018 -138,Spain,Market 1501,market_1501,41.40657415,2.1945341,Universitat Oberta de Catalunya,edu,fe54a5a10288648f3bd0a71b053cdb896716b552,citation,https://arxiv.org/pdf/1804.04419.pdf,"Exploiting feature representations through similarity learning, post-ranking and ranking aggregation for person re-identification",2018 -139,Spain,Market 1501,market_1501,41.3868913,2.16352385,University of Barcelona,edu,fe54a5a10288648f3bd0a71b053cdb896716b552,citation,https://arxiv.org/pdf/1804.04419.pdf,"Exploiting feature representations through similarity learning, post-ranking and ranking aggregation for person re-identification",2018 -140,United States,Market 1501,market_1501,33.2416008,-111.8839083,Intel,company,6a9c3011b5092daa1d0cacda23f20ca4ae74b902,citation,https://arxiv.org/pdf/1812.02465.pdf,Fast and Accurate Person Re-Identification with RMNet.,2018 -141,China,Market 1501,market_1501,39.9808333,116.34101249,Beihang University,edu,91cc3981c304227e13ae151a43fbb124419bc0ce,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Chen_Fast_Person_Re-Identification_CVPR_2017_paper.pdf,Fast Person Re-identification via Cross-Camera Semantic Binary Transformation,2017 -142,United Kingdom,Market 1501,market_1501,52.6221571,1.2409136,University of East Anglia,edu,91cc3981c304227e13ae151a43fbb124419bc0ce,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Chen_Fast_Person_Re-Identification_CVPR_2017_paper.pdf,Fast Person Re-identification via Cross-Camera Semantic Binary Transformation,2017 -143,Singapore,Market 1501,market_1501,1.3484104,103.68297965,Nanyang Technological University,edu,6123e52c1a560c88817d8720e05fbff8565271fb,citation,https://arxiv.org/pdf/1607.08378.pdf,Gated Siamese Convolutional Neural Network Architecture for Human Re-Identification,2016 -144,United States,Market 1501,market_1501,38.5336349,-121.79077264,"University of California, Davis",edu,79c959833ff49f860e20b6654dbf4d6acdee0230,citation,https://arxiv.org/pdf/1811.02545.pdf,Hide-and-Seek: A Data Augmentation Technique for Weakly-Supervised Localization and Beyond,2018 -145,China,Market 1501,market_1501,30.19331415,120.11930822,Zhejiang University,edu,79c959833ff49f860e20b6654dbf4d6acdee0230,citation,https://arxiv.org/pdf/1811.02545.pdf,Hide-and-Seek: A Data Augmentation Technique for Weakly-Supervised Localization and Beyond,2018 -146,Taiwan,Market 1501,market_1501,25.0410728,121.6147562,Institute of Information Science,edu,3cbb4cf942ee95d14505c0f83a48ba224abdd00b,citation,https://arxiv.org/pdf/1712.06820.pdf,Hierarchical Cross Network for Person Re-identification,2017 -147,Japan,Market 1501,market_1501,33.8941968,130.8394083,Kyushu Institute of 
Technology,edu,7da961cb039b1a01cad9b78d93bdfe2a69ed3ccf,citation,https://arxiv.org/pdf/1706.04318.pdf,Hierarchical Gaussian Descriptors with Application to Person Re-Identification,2017 -148,Japan,Market 1501,market_1501,33.59914655,130.22359848,Kyushu University,edu,7da961cb039b1a01cad9b78d93bdfe2a69ed3ccf,citation,https://arxiv.org/pdf/1706.04318.pdf,Hierarchical Gaussian Descriptors with Application to Person Re-Identification,2017 -149,Japan,Market 1501,market_1501,35.9020448,139.93622009,University of Tokyo,edu,7da961cb039b1a01cad9b78d93bdfe2a69ed3ccf,citation,https://arxiv.org/pdf/1706.04318.pdf,Hierarchical Gaussian Descriptors with Application to Person Re-Identification,2017 -150,United States,Market 1501,market_1501,42.3504253,-71.10056114,Boston University,edu,7c25ed788da1f5f61d8d1da23dd319dfb4e5ac2d,citation,https://arxiv.org/pdf/1612.01345.pdf,Human-In-The-Loop Person Re-Identification,2016 -151,United Kingdom,Market 1501,market_1501,51.5247272,-0.03931035,Queen Mary University of London,edu,7c25ed788da1f5f61d8d1da23dd319dfb4e5ac2d,citation,https://arxiv.org/pdf/1612.01345.pdf,Human-In-The-Loop Person Re-Identification,2016 -152,United Kingdom,Market 1501,market_1501,55.378051,-3.435973,"Vision Semantics Ltd, UK",edu,7c25ed788da1f5f61d8d1da23dd319dfb4e5ac2d,citation,https://arxiv.org/pdf/1612.01345.pdf,Human-In-The-Loop Person Re-Identification,2016 -153,Australia,Market 1501,market_1501,-37.9062737,145.1319449,"CSIRO, Australia",edu,53492cb14b33a26b10c91102daa2d5a2a3ed069d,citation,https://arxiv.org/pdf/1806.07592.pdf,Improving Online Multiple Object tracking with Deep Metric Learning,2018 -154,Germany,Market 1501,market_1501,50.7791703,6.06728733,RWTH Aachen University,edu,a3d11e98794896849ab2304a42bf83e2979e5fb5,citation,https://arxiv.org/pdf/1703.07737.pdf,In Defense of the Triplet Loss for Person Re-Identification,2017 -155,China,Market 1501,market_1501,34.250803,108.983693,Xi’an Jiaotong University,edu,cb8567f074573a0d66d50e75b5a91df283ccd503,citation,https://arxiv.org/pdf/1708.05512.pdf,Large Margin Learning in Set-to-Set Similarity Comparison for Person Reidentification,2018 -156,United Kingdom,Market 1501,market_1501,51.5247272,-0.03931035,Queen Mary University of London,edu,207e0ac5301a3c79af862951b70632ed650f74f7,citation,https://arxiv.org/pdf/1603.02139.pdf,Learning a Discriminative Null Space for Person Re-identification,2016 -157,China,Market 1501,market_1501,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,34cf90fcbf83025666c5c86ec30ac58b632b27b0,citation,https://arxiv.org/pdf/1710.06555.pdf,Learning Deep Context-Aware Features over Body and Latent Parts for Person Re-identification,2017 -158,United States,Market 1501,market_1501,40.007581,-105.2659417,University of Colorado,edu,ad3be20fe0106d80c567def71fef01146564df4b,citation,https://arxiv.org/pdf/1802.05312.pdf,Learning Deep Disentangled Embeddings With the F-Statistic Loss,2018 -159,Russia,Market 1501,market_1501,55.6846566,37.3407539,"Skolkovo Institute of Science and Technology, Skolkovo, Moscow",edu,218603147709344d4ff66625d83603deee2854bf,citation,https://arxiv.org/pdf/1611.00822.pdf,Learning Deep Embeddings with Histogram Loss,2016 -160,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,489decd84645b77d31001d17a66abb92bb96c731,citation,https://arxiv.org/pdf/1803.11333.pdf,Learning View-Specific Deep Networks for Person Re-Identification,2018 -161,Norway,Market 1501,market_1501,63.419499,10.4020771,Norwegian University of Science and 
Technology,edu,2102915d0c51cfda4d85133bd593ecb9508fa4bb,citation,https://arxiv.org/pdf/1701.03153.pdf,Looking Beyond Appearances: Synthetic Training Data for Deep CNNs in Re-identification,2018 -162,Italy,Market 1501,market_1501,41.9037626,12.5144384,Sapienza University of Rome,edu,2102915d0c51cfda4d85133bd593ecb9508fa4bb,citation,https://arxiv.org/pdf/1701.03153.pdf,Looking Beyond Appearances: Synthetic Training Data for Deep CNNs in Re-identification,2018 -163,Italy,Market 1501,market_1501,45.437398,11.003376,University of Verona,edu,2102915d0c51cfda4d85133bd593ecb9508fa4bb,citation,https://arxiv.org/pdf/1701.03153.pdf,Looking Beyond Appearances: Synthetic Training Data for Deep CNNs in Re-identification,2018 -164,China,Market 1501,market_1501,40.00229045,116.32098908,Tsinghua University,edu,c0387e788a52f10bf35d4d50659cfa515d89fbec,citation,https://pdfs.semanticscholar.org/c038/7e788a52f10bf35d4d50659cfa515d89fbec.pdf,MARS: A Video Benchmark for Large-Scale Person Re-Identification,2016 -165,China,Market 1501,market_1501,40.00229045,116.32098908,Tsinghua University,edu,1e83e2abcb258cd62b160e3f31a490a6bc042e83,citation,https://arxiv.org/pdf/1704.02492.pdf,Metric Learning in Codebook Generation of Bag-of-Words for Person Re-identification,2017 -166,China,Market 1501,market_1501,31.8405068,117.2638057,Hefei University,edu,7c9d8593cdf2f8ba9f27906b2b5827b145631a0b,citation,https://arxiv.org/pdf/1810.08534.pdf,MsCGAN: Multi-scale Conditional Generative Adversarial Networks for Person Image Generation,2018 -167,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,1565bf91f8fdfe5f5168a5050b1418debc662151,citation,https://arxiv.org/pdf/1711.03368.pdf,One-pass Person Re-identification by Sketch Online Discriminant Analysis,2017 -168,Australia,Market 1501,market_1501,-33.8809651,151.20107299,University of Technology Sydney,edu,592e555ebe4bd2d821230e7074d7e9626af716b0,citation,https://arxiv.org/pdf/1809.02681.pdf,Open Set Adversarial Examples,2018 -169,China,Market 1501,market_1501,40.0044795,116.370238,Chinese Academy of Sciences,edu,fcaa88dcb1a440ef09c4e5d724ed209bfc5d3367,citation,https://arxiv.org/pdf/1811.09928.pdf,PCGAN: Partition-Controlled Human Image Generation,2019 -170,China,Market 1501,market_1501,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,fcaa88dcb1a440ef09c4e5d724ed209bfc5d3367,citation,https://arxiv.org/pdf/1811.09928.pdf,PCGAN: Partition-Controlled Human Image Generation,2019 -171,China,Market 1501,market_1501,22.4162632,114.2109318,Chinese University of Hong Kong,edu,2fad06ed34169a5b1f736112364c58140577a6b4,citation,https://pdfs.semanticscholar.org/2fad/06ed34169a5b1f736112364c58140577a6b4.pdf,Pedestrian Color Naming via Convolutional Neural Network,2016 -172,China,Market 1501,market_1501,22.4162632,114.2109318,Chinese University of Hong Kong,edu,25bb4212af72d64ec20cac533f58f7af1472e057,citation,https://arxiv.org/pdf/1703.08837.pdf,Person Re-Identification by Camera Correlation Aware Feature Augmentation,2018 -173,China,Market 1501,market_1501,28.2290209,112.99483204,"National University of Defense Technology, China",mil,25bb4212af72d64ec20cac533f58f7af1472e057,citation,https://arxiv.org/pdf/1703.08837.pdf,Person Re-Identification by Camera Correlation Aware Feature Augmentation,2018 -174,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,25bb4212af72d64ec20cac533f58f7af1472e057,citation,https://arxiv.org/pdf/1703.08837.pdf,Person Re-Identification by Camera Correlation Aware Feature 
Augmentation,2018 -175,United Kingdom,Market 1501,market_1501,51.5247272,-0.03931035,Queen Mary University of London,edu,744cc8c69255cbe9d992315e456b9efb06f42e20,citation,https://arxiv.org/pdf/1705.04724.pdf,Person Re-Identification by Deep Joint Learning of Multi-Loss Classification,2017 +1,Germany,Market 1501,market_1501,48.7468939,9.0805141,"Max Planck Institute for Intelligent Systems, Tübingen",edu,9db841848aa96f60e765299de4cce7abe5ccb47d,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Tang_Multiple_People_Tracking_CVPR_2017_paper.pdf,Multiple People Tracking by Lifted Multicut and Person Re-identification,2017 +2,Germany,Market 1501,market_1501,49.2578657,7.0457956,"Max-Planck-Institut für Informatik, Saarbrücken, Germany",edu,9db841848aa96f60e765299de4cce7abe5ccb47d,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Tang_Multiple_People_Tracking_CVPR_2017_paper.pdf,Multiple People Tracking by Lifted Multicut and Person Re-identification,2017 +3,France,Market 1501,market_1501,48.8457981,2.3567236,Pierre and Marie Curie University,edu,231a12de5dedddf1184ae9caafbc4a954ce584c3,citation,https://pdfs.semanticscholar.org/231a/12de5dedddf1184ae9caafbc4a954ce584c3.pdf,Closed and Open World Multi-shot Person Re-identification. (Ré-identification de personnes à partir de multiples images dans le cadre de bases d'identités fermées et ouvertes),2017 +4,China,Market 1501,market_1501,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,07dead6b98379faac1cf0b2cb34a5db842ab9de9,citation,https://arxiv.org/pdf/1711.10658.pdf,Deep-Person: Learning Discriminative Deep Features for Person Re-Identification,2017 +5,China,Market 1501,market_1501,34.250803,108.983693,Xi’an Jiaotong University,edu,e1dcc3946fa750da4bc05b1154b6321db163ad62,citation,http://gr.xjtu.edu.cn/c/document_library/get_file?folderId=1540809&name=DLFE-80365.pdf,Similarity Learning with Spatial Constraints for Person Re-identification,2016 +6,United States,Market 1501,market_1501,42.366183,-71.092455,Mitsubishi Electric Research Laboratories,company,bb4f83458976755e9310b241a689c8d21b481238,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w23/Jones_Improving_Face_Verification_ICCV_2017_paper.pdf,Improving Face Verification and Person Re-Identification Accuracy Using Hyperplane Similarity,2017 +7,United States,Market 1501,market_1501,42.3383668,-71.08793524,Northeastern University,edu,32dc3e04dea2306ec34ca3f39db27a2b0a49e0a1,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w21/Gou_moM_Mean_of_ICCV_2017_paper.pdf,moM: Mean of Moments Feature for Person Re-identification,2017 +8,United States,Market 1501,market_1501,42.3383668,-71.08793524,Northeastern University,edu,0deca8c53adcc13d8da72050d9a4b638da52264b,citation,https://pdfs.semanticscholar.org/0dec/a8c53adcc13d8da72050d9a4b638da52264b.pdf,"A Comprehensive Evaluation and Benchmark for Person Re-Identification: Features, Metrics, and Datasets",2016 +9,Australia,Market 1501,market_1501,-33.8809651,151.20107299,University of Technology Sydney,edu,193089d56758ab88391d846edd08d359b1f9a863,citation,https://arxiv.org/pdf/1611.05666.pdf,A Discriminatively Learned CNN Embedding for Person Reidentification,2017 +10,China,Market 1501,market_1501,31.821994,117.28059,"USTC, Hefei, China",edu,83c19722450e8f7dcb89dabb38265f19efafba27,citation,https://arxiv.org/pdf/1803.02983.pdf,A framework with updateable joint images re-ranking for Person Re-identification.,2018 +11,Singapore,Market 
1501,market_1501,1.3484104,103.68297965,Nanyang Technological University,edu,6bb8a5f9e2ddf1bdcd42aa7212eb0499992c1e9e,citation,https://arxiv.org/pdf/1607.08381.pdf,A Siamese Long Short-Term Memory Architecture for Human Re-Identification,2016 +12,China,Market 1501,market_1501,40.00229045,116.32098908,Tsinghua University,edu,6bb8a5f9e2ddf1bdcd42aa7212eb0499992c1e9e,citation,https://arxiv.org/pdf/1607.08381.pdf,A Siamese Long Short-Term Memory Architecture for Human Re-Identification,2016 +13,Australia,Market 1501,market_1501,-33.88890695,151.18943366,University of Sydney,edu,6bb8a5f9e2ddf1bdcd42aa7212eb0499992c1e9e,citation,https://arxiv.org/pdf/1607.08381.pdf,A Siamese Long Short-Term Memory Architecture for Human Re-Identification,2016 +14,Germany,Market 1501,market_1501,49.4109266,8.6979529,Heidelberg University,edu,5fdb3533152f9862e3e4c2282cd5f1400af18956,citation,https://arxiv.org/pdf/1804.04694.pdf,A Variational U-Net for Conditional Appearance and Shape Generation,2018 +15,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,635efc8bddec1cf94b1ee4951e4d216331758422,citation,https://arxiv.org/pdf/1812.00914.pdf,Accelerating Large Scale Knowledge Distillation via Dynamic Importance Sampling,2018 +16,Canada,Market 1501,market_1501,53.5238572,-113.52282665,University of Alberta,edu,635efc8bddec1cf94b1ee4951e4d216331758422,citation,https://arxiv.org/pdf/1812.00914.pdf,Accelerating Large Scale Knowledge Distillation via Dynamic Importance Sampling,2018 +17,China,Market 1501,market_1501,39.9808333,116.34101249,Beihang University,edu,19be4580df2e76b70a39af6e749bf189e1ca3975,citation,https://arxiv.org/pdf/1803.10914.pdf,Adversarial Binary Coding for Efficient Person Re-identification,2018 +18,United Kingdom,Market 1501,market_1501,51.7534538,-1.25400997,University of Oxford,edu,47f4dec5f733e933c8b9a8fdcda9419741f2bf62,citation,https://arxiv.org/pdf/1901.10650.pdf,Adversarial Metric Attack for Person Re-identification,2019 +19,United States,Market 1501,market_1501,39.3299013,-76.6205177,Johns Hopkins University,edu,47f4dec5f733e933c8b9a8fdcda9419741f2bf62,citation,https://arxiv.org/pdf/1901.10650.pdf,Adversarial Metric Attack for Person Re-identification,2019 +20,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,eee4cc389ca85d23700cba9627fa11e5ee65d740,citation,https://arxiv.org/pdf/1807.10482.pdf,Adversarial Open-World Person Re-Identification,2018 +21,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,7969cc315bbafcd38a637eb8cd5d45ba897be319,citation,https://arxiv.org/pdf/1604.07807.pdf,An enhanced deep feature representation for person re-identification,2016 +22,China,Market 1501,market_1501,22.3874201,114.2082222,Hong Kong Baptist University,edu,c0e9d06383442d89426808d723ca04586db91747,citation,https://pdfs.semanticscholar.org/c0e9/d06383442d89426808d723ca04586db91747.pdf,Cascaded SR-GAN for Scale-Adaptive Low Resolution Person Re-identification,2018 +23,China,Market 1501,market_1501,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,c0e9d06383442d89426808d723ca04586db91747,citation,https://pdfs.semanticscholar.org/c0e9/d06383442d89426808d723ca04586db91747.pdf,Cascaded SR-GAN for Scale-Adaptive Low Resolution Person Re-identification,2018 +24,Japan,Market 1501,market_1501,35.6924853,139.7582533,"National Institute of Informatics, Japan",edu,c0e9d06383442d89426808d723ca04586db91747,citation,https://pdfs.semanticscholar.org/c0e9/d06383442d89426808d723ca04586db91747.pdf,Cascaded 
SR-GAN for Scale-Adaptive Low Resolution Person Re-identification,2018 +25,China,Market 1501,market_1501,40.00229045,116.32098908,Tsinghua University,edu,5e1514de6d20d3b1d148d6925edc89a6c891ce47,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Lin_Consistent-Aware_Deep_Learning_CVPR_2017_paper.pdf,Consistent-Aware Deep Learning for Person Re-identification in a Camera Network,2017 +26,China,Market 1501,market_1501,40.0044795,116.370238,Chinese Academy of Sciences,edu,bff1e1ecf00c37ec91edc7c5c85c1390726c3687,citation,https://arxiv.org/pdf/1511.07545.pdf,Constrained Deep Metric Learning for Person Re-identification,2015 +27,China,Market 1501,market_1501,40.00229045,116.32098908,Tsinghua University,edu,6ce6da7a6b2d55fac604d986595ba6979580393b,citation,https://arxiv.org/pdf/1611.06026.pdf,Cross Domain Knowledge Transfer for Person Re-identification,2016 +28,China,Market 1501,market_1501,23.0502042,113.39880323,South China University of Technology,edu,c249f0aa1416c51bf82be5bb47cbeb8aac6dee35,citation,https://arxiv.org/pdf/1806.04533.pdf,Cross-Dataset Person Re-identification Using Similarity Preserved Generative Adversarial Networks,2018 +29,China,Market 1501,market_1501,40.00229045,116.32098908,Tsinghua University,edu,4f83ef534c164bd7fbd1e71fe6a3d09a30326b26,citation,https://arxiv.org/pdf/1810.10221.pdf,Cross-Resolution Person Re-identification with Deep Antithetical Learning,2018 +30,China,Market 1501,market_1501,28.16437,112.93251,Central South University,edu,a6bc69831dea3efc5804b8ab65cf5a06688ddae0,citation,https://arxiv.org/pdf/1801.01760.pdf,Crossing Generative Adversarial Networks for Cross-View Person Re-identification,2018 +31,Australia,Market 1501,market_1501,-27.49741805,153.01316956,University of Queensland,edu,a6bc69831dea3efc5804b8ab65cf5a06688ddae0,citation,https://arxiv.org/pdf/1801.01760.pdf,Crossing Generative Adversarial Networks for Cross-View Person Re-identification,2018 +32,Australia,Market 1501,market_1501,-33.91758275,151.23124025,University of New South Wales,edu,a6bc69831dea3efc5804b8ab65cf5a06688ddae0,citation,https://arxiv.org/pdf/1801.01760.pdf,Crossing Generative Adversarial Networks for Cross-View Person Re-identification,2018 +33,China,Market 1501,market_1501,39.98177,116.330086,National Laboratory of Pattern Recognition,edu,34b8e675d4651db45e484da34f3c415c60ef3ea2,citation,https://arxiv.org/pdf/1707.01220.pdf,DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer,2018 +34,China,Market 1501,market_1501,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,34b8e675d4651db45e484da34f3c415c60ef3ea2,citation,https://arxiv.org/pdf/1707.01220.pdf,DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer,2018 +35,Australia,Market 1501,market_1501,-27.49741805,153.01316956,University of Queensland,edu,d1ba33106567c880bf99daba2bd31fe88df4ecba,citation,https://arxiv.org/pdf/1706.03160.pdf,Deep Adaptive Feature Embedding with Local Sample Distributions for Person Re-identification,2018 +36,Australia,Market 1501,market_1501,-33.91758275,151.23124025,University of New South Wales,edu,d1ba33106567c880bf99daba2bd31fe88df4ecba,citation,https://arxiv.org/pdf/1706.03160.pdf,Deep Adaptive Feature Embedding with Local Sample Distributions for Person Re-identification,2018 +37,Australia,Market 1501,market_1501,-33.88890695,151.18943366,University of Sydney,edu,d1ba33106567c880bf99daba2bd31fe88df4ecba,citation,https://arxiv.org/pdf/1706.03160.pdf,Deep Adaptive Feature Embedding with Local Sample 
Distributions for Person Re-identification,2018 +38,China,Market 1501,market_1501,39.9922379,116.30393816,Peking University,edu,2788f382e4396290acfc8b21df45cc811586e66e,citation,https://arxiv.org/pdf/1605.03259.pdf,Deep Attributes Driven Multi-Camera Person Re-identification,2016 +39,China,Market 1501,market_1501,40.0044795,116.370238,Chinese Academy of Sciences,edu,2788f382e4396290acfc8b21df45cc811586e66e,citation,https://arxiv.org/pdf/1605.03259.pdf,Deep Attributes Driven Multi-Camera Person Re-identification,2016 +40,United States,Market 1501,market_1501,29.58333105,-98.61944505,University of Texas at San Antonio,edu,2788f382e4396290acfc8b21df45cc811586e66e,citation,https://arxiv.org/pdf/1605.03259.pdf,Deep Attributes Driven Multi-Camera Person Re-identification,2016 +41,United States,Market 1501,market_1501,40.4441619,-79.94272826,Carnegie Mellon University,edu,63e1ce7de0fdbce6e03d25b5001c670c30139aa8,citation,https://arxiv.org/pdf/1707.07791.pdf,Deep Feature Learning via Structured Graph Laplacian Embedding for Person Re-Identification,2018 +42,China,Market 1501,market_1501,34.250803,108.983693,Xi’an Jiaotong University,edu,63e1ce7de0fdbce6e03d25b5001c670c30139aa8,citation,https://arxiv.org/pdf/1707.07791.pdf,Deep Feature Learning via Structured Graph Laplacian Embedding for Person Re-Identification,2018 +43,United Kingdom,Market 1501,market_1501,51.5247272,-0.03931035,Queen Mary University of London,edu,e3e36ccd836458d51676789fb133b092d42dac16,citation,https://arxiv.org/pdf/1610.05047.pdf,Deep learning prototype domains for person re-identification,2017 +44,Australia,Market 1501,market_1501,-34.9189226,138.60423668,University of Adelaide,edu,63ac85ec1bff6009bb36f0b24ef189438836bc91,citation,https://arxiv.org/pdf/1606.01595.pdf,Deep linear discriminant analysis on fisher networks: A hybrid architecture for person re-identification,2017 +45,China,Market 1501,market_1501,40.0044795,116.370238,Chinese Academy of Sciences,edu,9a81f46fcf8c6c0efbe34649552b5056ce419a3d,citation,https://arxiv.org/pdf/1705.03332.pdf,Deep person re-identification with improved embedding and efficient training,2017 +46,China,Market 1501,market_1501,34.250803,108.983693,Xi’an Jiaotong University,edu,6562c40932ea734f46e5068555fbf3a185a345de,citation,https://arxiv.org/pdf/1707.00409.pdf,Deep Ranking Model by Large Adaptive Margin Learning for Person Re-identification,2018 +47,United Kingdom,Market 1501,market_1501,51.5247272,-0.03931035,Queen Mary University of London,edu,35b9af6057801fb2f28881840c8427c9cf648757,citation,https://arxiv.org/pdf/1707.02785.pdf,Deep Reinforcement Learning Attention Selection For Person Re-Identification,2017 +48,China,Market 1501,market_1501,40.0044795,116.370238,Chinese Academy of Sciences,edu,8961677300a9ee30ca51e1a3cf9815b4a162265b,citation,https://arxiv.org/pdf/1707.00798.pdf,Deep Representation Learning with Part Loss for Person Re-Identification,2017 +49,China,Market 1501,market_1501,39.9922379,116.30393816,Peking University,edu,8961677300a9ee30ca51e1a3cf9815b4a162265b,citation,https://arxiv.org/pdf/1707.00798.pdf,Deep Representation Learning with Part Loss for Person Re-Identification,2017 +50,United States,Market 1501,market_1501,29.58333105,-98.61944505,University of Texas at San Antonio,edu,8961677300a9ee30ca51e1a3cf9815b4a162265b,citation,https://arxiv.org/pdf/1707.00798.pdf,Deep Representation Learning with Part Loss for Person Re-Identification,2017 +51,China,Market 1501,market_1501,34.250803,108.983693,Xi’an Jiaotong 
University,edu,123286df95d93600f4281c60a60c69121c6440c7,citation,https://arxiv.org/pdf/1710.05711.pdf,Deep Self-Paced Learning for Person Re-Identification,2018 +52,China,Market 1501,market_1501,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,d8949f4f4085b15978e20ed7c5c34a080dd637f2,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w17/papers/Chen_Deep_Spatial-Temporal_Fusion_CVPR_2017_paper.pdf,Deep Spatial-Temporal Fusion Network for Video-Based Person Re-identification,2017 +53,China,Market 1501,market_1501,39.9922379,116.30393816,Peking University,edu,31c0968fb5f587918f1c49bf7fa51453b3e89cf7,citation,https://arxiv.org/pdf/1611.05244.pdf,Deep Transfer Learning for Person Re-Identification,2018 +54,China,Market 1501,market_1501,30.19331415,120.11930822,Zhejiang University,edu,50bf4f77d8b66ec838ad59a869630eace7e0e4a7,citation,https://arxiv.org/pdf/1707.07256.pdf,Deeply-Learned Part-Aligned Representations for Person Re-identification,2017 +55,United States,Market 1501,market_1501,47.6423318,-122.1369302,Microsoft,company,50bf4f77d8b66ec838ad59a869630eace7e0e4a7,citation,https://arxiv.org/pdf/1707.07256.pdf,Deeply-Learned Part-Aligned Representations for Person Re-identification,2017 +56,China,Market 1501,market_1501,39.9601488,116.35193921,Beijing University of Posts and Telecommunications,edu,d497543834f23f72f4092252b613bf3adaefc606,citation,https://arxiv.org/pdf/1805.07698.pdf,Density-Adaptive Kernel based Re-Ranking for Person Re-Identification,2018 +57,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,19a0f34440c25323544b90d9d822a212bfed0eb5,citation,https://arxiv.org/pdf/1901.10100.pdf,Discovering Underlying Person Structure Pattern with Relative Local Distance for Person Re-identification,2019 +58,China,Market 1501,market_1501,34.250803,108.983693,Xi’an Jiaotong University,edu,7b2e0c87aece7ff1404ef2034d4c5674770301b2,citation,https://arxiv.org/pdf/1807.01455.pdf,Discriminative Feature Learning with Foreground Attention for Person Re-Identification,2018 +59,United Kingdom,Market 1501,market_1501,55.94951105,-3.19534913,University of Edinburgh,edu,68621721705e3115355268450b4b447362e455c6,citation,https://arxiv.org/pdf/1812.02605.pdf,Disjoint Label Space Transfer Learning with Common Factorised Space,2019 +60,China,Market 1501,market_1501,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,d950af49c44bc5d9f4a5cc1634e606004790b1e5,citation,https://arxiv.org/pdf/1708.04169.pdf,Divide and Fuse: A Re-ranking Approach for Person Re-identification,2017 +61,United Arab Emirates,Market 1501,market_1501,24.453884,54.3773438,New York University Abu Dhabi,edu,a94b832facb57ea37b18927b13d2dd4c5fa3a9ea,citation,https://arxiv.org/pdf/1803.09733.pdf,Domain transfer convolutional attribute embedding,2018 +62,China,Market 1501,market_1501,39.9106327,116.3356321,Chinese Academy of Science,edu,7f8d4494aba2a2b11a88bf7de4b8879b047dd69b,citation,http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhou_Easy_Identification_From_CVPR_2018_paper.pdf,Easy Identification from Better Constraints: Multi-shot Person Re-identification from Reference Constraints,2018 +63,United States,Market 1501,market_1501,42.0551164,-87.67581113,Northwestern University,edu,7f8d4494aba2a2b11a88bf7de4b8879b047dd69b,citation,http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhou_Easy_Identification_From_CVPR_2018_paper.pdf,Easy Identification from Better Constraints: Multi-shot Person Re-identification from Reference Constraints,2018 +64,China,Market 
1501,market_1501,40.0044795,116.370238,Chinese Academy of Sciences,edu,ca1db9dc493a045e3fadf8d8209eaa4311bbdc70,citation,https://arxiv.org/pdf/1709.09304.pdf,Effective Image Retrieval via Multilinear Multi-index Fusion,2017 +65,United States,Market 1501,market_1501,29.58333105,-98.61944505,University of Texas at San Antonio,edu,ca1db9dc493a045e3fadf8d8209eaa4311bbdc70,citation,https://arxiv.org/pdf/1709.09304.pdf,Effective Image Retrieval via Multilinear Multi-index Fusion,2017 +66,United States,Market 1501,market_1501,42.0551164,-87.67581113,Northwestern University,edu,00bf7bcf31ee71f5f325ca5307883157ba3d580f,citation,http://openaccess.thecvf.com/content_ICCV_2017/papers/Zhou_Efficient_Online_Local_ICCV_2017_paper.pdf,Efficient Online Local Metric Adaptation via Negative Samples for Person Re-identification,2017 +67,China,Market 1501,market_1501,40.0044795,116.370238,Chinese Academy of Sciences,edu,febff0f6faa8dde77848845e4b3e6f6c91180d33,citation,https://arxiv.org/pdf/1611.00137.pdf,Embedding Deep Metric for Person Re-identification A Study Against Large Variations,2016 +68,China,Market 1501,market_1501,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,febff0f6faa8dde77848845e4b3e6f6c91180d33,citation,https://arxiv.org/pdf/1611.00137.pdf,Embedding Deep Metric for Person Re-identification A Study Against Large Variations,2016 +69,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,febff0f6faa8dde77848845e4b3e6f6c91180d33,citation,https://arxiv.org/pdf/1611.00137.pdf,Embedding Deep Metric for Person Re-identification A Study Against Large Variations,2016 +70,China,Market 1501,market_1501,31.846918,117.29053367,Hefei University of Technology,edu,fd0e1fecf7e72318a4c53463fd5650720df40281,citation,https://arxiv.org/pdf/1606.04404.pdf,End-to-End Comparative Attention Networks for Person Re-Identification,2017 +71,China,Market 1501,market_1501,39.9041999,116.4073963,"Qihoo 360 AI Institute, Beijing, China",edu,fd0e1fecf7e72318a4c53463fd5650720df40281,citation,https://arxiv.org/pdf/1606.04404.pdf,End-to-End Comparative Attention Networks for Person Re-Identification,2017 +72,Singapore,Market 1501,market_1501,1.2966426,103.7763939,Singapore / National University of Singapore,edu,fd0e1fecf7e72318a4c53463fd5650720df40281,citation,https://arxiv.org/pdf/1606.04404.pdf,End-to-End Comparative Attention Networks for Person Re-Identification,2017 +73,China,Market 1501,market_1501,32.035225,118.855317,PLA Army Engineering University,mil,c8ac121e9c4eb9964be9c5713f22a95c1c3b57e9,citation,https://arxiv.org/pdf/1901.05798.pdf,Ensemble Feature for Person Re-Identification,2019 +74,Spain,Market 1501,market_1501,41.5008957,2.111553,Autonomous University of Barcelona,edu,fe54a5a10288648f3bd0a71b053cdb896716b552,citation,https://arxiv.org/pdf/1804.04419.pdf,"Exploiting feature representations through similarity learning, post-ranking and ranking aggregation for person re-identification",2018 +75,Spain,Market 1501,market_1501,41.40657415,2.1945341,Universitat Oberta de Catalunya,edu,fe54a5a10288648f3bd0a71b053cdb896716b552,citation,https://arxiv.org/pdf/1804.04419.pdf,"Exploiting feature representations through similarity learning, post-ranking and ranking aggregation for person re-identification",2018 +76,Spain,Market 1501,market_1501,41.3868913,2.16352385,University of Barcelona,edu,fe54a5a10288648f3bd0a71b053cdb896716b552,citation,https://arxiv.org/pdf/1804.04419.pdf,"Exploiting feature representations through similarity learning, post-ranking and ranking aggregation for 
person re-identification",2018 +77,United States,Market 1501,market_1501,33.2416008,-111.8839083,Intel,company,6a9c3011b5092daa1d0cacda23f20ca4ae74b902,citation,https://arxiv.org/pdf/1812.02465.pdf,Fast and Accurate Person Re-Identification with RMNet.,2018 +78,China,Market 1501,market_1501,39.9808333,116.34101249,Beihang University,edu,91cc3981c304227e13ae151a43fbb124419bc0ce,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Chen_Fast_Person_Re-Identification_CVPR_2017_paper.pdf,Fast Person Re-identification via Cross-Camera Semantic Binary Transformation,2017 +79,United Kingdom,Market 1501,market_1501,52.6221571,1.2409136,University of East Anglia,edu,91cc3981c304227e13ae151a43fbb124419bc0ce,citation,http://openaccess.thecvf.com/content_cvpr_2017/papers/Chen_Fast_Person_Re-Identification_CVPR_2017_paper.pdf,Fast Person Re-identification via Cross-Camera Semantic Binary Transformation,2017 +80,Singapore,Market 1501,market_1501,1.3484104,103.68297965,Nanyang Technological University,edu,6123e52c1a560c88817d8720e05fbff8565271fb,citation,https://arxiv.org/pdf/1607.08378.pdf,Gated Siamese Convolutional Neural Network Architecture for Human Re-Identification,2016 +81,United States,Market 1501,market_1501,38.5336349,-121.79077264,"University of California, Davis",edu,79c959833ff49f860e20b6654dbf4d6acdee0230,citation,https://arxiv.org/pdf/1811.02545.pdf,Hide-and-Seek: A Data Augmentation Technique for Weakly-Supervised Localization and Beyond,2018 +82,China,Market 1501,market_1501,30.19331415,120.11930822,Zhejiang University,edu,79c959833ff49f860e20b6654dbf4d6acdee0230,citation,https://arxiv.org/pdf/1811.02545.pdf,Hide-and-Seek: A Data Augmentation Technique for Weakly-Supervised Localization and Beyond,2018 +83,Taiwan,Market 1501,market_1501,25.0410728,121.6147562,Institute of Information Science,edu,3cbb4cf942ee95d14505c0f83a48ba224abdd00b,citation,https://arxiv.org/pdf/1712.06820.pdf,Hierarchical Cross Network for Person Re-identification,2017 +84,Japan,Market 1501,market_1501,33.8941968,130.8394083,Kyushu Institute of Technology,edu,7da961cb039b1a01cad9b78d93bdfe2a69ed3ccf,citation,https://arxiv.org/pdf/1706.04318.pdf,Hierarchical Gaussian Descriptors with Application to Person Re-Identification,2017 +85,Japan,Market 1501,market_1501,33.59914655,130.22359848,Kyushu University,edu,7da961cb039b1a01cad9b78d93bdfe2a69ed3ccf,citation,https://arxiv.org/pdf/1706.04318.pdf,Hierarchical Gaussian Descriptors with Application to Person Re-Identification,2017 +86,Japan,Market 1501,market_1501,35.9020448,139.93622009,University of Tokyo,edu,7da961cb039b1a01cad9b78d93bdfe2a69ed3ccf,citation,https://arxiv.org/pdf/1706.04318.pdf,Hierarchical Gaussian Descriptors with Application to Person Re-Identification,2017 +87,United States,Market 1501,market_1501,42.3504253,-71.10056114,Boston University,edu,7c25ed788da1f5f61d8d1da23dd319dfb4e5ac2d,citation,https://arxiv.org/pdf/1612.01345.pdf,Human-In-The-Loop Person Re-Identification,2016 +88,United Kingdom,Market 1501,market_1501,51.5247272,-0.03931035,Queen Mary University of London,edu,7c25ed788da1f5f61d8d1da23dd319dfb4e5ac2d,citation,https://arxiv.org/pdf/1612.01345.pdf,Human-In-The-Loop Person Re-Identification,2016 +89,United Kingdom,Market 1501,market_1501,55.378051,-3.435973,"Vision Semantics Ltd, UK",edu,7c25ed788da1f5f61d8d1da23dd319dfb4e5ac2d,citation,https://arxiv.org/pdf/1612.01345.pdf,Human-In-The-Loop Person Re-Identification,2016 +90,Australia,Market 1501,market_1501,-37.9062737,145.1319449,"CSIRO, 
Australia",edu,53492cb14b33a26b10c91102daa2d5a2a3ed069d,citation,https://arxiv.org/pdf/1806.07592.pdf,Improving Online Multiple Object tracking with Deep Metric Learning,2018 +91,Germany,Market 1501,market_1501,50.7791703,6.06728733,RWTH Aachen University,edu,a3d11e98794896849ab2304a42bf83e2979e5fb5,citation,https://arxiv.org/pdf/1703.07737.pdf,In Defense of the Triplet Loss for Person Re-Identification,2017 +92,China,Market 1501,market_1501,34.250803,108.983693,Xi’an Jiaotong University,edu,cb8567f074573a0d66d50e75b5a91df283ccd503,citation,https://arxiv.org/pdf/1708.05512.pdf,Large Margin Learning in Set-to-Set Similarity Comparison for Person Reidentification,2018 +93,United Kingdom,Market 1501,market_1501,51.5247272,-0.03931035,Queen Mary University of London,edu,207e0ac5301a3c79af862951b70632ed650f74f7,citation,https://arxiv.org/pdf/1603.02139.pdf,Learning a Discriminative Null Space for Person Re-identification,2016 +94,China,Market 1501,market_1501,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,34cf90fcbf83025666c5c86ec30ac58b632b27b0,citation,https://arxiv.org/pdf/1710.06555.pdf,Learning Deep Context-Aware Features over Body and Latent Parts for Person Re-identification,2017 +95,United States,Market 1501,market_1501,40.007581,-105.2659417,University of Colorado,edu,ad3be20fe0106d80c567def71fef01146564df4b,citation,https://arxiv.org/pdf/1802.05312.pdf,Learning Deep Disentangled Embeddings With the F-Statistic Loss,2018 +96,Russia,Market 1501,market_1501,55.6846566,37.3407539,"Skolkovo Institute of Science and Technology, Skolkovo, Moscow",edu,218603147709344d4ff66625d83603deee2854bf,citation,https://arxiv.org/pdf/1611.00822.pdf,Learning Deep Embeddings with Histogram Loss,2016 +97,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,489decd84645b77d31001d17a66abb92bb96c731,citation,https://arxiv.org/pdf/1803.11333.pdf,Learning View-Specific Deep Networks for Person Re-Identification,2018 +98,Norway,Market 1501,market_1501,63.419499,10.4020771,Norwegian University of Science and Technology,edu,2102915d0c51cfda4d85133bd593ecb9508fa4bb,citation,https://arxiv.org/pdf/1701.03153.pdf,Looking Beyond Appearances: Synthetic Training Data for Deep CNNs in Re-identification,2018 +99,Italy,Market 1501,market_1501,41.9037626,12.5144384,Sapienza University of Rome,edu,2102915d0c51cfda4d85133bd593ecb9508fa4bb,citation,https://arxiv.org/pdf/1701.03153.pdf,Looking Beyond Appearances: Synthetic Training Data for Deep CNNs in Re-identification,2018 +100,Italy,Market 1501,market_1501,45.437398,11.003376,University of Verona,edu,2102915d0c51cfda4d85133bd593ecb9508fa4bb,citation,https://arxiv.org/pdf/1701.03153.pdf,Looking Beyond Appearances: Synthetic Training Data for Deep CNNs in Re-identification,2018 +101,China,Market 1501,market_1501,40.00229045,116.32098908,Tsinghua University,edu,c0387e788a52f10bf35d4d50659cfa515d89fbec,citation,https://pdfs.semanticscholar.org/c038/7e788a52f10bf35d4d50659cfa515d89fbec.pdf,MARS: A Video Benchmark for Large-Scale Person Re-Identification,2016 +102,China,Market 1501,market_1501,40.00229045,116.32098908,Tsinghua University,edu,1e83e2abcb258cd62b160e3f31a490a6bc042e83,citation,https://arxiv.org/pdf/1704.02492.pdf,Metric Learning in Codebook Generation of Bag-of-Words for Person Re-identification,2017 +103,China,Market 1501,market_1501,31.8405068,117.2638057,Hefei University,edu,7c9d8593cdf2f8ba9f27906b2b5827b145631a0b,citation,https://arxiv.org/pdf/1810.08534.pdf,MsCGAN: Multi-scale Conditional Generative Adversarial 
Networks for Person Image Generation,2018 +104,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,1565bf91f8fdfe5f5168a5050b1418debc662151,citation,https://arxiv.org/pdf/1711.03368.pdf,One-pass Person Re-identification by Sketch Online Discriminant Analysis,2017 +105,Australia,Market 1501,market_1501,-33.8809651,151.20107299,University of Technology Sydney,edu,592e555ebe4bd2d821230e7074d7e9626af716b0,citation,https://arxiv.org/pdf/1809.02681.pdf,Open Set Adversarial Examples,2018 +106,China,Market 1501,market_1501,40.0044795,116.370238,Chinese Academy of Sciences,edu,fcaa88dcb1a440ef09c4e5d724ed209bfc5d3367,citation,https://arxiv.org/pdf/1811.09928.pdf,PCGAN: Partition-Controlled Human Image Generation,2019 +107,China,Market 1501,market_1501,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,fcaa88dcb1a440ef09c4e5d724ed209bfc5d3367,citation,https://arxiv.org/pdf/1811.09928.pdf,PCGAN: Partition-Controlled Human Image Generation,2019 +108,China,Market 1501,market_1501,22.4162632,114.2109318,Chinese University of Hong Kong,edu,2fad06ed34169a5b1f736112364c58140577a6b4,citation,https://pdfs.semanticscholar.org/2fad/06ed34169a5b1f736112364c58140577a6b4.pdf,Pedestrian Color Naming via Convolutional Neural Network,2016 +109,China,Market 1501,market_1501,22.4162632,114.2109318,Chinese University of Hong Kong,edu,25bb4212af72d64ec20cac533f58f7af1472e057,citation,https://arxiv.org/pdf/1703.08837.pdf,Person Re-Identification by Camera Correlation Aware Feature Augmentation,2018 +110,China,Market 1501,market_1501,28.2290209,112.99483204,"National University of Defense Technology, China",mil,25bb4212af72d64ec20cac533f58f7af1472e057,citation,https://arxiv.org/pdf/1703.08837.pdf,Person Re-Identification by Camera Correlation Aware Feature Augmentation,2018 +111,China,Market 1501,market_1501,23.09461185,113.28788994,Sun Yat-Sen University,edu,25bb4212af72d64ec20cac533f58f7af1472e057,citation,https://arxiv.org/pdf/1703.08837.pdf,Person Re-Identification by Camera Correlation Aware Feature Augmentation,2018 +112,United Kingdom,Market 1501,market_1501,51.5247272,-0.03931035,Queen Mary University of London,edu,744cc8c69255cbe9d992315e456b9efb06f42e20,citation,https://arxiv.org/pdf/1705.04724.pdf,Person Re-Identification by Deep Joint Learning of Multi-Loss Classification,2017 diff --git a/site/datasets/verified/megaage.csv b/site/datasets/verified/megaage.csv index 04702674..baad68b9 100644 --- a/site/datasets/verified/megaage.csv +++ b/site/datasets/verified/megaage.csv @@ -1,2 +1,8 @@ id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year 0,,MegaAge,megaage,0.0,0.0,,,,main,,Quantifying Facial Age by Posterior of Age Comparisons,2017 +1,China,MegaAge,megaage,39.98177,116.330086,National Laboratory of Pattern Recognition,edu,f3ec43a7b22f6e5414fec473acda8ffd843e7baf,citation,https://arxiv.org/pdf/1809.07447.pdf,A Coupled Evolutionary Network for Age Estimation,2018 +2,China,MegaAge,megaage,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,f3ec43a7b22f6e5414fec473acda8ffd843e7baf,citation,https://arxiv.org/pdf/1809.07447.pdf,A Coupled Evolutionary Network for Age Estimation,2018 +3,China,MegaAge,megaage,22.4162632,114.2109318,Chinese University of Hong Kong,edu,aaa2b45153051e23d5a35ccf9af8ecabc0fe24cd,citation,https://pdfs.semanticscholar.org/aaa2/b45153051e23d5a35ccf9af8ecabc0fe24cd.pdf,1 How Good can Human Predict Facial Age ?,2017 
+4,China,MegaAge,megaage,39.993008,116.329882,SenseTime,company,aaa2b45153051e23d5a35ccf9af8ecabc0fe24cd,citation,https://pdfs.semanticscholar.org/aaa2/b45153051e23d5a35ccf9af8ecabc0fe24cd.pdf,1 How Good can Human Predict Facial Age ?,2017 +5,Taiwan,MegaAge,megaage,25.0421852,121.6145477,"Academia Sinica, Taiwan",edu,c62c07de196e95eaaf614fb150a4fa4ce49588b4,citation,https://pdfs.semanticscholar.org/c62c/07de196e95eaaf614fb150a4fa4ce49588b4.pdf,SSR-Net: A Compact Soft Stagewise Regression Network for Age Estimation,2018 +6,Taiwan,MegaAge,megaage,25.01682835,121.53846924,National Taiwan University,edu,c62c07de196e95eaaf614fb150a4fa4ce49588b4,citation,https://pdfs.semanticscholar.org/c62c/07de196e95eaaf614fb150a4fa4ce49588b4.pdf,SSR-Net: A Compact Soft Stagewise Regression Network for Age Estimation,2018 diff --git a/site/datasets/verified/megaface.csv b/site/datasets/verified/megaface.csv index d9f78ec3..4c38af0b 100644 --- a/site/datasets/verified/megaface.csv +++ b/site/datasets/verified/megaface.csv @@ -2,3 +2,65 @@ id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,t 0,,MegaFace,megaface,0.0,0.0,,,,main,,Level Playing Field for Million Scale Face Recognition,2017 1,Netherlands,MegaFace,megaface,53.21967825,6.56251482,University of Groningen,edu,8efda5708bbcf658d4f567e3866e3549fe045bbb,citation,https://pdfs.semanticscholar.org/8efd/a5708bbcf658d4f567e3866e3549fe045bbb.pdf,Pre-trained Deep Convolutional Neural Networks for Face Recognition,2018 2,United States,MegaFace,megaface,41.70456775,-86.23822026,University of Notre Dame,edu,e64c166dc5bb33bc61462a8b5ac92edb24d905a1,citation,https://arxiv.org/pdf/1811.01474.pdf,Fast Face Image Synthesis with Minimal Training.,2018 +3,China,MegaFace,megaface,22.4162632,114.2109318,Chinese University of Hong Kong,edu,2401cd5606c6bc5390acc352d00c1685f0c8af60,citation,https://arxiv.org/pdf/1809.01407.pdf,Consensus-Driven Propagation in Massive Unlabeled Data for Face Recognition,2018 +4,China,MegaFace,megaface,39.993008,116.329882,SenseTime,company,2401cd5606c6bc5390acc352d00c1685f0c8af60,citation,https://arxiv.org/pdf/1809.01407.pdf,Consensus-Driven Propagation in Massive Unlabeled Data for Face Recognition,2018 +5,Singapore,MegaFace,megaface,1.3484104,103.68297965,Nanyang Technological University,edu,2401cd5606c6bc5390acc352d00c1685f0c8af60,citation,https://arxiv.org/pdf/1809.01407.pdf,Consensus-Driven Propagation in Massive Unlabeled Data for Face Recognition,2018 +6,United Kingdom,MegaFace,megaface,51.49887085,-0.17560797,Imperial College London,edu,40bb090a4e303f11168dce33ed992f51afe02ff7,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/Deng_Marginal_Loss_for_CVPR_2017_paper.pdf,Marginal Loss for Deep Face Recognition,2017 +7,China,MegaFace,megaface,39.94976005,116.33629046,Beijing Jiaotong University,edu,d7cbedbee06293e78661335c7dd9059c70143a28,citation,https://arxiv.org/pdf/1804.07573.pdf,MobileFaceNets: Efficient CNNs for Accurate Real-time Face Verification on Mobile Devices,2018 +8,China,MegaFace,megaface,40.0044795,116.370238,Chinese Academy of Sciences,edu,1345fb7700389f9d02f203b3cb25ac3594855054,citation,,Hierarchical Training for Large Scale Face Recognition with Few Samples Per Subject,2018 +9,United States,MegaFace,megaface,45.57022705,-122.63709346,Concordia University,edu,db374308655256da1479c272582d7c7139c97173,citation,https://arxiv.org/pdf/1811.11080.pdf,MobiFace: A Lightweight Deep Learning Face Recognition on Mobile Devices,2018 +10,United 
States,MegaFace,megaface,33.5866784,-101.87539204,Electrical and Computer Engineering,edu,db374308655256da1479c272582d7c7139c97173,citation,https://arxiv.org/pdf/1811.11080.pdf,MobiFace: A Lightweight Deep Learning Face Recognition on Mobile Devices,2018 +11,United States,MegaFace,megaface,36.0678324,-94.1736551,University of Arkansas,edu,db374308655256da1479c272582d7c7139c97173,citation,https://arxiv.org/pdf/1811.11080.pdf,MobiFace: A Lightweight Deep Learning Face Recognition on Mobile Devices,2018 +12,United Kingdom,MegaFace,megaface,51.49887085,-0.17560797,Imperial College London,edu,51992fa881541ca3a4520c1ff9100b83e2f1ad87,citation,https://arxiv.org/pdf/1801.07698.pdf,ArcFace: Additive Angular Margin Loss for Deep Face Recognition,2018 +13,China,MegaFace,megaface,40.0044795,116.370238,Chinese Academy of Sciences,edu,94f74c6314ffd02db581e8e887b5fd81ce288dbf,citation,https://arxiv.org/pdf/1511.02683.pdf,A Light CNN for Deep Face Representation With Noisy Labels,2018 +14,China,MegaFace,megaface,22.4162632,114.2109318,Chinese University of Hong Kong,edu,53840c83f7b6ae78d4310c5b84ab3fde1a33bc4f,citation,https://arxiv.org/pdf/1801.01687.pdf,Accelerated Training for Massive Classification via Dynamic Class Selection,2018 +15,China,MegaFace,megaface,39.993008,116.329882,SenseTime,company,53840c83f7b6ae78d4310c5b84ab3fde1a33bc4f,citation,https://arxiv.org/pdf/1801.01687.pdf,Accelerated Training for Massive Classification via Dynamic Class Selection,2018 +16,United States,MegaFace,megaface,38.99203005,-76.9461029,University of Maryland College Park,edu,7323b594d3a8508f809e276aa2d224c4e7ec5a80,citation,https://arxiv.org/pdf/1808.05508.pdf,An Experimental Evaluation of Covariates Effects on Unconstrained Face Verification,2018 +17,China,MegaFace,megaface,22.304572,114.17976285,Hong Kong Polytechnic University,edu,f60070d3a4d333aa1436e4c372b1feb5b316a7ba,citation,https://arxiv.org/pdf/1801.05678.pdf,Face Recognition via Centralized Coordinate Learning,2018 +18,United Kingdom,MegaFace,megaface,54.687254,-5.882736,Ulster University,edu,ddfde808af8dc8b737d115869d6cca780d050884,citation,https://arxiv.org/pdf/1805.06741.pdf,Minimum Margin Loss for Deep Face Recognition,2018 +19,China,MegaFace,megaface,39.9922379,116.30393816,Peking University,edu,4f0b641860d90dfa4c185670bf636149a2b2b717,citation,,Improve Cross-Domain Face Recognition with IBN-block,2018 +20,United States,MegaFace,megaface,40.4441619,-79.94272826,Carnegie Mellon University,edu,67a9659de0bf671fafccd7f39b7587f85fb6dfbd,citation,,Ring Loss: Convex Feature Normalization for Face Recognition,2018 +21,United States,MegaFace,megaface,41.70456775,-86.23822026,University of Notre Dame,edu,841855205818d3a6d6f85ec17a22515f4f062882,citation,https://arxiv.org/pdf/1805.11529.pdf,Low Resolution Face Recognition in the Wild,2018 +22,United Kingdom,MegaFace,megaface,51.5247272,-0.03931035,Queen Mary University of London,edu,2306b2a8fba28539306052764a77a0d0f5d1236a,citation,https://arxiv.org/pdf/1804.09691.pdf,Surveillance Face Recognition Challenge,2018 +23,United Kingdom,MegaFace,megaface,55.378051,-3.435973,"Vision Semantics Ltd, UK",edu,2306b2a8fba28539306052764a77a0d0f5d1236a,citation,https://arxiv.org/pdf/1804.09691.pdf,Surveillance Face Recognition Challenge,2018 +24,United States,MegaFace,megaface,42.366183,-71.092455,Mitsubishi Electric Research Laboratories,company,57246142814d7010d3592e3a39a1ed819dd01f3b,citation,https://pdfs.semanticscholar.org/5724/6142814d7010d3592e3a39a1ed819dd01f3b.pdf,Verification of Very Low-Resolution Faces Using An 
Identity-Preserving Deep Face Super-resolution Network,0 +25,China,MegaFace,megaface,22.4162632,114.2109318,Chinese University of Hong Kong,edu,f3a59d85b7458394e3c043d8277aa1ffe3cdac91,citation,https://arxiv.org/pdf/1802.09900.pdf,Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource Constraints,2018 +26,United States,MegaFace,megaface,39.86948105,-84.87956905,Indiana University,edu,f3a59d85b7458394e3c043d8277aa1ffe3cdac91,citation,https://arxiv.org/pdf/1802.09900.pdf,Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource Constraints,2018 +27,Singapore,MegaFace,megaface,1.3484104,103.68297965,Nanyang Technological University,edu,9e31e77f9543ab42474ba4e9330676e18c242e72,citation,https://arxiv.org/pdf/1807.11649.pdf,The Devil of Face Recognition is in the Noise,2018 +28,China,MegaFace,megaface,39.993008,116.329882,SenseTime,company,9e31e77f9543ab42474ba4e9330676e18c242e72,citation,https://arxiv.org/pdf/1807.11649.pdf,The Devil of Face Recognition is in the Noise,2018 +29,United States,MegaFace,megaface,32.87935255,-117.23110049,"University of California, San Diego",edu,9e31e77f9543ab42474ba4e9330676e18c242e72,citation,https://arxiv.org/pdf/1807.11649.pdf,The Devil of Face Recognition is in the Noise,2018 +30,United States,MegaFace,megaface,22.5447154,113.9357164,Tencent,company,7a7fddb3020e0c2dd4e3fe275329eb10f1cfbb8a,citation,https://arxiv.org/pdf/1810.07599.pdf,Orthogonal Deep Features Decomposition for Age-Invariant Face Recognition,2018 +31,United States,MegaFace,megaface,47.6423318,-122.1369302,Microsoft,company,6cacda04a541d251e8221d70ac61fda88fb61a70,citation,https://arxiv.org/pdf/1707.05574.pdf,One-shot Face Recognition by Promoting Underrepresented Classes,2017 +32,Czech Republic,MegaFace,megaface,49.20172,16.6033168,Brno University of Technology,edu,b55e70df03d9b80c91446a97957bc95772dcc45b,citation,,MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis,2018 +33,Germany,MegaFace,megaface,48.5670466,13.4517835,University of Passau,edu,b55e70df03d9b80c91446a97957bc95772dcc45b,citation,,MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis,2018 +34,Germany,MegaFace,megaface,50.7171497,7.12825184,"Deutsche Welle, Bonn, Germany",edu,b55e70df03d9b80c91446a97957bc95772dcc45b,citation,,MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis,2018 +35,Italy,MegaFace,megaface,44.6531692,10.8586228,"Expert Systems, Modena, Italy",company,b55e70df03d9b80c91446a97957bc95772dcc45b,citation,,MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis,2018 +36,Spain,MegaFace,megaface,40.4486372,-3.7192798,"GSI Universidad Politécnica de Madrid, Madrid, Spain",edu,b55e70df03d9b80c91446a97957bc95772dcc45b,citation,,MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis,2018 +37,Ireland,MegaFace,megaface,53.27639715,-9.05829961,National University of Ireland Galway,edu,b55e70df03d9b80c91446a97957bc95772dcc45b,citation,,MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis,2018 +38,Spain,MegaFace,megaface,40.4402995,-3.7870076,"Paradigma Digital, Madrid, Spain",company,b55e70df03d9b80c91446a97957bc95772dcc45b,citation,,MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis,2018 +39,Czech Republic,MegaFace,megaface,49.2238302,16.5982602,"Phonexia, Brno-Královo Pole, Czech Republic",edu,b55e70df03d9b80c91446a97957bc95772dcc45b,citation,,MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis,2018 
+40,Ireland,MegaFace,megaface,53.3498053,-6.2603097,"Siren Solutions, Dublin, Ireland",company,b55e70df03d9b80c91446a97957bc95772dcc45b,citation,,MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis,2018 +41,China,MegaFace,megaface,22.5447154,113.9357164,"Tencent AI Lab, Shenzhen, China",company,1174b869c325222c3446d616975842e8d2989cf2,citation,https://arxiv.org/pdf/1801.09414.pdf,CosFace: Large Margin Cosine Loss for Deep Face Recognition,2018 +42,United States,MegaFace,megaface,33.776033,-84.39884086,Georgia Institute of Technology,edu,bd8f77b7d3b9d272f7a68defc1412f73e5ac3135,citation,https://arxiv.org/pdf/1704.08063.pdf,SphereFace: Deep Hypersphere Embedding for Face Recognition,2017 +43,United States,MegaFace,megaface,40.4441619,-79.94272826,Carnegie Mellon University,edu,bd8f77b7d3b9d272f7a68defc1412f73e5ac3135,citation,https://arxiv.org/pdf/1704.08063.pdf,SphereFace: Deep Hypersphere Embedding for Face Recognition,2017 +44,China,MegaFace,megaface,23.09461185,113.28788994,Sun Yat-Sen University,edu,bd8f77b7d3b9d272f7a68defc1412f73e5ac3135,citation,https://arxiv.org/pdf/1704.08063.pdf,SphereFace: Deep Hypersphere Embedding for Face Recognition,2017 +45,China,MegaFace,megaface,30.672721,104.098806,University of Electronic Science and Technology of China,edu,93af36da08bf99e68c9b0d36e141ed8154455ac2,citation,https://pdfs.semanticscholar.org/93af/36da08bf99e68c9b0d36e141ed8154455ac2.pdf,Additive Margin Softmax for Face Verification,2018 +46,United States,MegaFace,megaface,33.776033,-84.39884086,Georgia Institute of Technology,edu,93af36da08bf99e68c9b0d36e141ed8154455ac2,citation,https://pdfs.semanticscholar.org/93af/36da08bf99e68c9b0d36e141ed8154455ac2.pdf,Additive Margin Softmax for Face Verification,2018 +47,United States,MegaFace,megaface,45.57022705,-122.63709346,Concordia University,edu,eb8519cec0d7a781923f68fdca0891713cb81163,citation,https://arxiv.org/pdf/1703.08617.pdf,Temporal Non-volume Preserving Approach to Facial Age-Progression and Age-Invariant Face Recognition,2017 +48,United States,MegaFace,megaface,40.4441619,-79.94272826,Carnegie Mellon University,edu,eb8519cec0d7a781923f68fdca0891713cb81163,citation,https://arxiv.org/pdf/1703.08617.pdf,Temporal Non-volume Preserving Approach to Facial Age-Progression and Age-Invariant Face Recognition,2017 +49,Portugal,MegaFace,megaface,40.277859,-7.508983,University of Beira Interior,edu,e11bc0f7c73c04d38b7fb80bd1ca886495a4d43c,citation,http://www.di.ubi.pt/~hugomcp/doc/Leopard_TIFS.pdf,“A Leopard Cannot Change Its Spots”: Improving Face Recognition Using 3D-Based Caricatures,2019 +50,United States,MegaFace,megaface,39.3299013,-76.6205177,Johns Hopkins University,edu,672fae3da801b2a0d2bad65afdbbbf1b2320623e,citation,https://arxiv.org/pdf/1609.07042.pdf,Pose-Selective Max Pooling for Measuring Similarity,2016 +51,China,MegaFace,megaface,22.53521465,113.9315911,Shenzhen University,edu,a32878e85941b5392d58d28e5248f94e16e25d78,citation,https://arxiv.org/pdf/1801.06445.pdf,Quality Classified Image Analysis with Application to Face Detection and Recognition,2018 +52,China,MegaFace,megaface,22.4162632,114.2109318,Chinese University of Hong Kong,edu,380d5138cadccc9b5b91c707ba0a9220b0f39271,citation,https://arxiv.org/pdf/1806.00194.pdf,Deep Imbalanced Learning for Face Recognition and Attribute Prediction,2018 +53,United States,MegaFace,megaface,40.4432741,-79.9456995,Robotics Institute at Carnegie Mellon University,edu,380d5138cadccc9b5b91c707ba0a9220b0f39271,citation,https://arxiv.org/pdf/1806.00194.pdf,Deep 
Imbalanced Learning for Face Recognition and Attribute Prediction,2018 +54,Israel,MegaFace,megaface,32.7767783,35.0231271,Technion-Israel Institute of Technology,edu,d00787e215bd74d32d80a6c115c4789214da5edb,citation,https://pdfs.semanticscholar.org/d007/87e215bd74d32d80a6c115c4789214da5edb.pdf,Faster and Lighter Online Sparse Dictionary Learning Project report,0 +55,China,MegaFace,megaface,39.9808333,116.34101249,Beihang University,edu,0a23bdc55fb0d04acdac4d3ea0a9994623133562,citation,https://arxiv.org/pdf/1806.03018.pdf,Large-scale Bisample Learning on ID vs. Spot Face Recognition,2018 +56,United States,MegaFace,megaface,45.57022705,-122.63709346,Concordia University,edu,8e0becfc5fe3ecdd2ac93fabe34634827b21ef2b,citation,https://arxiv.org/pdf/1711.10520.pdf,Learning from Longitudinal Face Demonstration - Where Tractable Deep Modeling Meets Inverse Reinforcement Learning,2017 +57,United States,MegaFace,megaface,40.4437954,-79.9465522,"CyLab, Carnegie Mellon, Pittsburgh, USA",edu,8e0becfc5fe3ecdd2ac93fabe34634827b21ef2b,citation,https://arxiv.org/pdf/1711.10520.pdf,Learning from Longitudinal Face Demonstration - Where Tractable Deep Modeling Meets Inverse Reinforcement Learning,2017 +58,United States,MegaFace,megaface,33.776033,-84.39884086,Georgia Institute of Technology,edu,9fc17fa5708584fa848164461f82a69e97f6ed69,citation,,Additive Margin Softmax for Face Verification,2018 +59,China,MegaFace,megaface,30.672721,104.098806,University of Electronic Science and Technology of China,edu,9fc17fa5708584fa848164461f82a69e97f6ed69,citation,,Additive Margin Softmax for Face Verification,2018 +60,Italy,MegaFace,megaface,45.1867156,9.1561041,University of Pavia,edu,746c0205fdf191a737df7af000eaec9409ede73f,citation,,Investigating Nuisances in DCNN-Based Face Recognition,2018 +61,Italy,MegaFace,megaface,43.7776426,11.259765,University of Florence,edu,746c0205fdf191a737df7af000eaec9409ede73f,citation,,Investigating Nuisances in DCNN-Based Face Recognition,2018 +62,United States,MegaFace,megaface,47.6543238,-122.30800894,University of Washington,edu,28d4e027c7e90b51b7d8908fce68128d1964668a,citation,,Level Playing Field for Million Scale Face Recognition,2017 +63,China,MegaFace,megaface,31.30104395,121.50045497,Fudan University,edu,c5e37630d0672e4d44f7dee83ac2c1528be41c2e,citation,,Multi-task Deep Neural Network for Joint Face Recognition and Facial Attribute Prediction,2017 +64,United States,MegaFace,megaface,39.65404635,-79.96475355,West Virginia University,edu,b1b7603a70860cbe5ff7b963976b5e6f780c88fc,citation,,A Deep Face Identification Network Enhanced by Facial Attributes Prediction,2018 diff --git a/site/datasets/verified/msceleb.csv b/site/datasets/verified/msceleb.csv index d1a7ec8c..be5b063c 100644 --- a/site/datasets/verified/msceleb.csv +++ b/site/datasets/verified/msceleb.csv @@ -3,125 +3,93 @@ id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,t 1,China,MsCeleb,msceleb,22.4162632,114.2109318,Chinese University of Hong Kong,edu,2011d4da646f794456bebb617d1500ddf71989ed,citation,https://pdfs.semanticscholar.org/2011/d4da646f794456bebb617d1500ddf71989ed.pdf,Transductive Centroid Projection for Semi-supervised Large-Scale Recognition,2018 2,China,MsCeleb,msceleb,39.993008,116.329882,SenseTime,company,2011d4da646f794456bebb617d1500ddf71989ed,citation,https://pdfs.semanticscholar.org/2011/d4da646f794456bebb617d1500ddf71989ed.pdf,Transductive Centroid Projection for Semi-supervised Large-Scale Recognition,2018 3,United 
States,MsCeleb,msceleb,39.2899685,-76.62196103,University of Maryland,edu,23dd8d17ce09c22d367e4d62c1ccf507bcbc64da,citation,https://pdfs.semanticscholar.org/23dd/8d17ce09c22d367e4d62c1ccf507bcbc64da.pdf,Deep Density Clustering of Unconstrained Faces ( Supplementary Material ),2018 -4,United States,MsCeleb,msceleb,37.3936717,-122.0807262,Facebook,company,628a3f027b7646f398c68a680add48c7969ab1d9,citation,https://pdfs.semanticscholar.org/628a/3f027b7646f398c68a680add48c7969ab1d9.pdf,Plan for Final Year Project : HKU-Face : A Large Scale Dataset for Deep Face Recognition,2017 -5,United States,MsCeleb,msceleb,37.4219999,-122.0840575,Google,company,628a3f027b7646f398c68a680add48c7969ab1d9,citation,https://pdfs.semanticscholar.org/628a/3f027b7646f398c68a680add48c7969ab1d9.pdf,Plan for Final Year Project : HKU-Face : A Large Scale Dataset for Deep Face Recognition,2017 -6,France,MsCeleb,msceleb,46.1476461,-1.1549415,University of La Rochelle,edu,153fbae25efd061f9046970071d0cfe739a35a0e,citation,,FaceLiveNet: End-to-End Networks Combining Face Verification with Interactive Facial Expression-Based Liveness Detection,2018 -7,China,MsCeleb,msceleb,26.89887,112.590435,University of South China,edu,98518fc368d7e1478cef40f5f8fd4468763645ad,citation,http://downloads.hindawi.com/journals/cin/2018/4512473.pdf,A Community Detection Approach to Cleaning Extremely Large Face Database,2018 -8,China,MsCeleb,msceleb,28.2290209,112.99483204,"National University of Defense Technology, China",mil,98518fc368d7e1478cef40f5f8fd4468763645ad,citation,http://downloads.hindawi.com/journals/cin/2018/4512473.pdf,A Community Detection Approach to Cleaning Extremely Large Face Database,2018 -9,China,MsCeleb,msceleb,39.9601488,116.35193921,Beijing University of Posts and Telecommunications,edu,6cdbbced12bff53bcbdde3cdb6d20b4bd02a9d6c,citation,https://arxiv.org/pdf/1811.12026.pdf,Attacks on State-of-the-Art Face Recognition using Attentional Adversarial Attack Generative Network,2018 -10,China,MsCeleb,msceleb,39.98177,116.330086,National Laboratory of Pattern Recognition,edu,e47f4a127f41c055fb7893ddc295932ead783c63,citation,https://arxiv.org/pdf/1709.03675.pdf,Adversarial Discriminative Heterogeneous Face Recognition,2018 -11,China,MsCeleb,msceleb,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,e47f4a127f41c055fb7893ddc295932ead783c63,citation,https://arxiv.org/pdf/1709.03675.pdf,Adversarial Discriminative Heterogeneous Face Recognition,2018 -12,China,MsCeleb,msceleb,22.4162632,114.2109318,Chinese University of Hong Kong,edu,2401cd5606c6bc5390acc352d00c1685f0c8af60,citation,https://arxiv.org/pdf/1809.01407.pdf,Consensus-Driven Propagation in Massive Unlabeled Data for Face Recognition,2018 -13,China,MsCeleb,msceleb,39.993008,116.329882,SenseTime,company,2401cd5606c6bc5390acc352d00c1685f0c8af60,citation,https://arxiv.org/pdf/1809.01407.pdf,Consensus-Driven Propagation in Massive Unlabeled Data for Face Recognition,2018 -14,Singapore,MsCeleb,msceleb,1.3484104,103.68297965,Nanyang Technological University,edu,2401cd5606c6bc5390acc352d00c1685f0c8af60,citation,https://arxiv.org/pdf/1809.01407.pdf,Consensus-Driven Propagation in Massive Unlabeled Data for Face Recognition,2018 -15,United States,MsCeleb,msceleb,42.718568,-84.47791571,Michigan State University,edu,b446bcd7fb78adfe346cf7a01a38e4f43760f363,citation,https://pdfs.semanticscholar.org/b446/bcd7fb78adfe346cf7a01a38e4f43760f363.pdf,To appear in ICB 2018 Longitudinal Study of Child Face Recognition,2017 -16,United 
Kingdom,MsCeleb,msceleb,51.3791442,-2.3252332,University of Bath,edu,26567da544239cc6628c5696b0b10539144cbd57,citation,https://arxiv.org/pdf/1811.12784.pdf,The GAN that Warped: Semantic Attribute Editing with Unpaired Data,2018 -17,United Kingdom,MsCeleb,msceleb,51.49887085,-0.17560797,Imperial College London,edu,40bb090a4e303f11168dce33ed992f51afe02ff7,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w33/papers/Deng_Marginal_Loss_for_CVPR_2017_paper.pdf,Marginal Loss for Deep Face Recognition,2017 -18,China,MsCeleb,msceleb,40.0044795,116.370238,Chinese Academy of Sciences,edu,4cdb6144d56098b819076a8572a664a2c2d27f72,citation,https://arxiv.org/pdf/1806.01196.pdf,Face Synthesis for Eyeglass-Robust Face Recognition,2018 -19,China,MsCeleb,msceleb,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,4cdb6144d56098b819076a8572a664a2c2d27f72,citation,https://arxiv.org/pdf/1806.01196.pdf,Face Synthesis for Eyeglass-Robust Face Recognition,2018 -20,United States,MsCeleb,msceleb,39.2899685,-76.62196103,University of Maryland,edu,872dfdeccf99bbbed7c8f1ea08afb2d713ebe085,citation,https://arxiv.org/pdf/1703.09507.pdf,L2-constrained Softmax Loss for Discriminative Face Verification,2017 -21,United States,MsCeleb,msceleb,42.718568,-84.47791571,Michigan State University,edu,3011b5fce49112228711a9e5f92d6f191687c1ea,citation,https://arxiv.org/pdf/1803.09014.pdf,Feature Transfer Learning for Deep Face Recognition with Long-Tail Data,2018 -22,United Kingdom,MsCeleb,msceleb,51.49887085,-0.17560797,Imperial College London,edu,1929863fff917ee7f6dc428fc1ce732777668eca,citation,https://arxiv.org/pdf/1712.04695.pdf,UV-GAN: Adversarial Facial UV Map Completion for Pose-Invariant Face Recognition,2018 -23,China,MsCeleb,msceleb,22.4162632,114.2109318,Chinese University of Hong Kong,edu,d949fadc9b6c5c8b067fa42265ad30945f9caa99,citation,https://arxiv.org/pdf/1710.00870.pdf,Rethinking Feature Discrimination and Polymerization for Large-scale Recognition,2017 -24,China,MsCeleb,msceleb,31.30104395,121.50045497,Fudan University,edu,5a259f2f5337435f841d39dada832ab24e7b3325,citation,,Face Recognition via Active Annotation and Learning,2016 -25,China,MsCeleb,msceleb,40.0044795,116.370238,Chinese Academy of Sciences,edu,5a259f2f5337435f841d39dada832ab24e7b3325,citation,,Face Recognition via Active Annotation and Learning,2016 -26,China,MsCeleb,msceleb,39.993008,116.329882,SenseTime,company,c72a2ea819df9b0e8cd267eebcc6528b8741e03d,citation,https://arxiv.org/pdf/1708.09687.pdf,Quantifying Facial Age by Posterior of Age Comparisons,2017 -27,China,MsCeleb,msceleb,22.4162632,114.2109318,Chinese University of Hong Kong,edu,c72a2ea819df9b0e8cd267eebcc6528b8741e03d,citation,https://arxiv.org/pdf/1708.09687.pdf,Quantifying Facial Age by Posterior of Age Comparisons,2017 -28,United States,MsCeleb,msceleb,39.2899685,-76.62196103,University of Maryland,edu,b6f758be954d34817d4ebaa22b30c63a4b8ddb35,citation,https://arxiv.org/pdf/1703.04835.pdf,A Proximity-Aware Hierarchical Clustering of Faces,2017 -29,United States,MsCeleb,msceleb,42.718568,-84.47791571,Michigan State University,edu,19fa871626df604639550c6445d2f76cd369dd13,citation,https://arxiv.org/pdf/1805.02283.pdf,DocFace: Matching ID Document Photos to Selfies,2018 -30,United States,MsCeleb,msceleb,32.87935255,-117.23110049,"University of California, San Diego",edu,d35534f3f59631951011539da2fe83f2844ca245,citation,https://arxiv.org/pdf/1705.07904.pdf,Semantically Decomposing the Latent Spaces of Generative Adversarial Networks,2017 -31,United 
States,MsCeleb,msceleb,37.43131385,-122.16936535,Stanford University,edu,d35534f3f59631951011539da2fe83f2844ca245,citation,https://arxiv.org/pdf/1705.07904.pdf,Semantically Decomposing the Latent Spaces of Generative Adversarial Networks,2017 -32,United States,MsCeleb,msceleb,40.4441619,-79.94272826,Carnegie Mellon University,edu,d35534f3f59631951011539da2fe83f2844ca245,citation,https://arxiv.org/pdf/1705.07904.pdf,Semantically Decomposing the Latent Spaces of Generative Adversarial Networks,2017 -33,Canada,MsCeleb,msceleb,49.2767454,-122.91777375,Simon Fraser University,edu,b301fd2fc33f24d6f75224e7c0991f4f04b64a65,citation,https://arxiv.org/pdf/1803.06340.pdf,Faces as Lighting Probes via Unsupervised Deep Highlight Extraction,2018 -34,China,MsCeleb,msceleb,28.2290209,112.99483204,"National University of Defense Technology, China",mil,b301fd2fc33f24d6f75224e7c0991f4f04b64a65,citation,https://arxiv.org/pdf/1803.06340.pdf,Faces as Lighting Probes via Unsupervised Deep Highlight Extraction,2018 -35,United States,MsCeleb,msceleb,42.3614256,-71.0812092,Microsoft Research Asia,company,b301fd2fc33f24d6f75224e7c0991f4f04b64a65,citation,https://arxiv.org/pdf/1803.06340.pdf,Faces as Lighting Probes via Unsupervised Deep Highlight Extraction,2018 -36,United Kingdom,MsCeleb,msceleb,51.7534538,-1.25400997,University of Oxford,edu,70c59dc3470ae867016f6ab0e008ac8ba03774a1,citation,https://arxiv.org/pdf/1710.08092.pdf,VGGFace2: A Dataset for Recognising Faces across Pose and Age,2018 -37,China,MsCeleb,msceleb,39.9041999,116.4073963,"Beijing, China",edu,7fa4e972da46735971aad52413d17c4014c49e6e,citation,https://arxiv.org/pdf/1709.02940.pdf,How to Train Triplet Networks with 100K Identities?,2017 -38,China,MsCeleb,msceleb,39.94976005,116.33629046,Beijing Jiaotong University,edu,d7cbedbee06293e78661335c7dd9059c70143a28,citation,https://arxiv.org/pdf/1804.07573.pdf,MobileFaceNets: Efficient CNNs for Accurate Real-time Face Verification on Mobile Devices,2018 -39,Singapore,MsCeleb,msceleb,1.2962018,103.77689944,National University of Singapore,edu,fca9ebaa30d69ccec8bb577c31693c936c869e72,citation,https://arxiv.org/pdf/1809.00338.pdf,Look Across Elapse: Disentangled Representation Learning and Photorealistic Cross-Age Face Synthesis for Age-Invariant Face Recognition,2018 -40,China,MsCeleb,msceleb,40.0044795,116.370238,Chinese Academy of Sciences,edu,fca9ebaa30d69ccec8bb577c31693c936c869e72,citation,https://arxiv.org/pdf/1809.00338.pdf,Look Across Elapse: Disentangled Representation Learning and Photorealistic Cross-Age Face Synthesis for Age-Invariant Face Recognition,2018 -41,Japan,MsCeleb,msceleb,35.6992503,139.7721568,"Hitachi, Ltd., Tokyo, Japan",company,3b4da93fbdf7ae520fa00d39ffa694e850b85162,citation,,Face-Voice Matching using Cross-modal Embeddings,2018 -42,China,MsCeleb,msceleb,30.19331415,120.11930822,Zhejiang University,edu,85860d38c66a5cf2e6ffd6475a3a2ba096ea2920,citation,,Celeb-500K: A Large Training Dataset for Face Recognition,2018 -43,China,MsCeleb,msceleb,22.4162632,114.2109318,Chinese University of Hong Kong,edu,6fed504da4e192fe4c2d452754d23d3db4a4e5e3,citation,https://arxiv.org/pdf/1702.06890.pdf,Learning Deep Features via Congenerous Cosine Loss for Person Recognition,2017 -44,China,MsCeleb,msceleb,40.0044795,116.370238,Chinese Academy of Sciences,edu,6f5309d8cc76d3d300b72745887addd2a2480ba8,citation,,KinNet: Fine-to-Coarse Deep Metric Learning for Kinship Verification,2017 -45,China,MsCeleb,msceleb,40.00229045,116.32098908,Tsinghua 
University,edu,09ad80c4e80e1e02afb8fa4cb6dab260fb66df53,citation,,Feature Learning for One-Shot Face Recognition,2018 -46,United States,MsCeleb,msceleb,40.4441619,-79.94272826,Carnegie Mellon University,edu,c71217b2b111a51a31cf1107c71d250348d1ff68,citation,https://arxiv.org/pdf/1703.09912.pdf,One Network to Solve Them All — Solving Linear Inverse Problems Using Deep Projection Models,2017 -47,United Kingdom,MsCeleb,msceleb,51.7534538,-1.25400997,University of Oxford,edu,05ee231749c9ce97f036c71c1d2d599d660a8c81,citation,https://arxiv.org/pdf/1810.09951.pdf,GhostVLAD for set-based face recognition,2018 -48,United States,MsCeleb,msceleb,45.57022705,-122.63709346,Concordia University,edu,db374308655256da1479c272582d7c7139c97173,citation,https://arxiv.org/pdf/1811.11080.pdf,MobiFace: A Lightweight Deep Learning Face Recognition on Mobile Devices,2018 -49,United States,MsCeleb,msceleb,33.5866784,-101.87539204,Electrical and Computer Engineering,edu,db374308655256da1479c272582d7c7139c97173,citation,https://arxiv.org/pdf/1811.11080.pdf,MobiFace: A Lightweight Deep Learning Face Recognition on Mobile Devices,2018 -50,United States,MsCeleb,msceleb,36.0678324,-94.1736551,University of Arkansas,edu,db374308655256da1479c272582d7c7139c97173,citation,https://arxiv.org/pdf/1811.11080.pdf,MobiFace: A Lightweight Deep Learning Face Recognition on Mobile Devices,2018 -51,China,MsCeleb,msceleb,22.4162632,114.2109318,Chinese University of Hong Kong,edu,de7d36173f9ca0e89e7a1991d541aed7c65127ea,citation,https://arxiv.org/pdf/1812.01288.pdf,FaceFeat-GAN: a Two-Stage Approach for Identity-Preserving Face Synthesis,2018 -52,China,MsCeleb,msceleb,22.59805605,113.98533784,Shenzhen Institutes of Advanced Technology,edu,de7d36173f9ca0e89e7a1991d541aed7c65127ea,citation,https://arxiv.org/pdf/1812.01288.pdf,FaceFeat-GAN: a Two-Stage Approach for Identity-Preserving Face Synthesis,2018 -53,China,MsCeleb,msceleb,40.0044795,116.370238,Chinese Academy of Sciences,edu,212608e00fc1e8912ff845ee7a4a67f88ba938fc,citation,https://arxiv.org/pdf/1704.02450.pdf,Coupled Deep Learning for Heterogeneous Face Recognition,2018 -54,China,MsCeleb,msceleb,22.4162632,114.2109318,Chinese University of Hong Kong,edu,1fd5d08394a3278ef0a89639e9bfec7cb482e0bf,citation,https://arxiv.org/pdf/1804.03487.pdf,Exploring Disentangled Feature Representation Beyond Face Identification,2018 -55,China,MsCeleb,msceleb,39.993008,116.329882,SenseTime,company,1fd5d08394a3278ef0a89639e9bfec7cb482e0bf,citation,https://arxiv.org/pdf/1804.03487.pdf,Exploring Disentangled Feature Representation Beyond Face Identification,2018 -56,United States,MsCeleb,msceleb,40.8722825,-73.89489171,City University of New York,edu,f74917fc0e55f4f5682909dcf6929abd19d33e2e,citation,https://pdfs.semanticscholar.org/f749/17fc0e55f4f5682909dcf6929abd19d33e2e.pdf,GAN Q UALITY I NDEX ( GQI ) BY GAN-INDUCED C LASSIFIER,2018 -57,United States,MsCeleb,msceleb,42.3383668,-71.08793524,Northeastern University,edu,f74917fc0e55f4f5682909dcf6929abd19d33e2e,citation,https://pdfs.semanticscholar.org/f749/17fc0e55f4f5682909dcf6929abd19d33e2e.pdf,GAN Q UALITY I NDEX ( GQI ) BY GAN-INDUCED C LASSIFIER,2018 -58,United States,MsCeleb,msceleb,47.6423318,-122.1369302,Microsoft,company,f74917fc0e55f4f5682909dcf6929abd19d33e2e,citation,https://pdfs.semanticscholar.org/f749/17fc0e55f4f5682909dcf6929abd19d33e2e.pdf,GAN Q UALITY I NDEX ( GQI ) BY GAN-INDUCED C LASSIFIER,2018 -59,China,MsCeleb,msceleb,32.0565957,118.77408833,Nanjing 
University,edu,8ff8c64288a2f7e4e8bf8fda865820b04ab3dbe8,citation,https://pdfs.semanticscholar.org/0056/92b9fa6728df3a7f14578c43410867bba425.pdf,Age Estimation Using Expectation of Label Distribution Learning,2018 -60,China,MsCeleb,msceleb,32.0575279,118.78682252,Southeast University,edu,8ff8c64288a2f7e4e8bf8fda865820b04ab3dbe8,citation,https://pdfs.semanticscholar.org/0056/92b9fa6728df3a7f14578c43410867bba425.pdf,Age Estimation Using Expectation of Label Distribution Learning,2018 -61,United States,MsCeleb,msceleb,42.4505507,-76.4783513,Cornell University,edu,dec0c26855da90876c405e9fd42830c3051c2f5f,citation,https://pdfs.semanticscholar.org/dec0/c26855da90876c405e9fd42830c3051c2f5f.pdf,Supplementary Material : Learning Compositional Visual Concepts with Mutual Consistency,2018 -62,France,MsCeleb,msceleb,48.8476037,2.2639934,"Université Paris-Saclay, France",edu,96e318f8ff91ba0b10348d4de4cb7c2142eb8ba9,citation,,State-of-the-art face recognition performance using publicly available software and datasets,2018 -63,United States,MsCeleb,msceleb,29.7207902,-95.34406271,University of Houston,edu,38d8ff137ff753f04689e6b76119a44588e143f3,citation,https://arxiv.org/pdf/1709.06532.pdf,When 3D-Aided 2D Face Recognition Meets Deep Learning: An extended UR2D for Pose-Invariant Face Recognition,2017 -64,United States,MsCeleb,msceleb,38.0333742,-84.5017758,University of Kentucky,edu,455a7e03a0c5ab618d0e86a06c9910ac179f0479,citation,https://arxiv.org/pdf/1807.08772.pdf,Identity Preserving Face Completion for Large Ocular Region Occlusion,2018 -65,United States,MsCeleb,msceleb,34.0224149,-118.28634407,University of Southern California,edu,455a7e03a0c5ab618d0e86a06c9910ac179f0479,citation,https://arxiv.org/pdf/1807.08772.pdf,Identity Preserving Face Completion for Large Ocular Region Occlusion,2018 -66,China,MsCeleb,msceleb,45.7413921,126.62552755,Harbin Institute of Technology,edu,455a7e03a0c5ab618d0e86a06c9910ac179f0479,citation,https://arxiv.org/pdf/1807.08772.pdf,Identity Preserving Face Completion for Large Ocular Region Occlusion,2018 -67,China,MsCeleb,msceleb,30.289532,120.009886,Hangzhou Normal University,edu,455a7e03a0c5ab618d0e86a06c9910ac179f0479,citation,https://arxiv.org/pdf/1807.08772.pdf,Identity Preserving Face Completion for Large Ocular Region Occlusion,2018 -68,United Kingdom,MsCeleb,msceleb,51.49887085,-0.17560797,Imperial College London,edu,51992fa881541ca3a4520c1ff9100b83e2f1ad87,citation,https://arxiv.org/pdf/1801.07698.pdf,ArcFace: Additive Angular Margin Loss for Deep Face Recognition,2018 -69,United States,MsCeleb,msceleb,30.40550035,-91.18620474,Louisiana State University,edu,5b9c6ca84268cb283941ae28b73989c0cf7e2ac2,citation,,A Pipeline to Improve Face Recognition Datasets and Applications,2018 -70,Italy,MsCeleb,msceleb,45.814548,8.827665,University of Insubria,edu,5b9c6ca84268cb283941ae28b73989c0cf7e2ac2,citation,,A Pipeline to Improve Face Recognition Datasets and Applications,2018 -71,United States,MsCeleb,msceleb,42.3383668,-71.08793524,Northeastern University,edu,c9efcd8e32dced6efa2bba64789df8d0a8e4996a,citation,,Deep Convolutional Neural Network with Independent Softmax for Large Scale Face Recognition,2016 -72,United Kingdom,MsCeleb,msceleb,51.49887085,-0.17560797,Imperial College London,edu,9b0489f2d5739213ef8c3e2e18739c4353c3a3b7,citation,https://arxiv.org/pdf/1801.06665.pdf,Visual Data Augmentation through Learning,2018 -73,United Kingdom,MsCeleb,msceleb,51.59029705,-0.22963221,Middlesex 
University,edu,9b0489f2d5739213ef8c3e2e18739c4353c3a3b7,citation,https://arxiv.org/pdf/1801.06665.pdf,Visual Data Augmentation through Learning,2018 -74,United States,MsCeleb,msceleb,42.718568,-84.47791571,Michigan State University,edu,ad2cb5c255e555d9767d526721a4c7053fa2ac58,citation,https://arxiv.org/pdf/1711.03990.pdf,Longitudinal Study of Child Face Recognition,2018 -75,China,MsCeleb,msceleb,22.4162632,114.2109318,Chinese University of Hong Kong,edu,9e182e0cd9d70f876f1be7652c69373bcdf37fb4,citation,https://arxiv.org/pdf/1807.07860.pdf,Talking Face Generation by Adversarially Disentangled Audio-Visual Representation,2018 -76,United States,MsCeleb,msceleb,38.99203005,-76.9461029,University of Maryland College Park,edu,06bd34951305d9f36eb29cf4532b25272da0e677,citation,https://arxiv.org/pdf/1809.07586.pdf,"A Fast and Accurate System for Face Detection, Identification, and Verification",2018 -77,China,MsCeleb,msceleb,40.0044795,116.370238,Chinese Academy of Sciences,edu,94f74c6314ffd02db581e8e887b5fd81ce288dbf,citation,https://arxiv.org/pdf/1511.02683.pdf,A Light CNN for Deep Face Representation With Noisy Labels,2018 -78,Spain,MsCeleb,msceleb,40.4167754,-3.7037902,"Computer Vision Group (www.vision4uav.com), Centro de Automática y Robótica (CAR) UPM-CSIC, Universidad Politécnica de Madrid, José Gutiérrez Abascal 2, 28006, Spain",edu,726f76f11e904d7fcb12736c276a0b00eb5cde49,citation,https://arxiv.org/pdf/1901.05903.pdf,A Performance Comparison of Loss Functions for Deep Face Recognition,2019 -79,India,MsCeleb,msceleb,13.5568171,80.0261283,"Indian Institute of Information Technology, Sri City, India",edu,726f76f11e904d7fcb12736c276a0b00eb5cde49,citation,https://arxiv.org/pdf/1901.05903.pdf,A Performance Comparison of Loss Functions for Deep Face Recognition,2019 -80,United States,MsCeleb,msceleb,38.99203005,-76.9461029,University of Maryland College Park,edu,83447d47bb2837b831b982ebf9e63616742bfdec,citation,https://arxiv.org/pdf/1812.04058.pdf,An Automatic System for Unconstrained Video-Based Face Recognition,2018 -81,United States,MsCeleb,msceleb,38.99203005,-76.9461029,University of Maryland College Park,edu,7323b594d3a8508f809e276aa2d224c4e7ec5a80,citation,https://arxiv.org/pdf/1808.05508.pdf,An Experimental Evaluation of Covariates Effects on Unconstrained Face Verification,2018 -82,United States,MsCeleb,msceleb,43.7192587,10.4207947,"CNR ISTI-Institute of Information Science and Technologies, Pisa, Italy",edu,266766818dbc5a4ca1161ae2bc14c9e269ddc490,citation,https://pdfs.semanticscholar.org/2667/66818dbc5a4ca1161ae2bc14c9e269ddc490.pdf,Boosting a Low-Cost Smart Home Environment with Usage and Access Control Rules,2018 -83,United States,MsCeleb,msceleb,38.99203005,-76.9461029,University of Maryland College Park,edu,944ea33211d67663e04d0181843db634e42cb2ca,citation,https://arxiv.org/pdf/1804.01159.pdf,Crystal Loss and Quality Pooling for Unconstrained Face Verification and Recognition.,2018 -84,Taiwan,MsCeleb,msceleb,25.01682835,121.53846924,National Taiwan University,edu,f15b7c317f106816bf444ac4ffb6c280cd6392c7,citation,http://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w1/Zhang_Deep_Disguised_Faces_CVPR_2018_paper.pdf,Deep Disguised Faces Recognition,2018 -85,United States,MsCeleb,msceleb,38.99203005,-76.9461029,University of Maryland College Park,edu,a50fa5048c61209149de0711b5f1b1806b43da00,citation,http://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w1/Bansal_Deep_Features_for_CVPR_2018_paper.pdf,Deep Features for Recognizing Disguised Faces in the Wild,2018 
-86,China,MsCeleb,msceleb,40.00229045,116.32098908,Tsinghua University,edu,19d53bb35baf6ab02368756412800c218a2df71c,citation,https://arxiv.org/pdf/1711.09515.pdf,DeepDeblur: Fast one-step blurry face images restoration.,2017 -87,United States,MsCeleb,msceleb,42.718568,-84.47791571,Michigan State University,edu,12ba7c6f559a69fbfaacf61bfb2f8431505b09a0,citation,https://arxiv.org/pdf/1809.05620.pdf,DocFace+: ID Document to Selfie Matching,2018 -88,South Korea,MsCeleb,msceleb,37.5600406,126.9369248,Yonsei University,edu,d8526863f35b29cbf8ac2ae756eaae0d2930ffb1,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w27/Choe_Face_Generation_for_ICCV_2017_paper.pdf,Face Generation for Low-Shot Learning Using Generative Adversarial Networks,2017 -89,China,MsCeleb,msceleb,38.880381,121.529021,Dailian University of Technology,edu,59fc69b3bc4759eef1347161e1248e886702f8f7,citation,https://pdfs.semanticscholar.org/59fc/69b3bc4759eef1347161e1248e886702f8f7.pdf,Final Report of Final Year Project HKU-Face : A Large Scale Dataset for Deep Face Recognition,2018 -90,Germany,MsCeleb,msceleb,52.381515,9.720171,"Leibniz Information Centre for Science and Technology, Hannover, Germany",edu,5209758096819efee15751c8875121bd27f2ee78,citation,https://arxiv.org/pdf/1806.08246.pdf,Finding Person Relations in Image Data of the Internet Archive,2018 -91,Germany,MsCeleb,msceleb,52.381515,9.720171,Leibniz Universität Hannover,edu,5209758096819efee15751c8875121bd27f2ee78,citation,https://arxiv.org/pdf/1806.08246.pdf,Finding Person Relations in Image Data of the Internet Archive,2018 -92,China,MsCeleb,msceleb,35.86166,104.195397,"Megvii Inc. (Face++), China",company,4874daed0f6a42d03011ed86e5ab46f231b02c13,citation,https://arxiv.org/pdf/1808.06210.pdf,GridFace: Face Rectification via Learning Local Homography Transformations,2018 -93,China,MsCeleb,msceleb,40.0044795,116.370238,Chinese Academy of Sciences,edu,a89cbc90bbb4477a48aec185f2a112ea7ebe9b4d,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w27/Xu_High_Performance_Large_ICCV_2017_paper.pdf,High Performance Large Scale Face Recognition with Multi-cognition Softmax and Feature Retrieval,2017 -94,Singapore,MsCeleb,msceleb,1.2962018,103.77689944,National University of Singapore,edu,a89cbc90bbb4477a48aec185f2a112ea7ebe9b4d,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w27/Xu_High_Performance_Large_ICCV_2017_paper.pdf,High Performance Large Scale Face Recognition with Multi-cognition Softmax and Feature Retrieval,2017 -95,Singapore,MsCeleb,msceleb,1.3392609,103.8916077,Panasonic Singapore,company,a89cbc90bbb4477a48aec185f2a112ea7ebe9b4d,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w27/Xu_High_Performance_Large_ICCV_2017_paper.pdf,High Performance Large Scale Face Recognition with Multi-cognition Softmax and Feature Retrieval,2017 -96,United States,MsCeleb,msceleb,40.8722825,-73.89489171,City University of New York,edu,32aeb90992f6cf8494b1b5c67f4b912feef05e9c,citation,https://arxiv.org/pdf/1802.00853.pdf,Incremental Classifier Learning with Generative Adversarial Networks,2018 -97,United States,MsCeleb,msceleb,47.6423318,-122.1369302,Microsoft,company,32aeb90992f6cf8494b1b5c67f4b912feef05e9c,citation,https://arxiv.org/pdf/1802.00853.pdf,Incremental Classifier Learning with Generative Adversarial Networks,2018 -98,United States,MsCeleb,msceleb,42.3383668,-71.08793524,Northeastern 
University,edu,32aeb90992f6cf8494b1b5c67f4b912feef05e9c,citation,https://arxiv.org/pdf/1802.00853.pdf,Incremental Classifier Learning with Generative Adversarial Networks,2018 -99,Singapore,MsCeleb,msceleb,1.2962018,103.77689944,National University of Singapore,edu,c808c784237f167c78a87cc5a9d48152579c27a4,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w27/Cheng_Know_You_at_ICCV_2017_paper.pdf,Know You at One Glance: A Compact Vector Representation for Low-Shot Learning,2017 -100,Singapore,MsCeleb,msceleb,1.3392609,103.8916077,Panasonic Singapore,company,c808c784237f167c78a87cc5a9d48152579c27a4,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w27/Cheng_Know_You_at_ICCV_2017_paper.pdf,Know You at One Glance: A Compact Vector Representation for Low-Shot Learning,2017 -101,United States,MsCeleb,msceleb,42.3383668,-71.08793524,Northeastern University,edu,332548fd2e52b27e062bd6dcc1db0953ced6ed48,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w27/Wu_Low-Shot_Face_Recognition_ICCV_2017_paper.pdf,Low-Shot Face Recognition with Hybrid Classifiers,2017 -102,United States,MsCeleb,msceleb,40.4441619,-79.94272826,Carnegie Mellon University,edu,98b2f21db344b8b9f7747feaf86f92558595990c,citation,https://pdfs.semanticscholar.org/98b2/f21db344b8b9f7747feaf86f92558595990c.pdf,PACES OF G ENERATIVE A DVERSARIAL N ETWORKS,2018 -103,United States,MsCeleb,msceleb,37.43131385,-122.16936535,Stanford University,edu,98b2f21db344b8b9f7747feaf86f92558595990c,citation,https://pdfs.semanticscholar.org/98b2/f21db344b8b9f7747feaf86f92558595990c.pdf,PACES OF G ENERATIVE A DVERSARIAL N ETWORKS,2018 -104,United States,MsCeleb,msceleb,32.87935255,-117.23110049,"University of California, San Diego",edu,98b2f21db344b8b9f7747feaf86f92558595990c,citation,https://pdfs.semanticscholar.org/98b2/f21db344b8b9f7747feaf86f92558595990c.pdf,PACES OF G ENERATIVE A DVERSARIAL N ETWORKS,2018 -105,China,MsCeleb,msceleb,22.5283157,113.94481,Shenzhen Institute of Wuhan University,edu,e13360cda1ebd6fa5c3f3386c0862f292e4dbee4,citation,https://arxiv.org/pdf/1611.08976.pdf,Range Loss for Deep Face Recognition with Long-Tailed Training Data,2016 -106,Australia,MsCeleb,msceleb,-33.8832376,151.2004942,Southern University of Science and Technology,edu,e13360cda1ebd6fa5c3f3386c0862f292e4dbee4,citation,https://arxiv.org/pdf/1611.08976.pdf,Range Loss for Deep Face Recognition with Long-Tailed Training Data,2016 -107,China,MsCeleb,msceleb,36.20304395,117.05842113,Tianjin University,edu,e13360cda1ebd6fa5c3f3386c0862f292e4dbee4,citation,https://arxiv.org/pdf/1611.08976.pdf,Range Loss for Deep Face Recognition with Long-Tailed Training Data,2016 -108,United Kingdom,MsCeleb,msceleb,51.49887085,-0.17560797,Imperial College London,edu,b26d5d929cc3c0d14da058961ddd024f4c9690f5,citation,https://arxiv.org/pdf/1805.08657.pdf,Robust Conditional Generative Adversarial Networks,2018 -109,France,MsCeleb,msceleb,46.1464423,-1.1570872,La Rochelle University,edu,5c54e0f46330787c4fac48aecced9a8f8e37658a,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w23/Ming_Simple_Triplet_Loss_ICCV_2017_paper.pdf,Simple Triplet Loss Based on Intra/Inter-Class Metric Learning for Face Verification,2017 -110,China,MsCeleb,msceleb,39.9922379,116.30393816,Peking University,edu,4f0b641860d90dfa4c185670bf636149a2b2b717,citation,,Improve Cross-Domain Face Recognition with IBN-block,2018 -111,China,MsCeleb,msceleb,31.83907195,117.26420748,University of Science and Technology of 
China,edu,c5b324f7f9abdffc1be83f640674beda81b74315,citation,,Towards Open-Set Identity Preserving Face Synthesis,2018 -112,Italy,MsCeleb,msceleb,44.6451046,10.9279268,University of Modena and Reggio Emilia,edu,ff44d8938c52cfdca48c80f8e1618bbcbf91cb2a,citation,https://pdfs.semanticscholar.org/ff44/d8938c52cfdca48c80f8e1618bbcbf91cb2a.pdf,Towards Video Captioning with Naming: A Novel Dataset and a Multi-modal Approach,2017 -113,France,MsCeleb,msceleb,45.7833631,4.76877036,Ecole Centrale de Lyon,edu,727d03100d4a8e12620acd7b1d1972bbee54f0e6,citation,https://arxiv.org/pdf/1706.04264.pdf,von Mises-Fisher Mixture Model-based Deep learning: Application to Face Verification,2017 -114,France,MsCeleb,msceleb,48.832493,2.267474,Safran Identity and Security,company,727d03100d4a8e12620acd7b1d1972bbee54f0e6,citation,https://arxiv.org/pdf/1706.04264.pdf,von Mises-Fisher Mixture Model-based Deep learning: Application to Face Verification,2017 -115,China,MsCeleb,msceleb,39.980196,116.333305,"CASIA, Center for Research on Intelligent Perception and Computing, Beijing, 100190, China",edu,3ac09c2589178dac0b6a2ea2edf04b7629672d81,citation,https://arxiv.org/pdf/1708.02412.pdf,Wasserstein CNN: Learning Invariant Features for NIR-VIS Face Recognition,2018 -116,China,MsCeleb,msceleb,39.979203,116.33287,"CASIA, National Laboratory of Pattern Recognition",edu,3ac09c2589178dac0b6a2ea2edf04b7629672d81,citation,https://arxiv.org/pdf/1708.02412.pdf,Wasserstein CNN: Learning Invariant Features for NIR-VIS Face Recognition,2018 -117,China,MsCeleb,msceleb,40.0044795,116.370238,Chinese Academy of Sciences,edu,3ac09c2589178dac0b6a2ea2edf04b7629672d81,citation,https://arxiv.org/pdf/1708.02412.pdf,Wasserstein CNN: Learning Invariant Features for NIR-VIS Face Recognition,2018 -118,United States,MsCeleb,msceleb,38.99203005,-76.9461029,University of Maryland College Park,edu,b35ff9985aaee9371588330bcef0dfc88d1401d7,citation,,Deep Density Clustering of Unconstrained Faces,2018 -119,United States,MsCeleb,msceleb,30.6108365,-96.352128,Texas A&M University,edu,e36fdb50844132fc7925550398e68e7ae95467de,citation,,Face Verification with Disguise Variations via Deep Disguise Recognizer,2018 -120,United States,MsCeleb,msceleb,39.65404635,-79.96475355,West Virginia University,edu,e36fdb50844132fc7925550398e68e7ae95467de,citation,,Face Verification with Disguise Variations via Deep Disguise Recognizer,2018 -121,United States,MsCeleb,msceleb,42.4505507,-76.4783513,Cornell University,edu,9ccf528ef8df99372ce6286ffbb0bf6f9a505cca,citation,,Learning Compositional Visual Concepts with Mutual Consistency,2018 -122,United States,MsCeleb,msceleb,40.3442079,-74.5924599,"Siemens Corporate Research, Princeton, NJ",edu,9ccf528ef8df99372ce6286ffbb0bf6f9a505cca,citation,,Learning Compositional Visual Concepts with Mutual Consistency,2018 -123,United States,MsCeleb,msceleb,42.3383668,-71.08793524,Northeastern University,edu,3827f1cab643a57e3cd22fbffbf19dd5e8a298a8,citation,,One-Shot Face Recognition via Generative Learning,2018 -124,China,MsCeleb,msceleb,39.9106327,116.3356321,Chinese Academy of Science,edu,20f87ed94a423b5d8599d85d1f2f80bab8902107,citation,,Pose-Guided Photorealistic Face Rotation,2018 -125,United States,MsCeleb,msceleb,40.4441619,-79.94272826,Carnegie Mellon University,edu,67a9659de0bf671fafccd7f39b7587f85fb6dfbd,citation,,Ring Loss: Convex Feature Normalization for Face Recognition,2018 +4,France,MsCeleb,msceleb,46.1476461,-1.1549415,University of La Rochelle,edu,153fbae25efd061f9046970071d0cfe739a35a0e,citation,,FaceLiveNet: End-to-End 
Networks Combining Face Verification with Interactive Facial Expression-Based Liveness Detection,2018 +5,China,MsCeleb,msceleb,26.89887,112.590435,University of South China,edu,98518fc368d7e1478cef40f5f8fd4468763645ad,citation,http://downloads.hindawi.com/journals/cin/2018/4512473.pdf,A Community Detection Approach to Cleaning Extremely Large Face Database,2018 +6,China,MsCeleb,msceleb,28.2290209,112.99483204,"National University of Defense Technology, China",mil,98518fc368d7e1478cef40f5f8fd4468763645ad,citation,http://downloads.hindawi.com/journals/cin/2018/4512473.pdf,A Community Detection Approach to Cleaning Extremely Large Face Database,2018 +7,China,MsCeleb,msceleb,39.9601488,116.35193921,Beijing University of Posts and Telecommunications,edu,6cdbbced12bff53bcbdde3cdb6d20b4bd02a9d6c,citation,https://arxiv.org/pdf/1811.12026.pdf,Attacks on State-of-the-Art Face Recognition using Attentional Adversarial Attack Generative Network,2018 +8,China,MsCeleb,msceleb,39.98177,116.330086,National Laboratory of Pattern Recognition,edu,e47f4a127f41c055fb7893ddc295932ead783c63,citation,https://arxiv.org/pdf/1709.03675.pdf,Adversarial Discriminative Heterogeneous Face Recognition,2018 +9,China,MsCeleb,msceleb,39.9082804,116.2458527,University of Chinese Academy of Sciences,edu,e47f4a127f41c055fb7893ddc295932ead783c63,citation,https://arxiv.org/pdf/1709.03675.pdf,Adversarial Discriminative Heterogeneous Face Recognition,2018 +10,United States,MsCeleb,msceleb,42.718568,-84.47791571,Michigan State University,edu,b446bcd7fb78adfe346cf7a01a38e4f43760f363,citation,https://pdfs.semanticscholar.org/b446/bcd7fb78adfe346cf7a01a38e4f43760f363.pdf,To appear in ICB 2018 Longitudinal Study of Child Face Recognition,2017 +11,United Kingdom,MsCeleb,msceleb,51.3791442,-2.3252332,University of Bath,edu,26567da544239cc6628c5696b0b10539144cbd57,citation,https://arxiv.org/pdf/1811.12784.pdf,The GAN that Warped: Semantic Attribute Editing with Unpaired Data,2018 +12,United States,MsCeleb,msceleb,39.2899685,-76.62196103,University of Maryland,edu,872dfdeccf99bbbed7c8f1ea08afb2d713ebe085,citation,https://arxiv.org/pdf/1703.09507.pdf,L2-constrained Softmax Loss for Discriminative Face Verification,2017 +13,United States,MsCeleb,msceleb,42.718568,-84.47791571,Michigan State University,edu,3011b5fce49112228711a9e5f92d6f191687c1ea,citation,https://arxiv.org/pdf/1803.09014.pdf,Feature Transfer Learning for Deep Face Recognition with Long-Tail Data,2018 +14,United Kingdom,MsCeleb,msceleb,51.49887085,-0.17560797,Imperial College London,edu,1929863fff917ee7f6dc428fc1ce732777668eca,citation,https://arxiv.org/pdf/1712.04695.pdf,UV-GAN: Adversarial Facial UV Map Completion for Pose-Invariant Face Recognition,2018 +15,United States,MsCeleb,msceleb,39.2899685,-76.62196103,University of Maryland,edu,b6f758be954d34817d4ebaa22b30c63a4b8ddb35,citation,https://arxiv.org/pdf/1703.04835.pdf,A Proximity-Aware Hierarchical Clustering of Faces,2017 +16,United States,MsCeleb,msceleb,42.718568,-84.47791571,Michigan State University,edu,19fa871626df604639550c6445d2f76cd369dd13,citation,https://arxiv.org/pdf/1805.02283.pdf,DocFace: Matching ID Document Photos to Selfies,2018 +17,United States,MsCeleb,msceleb,32.87935255,-117.23110049,"University of California, San Diego",edu,d35534f3f59631951011539da2fe83f2844ca245,citation,https://arxiv.org/pdf/1705.07904.pdf,Semantically Decomposing the Latent Spaces of Generative Adversarial Networks,2017 +18,United States,MsCeleb,msceleb,37.43131385,-122.16936535,Stanford 
University,edu,d35534f3f59631951011539da2fe83f2844ca245,citation,https://arxiv.org/pdf/1705.07904.pdf,Semantically Decomposing the Latent Spaces of Generative Adversarial Networks,2017 +19,United States,MsCeleb,msceleb,40.4441619,-79.94272826,Carnegie Mellon University,edu,d35534f3f59631951011539da2fe83f2844ca245,citation,https://arxiv.org/pdf/1705.07904.pdf,Semantically Decomposing the Latent Spaces of Generative Adversarial Networks,2017 +20,Canada,MsCeleb,msceleb,49.2767454,-122.91777375,Simon Fraser University,edu,b301fd2fc33f24d6f75224e7c0991f4f04b64a65,citation,https://arxiv.org/pdf/1803.06340.pdf,Faces as Lighting Probes via Unsupervised Deep Highlight Extraction,2018 +21,China,MsCeleb,msceleb,28.2290209,112.99483204,"National University of Defense Technology, China",mil,b301fd2fc33f24d6f75224e7c0991f4f04b64a65,citation,https://arxiv.org/pdf/1803.06340.pdf,Faces as Lighting Probes via Unsupervised Deep Highlight Extraction,2018 +22,United States,MsCeleb,msceleb,42.3614256,-71.0812092,Microsoft Research Asia,company,b301fd2fc33f24d6f75224e7c0991f4f04b64a65,citation,https://arxiv.org/pdf/1803.06340.pdf,Faces as Lighting Probes via Unsupervised Deep Highlight Extraction,2018 +23,China,MsCeleb,msceleb,39.9041999,116.4073963,"Beijing, China",edu,7fa4e972da46735971aad52413d17c4014c49e6e,citation,https://arxiv.org/pdf/1709.02940.pdf,How to Train Triplet Networks with 100K Identities?,2017 +24,Singapore,MsCeleb,msceleb,1.2962018,103.77689944,National University of Singapore,edu,fca9ebaa30d69ccec8bb577c31693c936c869e72,citation,https://arxiv.org/pdf/1809.00338.pdf,Look Across Elapse: Disentangled Representation Learning and Photorealistic Cross-Age Face Synthesis for Age-Invariant Face Recognition,2018 +25,China,MsCeleb,msceleb,40.0044795,116.370238,Chinese Academy of Sciences,edu,fca9ebaa30d69ccec8bb577c31693c936c869e72,citation,https://arxiv.org/pdf/1809.00338.pdf,Look Across Elapse: Disentangled Representation Learning and Photorealistic Cross-Age Face Synthesis for Age-Invariant Face Recognition,2018 +26,Japan,MsCeleb,msceleb,35.6992503,139.7721568,"Hitachi, Ltd., Tokyo, Japan",company,3b4da93fbdf7ae520fa00d39ffa694e850b85162,citation,,Face-Voice Matching using Cross-modal Embeddings,2018 +27,China,MsCeleb,msceleb,30.19331415,120.11930822,Zhejiang University,edu,85860d38c66a5cf2e6ffd6475a3a2ba096ea2920,citation,,Celeb-500K: A Large Training Dataset for Face Recognition,2018 +28,China,MsCeleb,msceleb,40.0044795,116.370238,Chinese Academy of Sciences,edu,6f5309d8cc76d3d300b72745887addd2a2480ba8,citation,,KinNet: Fine-to-Coarse Deep Metric Learning for Kinship Verification,2017 +29,China,MsCeleb,msceleb,40.00229045,116.32098908,Tsinghua University,edu,09ad80c4e80e1e02afb8fa4cb6dab260fb66df53,citation,,Feature Learning for One-Shot Face Recognition,2018 +30,United States,MsCeleb,msceleb,40.4441619,-79.94272826,Carnegie Mellon University,edu,c71217b2b111a51a31cf1107c71d250348d1ff68,citation,https://arxiv.org/pdf/1703.09912.pdf,One Network to Solve Them All — Solving Linear Inverse Problems Using Deep Projection Models,2017 +31,China,MsCeleb,msceleb,22.4162632,114.2109318,Chinese University of Hong Kong,edu,de7d36173f9ca0e89e7a1991d541aed7c65127ea,citation,https://arxiv.org/pdf/1812.01288.pdf,FaceFeat-GAN: a Two-Stage Approach for Identity-Preserving Face Synthesis,2018 +32,China,MsCeleb,msceleb,22.59805605,113.98533784,Shenzhen Institutes of Advanced Technology,edu,de7d36173f9ca0e89e7a1991d541aed7c65127ea,citation,https://arxiv.org/pdf/1812.01288.pdf,FaceFeat-GAN: a Two-Stage Approach for 
Identity-Preserving Face Synthesis,2018 +33,China,MsCeleb,msceleb,40.0044795,116.370238,Chinese Academy of Sciences,edu,212608e00fc1e8912ff845ee7a4a67f88ba938fc,citation,https://arxiv.org/pdf/1704.02450.pdf,Coupled Deep Learning for Heterogeneous Face Recognition,2018 +34,China,MsCeleb,msceleb,22.4162632,114.2109318,Chinese University of Hong Kong,edu,1fd5d08394a3278ef0a89639e9bfec7cb482e0bf,citation,https://arxiv.org/pdf/1804.03487.pdf,Exploring Disentangled Feature Representation Beyond Face Identification,2018 +35,China,MsCeleb,msceleb,39.993008,116.329882,SenseTime,company,1fd5d08394a3278ef0a89639e9bfec7cb482e0bf,citation,https://arxiv.org/pdf/1804.03487.pdf,Exploring Disentangled Feature Representation Beyond Face Identification,2018 +36,United States,MsCeleb,msceleb,40.8722825,-73.89489171,City University of New York,edu,f74917fc0e55f4f5682909dcf6929abd19d33e2e,citation,https://pdfs.semanticscholar.org/f749/17fc0e55f4f5682909dcf6929abd19d33e2e.pdf,GAN Q UALITY I NDEX ( GQI ) BY GAN-INDUCED C LASSIFIER,2018 +37,United States,MsCeleb,msceleb,42.3383668,-71.08793524,Northeastern University,edu,f74917fc0e55f4f5682909dcf6929abd19d33e2e,citation,https://pdfs.semanticscholar.org/f749/17fc0e55f4f5682909dcf6929abd19d33e2e.pdf,GAN Q UALITY I NDEX ( GQI ) BY GAN-INDUCED C LASSIFIER,2018 +38,United States,MsCeleb,msceleb,47.6423318,-122.1369302,Microsoft,company,f74917fc0e55f4f5682909dcf6929abd19d33e2e,citation,https://pdfs.semanticscholar.org/f749/17fc0e55f4f5682909dcf6929abd19d33e2e.pdf,GAN Q UALITY I NDEX ( GQI ) BY GAN-INDUCED C LASSIFIER,2018 +39,China,MsCeleb,msceleb,32.0565957,118.77408833,Nanjing University,edu,8ff8c64288a2f7e4e8bf8fda865820b04ab3dbe8,citation,https://pdfs.semanticscholar.org/0056/92b9fa6728df3a7f14578c43410867bba425.pdf,Age Estimation Using Expectation of Label Distribution Learning,2018 +40,China,MsCeleb,msceleb,32.0575279,118.78682252,Southeast University,edu,8ff8c64288a2f7e4e8bf8fda865820b04ab3dbe8,citation,https://pdfs.semanticscholar.org/0056/92b9fa6728df3a7f14578c43410867bba425.pdf,Age Estimation Using Expectation of Label Distribution Learning,2018 +41,United States,MsCeleb,msceleb,42.4505507,-76.4783513,Cornell University,edu,dec0c26855da90876c405e9fd42830c3051c2f5f,citation,https://pdfs.semanticscholar.org/dec0/c26855da90876c405e9fd42830c3051c2f5f.pdf,Supplementary Material : Learning Compositional Visual Concepts with Mutual Consistency,2018 +42,France,MsCeleb,msceleb,48.8476037,2.2639934,"Université Paris-Saclay, France",edu,96e318f8ff91ba0b10348d4de4cb7c2142eb8ba9,citation,,State-of-the-art face recognition performance using publicly available software and datasets,2018 +43,United States,MsCeleb,msceleb,29.7207902,-95.34406271,University of Houston,edu,38d8ff137ff753f04689e6b76119a44588e143f3,citation,https://arxiv.org/pdf/1709.06532.pdf,When 3D-Aided 2D Face Recognition Meets Deep Learning: An extended UR2D for Pose-Invariant Face Recognition,2017 +44,United States,MsCeleb,msceleb,38.0333742,-84.5017758,University of Kentucky,edu,455a7e03a0c5ab618d0e86a06c9910ac179f0479,citation,https://arxiv.org/pdf/1807.08772.pdf,Identity Preserving Face Completion for Large Ocular Region Occlusion,2018 +45,United States,MsCeleb,msceleb,34.0224149,-118.28634407,University of Southern California,edu,455a7e03a0c5ab618d0e86a06c9910ac179f0479,citation,https://arxiv.org/pdf/1807.08772.pdf,Identity Preserving Face Completion for Large Ocular Region Occlusion,2018 +46,China,MsCeleb,msceleb,45.7413921,126.62552755,Harbin Institute of 
Technology,edu,455a7e03a0c5ab618d0e86a06c9910ac179f0479,citation,https://arxiv.org/pdf/1807.08772.pdf,Identity Preserving Face Completion for Large Ocular Region Occlusion,2018 +47,China,MsCeleb,msceleb,30.289532,120.009886,Hangzhou Normal University,edu,455a7e03a0c5ab618d0e86a06c9910ac179f0479,citation,https://arxiv.org/pdf/1807.08772.pdf,Identity Preserving Face Completion for Large Ocular Region Occlusion,2018 +48,United States,MsCeleb,msceleb,42.3383668,-71.08793524,Northeastern University,edu,c9efcd8e32dced6efa2bba64789df8d0a8e4996a,citation,,Deep Convolutional Neural Network with Independent Softmax for Large Scale Face Recognition,2016 +49,United Kingdom,MsCeleb,msceleb,51.49887085,-0.17560797,Imperial College London,edu,9b0489f2d5739213ef8c3e2e18739c4353c3a3b7,citation,https://arxiv.org/pdf/1801.06665.pdf,Visual Data Augmentation through Learning,2018 +50,United Kingdom,MsCeleb,msceleb,51.59029705,-0.22963221,Middlesex University,edu,9b0489f2d5739213ef8c3e2e18739c4353c3a3b7,citation,https://arxiv.org/pdf/1801.06665.pdf,Visual Data Augmentation through Learning,2018 +51,United States,MsCeleb,msceleb,42.718568,-84.47791571,Michigan State University,edu,ad2cb5c255e555d9767d526721a4c7053fa2ac58,citation,https://arxiv.org/pdf/1711.03990.pdf,Longitudinal Study of Child Face Recognition,2018 +52,China,MsCeleb,msceleb,22.4162632,114.2109318,Chinese University of Hong Kong,edu,9e182e0cd9d70f876f1be7652c69373bcdf37fb4,citation,https://arxiv.org/pdf/1807.07860.pdf,Talking Face Generation by Adversarially Disentangled Audio-Visual Representation,2018 +53,United States,MsCeleb,msceleb,38.99203005,-76.9461029,University of Maryland College Park,edu,83447d47bb2837b831b982ebf9e63616742bfdec,citation,https://arxiv.org/pdf/1812.04058.pdf,An Automatic System for Unconstrained Video-Based Face Recognition,2018 +54,United States,MsCeleb,msceleb,43.7192587,10.4207947,"CNR ISTI-Institute of Information Science and Technologies, Pisa, Italy",edu,266766818dbc5a4ca1161ae2bc14c9e269ddc490,citation,https://pdfs.semanticscholar.org/2667/66818dbc5a4ca1161ae2bc14c9e269ddc490.pdf,Boosting a Low-Cost Smart Home Environment with Usage and Access Control Rules,2018 +55,United States,MsCeleb,msceleb,38.99203005,-76.9461029,University of Maryland College Park,edu,944ea33211d67663e04d0181843db634e42cb2ca,citation,https://arxiv.org/pdf/1804.01159.pdf,Crystal Loss and Quality Pooling for Unconstrained Face Verification and Recognition.,2018 +56,Taiwan,MsCeleb,msceleb,25.01682835,121.53846924,National Taiwan University,edu,f15b7c317f106816bf444ac4ffb6c280cd6392c7,citation,http://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w1/Zhang_Deep_Disguised_Faces_CVPR_2018_paper.pdf,Deep Disguised Faces Recognition,2018 +57,China,MsCeleb,msceleb,40.00229045,116.32098908,Tsinghua University,edu,19d53bb35baf6ab02368756412800c218a2df71c,citation,https://arxiv.org/pdf/1711.09515.pdf,DeepDeblur: Fast one-step blurry face images restoration.,2017 +58,United States,MsCeleb,msceleb,42.718568,-84.47791571,Michigan State University,edu,12ba7c6f559a69fbfaacf61bfb2f8431505b09a0,citation,https://arxiv.org/pdf/1809.05620.pdf,DocFace+: ID Document to Selfie Matching,2018 +59,South Korea,MsCeleb,msceleb,37.5600406,126.9369248,Yonsei University,edu,d8526863f35b29cbf8ac2ae756eaae0d2930ffb1,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w27/Choe_Face_Generation_for_ICCV_2017_paper.pdf,Face Generation for Low-Shot Learning Using Generative Adversarial Networks,2017 
+60,Germany,MsCeleb,msceleb,52.381515,9.720171,"Leibniz Information Centre for Science and Technology, Hannover, Germany",edu,5209758096819efee15751c8875121bd27f2ee78,citation,https://arxiv.org/pdf/1806.08246.pdf,Finding Person Relations in Image Data of the Internet Archive,2018 +61,Germany,MsCeleb,msceleb,52.381515,9.720171,Leibniz Universität Hannover,edu,5209758096819efee15751c8875121bd27f2ee78,citation,https://arxiv.org/pdf/1806.08246.pdf,Finding Person Relations in Image Data of the Internet Archive,2018 +62,China,MsCeleb,msceleb,35.86166,104.195397,"Megvii Inc. (Face++), China",company,4874daed0f6a42d03011ed86e5ab46f231b02c13,citation,https://arxiv.org/pdf/1808.06210.pdf,GridFace: Face Rectification via Learning Local Homography Transformations,2018 +63,China,MsCeleb,msceleb,40.0044795,116.370238,Chinese Academy of Sciences,edu,a89cbc90bbb4477a48aec185f2a112ea7ebe9b4d,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w27/Xu_High_Performance_Large_ICCV_2017_paper.pdf,High Performance Large Scale Face Recognition with Multi-cognition Softmax and Feature Retrieval,2017 +64,Singapore,MsCeleb,msceleb,1.2962018,103.77689944,National University of Singapore,edu,a89cbc90bbb4477a48aec185f2a112ea7ebe9b4d,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w27/Xu_High_Performance_Large_ICCV_2017_paper.pdf,High Performance Large Scale Face Recognition with Multi-cognition Softmax and Feature Retrieval,2017 +65,Singapore,MsCeleb,msceleb,1.3392609,103.8916077,Panasonic Singapore,company,a89cbc90bbb4477a48aec185f2a112ea7ebe9b4d,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w27/Xu_High_Performance_Large_ICCV_2017_paper.pdf,High Performance Large Scale Face Recognition with Multi-cognition Softmax and Feature Retrieval,2017 +66,United States,MsCeleb,msceleb,40.8722825,-73.89489171,City University of New York,edu,32aeb90992f6cf8494b1b5c67f4b912feef05e9c,citation,https://arxiv.org/pdf/1802.00853.pdf,Incremental Classifier Learning with Generative Adversarial Networks,2018 +67,United States,MsCeleb,msceleb,47.6423318,-122.1369302,Microsoft,company,32aeb90992f6cf8494b1b5c67f4b912feef05e9c,citation,https://arxiv.org/pdf/1802.00853.pdf,Incremental Classifier Learning with Generative Adversarial Networks,2018 +68,United States,MsCeleb,msceleb,42.3383668,-71.08793524,Northeastern University,edu,32aeb90992f6cf8494b1b5c67f4b912feef05e9c,citation,https://arxiv.org/pdf/1802.00853.pdf,Incremental Classifier Learning with Generative Adversarial Networks,2018 +69,Singapore,MsCeleb,msceleb,1.2962018,103.77689944,National University of Singapore,edu,c808c784237f167c78a87cc5a9d48152579c27a4,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w27/Cheng_Know_You_at_ICCV_2017_paper.pdf,Know You at One Glance: A Compact Vector Representation for Low-Shot Learning,2017 +70,Singapore,MsCeleb,msceleb,1.3392609,103.8916077,Panasonic Singapore,company,c808c784237f167c78a87cc5a9d48152579c27a4,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w27/Cheng_Know_You_at_ICCV_2017_paper.pdf,Know You at One Glance: A Compact Vector Representation for Low-Shot Learning,2017 +71,United States,MsCeleb,msceleb,42.3383668,-71.08793524,Northeastern University,edu,332548fd2e52b27e062bd6dcc1db0953ced6ed48,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w27/Wu_Low-Shot_Face_Recognition_ICCV_2017_paper.pdf,Low-Shot Face Recognition with Hybrid Classifiers,2017 +72,United 
States,MsCeleb,msceleb,40.4441619,-79.94272826,Carnegie Mellon University,edu,98b2f21db344b8b9f7747feaf86f92558595990c,citation,https://pdfs.semanticscholar.org/98b2/f21db344b8b9f7747feaf86f92558595990c.pdf,PACES OF G ENERATIVE A DVERSARIAL N ETWORKS,2018 +73,United States,MsCeleb,msceleb,37.43131385,-122.16936535,Stanford University,edu,98b2f21db344b8b9f7747feaf86f92558595990c,citation,https://pdfs.semanticscholar.org/98b2/f21db344b8b9f7747feaf86f92558595990c.pdf,PACES OF G ENERATIVE A DVERSARIAL N ETWORKS,2018 +74,United States,MsCeleb,msceleb,32.87935255,-117.23110049,"University of California, San Diego",edu,98b2f21db344b8b9f7747feaf86f92558595990c,citation,https://pdfs.semanticscholar.org/98b2/f21db344b8b9f7747feaf86f92558595990c.pdf,PACES OF G ENERATIVE A DVERSARIAL N ETWORKS,2018 +75,China,MsCeleb,msceleb,22.5283157,113.94481,Shenzhen Institute of Wuhan University,edu,e13360cda1ebd6fa5c3f3386c0862f292e4dbee4,citation,https://arxiv.org/pdf/1611.08976.pdf,Range Loss for Deep Face Recognition with Long-Tailed Training Data,2016 +76,Australia,MsCeleb,msceleb,-33.8832376,151.2004942,Southern University of Science and Technology,edu,e13360cda1ebd6fa5c3f3386c0862f292e4dbee4,citation,https://arxiv.org/pdf/1611.08976.pdf,Range Loss for Deep Face Recognition with Long-Tailed Training Data,2016 +77,China,MsCeleb,msceleb,36.20304395,117.05842113,Tianjin University,edu,e13360cda1ebd6fa5c3f3386c0862f292e4dbee4,citation,https://arxiv.org/pdf/1611.08976.pdf,Range Loss for Deep Face Recognition with Long-Tailed Training Data,2016 +78,United Kingdom,MsCeleb,msceleb,51.49887085,-0.17560797,Imperial College London,edu,b26d5d929cc3c0d14da058961ddd024f4c9690f5,citation,https://arxiv.org/pdf/1805.08657.pdf,Robust Conditional Generative Adversarial Networks,2018 +79,France,MsCeleb,msceleb,46.1464423,-1.1570872,La Rochelle University,edu,5c54e0f46330787c4fac48aecced9a8f8e37658a,citation,http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w23/Ming_Simple_Triplet_Loss_ICCV_2017_paper.pdf,Simple Triplet Loss Based on Intra/Inter-Class Metric Learning for Face Verification,2017 +80,China,MsCeleb,msceleb,31.83907195,117.26420748,University of Science and Technology of China,edu,c5b324f7f9abdffc1be83f640674beda81b74315,citation,,Towards Open-Set Identity Preserving Face Synthesis,2018 +81,Italy,MsCeleb,msceleb,44.6451046,10.9279268,University of Modena and Reggio Emilia,edu,ff44d8938c52cfdca48c80f8e1618bbcbf91cb2a,citation,https://pdfs.semanticscholar.org/ff44/d8938c52cfdca48c80f8e1618bbcbf91cb2a.pdf,Towards Video Captioning with Naming: A Novel Dataset and a Multi-modal Approach,2017 +82,France,MsCeleb,msceleb,45.7833631,4.76877036,Ecole Centrale de Lyon,edu,727d03100d4a8e12620acd7b1d1972bbee54f0e6,citation,https://arxiv.org/pdf/1706.04264.pdf,von Mises-Fisher Mixture Model-based Deep learning: Application to Face Verification,2017 +83,France,MsCeleb,msceleb,48.832493,2.267474,Safran Identity and Security,company,727d03100d4a8e12620acd7b1d1972bbee54f0e6,citation,https://arxiv.org/pdf/1706.04264.pdf,von Mises-Fisher Mixture Model-based Deep learning: Application to Face Verification,2017 +84,China,MsCeleb,msceleb,39.980196,116.333305,"CASIA, Center for Research on Intelligent Perception and Computing, Beijing, 100190, China",edu,3ac09c2589178dac0b6a2ea2edf04b7629672d81,citation,https://arxiv.org/pdf/1708.02412.pdf,Wasserstein CNN: Learning Invariant Features for NIR-VIS Face Recognition,2018 +85,China,MsCeleb,msceleb,39.979203,116.33287,"CASIA, National Laboratory of Pattern 
Recognition",edu,3ac09c2589178dac0b6a2ea2edf04b7629672d81,citation,https://arxiv.org/pdf/1708.02412.pdf,Wasserstein CNN: Learning Invariant Features for NIR-VIS Face Recognition,2018 +86,China,MsCeleb,msceleb,40.0044795,116.370238,Chinese Academy of Sciences,edu,3ac09c2589178dac0b6a2ea2edf04b7629672d81,citation,https://arxiv.org/pdf/1708.02412.pdf,Wasserstein CNN: Learning Invariant Features for NIR-VIS Face Recognition,2018 +87,United States,MsCeleb,msceleb,38.99203005,-76.9461029,University of Maryland College Park,edu,b35ff9985aaee9371588330bcef0dfc88d1401d7,citation,,Deep Density Clustering of Unconstrained Faces,2018 +88,United States,MsCeleb,msceleb,30.6108365,-96.352128,Texas A&M University,edu,e36fdb50844132fc7925550398e68e7ae95467de,citation,,Face Verification with Disguise Variations via Deep Disguise Recognizer,2018 +89,United States,MsCeleb,msceleb,39.65404635,-79.96475355,West Virginia University,edu,e36fdb50844132fc7925550398e68e7ae95467de,citation,,Face Verification with Disguise Variations via Deep Disguise Recognizer,2018 +90,China,MsCeleb,msceleb,39.9106327,116.3356321,Chinese Academy of Science,edu,20f87ed94a423b5d8599d85d1f2f80bab8902107,citation,,Pose-Guided Photorealistic Face Rotation,2018 +91,China,MsCeleb,msceleb,39.98177,116.330086,National Laboratory of Pattern Recognition,edu,c7c8d150ece08b12e3abdb6224000c07a6ce7d47,citation,https://arxiv.org/pdf/1611.05271.pdf,DeMeshNet: Blind Face Inpainting for Deep MeshFace Verification,2018 +92,South Korea,MsCeleb,msceleb,36.0138857,129.3231836,POSTECH,edu,e6b45d5a86092bbfdcd6c3c54cda3d6c3ac6b227,citation,https://arxiv.org/pdf/1808.04976.pdf,Pairwise Relational Networks for Face Recognition,2018 +93,China,MsCeleb,msceleb,30.318764,120.363977,China Jiliang University,edu,406c5aeca71011fd8f8bd233744a81b53ccf635a,citation,,Scalable softmax loss for face verification,2017 diff --git a/site/datasets/verified/oxford_town_centre.csv b/site/datasets/verified/oxford_town_centre.csv index 8fb0f336..fb59174d 100644 --- a/site/datasets/verified/oxford_town_centre.csv +++ b/site/datasets/verified/oxford_town_centre.csv @@ -17,98 +17,99 @@ id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,t 15,United States,TownCentre,oxford_town_centre,37.3641651,-120.4254615,University of California at Merced,edu,b28eb219db9370cf20063288225cc2f3e6e5f984,citation,http://faculty.ucmerced.edu/mhyang/papers/iccv15_pose.pdf,Fast and Accurate Head Pose Estimation via Random Projection Forests,2015 16,Austria,TownCentre,oxford_town_centre,47.05821,15.46019568,Graz University of Technology,edu,356ec17af375b63a015d590562381a62f352f7d5,citation,http://lrs.icg.tugraz.at/pubs/possegger_cvpr14.pdf,Occlusion Geodesics for Online Multi-object Tracking,2014 17,United States,TownCentre,oxford_town_centre,45.57022705,-122.63709346,Concordia University,edu,b53289f3f3b17dad91fa4fd25d09fdbc14f8c8cc,citation,http://faculty.ucmerced.edu/mhyang/papers/cviu16_MOT.pdf,Online multi-object tracking via robust collaborative model and sample selection,2017 -18,United States,TownCentre,oxford_town_centre,37.8718992,-122.2585399,University of California,edu,b53289f3f3b17dad91fa4fd25d09fdbc14f8c8cc,citation,http://faculty.ucmerced.edu/mhyang/papers/cviu16_MOT.pdf,Online multi-object tracking via robust collaborative model and sample selection,2017 -19,United States,TownCentre,oxford_town_centre,28.59899755,-81.19712501,University of Central 
Florida,edu,920246280e7e70900762ddfa7c41a79ec4517350,citation,http://crcv-web.eecs.ucf.edu/papers/eccv2012/MPMPT-ECCV12.pdf,(MP) 2 T: multiple people multiple parts tracker,2012 -20,United States,TownCentre,oxford_town_centre,37.8718992,-122.2585399,University of California,edu,14d5bd23667db4413a7f362565be21d462d3fc93,citation,http://alumni.cs.ucr.edu/~zqin001/cvpr2014.pdf,An Online Learned Elementary Grouping Model for Multi-target Tracking,2014 -21,Germany,TownCentre,oxford_town_centre,52.381515,9.720171,Leibniz Universität Hannover,edu,9070045c1a9564a5f25b42f3facc7edf4c302483,citation,http://virtualhumans.mpi-inf.mpg.de/papers/lealPonsmollICCVW2011/lealPonsmollICCVW2011.pdf,Everybody needs somebody: Modeling social and grouping behavior on a linear programming multiple people tracker,2011 -22,Singapore,TownCentre,oxford_town_centre,1.3484104,103.68297965,Nanyang Technological University,edu,2323cb559c9e18673db836ffc283c27e4a002ed9,citation,http://arxiv.org/pdf/1605.04502v1.pdf,Joint Learning of Convolutional Neural Networks and Temporally Constrained Metrics for Tracklet Association,2016 -23,China,TownCentre,oxford_town_centre,39.905838,116.375516,"Huawei Technologies, Beijing, China",company,434627a03d4433b0df03058724524c3ac1c07478,citation,http://jianghz.com/pubs/mtt_tip_final.pdf,Online Multi-Target Tracking With Unified Handling of Complex Scenarios,2015 -24,China,TownCentre,oxford_town_centre,34.250803,108.983693,Xi’an Jiaotong University,edu,434627a03d4433b0df03058724524c3ac1c07478,citation,http://jianghz.com/pubs/mtt_tip_final.pdf,Online Multi-Target Tracking With Unified Handling of Complex Scenarios,2015 -25,United States,TownCentre,oxford_town_centre,28.59899755,-81.19712501,University of Central Florida,edu,084352b63e98d3b3310521fb3bda8cb4a77a0254,citation,http://crcv.ucf.edu/papers/1439.pdf,Part-based multiple-person tracking with partial occlusion handling,2012 -26,United States,TownCentre,oxford_town_centre,39.5469449,-119.81346566,University of Nevada,edu,084352b63e98d3b3310521fb3bda8cb4a77a0254,citation,http://crcv.ucf.edu/papers/1439.pdf,Part-based multiple-person tracking with partial occlusion handling,2012 -27,United Kingdom,TownCentre,oxford_town_centre,55.7782474,-4.1040988,University of the West of Scotland,edu,32b9be86de4f82c5a43da2a1a0a892515da8910d,citation,http://users.informatik.haw-hamburg.de/~ubicomp/arbeiten/papers/ICISP2014.pdf,Robust False Positive Detection for Real-Time Multi-target Tracking,2014 -28,Italy,TownCentre,oxford_town_centre,43.7776426,11.259765,"Università degli Studi di Firenze, Firenze",edu,2914a20df10f3bb55c5d4764ece85101c1a3e5a8,citation,http://www.micc.unifi.it/seidenari/wp-content/papercite-data/pdf/icpr_16.pdf,User interest profiling using tracking-free coarse gaze estimation,2016 -29,United States,TownCentre,oxford_town_centre,40.4441619,-79.94272826,Carnegie Mellon University,edu,1f4fed0183048d9014e22a72fd50e1e5fbe0777c,citation,https://pdfs.semanticscholar.org/6b7b/1760ed23ef15ec210b2d6795fdf9ad36d0e2.pdf,A Game-Theoretic Approach to Multi-Pedestrian Activity Forecasting,2016 -30,United States,TownCentre,oxford_town_centre,37.43131385,-122.16936535,Stanford University,edu,1f4fed0183048d9014e22a72fd50e1e5fbe0777c,citation,https://pdfs.semanticscholar.org/6b7b/1760ed23ef15ec210b2d6795fdf9ad36d0e2.pdf,A Game-Theoretic Approach to Multi-Pedestrian Activity Forecasting,2016 -31,United States,TownCentre,oxford_town_centre,42.3354481,-71.16813864,Boston 
College,edu,869df5e8221129850e81e77d4dc36e6c0f854fe6,citation,https://arxiv.org/pdf/1601.03094.pdf,A metric for sets of trajectories that is practical and mathematically consistent,2016 -32,United States,TownCentre,oxford_town_centre,34.1579742,-118.2894729,Disney Research,company,d8bc2e2537cecbe6e751d4791837251a249cd06d,citation,http://www.cse.psu.edu/~rtc12/Papers/wacv2016CarrCollins.pdf,Assessing tracking performance in complex scenarios using mean time between failures,2016 -33,United States,TownCentre,oxford_town_centre,40.7982133,-77.8599084,The Pennsylvania State University,edu,d8bc2e2537cecbe6e751d4791837251a249cd06d,citation,http://www.cse.psu.edu/~rtc12/Papers/wacv2016CarrCollins.pdf,Assessing tracking performance in complex scenarios using mean time between failures,2016 -34,United States,TownCentre,oxford_town_centre,28.59899755,-81.19712501,University of Central Florida,edu,2dfba157e0b5db5becb99b3c412ac729cf3bb32d,citation,https://pdfs.semanticscholar.org/7fb2/f6ce372db950f26f9395721651d6c6aa7b76.pdf,Automatic Detection and Tracking of Pedestrians in Videos with Various Crowd Densities,2012 -35,India,TownCentre,oxford_town_centre,12.9914929,80.2336907,"IIT Madras, India",edu,37f2e03c7cbec9ffc35eac51578e7e8fdfee3d4e,citation,http://www.cse.iitm.ac.in/~amittal/wacv2015_review.pdf,Co-operative Pedestrians Group Tracking in Crowded Scenes Using an MST Approach,2015 -36,United Kingdom,TownCentre,oxford_town_centre,55.91029135,-3.32345777,Heriot-Watt University,edu,b8af24279c58a718091817236f878c805a7843e1,citation,https://pdfs.semanticscholar.org/b8af/24279c58a718091817236f878c805a7843e1.pdf,Context Aware Anomalous Behaviour Detection in Crowded Surveillance,2013 -37,Russia,TownCentre,oxford_town_centre,55.8067104,37.5416381,"Faculty of Computer Science, Moscow, Russia",edu,224547337e1ace6411a69c2e06ce538bc67923f7,citation,https://pdfs.semanticscholar.org/2245/47337e1ace6411a69c2e06ce538bc67923f7.pdf,Convolutional Neural Network for Camera Pose Estimation from Object Detections,2017 -38,Germany,TownCentre,oxford_town_centre,48.7468939,9.0805141,Max Planck Institute for Intelligent Systems,edu,b6d0e461535116a675a0354e7da65b2c1d2958d4,citation,https://arxiv.org/pdf/1805.03430.pdf,Deep Directional Statistics: Pose Estimation with Uncertainty Quantification,2018 -39,United States,TownCentre,oxford_town_centre,38.7768106,-94.9442982,Amazon,company,b6d0e461535116a675a0354e7da65b2c1d2958d4,citation,https://arxiv.org/pdf/1805.03430.pdf,Deep Directional Statistics: Pose Estimation with Uncertainty Quantification,2018 -40,United States,TownCentre,oxford_town_centre,47.6423318,-122.1369302,Microsoft,company,b6d0e461535116a675a0354e7da65b2c1d2958d4,citation,https://arxiv.org/pdf/1805.03430.pdf,Deep Directional Statistics: Pose Estimation with Uncertainty Quantification,2018 -41,United Kingdom,TownCentre,oxford_town_centre,55.91029135,-3.32345777,Heriot-Watt University,edu,70be5432677c0fbe000ac0c28dda351a950e0536,citation,http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2014/W14/papers/Leach_Detecting_Social_Groups_2014_CVPR_paper.pdf,Detecting Social Groups in Crowded Surveillance Videos Using Visual Attention,2014 -42,Switzerland,TownCentre,oxford_town_centre,47.376313,8.5476699,ETH Zurich,edu,9458642e7645bfd865911140ee8413e2f5f9fcd6,citation,https://pdfs.semanticscholar.org/9458/642e7645bfd865911140ee8413e2f5f9fcd6.pdf,Efficient Multiple People Tracking Using Minimum Cost Arborescences,2014 -43,United Kingdom,TownCentre,oxford_town_centre,54.6141723,-5.9002151,Queen's University 
Belfast,edu,2a7935706d43c01789d43a81a1d391418f220a0a,citation,https://pure.qub.ac.uk/portal/files/31960902/285.pdf,Enhancing Linear Programming with Motion Modeling for Multi-target Tracking,2015 -44,Sri Lanka,TownCentre,oxford_town_centre,6.7970862,79.9019094,University of Moratuwa,edu,b183914d0b16647a41f0bfd4af64bf94a83a2b14,citation,http://iwinlab.eng.usf.edu/papers/Extensible%20video%20surveillance%20software%20with%20simultaneous%20event%20detection%20for%20low%20and%20high%20density%20crowd%20analysis.pdf,Extensible video surveillance software with simultaneous event detection for low and high density crowd analysis,2014 -45,United States,TownCentre,oxford_town_centre,33.5866784,-101.87539204,Electrical and Computer Engineering,edu,fa5aca45965e312362d2d75a69312a0678fdf5d7,citation,https://pdfs.semanticscholar.org/fa5a/ca45965e312362d2d75a69312a0678fdf5d7.pdf,Fast and Accurate Head Pose Estimation via Random Projection Forests : Supplementary Material,2015 -46,United States,TownCentre,oxford_town_centre,37.3641651,-120.4254615,University of California at Merced,edu,fa5aca45965e312362d2d75a69312a0678fdf5d7,citation,https://pdfs.semanticscholar.org/fa5a/ca45965e312362d2d75a69312a0678fdf5d7.pdf,Fast and Accurate Head Pose Estimation via Random Projection Forests : Supplementary Material,2015 -47,Australia,TownCentre,oxford_town_centre,-32.8892352,151.6998983,"University of Newcastle, Australia",edu,2feb7c57d51df998aafa6f3017662263a91625b4,citation,https://pdfs.semanticscholar.org/d344/9eaaf392fd07b676e744410049f4095b4b5c.pdf,Feature Selection for Intelligent Transportation Systems,2014 -48,Germany,TownCentre,oxford_town_centre,49.01546,8.4257999,Fraunhofer,company,1f82eebadc3ffa41820ad1a0f53770247fc96dcd,citation,https://pdfs.semanticscholar.org/c5ac/81b17b8fcc028f375fbbd090b558ba9a437a.pdf,Using Trajectories derived by Dense Optical Flows as a Spatial Component in Background Subtraction,2016 -49,United States,TownCentre,oxford_town_centre,42.3583961,-71.09567788,MIT,edu,b18f94c5296a9cebe9e779d50d193fd180f78ed9,citation,https://arxiv.org/pdf/1604.01431.pdf,Forecasting Interactive Dynamics of Pedestrians with Fictitious Play,2017 -50,United Kingdom,TownCentre,oxford_town_centre,51.7520849,-1.2516646,Oxford University,edu,b18f94c5296a9cebe9e779d50d193fd180f78ed9,citation,https://arxiv.org/pdf/1604.01431.pdf,Forecasting Interactive Dynamics of Pedestrians with Fictitious Play,2017 -51,United States,TownCentre,oxford_town_centre,37.43131385,-122.16936535,Stanford University,edu,b18f94c5296a9cebe9e779d50d193fd180f78ed9,citation,https://arxiv.org/pdf/1604.01431.pdf,Forecasting Interactive Dynamics of Pedestrians with Fictitious Play,2017 -52,Netherlands,TownCentre,oxford_town_centre,52.3553655,4.9501644,University of Amsterdam,edu,687ec23addf5a1279e49cc46b78e3245af94ac7b,citation,https://pdfs.semanticscholar.org/687e/c23addf5a1279e49cc46b78e3245af94ac7b.pdf,UvA-DARE ( Digital Academic Repository ) Visual Tracking : An Experimental Survey Smeulders,2013 -53,Italy,TownCentre,oxford_town_centre,45.1847248,9.1582069,"Italian Institute of Technology, Genova, Italy",edu,5ab9f00a707a55f4955b378981ad425aa1cb8ea3,citation,https://arxiv.org/pdf/1901.02000.pdf,Forecasting People Trajectories and Head Poses by Jointly Reasoning on Tracklets and Vislets,2019 -54,Germany,TownCentre,oxford_town_centre,48.1820038,11.5978282,"OSRAM GmbH, Germany",company,5ab9f00a707a55f4955b378981ad425aa1cb8ea3,citation,https://arxiv.org/pdf/1901.02000.pdf,Forecasting People Trajectories and Head Poses by Jointly Reasoning on 
Tracklets and Vislets,2019 -55,Italy,TownCentre,oxford_town_centre,45.437398,11.003376,University of Verona,edu,5ab9f00a707a55f4955b378981ad425aa1cb8ea3,citation,https://arxiv.org/pdf/1901.02000.pdf,Forecasting People Trajectories and Head Poses by Jointly Reasoning on Tracklets and Vislets,2019 -56,United Kingdom,TownCentre,oxford_town_centre,51.7534538,-1.25400997,University of Oxford,edu,3ed9730e5ec8716e8cdf55f207ef973a9c854574,citation,https://arxiv.org/pdf/1612.05234.pdf,Visual Compiler: Synthesizing a Scene-Specific Pedestrian Detector and Pose Estimator,2016 -57,United States,TownCentre,oxford_town_centre,29.7207902,-95.34406271,University of Houston,edu,58eba9930b63cc14715368acf40017293b8dc94f,citation,https://pdfs.semanticscholar.org/7508/ac08dd7b9694bcfe71a617df7fcf3df80952.pdf,What Do I See? Modeling Human Visual Perception for Multi-person Tracking,2014 -58,United States,TownCentre,oxford_town_centre,29.7207902,-95.34406271,University of Houston,edu,a0b489eeb4f7fd2249da756d829e179a6718d9d1,citation,,"""Seeing is Believing"": Pedestrian Trajectory Forecasting Using Visual Frustum of Attention",2018 -59,Belgium,TownCentre,oxford_town_centre,50.8779545,4.7002953,"KULeuven, EAVISE",edu,4ec4392246a7760d189cd6ea48a81664cd2fe4bf,citation,https://pdfs.semanticscholar.org/4ec4/392246a7760d189cd6ea48a81664cd2fe4bf.pdf,GPU Accelerated ACF Detector,2018 -60,United States,TownCentre,oxford_town_centre,40.7982133,-77.8599084,The Pennsylvania State University,edu,6e32c368a6157fb911c9363dc3e967a7fb2ad9f7,citation,https://pdfs.semanticscholar.org/8268/d68f6aa510a765466b2c7f2ba2ea34a48c51.pdf,Hybrid Stochastic / Deterministic Optimization for Tracking Sports Players and Pedestrians,2014 -61,United States,TownCentre,oxford_town_centre,40.4439789,-79.9464634,Disney Research Pittsburgh,edu,6e32c368a6157fb911c9363dc3e967a7fb2ad9f7,citation,https://pdfs.semanticscholar.org/8268/d68f6aa510a765466b2c7f2ba2ea34a48c51.pdf,Hybrid Stochastic / Deterministic Optimization for Tracking Sports Players and Pedestrians,2014 -62,India,TownCentre,oxford_town_centre,13.0304619,77.5646862,"M.S. 
Ramaiah Institute of Technology, Bangalore, India",edu,6f089f9959cc711e16f1ebe0c6251aaf8a65959a,citation,https://pdfs.semanticscholar.org/6f08/9f9959cc711e16f1ebe0c6251aaf8a65959a.pdf,Improvement in object detection using Super Pixels,2016 -63,United States,TownCentre,oxford_town_centre,38.99203005,-76.9461029,University of Maryland College Park,edu,4e82908e6482d973c280deb79c254631a60f1631,citation,https://pdfs.semanticscholar.org/4e82/908e6482d973c280deb79c254631a60f1631.pdf,Improving Efficiency and Scalability in Visual Surveillance Applications,2013 -64,United States,TownCentre,oxford_town_centre,37.8718992,-122.2585399,University of California,edu,38b5a83f7941fea5fd82466f8ce1ce4ed7749f59,citation,http://rlair.cs.ucr.edu/papers/docs/grouptracking.pdf,Improving multi-target tracking via social grouping,2012 -65,Singapore,TownCentre,oxford_town_centre,1.3484104,103.68297965,Nanyang Technological University,edu,13caf4d2e0a4b6fcfcd4b9e8e2341b8ebd38258d,citation,https://arxiv.org/pdf/1605.04502.pdf,Joint Learning of Siamese CNNs and Temporally Constrained Metrics for Tracklet Association,2016 -66,United States,TownCentre,oxford_town_centre,35.9049122,-79.0469134,The University of North Carolina at Chapel Hill,edu,45e459462a80af03e1bb51a178648c10c4250925,citation,https://arxiv.org/pdf/1606.08998.pdf,LCrowdV: Generating Labeled Videos for Simulation-based Crowd Behavior Learning,2016 -67,China,TownCentre,oxford_town_centre,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,c0262e24324a6a4e6af5bd99fc79e2eb802519b3,citation,https://arxiv.org/pdf/1611.03968.pdf,Learning Scene-specific Object Detectors Based on a Generative-Discriminative Model with Minimal Supervision,2016 -68,China,TownCentre,oxford_town_centre,30.527151,114.400762,China University of Geosciences,edu,c0262e24324a6a4e6af5bd99fc79e2eb802519b3,citation,https://arxiv.org/pdf/1611.03968.pdf,Learning Scene-specific Object Detectors Based on a Generative-Discriminative Model with Minimal Supervision,2016 -69,China,TownCentre,oxford_town_centre,32.0565957,118.77408833,Nanjing University,edu,c0262e24324a6a4e6af5bd99fc79e2eb802519b3,citation,https://arxiv.org/pdf/1611.03968.pdf,Learning Scene-specific Object Detectors Based on a Generative-Discriminative Model with Minimal Supervision,2016 -70,United Kingdom,TownCentre,oxford_town_centre,51.5247272,-0.03931035,Queen Mary University of London,edu,1883387726897d94b663cc4de4df88e5c31df285,citation,http://www.eecs.qmul.ac.uk/~andrea/papers/2014_TIP_MultiTargetTrackingEvaluation_Tahir_Poiesi_Cavallaro.pdf,Measures of Effective Video Tracking,2014 -71,United States,TownCentre,oxford_town_centre,35.9113971,-79.0504529,University of North Carolina at Chapel Hill,edu,8d2bf6ecbfda94f57000b84509bf77f4c47c1c66,citation,https://arxiv.org/pdf/1707.09100.pdf,MixedPeds: Pedestrian Detection in Unannotated Videos Using Synthetically Generated Human-Agents for Training,2018 -72,United States,TownCentre,oxford_town_centre,37.8718992,-122.2585399,University of California,edu,b506aa23949b6d1f0c868ad03aaaeb5e5f7f6b57,citation,http://rlair.cs.ucr.edu/papers/docs/zqin-phd.pdf,Modeling Social and Temporal Context for Video Analysis,2015 -73,Australia,TownCentre,oxford_town_centre,-34.920603,138.6062277,Adelaide University,edu,5bae9822d703c585a61575dced83fa2f4dea1c6d,citation,https://arxiv.org/pdf/1504.01942.pdf,MOTChallenge 2015: Towards a Benchmark for Multi-Target Tracking,2015 -74,Switzerland,TownCentre,oxford_town_centre,47.376313,8.5476699,ETH 
Zurich,edu,5bae9822d703c585a61575dced83fa2f4dea1c6d,citation,https://arxiv.org/pdf/1504.01942.pdf,MOTChallenge 2015: Towards a Benchmark for Multi-Target Tracking,2015 -75,Germany,TownCentre,oxford_town_centre,49.8748277,8.6563281,TU Darmstadt,edu,5bae9822d703c585a61575dced83fa2f4dea1c6d,citation,https://arxiv.org/pdf/1504.01942.pdf,MOTChallenge 2015: Towards a Benchmark for Multi-Target Tracking,2015 -76,United States,TownCentre,oxford_town_centre,37.8718992,-122.2585399,University of California,edu,e6d48d23308a9e0a215f7b5ba6ae30ee5d2f0ef5,citation,https://pdfs.semanticscholar.org/e6d4/8d23308a9e0a215f7b5ba6ae30ee5d2f0ef5.pdf,Multi-person Tracking by Online Learned Grouping Model with Non-linear Motion Context,2015 -77,France,TownCentre,oxford_town_centre,45.217886,5.807369,INRIA,edu,fc30d7dbf4c3cdd377d8cd4e7eeabd5d73814b8f,citation,https://pdfs.semanticscholar.org/fc30/d7dbf4c3cdd377d8cd4e7eeabd5d73814b8f.pdf,Multiple Object Tracking by Efficient Graph Partitioning,2014 -78,Germany,TownCentre,oxford_town_centre,52.381515,9.720171,Leibniz Universität Hannover,edu,290eda31bc13cbd5933acec8b6a25b3e3761c788,citation,https://arxiv.org/pdf/1411.7935.pdf,Multiple object tracking with context awareness,2014 -79,Czech Republic,TownCentre,oxford_town_centre,49.20172,16.6033168,Brno University of Technology,edu,dc53c4bb04e787a0d45dd761ba2101cc51c17b82,citation,https://pdfs.semanticscholar.org/dc53/c4bb04e787a0d45dd761ba2101cc51c17b82.pdf,Multiple-Person Tracking by Detection,2016 -80,Germany,TownCentre,oxford_town_centre,48.1820038,11.5978282,"OSRAM GmbH, Germany",company,943b1b92b5bdee0b5770418c645a4a17bded1ccf,citation,https://arxiv.org/pdf/1805.00652.pdf,MX-LSTM: Mixing Tracklets and Vislets to Jointly Forecast Trajectories and Head Poses,2018 -81,Italy,TownCentre,oxford_town_centre,45.437398,11.003376,University of Verona,edu,943b1b92b5bdee0b5770418c645a4a17bded1ccf,citation,https://arxiv.org/pdf/1805.00652.pdf,MX-LSTM: Mixing Tracklets and Vislets to Jointly Forecast Trajectories and Head Poses,2018 -82,France,TownCentre,oxford_town_centre,48.8422058,2.3451689,"INRIA / Ecole Normale Supérieure, France",edu,47119c99f5aa1e47bbeb86de0f955e7c500e6a93,citation,https://arxiv.org/pdf/1408.3304.pdf,On pairwise costs for network flow multi-object tracking,2015 -83,United States,TownCentre,oxford_town_centre,42.3504253,-71.10056114,Boston University,edu,1ae3dd081b93c46cda4d72100d8b1d59eb585157,citation,https://pdfs.semanticscholar.org/fea1/0f39b0a77035fb549fc580fd951384b79f9b.pdf,Online Motion Agreement Tracking,2013 -84,Malaysia,TownCentre,oxford_town_centre,4.3400673,101.1429799,Universiti Tunku Abdul Rahman,edu,e1f815c50a6c0c6d790c60a1348393264f829e60,citation,https://pdfs.semanticscholar.org/e1f8/15c50a6c0c6d790c60a1348393264f829e60.pdf,PEDESTRIAN DETECTION AND TRACKING IN SURVEILLANCE VIDEO By PENNY CHONG,2016 -85,Germany,TownCentre,oxford_town_centre,52.381515,9.720171,Leibniz Universität Hannover,edu,422d352a7d26fef692a3cd24466bfb5b4526efea,citation,https://pdfs.semanticscholar.org/422d/352a7d26fef692a3cd24466bfb5b4526efea.pdf,Pedestrian interaction in tracking : the social force model and global optimization methods,2012 -86,Sweden,TownCentre,oxford_town_centre,57.6897063,11.9741654,Chalmers University of Technology,edu,367b5b814aa991329c2ae7f8793909ad8c0a56f1,citation,https://arxiv.org/pdf/1211.0191.pdf,Performance evaluation of random set based pedestrian tracking algorithms,2013 -87,Japan,TownCentre,oxford_town_centre,35.5152072,134.1733553,Tottori 
University,edu,9d89f1bc88fd65e90b31a2129719384796bed17a,citation,http://vision.unipv.it/CV/materiale2016-17/2nd%20Choice/0225.pdf,Person re-identification using co-occurrence attributes of physical and adhered human characteristics,2016 -88,Germany,TownCentre,oxford_town_centre,52.381515,9.720171,Leibniz Universität Hannover,edu,48705017d91a157949cfaaeb19b826014899a36b,citation,https://pdfs.semanticscholar.org/4870/5017d91a157949cfaaeb19b826014899a36b.pdf,PROBABILISTIC MULTI-PERSON TRACKING USING DYNAMIC BAYES NETWORKS,2015 -89,Italy,TownCentre,oxford_town_centre,39.2173657,9.1149218,"Università degli Studi di Cagliari, Italy",edu,7c1f47ca50a8a55f93bf69791d9df2f994019758,citation,http://veprints.unica.it/1295/1/PhD_ThesisPalaF.pdf,Re-identification and semantic retrieval of pedestrians in video surveillance scenarios,2016 -90,United Kingdom,TownCentre,oxford_town_centre,51.5247272,-0.03931035,Queen Mary University of London,edu,3a28059df29b74775f77fd20a15dc6b5fe857556,citation,https://pdfs.semanticscholar.org/3a28/059df29b74775f77fd20a15dc6b5fe857556.pdf,Riccardo Mazzon PhD Thesis 2013,2013 -91,Brazil,TownCentre,oxford_town_centre,-30.0338248,-51.218828,Federal University of Rio Grande do Sul,edu,057517452369751bd63d83902ea91558d58161da,citation,http://inf.ufrgs.br/~gfuhr/papers/102095_3.pdf,Robust Patch-Based Pedestrian Tracking Using Monocular Calibrated Cameras,2012 -92,China,TownCentre,oxford_town_centre,28.727339,115.816633,Jiangxi University of Finance and Economics,edu,1642358cd9410abe9ee512d34ba68296b308770e,citation,https://arxiv.org/pdf/1807.04562.pdf,Robustness Analysis of Pedestrian Detectors for Surveillance,2018 -93,Singapore,TownCentre,oxford_town_centre,1.3484104,103.68297965,Nanyang Technological University,edu,1642358cd9410abe9ee512d34ba68296b308770e,citation,https://arxiv.org/pdf/1807.04562.pdf,Robustness Analysis of Pedestrian Detectors for Surveillance,2018 -94,China,TownCentre,oxford_town_centre,34.250803,108.983693,Xi’an Jiaotong University,edu,1642358cd9410abe9ee512d34ba68296b308770e,citation,https://arxiv.org/pdf/1807.04562.pdf,Robustness Analysis of Pedestrian Detectors for Surveillance,2018 -95,Singapore,TownCentre,oxford_town_centre,1.3484104,103.68297965,Nanyang Technological University,edu,7c132e0a2b7e13c78784287af38ad74378da31e5,citation,https://pdfs.semanticscholar.org/7c13/2e0a2b7e13c78784287af38ad74378da31e5.pdf,Salient Parts based Multi-people Tracking,2015 -96,China,TownCentre,oxford_town_centre,40.0044795,116.370238,Chinese Academy of Sciences,edu,679136c2844eeddca34e98e483aca1ff6ef5e902,citation,https://arxiv.org/pdf/1712.08745.pdf,Scene-Specific Pedestrian Detection Based on Parallel Vision,2017 -97,China,TownCentre,oxford_town_centre,34.250803,108.983693,Xi’an Jiaotong University,edu,679136c2844eeddca34e98e483aca1ff6ef5e902,citation,https://arxiv.org/pdf/1712.08745.pdf,Scene-Specific Pedestrian Detection Based on Parallel Vision,2017 -98,China,TownCentre,oxford_town_centre,40.0044795,116.370238,Chinese Academy of Sciences,edu,57e9b0d3ab6295e914d5a30cfaa3b2c81189abc1,citation,https://arxiv.org/pdf/1611.07544.pdf,Self-Learning Scene-Specific Pedestrian Detectors Using a Progressive Latent Model,2017 -99,United States,TownCentre,oxford_town_centre,35.9990522,-78.9290629,Duke University,edu,57e9b0d3ab6295e914d5a30cfaa3b2c81189abc1,citation,https://arxiv.org/pdf/1611.07544.pdf,Self-Learning Scene-Specific Pedestrian Detectors Using a Progressive Latent Model,2017 -100,Switzerland,TownCentre,oxford_town_centre,47.3764534,8.54770931,ETH 
Zürich,edu,70b42bbd76e6312d39ea06b8a0c24beb4a93e022,citation,http://www.tnt.uni-hannover.de/papers/data/1075/WACV2015_Abstract.pdf,Solving Multiple People Tracking in a Minimum Cost Arborescence,2015 -101,United States,TownCentre,oxford_town_centre,42.718568,-84.47791571,Michigan State University,edu,acf0db156406ddad1ace2ff2696cb60d0a04cf7c,citation,http://hal.cse.msu.edu/assets/pdfs/papers/2018-ijcv-visual-compiler.pdf,Synthesizing a Scene-Specific Pedestrian Detector and Pose Estimator for Static Video Surveillance,2018 -102,United Kingdom,TownCentre,oxford_town_centre,51.7534538,-1.25400997,University of Oxford,edu,acf0db156406ddad1ace2ff2696cb60d0a04cf7c,citation,http://hal.cse.msu.edu/assets/pdfs/papers/2018-ijcv-visual-compiler.pdf,Synthesizing a Scene-Specific Pedestrian Detector and Pose Estimator for Static Video Surveillance,2018 -103,Japan,TownCentre,oxford_town_centre,36.05238585,140.11852361,Institute of Industrial Science,edu,acf0db156406ddad1ace2ff2696cb60d0a04cf7c,citation,http://hal.cse.msu.edu/assets/pdfs/papers/2018-ijcv-visual-compiler.pdf,Synthesizing a Scene-Specific Pedestrian Detector and Pose Estimator for Static Video Surveillance,2018 -104,United States,TownCentre,oxford_town_centre,40.4441619,-79.94272826,Carnegie Mellon University,edu,acf0db156406ddad1ace2ff2696cb60d0a04cf7c,citation,http://hal.cse.msu.edu/assets/pdfs/papers/2018-ijcv-visual-compiler.pdf,Synthesizing a Scene-Specific Pedestrian Detector and Pose Estimator for Static Video Surveillance,2018 -105,Sweden,TownCentre,oxford_town_centre,57.7172004,11.9218558,"Volvo Construction Equipment, Göthenburg, Sweden",company,acf0db156406ddad1ace2ff2696cb60d0a04cf7c,citation,http://hal.cse.msu.edu/assets/pdfs/papers/2018-ijcv-visual-compiler.pdf,Synthesizing a Scene-Specific Pedestrian Detector and Pose Estimator for Static Video Surveillance,2018 -106,United States,TownCentre,oxford_town_centre,35.9990522,-78.9290629,Duke University,edu,64e0690dd176a93de9d4328f6e31fc4afe1e7536,citation,https://pdfs.semanticscholar.org/64e0/690dd176a93de9d4328f6e31fc4afe1e7536.pdf,Tracking Multiple People Online and in Real Time,2014 -107,Switzerland,TownCentre,oxford_town_centre,47.3764534,8.54770931,ETH Zürich,edu,64c78c8bf779a27e819fd9d5dba91247ab5a902b,citation,https://arxiv.org/pdf/1607.07304.pdf,Tracking with multi-level features.,2016 -108,Germany,TownCentre,oxford_town_centre,52.381515,9.720171,Leibniz Universität Hannover,edu,64c78c8bf779a27e819fd9d5dba91247ab5a902b,citation,https://arxiv.org/pdf/1607.07304.pdf,Tracking with multi-level features.,2016 -109,Germany,TownCentre,oxford_town_centre,48.14955455,11.56775314,Technical University Munich,edu,64c78c8bf779a27e819fd9d5dba91247ab5a902b,citation,https://arxiv.org/pdf/1607.07304.pdf,Tracking with multi-level features.,2016 -110,Singapore,TownCentre,oxford_town_centre,1.3484104,103.68297965,Nanyang Technological University,edu,7d3698c0e828d05f147682b0f5bfcd3b681ff205,citation,https://arxiv.org/pdf/1511.06654.pdf,Tracklet Association by Online Target-Specific Metric Learning and Coherent Dynamics Estimation,2017 -111,Australia,TownCentre,oxford_town_centre,-35.2809368,149.1300092,"NICTA, Canberra",edu,f0cc615b14c97482faa9c47eb855303c71ff03a7,citation,https://pdfs.semanticscholar.org/f0cc/615b14c97482faa9c47eb855303c71ff03a7.pdf,Tracklet clustering for robust multiple object tracking using distance dependent Chinese restaurant processes,2016 -112,Germany,TownCentre,oxford_town_centre,52.5180641,13.3250425,TU 
Berlin,edu,c4cd19cf41a2f5cd543d81b94afe6cc42785920a,citation,http://elvera.nue.tu-berlin.de/files/1491Bochinski2016.pdf,Training a convolutional neural network for multi-class object detection using solely virtual world data,2016 +18,United States,TownCentre,oxford_town_centre,28.59899755,-81.19712501,University of Central Florida,edu,920246280e7e70900762ddfa7c41a79ec4517350,citation,http://crcv-web.eecs.ucf.edu/papers/eccv2012/MPMPT-ECCV12.pdf,(MP) 2 T: multiple people multiple parts tracker,2012 +19,United States,TownCentre,oxford_town_centre,33.98071305,-117.33261035,"University of California, Riverside",edu,14d5bd23667db4413a7f362565be21d462d3fc93,citation,http://alumni.cs.ucr.edu/~zqin001/cvpr2014.pdf,An Online Learned Elementary Grouping Model for Multi-target Tracking,2014 +20,Germany,TownCentre,oxford_town_centre,52.381515,9.720171,Leibniz Universität Hannover,edu,9070045c1a9564a5f25b42f3facc7edf4c302483,citation,http://virtualhumans.mpi-inf.mpg.de/papers/lealPonsmollICCVW2011/lealPonsmollICCVW2011.pdf,Everybody needs somebody: Modeling social and grouping behavior on a linear programming multiple people tracker,2011 +21,Singapore,TownCentre,oxford_town_centre,1.3484104,103.68297965,Nanyang Technological University,edu,2323cb559c9e18673db836ffc283c27e4a002ed9,citation,http://arxiv.org/pdf/1605.04502v1.pdf,Joint Learning of Convolutional Neural Networks and Temporally Constrained Metrics for Tracklet Association,2016 +22,China,TownCentre,oxford_town_centre,39.905838,116.375516,"Huawei Technologies, Beijing, China",company,434627a03d4433b0df03058724524c3ac1c07478,citation,http://jianghz.com/pubs/mtt_tip_final.pdf,Online Multi-Target Tracking With Unified Handling of Complex Scenarios,2015 +23,China,TownCentre,oxford_town_centre,34.250803,108.983693,Xi’an Jiaotong University,edu,434627a03d4433b0df03058724524c3ac1c07478,citation,http://jianghz.com/pubs/mtt_tip_final.pdf,Online Multi-Target Tracking With Unified Handling of Complex Scenarios,2015 +24,United States,TownCentre,oxford_town_centre,28.59899755,-81.19712501,University of Central Florida,edu,084352b63e98d3b3310521fb3bda8cb4a77a0254,citation,http://crcv.ucf.edu/papers/1439.pdf,Part-based multiple-person tracking with partial occlusion handling,2012 +25,United States,TownCentre,oxford_town_centre,39.5469449,-119.81346566,University of Nevada,edu,084352b63e98d3b3310521fb3bda8cb4a77a0254,citation,http://crcv.ucf.edu/papers/1439.pdf,Part-based multiple-person tracking with partial occlusion handling,2012 +26,United Kingdom,TownCentre,oxford_town_centre,55.7782474,-4.1040988,University of the West of Scotland,edu,32b9be86de4f82c5a43da2a1a0a892515da8910d,citation,http://users.informatik.haw-hamburg.de/~ubicomp/arbeiten/papers/ICISP2014.pdf,Robust False Positive Detection for Real-Time Multi-target Tracking,2014 +27,Italy,TownCentre,oxford_town_centre,43.7776426,11.259765,"Università degli Studi di Firenze, Firenze",edu,2914a20df10f3bb55c5d4764ece85101c1a3e5a8,citation,http://www.micc.unifi.it/seidenari/wp-content/papercite-data/pdf/icpr_16.pdf,User interest profiling using tracking-free coarse gaze estimation,2016 +28,United States,TownCentre,oxford_town_centre,40.4441619,-79.94272826,Carnegie Mellon University,edu,1f4fed0183048d9014e22a72fd50e1e5fbe0777c,citation,https://pdfs.semanticscholar.org/6b7b/1760ed23ef15ec210b2d6795fdf9ad36d0e2.pdf,A Game-Theoretic Approach to Multi-Pedestrian Activity Forecasting,2016 +29,United States,TownCentre,oxford_town_centre,37.43131385,-122.16936535,Stanford 
University,edu,1f4fed0183048d9014e22a72fd50e1e5fbe0777c,citation,https://pdfs.semanticscholar.org/6b7b/1760ed23ef15ec210b2d6795fdf9ad36d0e2.pdf,A Game-Theoretic Approach to Multi-Pedestrian Activity Forecasting,2016 +30,United States,TownCentre,oxford_town_centre,42.3354481,-71.16813864,Boston College,edu,869df5e8221129850e81e77d4dc36e6c0f854fe6,citation,https://arxiv.org/pdf/1601.03094.pdf,A metric for sets of trajectories that is practical and mathematically consistent,2016 +31,United States,TownCentre,oxford_town_centre,34.1579742,-118.2894729,Disney Research,company,d8bc2e2537cecbe6e751d4791837251a249cd06d,citation,http://www.cse.psu.edu/~rtc12/Papers/wacv2016CarrCollins.pdf,Assessing tracking performance in complex scenarios using mean time between failures,2016 +32,United States,TownCentre,oxford_town_centre,40.7982133,-77.8599084,The Pennsylvania State University,edu,d8bc2e2537cecbe6e751d4791837251a249cd06d,citation,http://www.cse.psu.edu/~rtc12/Papers/wacv2016CarrCollins.pdf,Assessing tracking performance in complex scenarios using mean time between failures,2016 +33,United States,TownCentre,oxford_town_centre,28.59899755,-81.19712501,University of Central Florida,edu,2dfba157e0b5db5becb99b3c412ac729cf3bb32d,citation,https://pdfs.semanticscholar.org/7fb2/f6ce372db950f26f9395721651d6c6aa7b76.pdf,Automatic Detection and Tracking of Pedestrians in Videos with Various Crowd Densities,2012 +34,India,TownCentre,oxford_town_centre,12.9914929,80.2336907,"IIT Madras, India",edu,37f2e03c7cbec9ffc35eac51578e7e8fdfee3d4e,citation,http://www.cse.iitm.ac.in/~amittal/wacv2015_review.pdf,Co-operative Pedestrians Group Tracking in Crowded Scenes Using an MST Approach,2015 +35,United Kingdom,TownCentre,oxford_town_centre,55.91029135,-3.32345777,Heriot-Watt University,edu,b8af24279c58a718091817236f878c805a7843e1,citation,https://pdfs.semanticscholar.org/b8af/24279c58a718091817236f878c805a7843e1.pdf,Context Aware Anomalous Behaviour Detection in Crowded Surveillance,2013 +36,Russia,TownCentre,oxford_town_centre,55.8067104,37.5416381,"Faculty of Computer Science, Moscow, Russia",edu,224547337e1ace6411a69c2e06ce538bc67923f7,citation,https://pdfs.semanticscholar.org/2245/47337e1ace6411a69c2e06ce538bc67923f7.pdf,Convolutional Neural Network for Camera Pose Estimation from Object Detections,2017 +37,Germany,TownCentre,oxford_town_centre,48.7468939,9.0805141,Max Planck Institute for Intelligent Systems,edu,b6d0e461535116a675a0354e7da65b2c1d2958d4,citation,https://arxiv.org/pdf/1805.03430.pdf,Deep Directional Statistics: Pose Estimation with Uncertainty Quantification,2018 +38,United States,TownCentre,oxford_town_centre,38.7768106,-94.9442982,Amazon,company,b6d0e461535116a675a0354e7da65b2c1d2958d4,citation,https://arxiv.org/pdf/1805.03430.pdf,Deep Directional Statistics: Pose Estimation with Uncertainty Quantification,2018 +39,United States,TownCentre,oxford_town_centre,47.6423318,-122.1369302,Microsoft,company,b6d0e461535116a675a0354e7da65b2c1d2958d4,citation,https://arxiv.org/pdf/1805.03430.pdf,Deep Directional Statistics: Pose Estimation with Uncertainty Quantification,2018 +40,United Kingdom,TownCentre,oxford_town_centre,55.91029135,-3.32345777,Heriot-Watt University,edu,70be5432677c0fbe000ac0c28dda351a950e0536,citation,http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2014/W14/papers/Leach_Detecting_Social_Groups_2014_CVPR_paper.pdf,Detecting Social Groups in Crowded Surveillance Videos Using Visual Attention,2014 +41,Switzerland,TownCentre,oxford_town_centre,47.376313,8.5476699,ETH 
Zurich,edu,9458642e7645bfd865911140ee8413e2f5f9fcd6,citation,https://pdfs.semanticscholar.org/9458/642e7645bfd865911140ee8413e2f5f9fcd6.pdf,Efficient Multiple People Tracking Using Minimum Cost Arborescences,2014 +42,United Kingdom,TownCentre,oxford_town_centre,54.6141723,-5.9002151,Queen's University Belfast,edu,2a7935706d43c01789d43a81a1d391418f220a0a,citation,https://pure.qub.ac.uk/portal/files/31960902/285.pdf,Enhancing Linear Programming with Motion Modeling for Multi-target Tracking,2015 +43,Sri Lanka,TownCentre,oxford_town_centre,6.7970862,79.9019094,University of Moratuwa,edu,b183914d0b16647a41f0bfd4af64bf94a83a2b14,citation,http://iwinlab.eng.usf.edu/papers/Extensible%20video%20surveillance%20software%20with%20simultaneous%20event%20detection%20for%20low%20and%20high%20density%20crowd%20analysis.pdf,Extensible video surveillance software with simultaneous event detection for low and high density crowd analysis,2014 +44,United States,TownCentre,oxford_town_centre,33.5866784,-101.87539204,Electrical and Computer Engineering,edu,fa5aca45965e312362d2d75a69312a0678fdf5d7,citation,https://pdfs.semanticscholar.org/fa5a/ca45965e312362d2d75a69312a0678fdf5d7.pdf,Fast and Accurate Head Pose Estimation via Random Projection Forests : Supplementary Material,2015 +45,United States,TownCentre,oxford_town_centre,37.3641651,-120.4254615,University of California at Merced,edu,fa5aca45965e312362d2d75a69312a0678fdf5d7,citation,https://pdfs.semanticscholar.org/fa5a/ca45965e312362d2d75a69312a0678fdf5d7.pdf,Fast and Accurate Head Pose Estimation via Random Projection Forests : Supplementary Material,2015 +46,Australia,TownCentre,oxford_town_centre,-32.8892352,151.6998983,"University of Newcastle, Australia",edu,2feb7c57d51df998aafa6f3017662263a91625b4,citation,https://pdfs.semanticscholar.org/d344/9eaaf392fd07b676e744410049f4095b4b5c.pdf,Feature Selection for Intelligent Transportation Systems,2014 +47,Germany,TownCentre,oxford_town_centre,49.01546,8.4257999,Fraunhofer,company,1f82eebadc3ffa41820ad1a0f53770247fc96dcd,citation,https://pdfs.semanticscholar.org/c5ac/81b17b8fcc028f375fbbd090b558ba9a437a.pdf,Using Trajectories derived by Dense Optical Flows as a Spatial Component in Background Subtraction,2016 +48,United States,TownCentre,oxford_town_centre,42.3583961,-71.09567788,MIT,edu,b18f94c5296a9cebe9e779d50d193fd180f78ed9,citation,https://arxiv.org/pdf/1604.01431.pdf,Forecasting Interactive Dynamics of Pedestrians with Fictitious Play,2017 +49,United Kingdom,TownCentre,oxford_town_centre,51.7520849,-1.2516646,Oxford University,edu,b18f94c5296a9cebe9e779d50d193fd180f78ed9,citation,https://arxiv.org/pdf/1604.01431.pdf,Forecasting Interactive Dynamics of Pedestrians with Fictitious Play,2017 +50,United States,TownCentre,oxford_town_centre,37.43131385,-122.16936535,Stanford University,edu,b18f94c5296a9cebe9e779d50d193fd180f78ed9,citation,https://arxiv.org/pdf/1604.01431.pdf,Forecasting Interactive Dynamics of Pedestrians with Fictitious Play,2017 +51,Netherlands,TownCentre,oxford_town_centre,52.3553655,4.9501644,University of Amsterdam,edu,687ec23addf5a1279e49cc46b78e3245af94ac7b,citation,https://pdfs.semanticscholar.org/687e/c23addf5a1279e49cc46b78e3245af94ac7b.pdf,UvA-DARE ( Digital Academic Repository ) Visual Tracking : An Experimental Survey Smeulders,2013 +52,Italy,TownCentre,oxford_town_centre,45.1847248,9.1582069,"Italian Institute of Technology, Genova, Italy",edu,5ab9f00a707a55f4955b378981ad425aa1cb8ea3,citation,https://arxiv.org/pdf/1901.02000.pdf,Forecasting People Trajectories and Head Poses 
by Jointly Reasoning on Tracklets and Vislets,2019 +53,Germany,TownCentre,oxford_town_centre,48.1820038,11.5978282,"OSRAM GmbH, Germany",company,5ab9f00a707a55f4955b378981ad425aa1cb8ea3,citation,https://arxiv.org/pdf/1901.02000.pdf,Forecasting People Trajectories and Head Poses by Jointly Reasoning on Tracklets and Vislets,2019 +54,Italy,TownCentre,oxford_town_centre,45.437398,11.003376,University of Verona,edu,5ab9f00a707a55f4955b378981ad425aa1cb8ea3,citation,https://arxiv.org/pdf/1901.02000.pdf,Forecasting People Trajectories and Head Poses by Jointly Reasoning on Tracklets and Vislets,2019 +55,United Kingdom,TownCentre,oxford_town_centre,51.7534538,-1.25400997,University of Oxford,edu,3ed9730e5ec8716e8cdf55f207ef973a9c854574,citation,https://arxiv.org/pdf/1612.05234.pdf,Visual Compiler: Synthesizing a Scene-Specific Pedestrian Detector and Pose Estimator,2016 +56,United States,TownCentre,oxford_town_centre,29.7207902,-95.34406271,University of Houston,edu,58eba9930b63cc14715368acf40017293b8dc94f,citation,https://pdfs.semanticscholar.org/7508/ac08dd7b9694bcfe71a617df7fcf3df80952.pdf,What Do I See? Modeling Human Visual Perception for Multi-person Tracking,2014 +57,United States,TownCentre,oxford_town_centre,29.7207902,-95.34406271,University of Houston,edu,a0b489eeb4f7fd2249da756d829e179a6718d9d1,citation,,"""Seeing is Believing"": Pedestrian Trajectory Forecasting Using Visual Frustum of Attention",2018 +58,Belgium,TownCentre,oxford_town_centre,50.8779545,4.7002953,"KULeuven, EAVISE",edu,4ec4392246a7760d189cd6ea48a81664cd2fe4bf,citation,https://pdfs.semanticscholar.org/4ec4/392246a7760d189cd6ea48a81664cd2fe4bf.pdf,GPU Accelerated ACF Detector,2018 +59,United States,TownCentre,oxford_town_centre,40.7982133,-77.8599084,The Pennsylvania State University,edu,6e32c368a6157fb911c9363dc3e967a7fb2ad9f7,citation,https://pdfs.semanticscholar.org/8268/d68f6aa510a765466b2c7f2ba2ea34a48c51.pdf,Hybrid Stochastic / Deterministic Optimization for Tracking Sports Players and Pedestrians,2014 +60,United States,TownCentre,oxford_town_centre,40.4439789,-79.9464634,Disney Research Pittsburgh,edu,6e32c368a6157fb911c9363dc3e967a7fb2ad9f7,citation,https://pdfs.semanticscholar.org/8268/d68f6aa510a765466b2c7f2ba2ea34a48c51.pdf,Hybrid Stochastic / Deterministic Optimization for Tracking Sports Players and Pedestrians,2014 +61,India,TownCentre,oxford_town_centre,13.0304619,77.5646862,"M.S. 
Ramaiah Institute of Technology, Bangalore, India",edu,6f089f9959cc711e16f1ebe0c6251aaf8a65959a,citation,https://pdfs.semanticscholar.org/6f08/9f9959cc711e16f1ebe0c6251aaf8a65959a.pdf,Improvement in object detection using Super Pixels,2016 +62,United States,TownCentre,oxford_town_centre,38.99203005,-76.9461029,University of Maryland College Park,edu,4e82908e6482d973c280deb79c254631a60f1631,citation,https://pdfs.semanticscholar.org/4e82/908e6482d973c280deb79c254631a60f1631.pdf,Improving Efficiency and Scalability in Visual Surveillance Applications,2013 +63,United States,TownCentre,oxford_town_centre,33.98071305,-117.33261035,"University of California, Riverside",edu,38b5a83f7941fea5fd82466f8ce1ce4ed7749f59,citation,http://rlair.cs.ucr.edu/papers/docs/grouptracking.pdf,Improving multi-target tracking via social grouping,2012 +64,Singapore,TownCentre,oxford_town_centre,1.3484104,103.68297965,Nanyang Technological University,edu,13caf4d2e0a4b6fcfcd4b9e8e2341b8ebd38258d,citation,https://arxiv.org/pdf/1605.04502.pdf,Joint Learning of Siamese CNNs and Temporally Constrained Metrics for Tracklet Association,2016 +65,United States,TownCentre,oxford_town_centre,35.9049122,-79.0469134,The University of North Carolina at Chapel Hill,edu,45e459462a80af03e1bb51a178648c10c4250925,citation,https://arxiv.org/pdf/1606.08998.pdf,LCrowdV: Generating Labeled Videos for Simulation-based Crowd Behavior Learning,2016 +66,China,TownCentre,oxford_town_centre,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,c0262e24324a6a4e6af5bd99fc79e2eb802519b3,citation,https://arxiv.org/pdf/1611.03968.pdf,Learning Scene-specific Object Detectors Based on a Generative-Discriminative Model with Minimal Supervision,2016 +67,China,TownCentre,oxford_town_centre,30.527151,114.400762,China University of Geosciences,edu,c0262e24324a6a4e6af5bd99fc79e2eb802519b3,citation,https://arxiv.org/pdf/1611.03968.pdf,Learning Scene-specific Object Detectors Based on a Generative-Discriminative Model with Minimal Supervision,2016 +68,China,TownCentre,oxford_town_centre,32.0565957,118.77408833,Nanjing University,edu,c0262e24324a6a4e6af5bd99fc79e2eb802519b3,citation,https://arxiv.org/pdf/1611.03968.pdf,Learning Scene-specific Object Detectors Based on a Generative-Discriminative Model with Minimal Supervision,2016 +69,United Kingdom,TownCentre,oxford_town_centre,51.5247272,-0.03931035,Queen Mary University of London,edu,1883387726897d94b663cc4de4df88e5c31df285,citation,http://www.eecs.qmul.ac.uk/~andrea/papers/2014_TIP_MultiTargetTrackingEvaluation_Tahir_Poiesi_Cavallaro.pdf,Measures of Effective Video Tracking,2014 +70,United States,TownCentre,oxford_town_centre,35.9113971,-79.0504529,University of North Carolina at Chapel Hill,edu,8d2bf6ecbfda94f57000b84509bf77f4c47c1c66,citation,https://arxiv.org/pdf/1707.09100.pdf,MixedPeds: Pedestrian Detection in Unannotated Videos Using Synthetically Generated Human-Agents for Training,2018 +71,Australia,TownCentre,oxford_town_centre,-34.920603,138.6062277,Adelaide University,edu,5bae9822d703c585a61575dced83fa2f4dea1c6d,citation,https://arxiv.org/pdf/1504.01942.pdf,MOTChallenge 2015: Towards a Benchmark for Multi-Target Tracking,2015 +72,Switzerland,TownCentre,oxford_town_centre,47.376313,8.5476699,ETH Zurich,edu,5bae9822d703c585a61575dced83fa2f4dea1c6d,citation,https://arxiv.org/pdf/1504.01942.pdf,MOTChallenge 2015: Towards a Benchmark for Multi-Target Tracking,2015 +73,Germany,TownCentre,oxford_town_centre,49.8748277,8.6563281,TU 
Darmstadt,edu,5bae9822d703c585a61575dced83fa2f4dea1c6d,citation,https://arxiv.org/pdf/1504.01942.pdf,MOTChallenge 2015: Towards a Benchmark for Multi-Target Tracking,2015 +74,United States,TownCentre,oxford_town_centre,33.98071305,-117.33261035,"University of California, Riverside",edu,e6d48d23308a9e0a215f7b5ba6ae30ee5d2f0ef5,citation,https://pdfs.semanticscholar.org/e6d4/8d23308a9e0a215f7b5ba6ae30ee5d2f0ef5.pdf,Multi-person Tracking by Online Learned Grouping Model with Non-linear Motion Context,2015 +75,France,TownCentre,oxford_town_centre,45.217886,5.807369,INRIA,edu,fc30d7dbf4c3cdd377d8cd4e7eeabd5d73814b8f,citation,https://pdfs.semanticscholar.org/fc30/d7dbf4c3cdd377d8cd4e7eeabd5d73814b8f.pdf,Multiple Object Tracking by Efficient Graph Partitioning,2014 +76,Germany,TownCentre,oxford_town_centre,52.381515,9.720171,Leibniz Universität Hannover,edu,290eda31bc13cbd5933acec8b6a25b3e3761c788,citation,https://arxiv.org/pdf/1411.7935.pdf,Multiple object tracking with context awareness,2014 +77,Czech Republic,TownCentre,oxford_town_centre,49.20172,16.6033168,Brno University of Technology,edu,dc53c4bb04e787a0d45dd761ba2101cc51c17b82,citation,https://pdfs.semanticscholar.org/dc53/c4bb04e787a0d45dd761ba2101cc51c17b82.pdf,Multiple-Person Tracking by Detection,2016 +78,Germany,TownCentre,oxford_town_centre,48.1820038,11.5978282,"OSRAM GmbH, Germany",company,943b1b92b5bdee0b5770418c645a4a17bded1ccf,citation,https://arxiv.org/pdf/1805.00652.pdf,MX-LSTM: Mixing Tracklets and Vislets to Jointly Forecast Trajectories and Head Poses,2018 +79,Italy,TownCentre,oxford_town_centre,45.437398,11.003376,University of Verona,edu,943b1b92b5bdee0b5770418c645a4a17bded1ccf,citation,https://arxiv.org/pdf/1805.00652.pdf,MX-LSTM: Mixing Tracklets and Vislets to Jointly Forecast Trajectories and Head Poses,2018 +80,France,TownCentre,oxford_town_centre,48.8422058,2.3451689,"INRIA / Ecole Normale Supérieure, France",edu,47119c99f5aa1e47bbeb86de0f955e7c500e6a93,citation,https://arxiv.org/pdf/1408.3304.pdf,On pairwise costs for network flow multi-object tracking,2015 +81,United States,TownCentre,oxford_town_centre,42.3504253,-71.10056114,Boston University,edu,1ae3dd081b93c46cda4d72100d8b1d59eb585157,citation,https://pdfs.semanticscholar.org/fea1/0f39b0a77035fb549fc580fd951384b79f9b.pdf,Online Motion Agreement Tracking,2013 +82,Malaysia,TownCentre,oxford_town_centre,4.3400673,101.1429799,Universiti Tunku Abdul Rahman,edu,e1f815c50a6c0c6d790c60a1348393264f829e60,citation,https://pdfs.semanticscholar.org/e1f8/15c50a6c0c6d790c60a1348393264f829e60.pdf,PEDESTRIAN DETECTION AND TRACKING IN SURVEILLANCE VIDEO By PENNY CHONG,2016 +83,Germany,TownCentre,oxford_town_centre,52.381515,9.720171,Leibniz Universität Hannover,edu,422d352a7d26fef692a3cd24466bfb5b4526efea,citation,https://pdfs.semanticscholar.org/422d/352a7d26fef692a3cd24466bfb5b4526efea.pdf,Pedestrian interaction in tracking : the social force model and global optimization methods,2012 +84,Sweden,TownCentre,oxford_town_centre,57.6897063,11.9741654,Chalmers University of Technology,edu,367b5b814aa991329c2ae7f8793909ad8c0a56f1,citation,https://arxiv.org/pdf/1211.0191.pdf,Performance evaluation of random set based pedestrian tracking algorithms,2013 +85,Japan,TownCentre,oxford_town_centre,35.5152072,134.1733553,Tottori University,edu,9d89f1bc88fd65e90b31a2129719384796bed17a,citation,http://vision.unipv.it/CV/materiale2016-17/2nd%20Choice/0225.pdf,Person re-identification using co-occurrence attributes of physical and adhered human characteristics,2016 
+86,Germany,TownCentre,oxford_town_centre,52.381515,9.720171,Leibniz Universität Hannover,edu,48705017d91a157949cfaaeb19b826014899a36b,citation,https://pdfs.semanticscholar.org/4870/5017d91a157949cfaaeb19b826014899a36b.pdf,PROBABILISTIC MULTI-PERSON TRACKING USING DYNAMIC BAYES NETWORKS,2015 +87,Italy,TownCentre,oxford_town_centre,39.2173657,9.1149218,"Università degli Studi di Cagliari, Italy",edu,7c1f47ca50a8a55f93bf69791d9df2f994019758,citation,http://veprints.unica.it/1295/1/PhD_ThesisPalaF.pdf,Re-identification and semantic retrieval of pedestrians in video surveillance scenarios,2016 +88,United Kingdom,TownCentre,oxford_town_centre,51.5247272,-0.03931035,Queen Mary University of London,edu,3a28059df29b74775f77fd20a15dc6b5fe857556,citation,https://pdfs.semanticscholar.org/3a28/059df29b74775f77fd20a15dc6b5fe857556.pdf,Riccardo Mazzon PhD Thesis 2013,2013 +89,Brazil,TownCentre,oxford_town_centre,-30.0338248,-51.218828,Federal University of Rio Grande do Sul,edu,057517452369751bd63d83902ea91558d58161da,citation,http://inf.ufrgs.br/~gfuhr/papers/102095_3.pdf,Robust Patch-Based Pedestrian Tracking Using Monocular Calibrated Cameras,2012 +90,China,TownCentre,oxford_town_centre,28.727339,115.816633,Jiangxi University of Finance and Economics,edu,1642358cd9410abe9ee512d34ba68296b308770e,citation,https://arxiv.org/pdf/1807.04562.pdf,Robustness Analysis of Pedestrian Detectors for Surveillance,2018 +91,Singapore,TownCentre,oxford_town_centre,1.3484104,103.68297965,Nanyang Technological University,edu,1642358cd9410abe9ee512d34ba68296b308770e,citation,https://arxiv.org/pdf/1807.04562.pdf,Robustness Analysis of Pedestrian Detectors for Surveillance,2018 +92,China,TownCentre,oxford_town_centre,34.250803,108.983693,Xi’an Jiaotong University,edu,1642358cd9410abe9ee512d34ba68296b308770e,citation,https://arxiv.org/pdf/1807.04562.pdf,Robustness Analysis of Pedestrian Detectors for Surveillance,2018 +93,Singapore,TownCentre,oxford_town_centre,1.3484104,103.68297965,Nanyang Technological University,edu,7c132e0a2b7e13c78784287af38ad74378da31e5,citation,https://pdfs.semanticscholar.org/7c13/2e0a2b7e13c78784287af38ad74378da31e5.pdf,Salient Parts based Multi-people Tracking,2015 +94,China,TownCentre,oxford_town_centre,40.0044795,116.370238,Chinese Academy of Sciences,edu,679136c2844eeddca34e98e483aca1ff6ef5e902,citation,https://arxiv.org/pdf/1712.08745.pdf,Scene-Specific Pedestrian Detection Based on Parallel Vision,2017 +95,China,TownCentre,oxford_town_centre,34.250803,108.983693,Xi’an Jiaotong University,edu,679136c2844eeddca34e98e483aca1ff6ef5e902,citation,https://arxiv.org/pdf/1712.08745.pdf,Scene-Specific Pedestrian Detection Based on Parallel Vision,2017 +96,China,TownCentre,oxford_town_centre,40.0044795,116.370238,Chinese Academy of Sciences,edu,57e9b0d3ab6295e914d5a30cfaa3b2c81189abc1,citation,https://arxiv.org/pdf/1611.07544.pdf,Self-Learning Scene-Specific Pedestrian Detectors Using a Progressive Latent Model,2017 +97,United States,TownCentre,oxford_town_centre,35.9990522,-78.9290629,Duke University,edu,57e9b0d3ab6295e914d5a30cfaa3b2c81189abc1,citation,https://arxiv.org/pdf/1611.07544.pdf,Self-Learning Scene-Specific Pedestrian Detectors Using a Progressive Latent Model,2017 +98,Switzerland,TownCentre,oxford_town_centre,47.3764534,8.54770931,ETH Zürich,edu,70b42bbd76e6312d39ea06b8a0c24beb4a93e022,citation,http://www.tnt.uni-hannover.de/papers/data/1075/WACV2015_Abstract.pdf,Solving Multiple People Tracking in a Minimum Cost Arborescence,2015 +99,United 
States,TownCentre,oxford_town_centre,42.718568,-84.47791571,Michigan State University,edu,acf0db156406ddad1ace2ff2696cb60d0a04cf7c,citation,http://hal.cse.msu.edu/assets/pdfs/papers/2018-ijcv-visual-compiler.pdf,Synthesizing a Scene-Specific Pedestrian Detector and Pose Estimator for Static Video Surveillance,2018 +100,United Kingdom,TownCentre,oxford_town_centre,51.7534538,-1.25400997,University of Oxford,edu,acf0db156406ddad1ace2ff2696cb60d0a04cf7c,citation,http://hal.cse.msu.edu/assets/pdfs/papers/2018-ijcv-visual-compiler.pdf,Synthesizing a Scene-Specific Pedestrian Detector and Pose Estimator for Static Video Surveillance,2018 +101,Japan,TownCentre,oxford_town_centre,36.05238585,140.11852361,Institute of Industrial Science,edu,acf0db156406ddad1ace2ff2696cb60d0a04cf7c,citation,http://hal.cse.msu.edu/assets/pdfs/papers/2018-ijcv-visual-compiler.pdf,Synthesizing a Scene-Specific Pedestrian Detector and Pose Estimator for Static Video Surveillance,2018 +102,United States,TownCentre,oxford_town_centre,40.4441619,-79.94272826,Carnegie Mellon University,edu,acf0db156406ddad1ace2ff2696cb60d0a04cf7c,citation,http://hal.cse.msu.edu/assets/pdfs/papers/2018-ijcv-visual-compiler.pdf,Synthesizing a Scene-Specific Pedestrian Detector and Pose Estimator for Static Video Surveillance,2018 +103,Sweden,TownCentre,oxford_town_centre,57.7172004,11.9218558,"Volvo Construction Equipment, Göthenburg, Sweden",company,acf0db156406ddad1ace2ff2696cb60d0a04cf7c,citation,http://hal.cse.msu.edu/assets/pdfs/papers/2018-ijcv-visual-compiler.pdf,Synthesizing a Scene-Specific Pedestrian Detector and Pose Estimator for Static Video Surveillance,2018 +104,United States,TownCentre,oxford_town_centre,35.9990522,-78.9290629,Duke University,edu,64e0690dd176a93de9d4328f6e31fc4afe1e7536,citation,https://pdfs.semanticscholar.org/64e0/690dd176a93de9d4328f6e31fc4afe1e7536.pdf,Tracking Multiple People Online and in Real Time,2014 +105,Switzerland,TownCentre,oxford_town_centre,47.3764534,8.54770931,ETH Zürich,edu,64c78c8bf779a27e819fd9d5dba91247ab5a902b,citation,https://arxiv.org/pdf/1607.07304.pdf,Tracking with multi-level features.,2016 +106,Germany,TownCentre,oxford_town_centre,52.381515,9.720171,Leibniz Universität Hannover,edu,64c78c8bf779a27e819fd9d5dba91247ab5a902b,citation,https://arxiv.org/pdf/1607.07304.pdf,Tracking with multi-level features.,2016 +107,Germany,TownCentre,oxford_town_centre,48.14955455,11.56775314,Technical University Munich,edu,64c78c8bf779a27e819fd9d5dba91247ab5a902b,citation,https://arxiv.org/pdf/1607.07304.pdf,Tracking with multi-level features.,2016 +108,Singapore,TownCentre,oxford_town_centre,1.3484104,103.68297965,Nanyang Technological University,edu,7d3698c0e828d05f147682b0f5bfcd3b681ff205,citation,https://arxiv.org/pdf/1511.06654.pdf,Tracklet Association by Online Target-Specific Metric Learning and Coherent Dynamics Estimation,2017 +109,Australia,TownCentre,oxford_town_centre,-35.2809368,149.1300092,"NICTA, Canberra",edu,f0cc615b14c97482faa9c47eb855303c71ff03a7,citation,https://pdfs.semanticscholar.org/f0cc/615b14c97482faa9c47eb855303c71ff03a7.pdf,Tracklet clustering for robust multiple object tracking using distance dependent Chinese restaurant processes,2016 +110,Germany,TownCentre,oxford_town_centre,52.5180641,13.3250425,TU Berlin,edu,c4cd19cf41a2f5cd543d81b94afe6cc42785920a,citation,http://elvera.nue.tu-berlin.de/files/1491Bochinski2016.pdf,Training a convolutional neural network for multi-class object detection using solely virtual world data,2016 
+111,Egypt,TownCentre,oxford_town_centre,29.956063,31.255471,AvidBeam,company,2d81cf3214281af85eb1d9d270a897d62302e88e,citation,,High density people estimation in video surveillance,2017
+112,Egypt,TownCentre,oxford_town_centre,29.9866381,31.4414218,Faculty of Media Engineering & Technology German University in Cairo,edu,2d81cf3214281af85eb1d9d270a897d62302e88e,citation,,High density people estimation in video surveillance,2017
+113,Egypt,TownCentre,oxford_town_centre,29.9866381,31.4414218,German University in Cairo,edu,2d81cf3214281af85eb1d9d270a897d62302e88e,citation,,High density people estimation in video surveillance,2017
diff --git a/site/datasets/verified/penn_fudan.csv b/site/datasets/verified/penn_fudan.csv
index 10427ed0..d63535a2 100644
--- a/site/datasets/verified/penn_fudan.csv
+++ b/site/datasets/verified/penn_fudan.csv
@@ -1,2 +1,4 @@
id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year
0,,Penn Fudan,penn_fudan,0.0,0.0,,,,main,,Object Detection Combining Recognition and Segmentation,2007
+1,Turkey,Penn Fudan,penn_fudan,41.0082376,28.9783589,"Elektronik ve Haberleşme Mühendisliği Bölümü, NETAŞ Telekomünikasyon A.Ş, İstanbul, Türkiye",edu,92b2386e11164738d9285117ae647b4788da2c31,citation,,Pedestrian detection with multiple classifiers on still images,2018
+2,Turkey,Penn Fudan,penn_fudan,41.0288022,28.8900143,"Elektronik ve Haberleşme Mühendisliği Bölümü, Yıldız Teknik Üniversitesi, İstanbul, Türkiye",edu,92b2386e11164738d9285117ae647b4788da2c31,citation,,Pedestrian detection with multiple classifiers on still images,2018
diff --git a/site/datasets/verified/pipa.csv b/site/datasets/verified/pipa.csv
index 3acdccff..1124eebc 100644
--- a/site/datasets/verified/pipa.csv
+++ b/site/datasets/verified/pipa.csv
@@ -1,2 +1,47 @@
id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year
0,,PIPA,pipa,0.0,0.0,,,,main,,Beyond frontal faces: Improving Person Recognition using multiple cues,2015
+1,Australia,PIPA,pipa,-35.2776999,149.118527,Australian National University,edu,9ce12c9f1d1661f56908edc8ef3848e91b24d557,citation,https://arxiv.org/pdf/1810.13103.pdf,Query Adaptive Late Fusion for Image Retrieval,2018
+2,China,PIPA,pipa,40.00229045,116.32098908,Tsinghua University,edu,9ce12c9f1d1661f56908edc8ef3848e91b24d557,citation,https://arxiv.org/pdf/1810.13103.pdf,Query Adaptive Late Fusion for Image Retrieval,2018
+3,Singapore,PIPA,pipa,1.2962018,103.77689944,National University of Singapore,edu,5f771fed91c8e4b666489ba2384d0705bcf75030,citation,https://arxiv.org/pdf/1804.03287.pdf,Understanding Humans in Crowded Scenes: Deep Nested Adversarial Learning and A New Benchmark for Multi-Human Parsing,2018
+4,China,PIPA,pipa,28.2290209,112.99483204,"National University of Defense Technology, China",mil,5f771fed91c8e4b666489ba2384d0705bcf75030,citation,https://arxiv.org/pdf/1804.03287.pdf,Understanding Humans in Crowded Scenes: Deep Nested Adversarial Learning and A New Benchmark for Multi-Human Parsing,2018
+5,United States,PIPA,pipa,42.3702265,-71.0768929,"Philips Research, Bethesda, MD, USA",company,c76251049b370f8258d6bbb944c696c30b8bbb85,citation,http://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w41/Xue_Clothing_Change_Aware_CVPR_2018_paper.pdf,Clothing Change Aware Person Identification,2018
+6,United States,PIPA,pipa,40.47913175,-74.43168868,Rutgers University,edu,c76251049b370f8258d6bbb944c696c30b8bbb85,citation,http://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w41/Xue_Clothing_Change_Aware_CVPR_2018_paper.pdf,Clothing Change Aware Person Identification,2018
+7,United States,PIPA,pipa,33.9928298,-81.02685168,University of South Carolina,edu,c76251049b370f8258d6bbb944c696c30b8bbb85,citation,http://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w41/Xue_Clothing_Change_Aware_CVPR_2018_paper.pdf,Clothing Change Aware Person Identification,2018
+8,China,PIPA,pipa,22.4162632,114.2109318,Chinese University of Hong Kong,edu,d949fadc9b6c5c8b067fa42265ad30945f9caa99,citation,https://arxiv.org/pdf/1710.00870.pdf,Rethinking Feature Discrimination and Polymerization for Large-scale Recognition,2017
+9,China,PIPA,pipa,22.4162632,114.2109318,Chinese University of Hong Kong,edu,6fed504da4e192fe4c2d452754d23d3db4a4e5e3,citation,https://arxiv.org/pdf/1702.06890.pdf,Learning Deep Features via Congenerous Cosine Loss for Person Recognition,2017
+10,China,PIPA,pipa,23.09461185,113.28788994,Sun Yat-Sen University,edu,30f464c09779c6210397204901d025c0def1fe10,citation,https://arxiv.org/pdf/1807.00504.pdf,Deep Reasoning with Knowledge Graph for Social Relationship Understanding,2018
+11,China,PIPA,pipa,39.993008,116.329882,SenseTime,company,30f464c09779c6210397204901d025c0def1fe10,citation,https://arxiv.org/pdf/1807.00504.pdf,Deep Reasoning with Knowledge Graph for Social Relationship Understanding,2018
+12,United States,PIPA,pipa,40.742252,-74.0270949,Stevens Institute of Technology,edu,1e1d7cbbef67e9e042a3a0a9a1bcefcc4a9adacf,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Li_A_Multi-Level_Contextual_CVPR_2016_paper.pdf,A Multi-level Contextual Model for Person Recognition in Photo Albums,2016
+13,Singapore,PIPA,pipa,1.2962018,103.77689944,National University of Singapore,edu,b5968e7bb23f5f03213178c22fd2e47af3afa04c,citation,https://arxiv.org/pdf/1705.07206.pdf,Multiple-Human Parsing in the Wild,2017
+14,China,PIPA,pipa,39.94976005,116.33629046,Beijing Jiaotong University,edu,b5968e7bb23f5f03213178c22fd2e47af3afa04c,citation,https://arxiv.org/pdf/1705.07206.pdf,Multiple-Human Parsing in the Wild,2017
+15,Germany,PIPA,pipa,49.2579566,7.04577417,Max Planck Institute for Informatics,edu,23429ef60e7a9c0e2f4d81ed1b4e47cc2616522f,citation,https://arxiv.org/pdf/1704.06456.pdf,A Domain Based Approach to Social Relation Recognition,2017
+16,Germany,PIPA,pipa,49.2579566,7.04577417,Max Planck Institute for Informatics,edu,bfc04ce7752fac884cf5a78b30ededfd5a0ad109,citation,https://arxiv.org/pdf/1804.04779.pdf,A Hybrid Model for Identity Obfuscation by Face Replacement,2018
+17,Germany,PIPA,pipa,49.2579566,7.04577417,Max Planck Institute for Informatics,edu,b68150bfdec373ed8e025f448b7a3485c16e3201,citation,https://arxiv.org/pdf/1703.09471.pdf,Adversarial Image Perturbation for Privacy Protection A Game Theory Perspective,2017
+18,United States,PIPA,pipa,42.4505507,-76.4783513,Cornell University,edu,6c8dfa770fe4acffaabeae4b6092c2fd5ee2c545,citation,https://arxiv.org/pdf/1805.04049.pdf,Exploiting Unintended Feature Leakage in Collaborative Learning,2018
+19,Germany,PIPA,pipa,49.2579566,7.04577417,Max Planck Institute for Informatics,edu,bc27434e376db89fe0e6ef2d2fabc100d2575ec6,citation,https://arxiv.org/pdf/1607.08438.pdf,Faceless Person Recognition; Privacy Implications in Social Media,2016
+20,Switzerland,PIPA,pipa,46.5190557,6.5667576,EPFL,edu,1451e7b11e66c86104f9391b80d9fb422fb11c01,citation,https://pdfs.semanticscholar.org/1451/e7b11e66c86104f9391b80d9fb422fb11c01.pdf,Image privacy protection with secure JPEG transmorphing,2017
+21,United States,PIPA,pipa,42.4505507,-76.4783513,Cornell University,edu,8bdf6f03bde08c424c214188b35be8b2dec7cdea,citation,https://arxiv.org/pdf/1805.04049.pdf,Inference Attacks Against Collaborative Learning,2018
+22,Germany,PIPA,pipa,49.2579566,7.04577417,Max Planck Institute for Informatics,edu,0c59071ddd33849bd431165bc2d21bbe165a81e0,citation,https://arxiv.org/pdf/1509.03502.pdf,Person Recognition in Personal Photo Collections,2015
+23,India,PIPA,pipa,17.4450981,78.3497678,IIIT Hyderabad,edu,d0441970a9f19751e6c047b364f580c30bf9754a,citation,https://arxiv.org/pdf/1705.10120.pdf,Pose-Aware Person Recognition,2017
+24,Germany,PIPA,pipa,49.2579566,7.04577417,Max Planck Institute for Informatics,edu,3e0a1884448bfd7f416c6a45dfcdfc9f2e617268,citation,https://arxiv.org/pdf/1805.05838.pdf,Understanding and Controlling User Linkability in Decentralized Learning,2018
+25,China,PIPA,pipa,22.4162632,114.2109318,Chinese University of Hong Kong,edu,2fe7105ef8e61330a3ddc7f7b35955ca62fc1ab3,citation,https://arxiv.org/pdf/1806.03084.pdf,Unifying Identification and Context Learning for Person Recognition,2018
+26,United States,PIPA,pipa,37.8701543,-122.2712312,University of California at Berkeley,edu,d6a9ea9b40a7377c91c705f4c7f206a669a9eea2,citation,https://pdfs.semanticscholar.org/d6a9/ea9b40a7377c91c705f4c7f206a669a9eea2.pdf,Visual Representations for Fine-grained Categorization,2015
+27,United States,PIPA,pipa,42.44726,-76.480988,Facebook & Cornell University,company,0aaf785d7f21d2b5ad582b456896495d30b0a4e2,citation,,A Face Recognition Application for People with Visual Impairments: Understanding Use Beyond the Lab,2018
+28,United States,PIPA,pipa,42.4505507,-76.4783513,Cornell University,edu,0aaf785d7f21d2b5ad582b456896495d30b0a4e2,citation,,A Face Recognition Application for People with Visual Impairments: Understanding Use Beyond the Lab,2018
+29,United States,PIPA,pipa,37.3936717,-122.0807262,Facebook,company,0aaf785d7f21d2b5ad582b456896495d30b0a4e2,citation,,A Face Recognition Application for People with Visual Impairments: Understanding Use Beyond the Lab,2018
+30,China,PIPA,pipa,39.9601488,116.35193921,Beijing University of Posts and Telecommunications,edu,d94d7ff6f46ad5cab5c20e6ac14c1de333711a0c,citation,http://mirlab.org/conference_papers/International_Conference/ICASSP%202017/pdfs/0003031.pdf,Face Album: Towards automatic photo management based on person identity on mobile phones,2017
+31,United States,PIPA,pipa,42.3702265,-71.0768929,"Philips Research, Bethesda, MD, USA",company,cfd4004054399f3a5f536df71f9b9987f060f434,citation,https://arxiv.org/pdf/1710.03224.pdf,Person Recognition in Social Media Photos,2018
+32,United States,PIPA,pipa,40.47913175,-74.43168868,Rutgers University,edu,cfd4004054399f3a5f536df71f9b9987f060f434,citation,https://arxiv.org/pdf/1710.03224.pdf,Person Recognition in Social Media Photos,2018
+33,United States,PIPA,pipa,33.9928298,-81.02685168,University of South Carolina,edu,cfd4004054399f3a5f536df71f9b9987f060f434,citation,https://arxiv.org/pdf/1710.03224.pdf,Person Recognition in Social Media Photos,2018
+34,Germany,PIPA,pipa,49.2579566,7.04577417,Max Planck Institute for Informatics,edu,2c92839418a64728438c351a42f6dc5ad0c6e686,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Masi_Pose-Aware_Face_Recognition_CVPR_2016_paper.pdf,Pose-Aware Face Recognition in the Wild,2016
+35,Singapore,PIPA,pipa,1.2962018,103.77689944,National University of Singapore,edu,6e50c32f7244e3556eb879f24b7de8410f3177f6,citation,https://arxiv.org/pdf/1812.05917.pdf,Visual Social Relationship Recognition,2018
+36,United States,PIPA,pipa,44.97399,-93.2277285,University of Minnesota-Twin Cities,edu,6e50c32f7244e3556eb879f24b7de8410f3177f6,citation,https://arxiv.org/pdf/1812.05917.pdf,Visual Social Relationship Recognition,2018
+37,United States,PIPA,pipa,40.4441619,-79.94272826,Carnegie Mellon University,edu,95d64ce5b0758bdc213962ce65ac89b31d9fb617,citation,,Learning Pose-Aware Models for Pose-Invariant Face Recognition in the Wild,2018
+38,Israel,PIPA,pipa,32.77824165,34.99565673,Open University of Israel,edu,95d64ce5b0758bdc213962ce65ac89b31d9fb617,citation,,Learning Pose-Aware Models for Pose-Invariant Face Recognition in the Wild,2018
+39,United States,PIPA,pipa,34.0224149,-118.28634407,University of Southern California,edu,95d64ce5b0758bdc213962ce65ac89b31d9fb617,citation,,Learning Pose-Aware Models for Pose-Invariant Face Recognition in the Wild,2018
+40,India,PIPA,pipa,17.4454957,78.34854698,International Institute of Information Technology,edu,01e27c91c7cef926389f913d12410725e7dd35ab,citation,,Semi-supervised annotation of faces in image collection,2018
+41,Switzerland,PIPA,pipa,47.376313,8.5476699,ETH Zurich,edu,503906ca940fa3b01e39d05879c9b6a36524aaf5,citation,,Natural and Effective Obfuscation by Head Inpainting,2018
+42,Germany,PIPA,pipa,49.2578657,7.0457956,Max Planck Institute of Informatics,edu,503906ca940fa3b01e39d05879c9b6a36524aaf5,citation,,Natural and Effective Obfuscation by Head Inpainting,2018
+43,Belgium,PIPA,pipa,50.8784802,4.4348624,"Toyota Motor Europe (TME), Brussels 1140, Belgium",edu,503906ca940fa3b01e39d05879c9b6a36524aaf5,citation,,Natural and Effective Obfuscation by Head Inpainting,2018
+44,Singapore,PIPA,pipa,1.2966426,103.7763939,National University of Singapore & Qihoo 360 AI Institute,edu,af4759f5e636b5d9049010d5f0e2b0df2a69cd72,citation,,Understanding Humans in Crowded Scenes: Deep Nested Adversarial Learning and A New Benchmark for Multi-Human Parsing,2018
+45,Singapore,PIPA,pipa,1.2962018,103.77689944,National University of Singapore,edu,af4759f5e636b5d9049010d5f0e2b0df2a69cd72,citation,,Understanding Humans in Crowded Scenes: Deep Nested Adversarial Learning and A New Benchmark for Multi-Human Parsing,2018
diff --git a/site/datasets/verified/prid.csv b/site/datasets/verified/prid.csv
index 622bae62..7b6e438f 100644
--- a/site/datasets/verified/prid.csv
+++ b/site/datasets/verified/prid.csv
@@ -1,2 +1,22 @@
id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year
0,,PRID,prid,0.0,0.0,,,,main,,Person Re-identification by Descriptive and Discriminative Classification,2011
+1,China,PRID,prid,22.4162632,114.2109318,Chinese University of Hong Kong,edu,dbb7b563e84903dad4953a8e9f23e3c54c6d7e78,citation,https://arxiv.org/pdf/1710.00983.pdf,Joint Person Re-identification and Camera Network Topology Inference in Multiple Cameras,2017
+2,China,PRID,prid,39.993008,116.329882,SenseTime,company,dbb7b563e84903dad4953a8e9f23e3c54c6d7e78,citation,https://arxiv.org/pdf/1710.00983.pdf,Joint Person Re-identification and Camera Network Topology Inference in Multiple Cameras,2017
+3,China,PRID,prid,23.09461185,113.28788994,Sun Yat-Sen University,edu,dbb7b563e84903dad4953a8e9f23e3c54c6d7e78,citation,https://arxiv.org/pdf/1710.00983.pdf,Joint Person Re-identification and Camera Network Topology Inference in Multiple Cameras,2017 +4,China,PRID,prid,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,147f31b603931c688687c6d64d330c9be2ab2f2f,citation,https://pdfs.semanticscholar.org/147f/31b603931c688687c6d64d330c9be2ab2f2f.pdf,Attentive Spatial-Temporal Pooling Networks for Video-based Person Re-Identification,0 +5,United States,PRID,prid,35.9042272,-78.85565763,"IBM Research, North Carolina",company,147f31b603931c688687c6d64d330c9be2ab2f2f,citation,https://pdfs.semanticscholar.org/147f/31b603931c688687c6d64d330c9be2ab2f2f.pdf,Attentive Spatial-Temporal Pooling Networks for Video-based Person Re-Identification,0 +6,United States,PRID,prid,42.0551164,-87.67581113,Northwestern University,edu,147f31b603931c688687c6d64d330c9be2ab2f2f,citation,https://pdfs.semanticscholar.org/147f/31b603931c688687c6d64d330c9be2ab2f2f.pdf,Attentive Spatial-Temporal Pooling Networks for Video-based Person Re-Identification,0 +7,United States,PRID,prid,41.2097516,-73.8026467,IBM T.J. Watson Research Center,company,147f31b603931c688687c6d64d330c9be2ab2f2f,citation,https://pdfs.semanticscholar.org/147f/31b603931c688687c6d64d330c9be2ab2f2f.pdf,Attentive Spatial-Temporal Pooling Networks for Video-based Person Re-Identification,0 +8,China,PRID,prid,30.5097537,114.4062881,Huazhong University of Science and Technology,edu,5ee96d5c4d467d00909472e3bc0d2c2d82ccb961,citation,https://arxiv.org/pdf/1708.02286.pdf,Jointly Attentive Spatial-Temporal Pooling Networks for Video-Based Person Re-identification,2017 +9,United States,PRID,prid,35.9042272,-78.85565763,"IBM Research, North Carolina",company,5ee96d5c4d467d00909472e3bc0d2c2d82ccb961,citation,https://arxiv.org/pdf/1708.02286.pdf,Jointly Attentive Spatial-Temporal Pooling Networks for Video-Based Person Re-identification,2017 +10,United States,PRID,prid,42.0551164,-87.67581113,Northwestern University,edu,5ee96d5c4d467d00909472e3bc0d2c2d82ccb961,citation,https://arxiv.org/pdf/1708.02286.pdf,Jointly Attentive Spatial-Temporal Pooling Networks for Video-Based Person Re-identification,2017 +11,United States,PRID,prid,41.2097516,-73.8026467,IBM T.J. 
Watson Research Center,company,5ee96d5c4d467d00909472e3bc0d2c2d82ccb961,citation,https://arxiv.org/pdf/1708.02286.pdf,Jointly Attentive Spatial-Temporal Pooling Networks for Video-Based Person Re-identification,2017
+12,China,PRID,prid,30.60903415,114.3514284,Wuhan University of Technology,edu,76616a2709c03ade176db31fa99c7c61970eba28,citation,https://pdfs.semanticscholar.org/7661/6a2709c03ade176db31fa99c7c61970eba28.pdf,Learning Heterogeneous Dictionary Pair with Feature Projection Matrix for Pedestrian Video Retrieval via Single Query Image,2017
+13,China,PRID,prid,32.105748,118.931701,Nanjing University of Posts and Telecommunications,edu,76616a2709c03ade176db31fa99c7c61970eba28,citation,https://pdfs.semanticscholar.org/7661/6a2709c03ade176db31fa99c7c61970eba28.pdf,Learning Heterogeneous Dictionary Pair with Feature Projection Matrix for Pedestrian Video Retrieval via Single Query Image,2017
+14,China,PRID,prid,39.9808333,116.34101249,Beihang University,edu,76616a2709c03ade176db31fa99c7c61970eba28,citation,https://pdfs.semanticscholar.org/7661/6a2709c03ade176db31fa99c7c61970eba28.pdf,Learning Heterogeneous Dictionary Pair with Feature Projection Matrix for Pedestrian Video Retrieval via Single Query Image,2017
+15,China,PRID,prid,45.7413921,126.62552755,Harbin Institute of Technology,edu,76616a2709c03ade176db31fa99c7c61970eba28,citation,https://pdfs.semanticscholar.org/7661/6a2709c03ade176db31fa99c7c61970eba28.pdf,Learning Heterogeneous Dictionary Pair with Feature Projection Matrix for Pedestrian Video Retrieval via Single Query Image,2017
+16,China,PRID,prid,34.808921,114.369752,Henan University,edu,76616a2709c03ade176db31fa99c7c61970eba28,citation,https://pdfs.semanticscholar.org/7661/6a2709c03ade176db31fa99c7c61970eba28.pdf,Learning Heterogeneous Dictionary Pair with Feature Projection Matrix for Pedestrian Video Retrieval via Single Query Image,2017
+17,China,PRID,prid,23.09461185,113.28788994,Sun Yat-Sen University,edu,76616a2709c03ade176db31fa99c7c61970eba28,citation,https://pdfs.semanticscholar.org/7661/6a2709c03ade176db31fa99c7c61970eba28.pdf,Learning Heterogeneous Dictionary Pair with Feature Projection Matrix for Pedestrian Video Retrieval via Single Query Image,2017
+18,China,PRID,prid,39.993008,116.329882,SenseTime,company,35c51c40338d5d547c34ae7ec2efa7a32479dafa,citation,https://arxiv.org/pdf/1807.05688.pdf,SCAN: Self-and-Collaborative Attention Network for Video Person Re-identification,2018
+19,China,PRID,prid,23.09461185,113.28788994,Sun Yat-Sen University,edu,35c51c40338d5d547c34ae7ec2efa7a32479dafa,citation,https://arxiv.org/pdf/1807.05688.pdf,SCAN: Self-and-Collaborative Attention Network for Video Person Re-identification,2018
+20,China,PRID,prid,22.4162632,114.2109318,Chinese University of Hong Kong,edu,35c51c40338d5d547c34ae7ec2efa7a32479dafa,citation,https://arxiv.org/pdf/1807.05688.pdf,SCAN: Self-and-Collaborative Attention Network for Video Person Re-identification,2018
diff --git a/site/datasets/verified/uccs.csv b/site/datasets/verified/uccs.csv
index d7c84820..1cbefd32 100644
--- a/site/datasets/verified/uccs.csv
+++ b/site/datasets/verified/uccs.csv
@@ -1,9 +1,7 @@
id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year
0,,UCCS,uccs,0.0,0.0,,,,main,,Large scale unconstrained open set face database,2013
-1,United States,UCCS,uccs,41.70456775,-86.23822026,University of Notre Dame,edu,841855205818d3a6d6f85ec17a22515f4f062882,citation,https://arxiv.org/pdf/1805.11529.pdf,Low Resolution Face Recognition in the Wild,2018
-2,United States,UCCS,uccs,40.11571585,-88.22750772,Beckman Institute,edu,288d2704205d9ca68660b9f3a8fda17e18329c13,citation,https://arxiv.org/pdf/1601.04153.pdf,Studying Very Low Resolution Recognition Using Deep Networks,2016
-3,United States,UCCS,uccs,38.8920756,-104.79716389,"University of Colorado, Colorado Springs",edu,d4f1eb008eb80595bcfdac368e23ae9754e1e745,citation,,Unconstrained Face Detection and Open-Set Face Recognition Challenge,2017
-4,United Kingdom,UCCS,uccs,51.5247272,-0.03931035,Queen Mary University of London,edu,2306b2a8fba28539306052764a77a0d0f5d1236a,citation,https://arxiv.org/pdf/1804.09691.pdf,Surveillance Face Recognition Challenge,2018
-5,United Kingdom,UCCS,uccs,55.378051,-3.435973,"Vision Semantics Ltd, UK",edu,2306b2a8fba28539306052764a77a0d0f5d1236a,citation,https://arxiv.org/pdf/1804.09691.pdf,Surveillance Face Recognition Challenge,2018
-6,China,UCCS,uccs,39.9808333,116.34101249,Beihang University,edu,c50e498ede6f5216cffd0645e747ce67fae2096a,citation,https://arxiv.org/pdf/1811.09998.pdf,Low-Resolution Face Recognition in the Wild via Selective Knowledge Distillation,2018
-7,China,UCCS,uccs,39.97426,116.21589,"Institute of Information Engineering, CAS, Beijing, China",edu,c50e498ede6f5216cffd0645e747ce67fae2096a,citation,https://arxiv.org/pdf/1811.09998.pdf,Low-Resolution Face Recognition in the Wild via Selective Knowledge Distillation,2018
+1,United States,UCCS,uccs,40.11571585,-88.22750772,Beckman Institute,edu,288d2704205d9ca68660b9f3a8fda17e18329c13,citation,https://arxiv.org/pdf/1601.04153.pdf,Studying Very Low Resolution Recognition Using Deep Networks,2016
+2,United States,UCCS,uccs,38.8920756,-104.79716389,"University of Colorado, Colorado Springs",edu,d4f1eb008eb80595bcfdac368e23ae9754e1e745,citation,,Unconstrained Face Detection and Open-Set Face Recognition Challenge,2017
+3,United States,UCCS,uccs,41.70456775,-86.23822026,University of Notre Dame,edu,841855205818d3a6d6f85ec17a22515f4f062882,citation,https://arxiv.org/pdf/1805.11529.pdf,Low Resolution Face Recognition in the Wild,2018
+4,China,UCCS,uccs,39.9808333,116.34101249,Beihang University,edu,c50e498ede6f5216cffd0645e747ce67fae2096a,citation,https://arxiv.org/pdf/1811.09998.pdf,Low-Resolution Face Recognition in the Wild via Selective Knowledge Distillation,2018
+5,China,UCCS,uccs,39.97426,116.21589,"Institute of Information Engineering, CAS, Beijing, China",edu,c50e498ede6f5216cffd0645e747ce67fae2096a,citation,https://arxiv.org/pdf/1811.09998.pdf,Low-Resolution Face Recognition in the Wild via Selective Knowledge Distillation,2018
diff --git a/site/datasets/verified/used.csv b/site/datasets/verified/used.csv
index 52c7be2f..c63ece10 100644
--- a/site/datasets/verified/used.csv
+++ b/site/datasets/verified/used.csv
@@ -1,2 +1,5 @@
id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year
0,,USED Social Event Dataset,used,0.0,0.0,,,,main,,USED: a large-scale social event detection dataset,2016
+1,Japan,USED Social Event Dataset,used,32.8164178,130.72703969,Kumamoto University,edu,d1bca67dd26d719b3e7a51acecd7c54c7b78b34a,citation,https://arxiv.org/pdf/1612.04062.pdf,Spatial Pyramid Convolutional Neural Network for Social Event Detection in Static Image,2016
+2,Italy,USED Social Event Dataset,used,46.0658836,11.1159894,University of Trento,edu,27f8b01e628f20ebfcb58d14ea40573d351bbaad,citation,https://pdfs.semanticscholar.org/27f8/b01e628f20ebfcb58d14ea40573d351bbaad.pdf,Events based Multimedia Indexing and Retrieval,2017
+3,Italy,USED Social Event Dataset,used,46.0658836,11.1159894,University of Trento,edu,4bf85ef995c684b841d0a5a002d175fadd922ff0,citation,,Ensemble of Deep Models for Event Recognition,2018
diff --git a/site/datasets/verified/voc.csv b/site/datasets/verified/voc.csv
index 89a14200..75397740 100644
--- a/site/datasets/verified/voc.csv
+++ b/site/datasets/verified/voc.csv
@@ -1,2 +1,145 @@
id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year
0,,VOC,voc,0.0,0.0,,,,main,,The Pascal Visual Object Classes (VOC) Challenge,2009
+1,China,VOC,voc,28.2290209,112.99483204,"National University of Defense Technology, China",mil,ca4e0a2cd761f52e6c0bc06ef8ac79e3c7649083,citation,https://arxiv.org/pdf/1804.04606.pdf,Loss Rank Mining: A General Hard Example Mining Method for Real-time Detectors,2018
+2,United States,VOC,voc,39.0298587,-76.9638027,"U.S. Army Research Laboratory, Adelphi, MD, USA",mil,e7895feb2de9007ea1e47b0ea5952afd5af08b3d,citation,https://arxiv.org/pdf/1704.01069.pdf,ME R-CNN: Multi-Expert R-CNN for Object Detection,2017
+3,United States,VOC,voc,37.8718992,-122.2585399,"University of Califonia, Berkeley",edu,0547c44cb896e1cc38130ae8cc6b04dc21179045,citation,http://courses.cs.washington.edu/courses/cse590v/13au/FastMatch_cvpr_2013.pdf,Fast-Match: Fast Affine Template Matching,2013
+4,Israel,VOC,voc,32.1119889,34.80459702,Tel Aviv University,edu,0547c44cb896e1cc38130ae8cc6b04dc21179045,citation,http://courses.cs.washington.edu/courses/cse590v/13au/FastMatch_cvpr_2013.pdf,Fast-Match: Fast Affine Template Matching,2013
+5,Israel,VOC,voc,31.904187,34.807378,"Weizmann Institute, Rehovot, Israel",edu,0547c44cb896e1cc38130ae8cc6b04dc21179045,citation,http://courses.cs.washington.edu/courses/cse590v/13au/FastMatch_cvpr_2013.pdf,Fast-Match: Fast Affine Template Matching,2013
+6,Israel,VOC,voc,32.7940463,34.989571,"Yahoo Research Labs, Haifa, Israel",company,0547c44cb896e1cc38130ae8cc6b04dc21179045,citation,http://courses.cs.washington.edu/courses/cse590v/13au/FastMatch_cvpr_2013.pdf,Fast-Match: Fast Affine Template Matching,2013
+7,Netherlands,VOC,voc,52.3553655,4.9501644,University of Amsterdam,edu,19a3e5495b420c1f5da283bf39708a6e833a6cc5,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/1A_020.pdf,Attributes and categories for generic instance search from one example,2015
+8,United States,VOC,voc,40.8419836,-73.94368971,Columbia University,edu,19a3e5495b420c1f5da283bf39708a6e833a6cc5,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/1A_020.pdf,Attributes and categories for generic instance search from one example,2015
+9,China,VOC,voc,39.103355,117.164927,NanKai University,edu,55968c9906e13eff2a7fb03d7c416a6d0f9f53e0,citation,http://cg.cs.tsinghua.edu.cn/papers/ECCV-2016-Hfs.pdf,HFS: Hierarchical Feature Selection for Efficient Image Segmentation,2016
+10,United Kingdom,VOC,voc,51.7520849,-1.2516646,Oxford University,edu,55968c9906e13eff2a7fb03d7c416a6d0f9f53e0,citation,http://cg.cs.tsinghua.edu.cn/papers/ECCV-2016-Hfs.pdf,HFS: Hierarchical Feature Selection for Efficient Image Segmentation,2016
+11,China,VOC,voc,40.00229045,116.32098908,Tsinghua University,edu,55968c9906e13eff2a7fb03d7c416a6d0f9f53e0,citation,http://cg.cs.tsinghua.edu.cn/papers/ECCV-2016-Hfs.pdf,HFS: Hierarchical Feature Selection for Efficient Image Segmentation,2016
+12,United States,VOC,voc,32.87935255,-117.23110049,"University of California, San Diego",edu,55968c9906e13eff2a7fb03d7c416a6d0f9f53e0,citation,http://cg.cs.tsinghua.edu.cn/papers/ECCV-2016-Hfs.pdf,HFS: Hierarchical
Feature Selection for Efficient Image Segmentation,2016 +13,United States,VOC,voc,40.4441619,-79.94272826,Carnegie Mellon University,edu,46c82cfadd9f885f5480b2d7155f0985daf949fc,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Fouhey_3D_Shape_Attributes_CVPR_2016_paper.pdf,3D Shape Attributes,2016 +14,United Kingdom,VOC,voc,51.7534538,-1.25400997,University of Oxford,edu,46c82cfadd9f885f5480b2d7155f0985daf949fc,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Fouhey_3D_Shape_Attributes_CVPR_2016_paper.pdf,3D Shape Attributes,2016 +15,United States,VOC,voc,47.6423318,-122.1369302,Microsoft,company,57642aa16d29bbd9f89f95e3f3dcb8291552db60,citation,http://www.cs.toronto.edu/~pekhimenko/Papers/iiswc18-tbd.pdf,Benchmarking and Analyzing Deep Neural Network Training,2018 +16,Canada,VOC,voc,49.25839375,-123.24658161,University of British Columbia,edu,57642aa16d29bbd9f89f95e3f3dcb8291552db60,citation,http://www.cs.toronto.edu/~pekhimenko/Papers/iiswc18-tbd.pdf,Benchmarking and Analyzing Deep Neural Network Training,2018 +17,Canada,VOC,voc,43.66333345,-79.39769975,University of Toronto,edu,57642aa16d29bbd9f89f95e3f3dcb8291552db60,citation,http://www.cs.toronto.edu/~pekhimenko/Papers/iiswc18-tbd.pdf,Benchmarking and Analyzing Deep Neural Network Training,2018 +18,China,VOC,voc,39.9808333,116.34101249,Beihang University,edu,df0e280cae018cebd5b16ad701ad101265c369fa,citation,https://arxiv.org/pdf/1509.02470.pdf,Deep Attributes from Context-Aware Regional Neural Codes,2015 +19,China,VOC,voc,39.966244,116.3270039,Intel Labs China,company,df0e280cae018cebd5b16ad701ad101265c369fa,citation,https://arxiv.org/pdf/1509.02470.pdf,Deep Attributes from Context-Aware Regional Neural Codes,2015 +20,United States,VOC,voc,40.8419836,-73.94368971,Columbia University,edu,df0e280cae018cebd5b16ad701ad101265c369fa,citation,https://arxiv.org/pdf/1509.02470.pdf,Deep Attributes from Context-Aware Regional Neural Codes,2015 +21,United Kingdom,VOC,voc,51.7534538,-1.25400997,University of Oxford,edu,a63104ad235f98bc5ee0b44fefbcdb49e32c205a,citation,http://groups.inf.ed.ac.uk/calvin/Publications/Jammalamadaka12eccv.pdf,Has my algorithm succeeded? an evaluator for human pose estimators,2012 +22,Switzerland,VOC,voc,47.376313,8.5476699,ETH Zurich,edu,a63104ad235f98bc5ee0b44fefbcdb49e32c205a,citation,http://groups.inf.ed.ac.uk/calvin/Publications/Jammalamadaka12eccv.pdf,Has my algorithm succeeded? an evaluator for human pose estimators,2012 +23,United Kingdom,VOC,voc,55.94951105,-3.19534913,University of Edinburgh,edu,a63104ad235f98bc5ee0b44fefbcdb49e32c205a,citation,http://groups.inf.ed.ac.uk/calvin/Publications/Jammalamadaka12eccv.pdf,Has my algorithm succeeded? 
an evaluator for human pose estimators,2012 +24,China,VOC,voc,36.3693473,120.673818,Shandong University,edu,ddde8f2c0209f11c2579dfaa13ac4053dedbf2fe,citation,https://arxiv.org/pdf/1811.02804.pdf,Image smoothing via unsupervised learning,2018 +25,United States,VOC,voc,42.3614256,-71.0812092,Microsoft Research Asia,company,ddde8f2c0209f11c2579dfaa13ac4053dedbf2fe,citation,https://arxiv.org/pdf/1811.02804.pdf,Image smoothing via unsupervised learning,2018 +26,China,VOC,voc,39.9922379,116.30393816,Peking University,edu,ddde8f2c0209f11c2579dfaa13ac4053dedbf2fe,citation,https://arxiv.org/pdf/1811.02804.pdf,Image smoothing via unsupervised learning,2018 +27,United States,VOC,voc,32.87935255,-117.23110049,"University of California, San Diego",edu,16161051ee13dd3d836a39a280df822bf6442c84,citation,https://pdfs.semanticscholar.org/4bd3/f187f3e09483b1f0f92150a4a77409691b0f.pdf,Learning Efficient Object Detection Models with Knowledge Distillation,2017 +28,United States,VOC,voc,38.926761,-92.29193783,University of Missouri,edu,16161051ee13dd3d836a39a280df822bf6442c84,citation,https://pdfs.semanticscholar.org/4bd3/f187f3e09483b1f0f92150a4a77409691b0f.pdf,Learning Efficient Object Detection Models with Knowledge Distillation,2017 +29,United States,VOC,voc,37.3239177,-122.0129693,"NEC Labs, Cupertino, CA",company,16161051ee13dd3d836a39a280df822bf6442c84,citation,https://pdfs.semanticscholar.org/4bd3/f187f3e09483b1f0f92150a4a77409691b0f.pdf,Learning Efficient Object Detection Models with Knowledge Distillation,2017 +30,China,VOC,voc,39.966244,116.3270039,Intel Labs China,company,19d4855f064f0d53cb851e9342025bd8503922e2,citation,http://vigir.missouri.edu/~gdesouza/Research/Conference_CDs/IEEE_CVPR2013/data/Papers/4989d468.pdf,Learning SURF Cascade for Fast and Accurate Object Detection,2013 +31,China,VOC,voc,23.09461185,113.28788994,Sun Yat-Sen University,edu,ee098ed493af3abe873ce89354599e1f6bdf65be,citation,https://arxiv.org/pdf/1702.05839.pdf,Progressively Diffused Networks for Semantic Image Segmentation,2017 +32,China,VOC,voc,22.4162632,114.2109318,Chinese University of Hong Kong,edu,ee098ed493af3abe873ce89354599e1f6bdf65be,citation,https://arxiv.org/pdf/1702.05839.pdf,Progressively Diffused Networks for Semantic Image Segmentation,2017 +33,China,VOC,voc,39.993008,116.329882,SenseTime,company,ee098ed493af3abe873ce89354599e1f6bdf65be,citation,https://arxiv.org/pdf/1702.05839.pdf,Progressively Diffused Networks for Semantic Image Segmentation,2017 +34,United States,VOC,voc,37.4092265,-122.0236615,Baidu,company,99f95595c45bd7a4fe2cffff07850754955e5e2a,citation,https://nicsefc.ee.tsinghua.edu.cn/media/publications/2015/IEEE%20TCAD_170.pdf,RRAM-Based Analog Approximate Computing,2015 +35,United States,VOC,voc,40.44415295,-79.96243993,University of Pittsburgh,edu,99f95595c45bd7a4fe2cffff07850754955e5e2a,citation,https://nicsefc.ee.tsinghua.edu.cn/media/publications/2015/IEEE%20TCAD_170.pdf,RRAM-Based Analog Approximate Computing,2015 +36,China,VOC,voc,40.00229045,116.32098908,Tsinghua University,edu,99f95595c45bd7a4fe2cffff07850754955e5e2a,citation,https://nicsefc.ee.tsinghua.edu.cn/media/publications/2015/IEEE%20TCAD_170.pdf,RRAM-Based Analog Approximate Computing,2015 +37,United States,VOC,voc,33.7756178,-84.396285,Georgia Tech,edu,5a0209515ab62e008efeca31f80fa0a97031cd9d,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/3B_046.pdf,Dataset fingerprints: Exploring image collections through data mining,2015 +38,United States,VOC,voc,40.4441619,-79.94272826,Carnegie Mellon 
University,edu,2c953b06c1c312e36f1fdb9919567b42c9322384,citation,http://people.csail.mit.edu/tomasz/papers/malisiewicz_iccv11.pdf,Ensemble of exemplar-SVMs for object detection and beyond,2011 +39,China,VOC,voc,40.0044795,116.370238,Chinese Academy of Sciences,edu,5907ca4b91c8e8d846871e045bce9a4ca851053a,citation,http://eiger.ddns.comp.nus.edu.sg/pubs/fusionofmultichannelstructures-tip2014.pdf,Fusion of Multichannel Local and Global Structural Cues for Photo Aesthetics Evaluation,2014 +40,United States,VOC,voc,29.58333105,-98.61944505,University of Texas at San Antonio,edu,5907ca4b91c8e8d846871e045bce9a4ca851053a,citation,http://eiger.ddns.comp.nus.edu.sg/pubs/fusionofmultichannelstructures-tip2014.pdf,Fusion of Multichannel Local and Global Structural Cues for Photo Aesthetics Evaluation,2014 +41,Singapore,VOC,voc,1.2962018,103.77689944,National University of Singapore,edu,5907ca4b91c8e8d846871e045bce9a4ca851053a,citation,http://eiger.ddns.comp.nus.edu.sg/pubs/fusionofmultichannelstructures-tip2014.pdf,Fusion of Multichannel Local and Global Structural Cues for Photo Aesthetics Evaluation,2014 +42,China,VOC,voc,40.00229045,116.32098908,Tsinghua University,edu,5907ca4b91c8e8d846871e045bce9a4ca851053a,citation,http://eiger.ddns.comp.nus.edu.sg/pubs/fusionofmultichannelstructures-tip2014.pdf,Fusion of Multichannel Local and Global Structural Cues for Photo Aesthetics Evaluation,2014 +43,China,VOC,voc,22.4162632,114.2109318,Chinese University of Hong Kong,edu,931282732f0be57f7fb895238e94bdda00a52cad,citation,https://pdfs.semanticscholar.org/9312/82732f0be57f7fb895238e94bdda00a52cad.pdf,Gated Bi-directional CNN for Object Detection,2016 +44,China,VOC,voc,39.993008,116.329882,SenseTime,company,931282732f0be57f7fb895238e94bdda00a52cad,citation,https://pdfs.semanticscholar.org/9312/82732f0be57f7fb895238e94bdda00a52cad.pdf,Gated Bi-directional CNN for Object Detection,2016 +45,Germany,VOC,voc,48.7468939,9.0805141,Max Planck Institute for Intelligent Systems,edu,cfa48bc1015b88809e362b4da19fe4459acb1d89,citation,https://pdfs.semanticscholar.org/cfa4/8bc1015b88809e362b4da19fe4459acb1d89.pdf,Learning to Filter Object Detections,2017 +46,United States,VOC,voc,47.6423318,-122.1369302,Microsoft,company,cfa48bc1015b88809e362b4da19fe4459acb1d89,citation,https://pdfs.semanticscholar.org/cfa4/8bc1015b88809e362b4da19fe4459acb1d89.pdf,Learning to Filter Object Detections,2017 +47,United States,VOC,voc,40.34829285,-74.66308325,Princeton University,edu,420c46d7cafcb841309f02ad04cf51cb1f190a48,citation,https://arxiv.org/pdf/1511.07122.pdf,Multi-Scale Context Aggregation by Dilated Convolutions,2015 +48,United States,VOC,voc,40.4439789,-79.9464634,Intel Labs,company,420c46d7cafcb841309f02ad04cf51cb1f190a48,citation,https://arxiv.org/pdf/1511.07122.pdf,Multi-Scale Context Aggregation by Dilated Convolutions,2015 +49,France,VOC,voc,48.708759,2.164006,"Center for Visual Computing, École Centrale Paris, France",edu,2603a85b305d041bf749934fe538315ecbc300c2,citation,http://www.ee.oulu.fi/~jkannala/publications/scia2013a.pdf,Non Maximal Suppression in Cascaded Ranking Models,2013 +50,France,VOC,voc,48.840579,2.586968,"LIGM (UMR CNRS), École des Ponts ParisTech, Université Paris-Est, France",edu,2603a85b305d041bf749934fe538315ecbc300c2,citation,http://www.ee.oulu.fi/~jkannala/publications/scia2013a.pdf,Non Maximal Suppression in Cascaded Ranking Models,2013 +51,Finland,VOC,voc,65.0592157,25.46632601,University of 
Oulu,edu,2603a85b305d041bf749934fe538315ecbc300c2,citation,http://www.ee.oulu.fi/~jkannala/publications/scia2013a.pdf,Non Maximal Suppression in Cascaded Ranking Models,2013 +52,France,VOC,voc,48.7146403,2.2056539,"Équipe Galen, INRIA Saclay, Île-de-France, France",edu,2603a85b305d041bf749934fe538315ecbc300c2,citation,http://www.ee.oulu.fi/~jkannala/publications/scia2013a.pdf,Non Maximal Suppression in Cascaded Ranking Models,2013 +53,United States,VOC,voc,42.3354481,-71.16813864,Boston College,edu,18ccd8bd64b50c1b6a83a71792fd808da7076bc9,citation,http://ttic.uchicago.edu/~mmaire/papers/pdf/seg_obj_iccv2011.pdf,Object detection and segmentation from joint embedding of parts and pixels,2011 +54,United States,VOC,voc,34.13710185,-118.12527487,California Institute of Technology,edu,18ccd8bd64b50c1b6a83a71792fd808da7076bc9,citation,http://ttic.uchicago.edu/~mmaire/papers/pdf/seg_obj_iccv2011.pdf,Object detection and segmentation from joint embedding of parts and pixels,2011 +55,Japan,VOC,voc,34.7275714,135.2371,Kobe University,edu,75d0a8e80a75312571951144aaa2d5dd5ae30e43,citation,http://eprints.whiterose.ac.uk/132227/1/TMM_camera_ready.pdf,Polar Transformation on Image Features for Orientation-Invariant Representations,2019 +56,China,VOC,voc,26.0252776,119.2117845,Fujian Normal University,edu,75d0a8e80a75312571951144aaa2d5dd5ae30e43,citation,http://eprints.whiterose.ac.uk/132227/1/TMM_camera_ready.pdf,Polar Transformation on Image Features for Orientation-Invariant Representations,2019 +57,United Kingdom,VOC,voc,53.94540365,-1.03138878,University of York,edu,75d0a8e80a75312571951144aaa2d5dd5ae30e43,citation,http://eprints.whiterose.ac.uk/132227/1/TMM_camera_ready.pdf,Polar Transformation on Image Features for Orientation-Invariant Representations,2019 +58,China,VOC,voc,24.4399419,118.09301781,Xiamen University,edu,75d0a8e80a75312571951144aaa2d5dd5ae30e43,citation,http://eprints.whiterose.ac.uk/132227/1/TMM_camera_ready.pdf,Polar Transformation on Image Features for Orientation-Invariant Representations,2019 +59,United Kingdom,VOC,voc,51.5247272,-0.03931035,Queen Mary University of London,edu,b1045a2de35d0adf784353f90972118bc1162f8d,citation,http://eecs.qmul.ac.uk/~jason/Research/PreprintVersion/Quantifying%20and%20Transferring%20Contextual%20Information%20in%20Object%20Detection.pdf,Quantifying and Transferring Contextual Information in Object Detection,2012 +60,China,VOC,voc,23.09461185,113.28788994,Sun Yat-Sen University,edu,b1045a2de35d0adf784353f90972118bc1162f8d,citation,http://eecs.qmul.ac.uk/~jason/Research/PreprintVersion/Quantifying%20and%20Transferring%20Contextual%20Information%20in%20Object%20Detection.pdf,Quantifying and Transferring Contextual Information in Object Detection,2012 +61,China,VOC,voc,23.09461185,113.28788994,Sun Yat-Sen University,edu,ab781f035720d991e244adb35f1d04e671af1999,citation,https://arxiv.org/pdf/1712.07465.pdf,Recurrent Attentional Reinforcement Learning for Multi-Label Image Recognition,2018 +62,China,VOC,voc,39.993008,116.329882,SenseTime,company,ab781f035720d991e244adb35f1d04e671af1999,citation,https://arxiv.org/pdf/1712.07465.pdf,Recurrent Attentional Reinforcement Learning for Multi-Label Image Recognition,2018 +63,Canada,VOC,voc,43.66333345,-79.39769975,University of Toronto,edu,1bb0dd8d349cdb1bbc065f1f0e111a8334072257,citation,http://jmlr.csail.mit.edu/proceedings/papers/v22/tarlow12a/tarlow12a.pdf,Structured Output Learning with High Order Loss Functions,2012 +64,United States,VOC,voc,41.7846982,-87.5925848,Toyota Technological Institute at 
Chicago,company,3a4c70ca0bbd461fe2e4de3448a01f06c0217459,citation,https://arxiv.org/pdf/1510.09171.pdf,Accurate Vision-based Vehicle Localization using Satellite Imagery,2015 +65,Netherlands,VOC,voc,52.3553655,4.9501644,University of Amsterdam,edu,26c58e24687ccbe9737e41837aab74e4a499d259,citation,http://www.cv-foundation.org/openaccess/content_iccv_2013/papers/Li_Codemaps_-_Segment_2013_ICCV_paper.pdf,"Codemaps - Segment, Classify and Search Objects Locally",2013 +66,Netherlands,VOC,voc,52.356678,4.95187,"Centrum Wiskunde & Informatica, Amsterdam, The Netherlands",edu,26c58e24687ccbe9737e41837aab74e4a499d259,citation,http://www.cv-foundation.org/openaccess/content_iccv_2013/papers/Li_Codemaps_-_Segment_2013_ICCV_paper.pdf,"Codemaps - Segment, Classify and Search Objects Locally",2013 +67,United States,VOC,voc,47.6423318,-122.1369302,Microsoft,company,c9abf6cb2d916262425033db12cf0181d40be7cb,citation,https://pdfs.semanticscholar.org/c9ab/f6cb2d916262425033db12cf0181d40be7cb.pdf,Entropy-based Latent Structured Output Prediction-Supplementary materials,2015 +68,China,VOC,voc,31.83907195,117.26420748,University of Science and Technology of China,edu,ce43209fc68e51ef05fa06cc0fe6210cbd021e3f,citation,http://min.sjtu.edu.cn/files%5Cpapers%5C2016%5CJournal%5C2016-TIP-CV-ZHANGXIAOPENG%5C2016-TIP-CV-02.pdf,Fused One-vs-All Features With Semantic Alignments for Fine-Grained Visual Categorization,2016 +69,United States,VOC,voc,29.58333105,-98.61944505,University of Texas at San Antonio,edu,ce43209fc68e51ef05fa06cc0fe6210cbd021e3f,citation,http://min.sjtu.edu.cn/files%5Cpapers%5C2016%5CJournal%5C2016-TIP-CV-ZHANGXIAOPENG%5C2016-TIP-CV-02.pdf,Fused One-vs-All Features With Semantic Alignments for Fine-Grained Visual Categorization,2016 +70,China,VOC,voc,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,ce43209fc68e51ef05fa06cc0fe6210cbd021e3f,citation,http://min.sjtu.edu.cn/files%5Cpapers%5C2016%5CJournal%5C2016-TIP-CV-ZHANGXIAOPENG%5C2016-TIP-CV-02.pdf,Fused One-vs-All Features With Semantic Alignments for Fine-Grained Visual Categorization,2016 +71,United Kingdom,VOC,voc,51.7555205,-1.2261597,Oxford Brookes University,edu,70d71c2f8c865438c0158bed9f7d64e57e245535,citation,http://cms.brookes.ac.uk/research/visiongroup/publications/2013/intr_obj_vrt_nips13.pdf,"Higher Order Priors for Joint Intrinsic Image, Objects, and Attributes Estimation",2013 +72,United Kingdom,VOC,voc,51.7534538,-1.25400997,University of Oxford,edu,70d71c2f8c865438c0158bed9f7d64e57e245535,citation,http://cms.brookes.ac.uk/research/visiongroup/publications/2013/intr_obj_vrt_nips13.pdf,"Higher Order Priors for Joint Intrinsic Image, Objects, and Attributes Estimation",2013 +73,China,VOC,voc,34.2469152,108.91061982,Northwestern Polytechnical University,edu,50953b9a15aca6ef3351e613e7215abdcae1435e,citation,http://sunw.csail.mit.edu/papers/63_Cheng_SUNw.pdf,Learning coarse-to-fine sparselets for efficient object detection and scene classification,2015 +74,Thailand,VOC,voc,13.65450525,100.49423171,Robotics Institute,edu,d6d7dcdcf66fe83e49d175cd9d8ac0b507d0e9d8,citation,http://dhoiem.cs.illinois.edu/publications/ijcv2010_occlusion.pdf,Recovering Occlusion Boundaries from an Image,2010 +75,United States,VOC,voc,40.4441619,-79.94272826,Carnegie Mellon University,edu,d6d7dcdcf66fe83e49d175cd9d8ac0b507d0e9d8,citation,http://dhoiem.cs.illinois.edu/publications/ijcv2010_occlusion.pdf,Recovering Occlusion Boundaries from an Image,2010 +76,United States,VOC,voc,40.11116745,-88.22587665,"University of Illinois, 
Urbana-Champaign",edu,d6d7dcdcf66fe83e49d175cd9d8ac0b507d0e9d8,citation,http://dhoiem.cs.illinois.edu/publications/ijcv2010_occlusion.pdf,Recovering Occlusion Boundaries from an Image,2010 +77,China,VOC,voc,28.727339,115.816633,Jiangxi University of Finance and Economics,edu,1642358cd9410abe9ee512d34ba68296b308770e,citation,https://arxiv.org/pdf/1807.04562.pdf,Robustness Analysis of Pedestrian Detectors for Surveillance,2018 +78,Singapore,VOC,voc,1.3484104,103.68297965,Nanyang Technological University,edu,1642358cd9410abe9ee512d34ba68296b308770e,citation,https://arxiv.org/pdf/1807.04562.pdf,Robustness Analysis of Pedestrian Detectors for Surveillance,2018 +79,China,VOC,voc,34.250803,108.983693,Xi’an Jiaotong University,edu,1642358cd9410abe9ee512d34ba68296b308770e,citation,https://arxiv.org/pdf/1807.04562.pdf,Robustness Analysis of Pedestrian Detectors for Surveillance,2018 +80,Netherlands,VOC,voc,52.3553655,4.9501644,University of Amsterdam,edu,25d7da85858a4d89b7de84fd94f0c0a51a9fc67a,citation,http://graphics.cs.cmu.edu/courses/16-824/2016_spring/slides/seg_3.pdf,Selective Search for Object Recognition,2013 +81,Italy,VOC,voc,46.0658836,11.1159894,University of Trento,edu,25d7da85858a4d89b7de84fd94f0c0a51a9fc67a,citation,http://graphics.cs.cmu.edu/courses/16-824/2016_spring/slides/seg_3.pdf,Selective Search for Object Recognition,2013 +82,United States,VOC,voc,37.4219999,-122.0840575,Google,company,0690ba31424310a90028533218d0afd25a829c8d,citation,https://arxiv.org/pdf/1412.7062.pdf,Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs,2015 +83,Germany,VOC,voc,53.8338371,10.7035939,Institute of Systems and Robotics,edu,7fb8d9c36c23f274f2dd84945dd32ec2cc143de1,citation,http://home.isr.uc.pt/~joaoluis/papers/eccv2012.pdf,Semantic segmentation with second-order pooling,2012 +84,Germany,VOC,voc,50.7338124,7.1022465,University of Bonn,edu,7fb8d9c36c23f274f2dd84945dd32ec2cc143de1,citation,http://home.isr.uc.pt/~joaoluis/papers/eccv2012.pdf,Semantic segmentation with second-order pooling,2012 +85,United Kingdom,VOC,voc,51.7534538,-1.25400997,University of Oxford,edu,4682fee7dc045aea7177d7f3bfe344aabf153bd5,citation,http://cs.brown.edu/~ls/teaching_CMU_16-824/slides_tz-1.pdf,Tabula rasa: Model transfer for object category detection,2011 +86,United States,VOC,voc,42.3614256,-71.0812092,Microsoft Research Asia,company,35f345ebe3831e4741dcdc1931da59043acf4b83,citation,https://pdfs.semanticscholar.org/35f3/45ebe3831e4741dcdc1931da59043acf4b83.pdf,Towards High Performance Video Object Detection for Mobiles 3 2 Revisiting Video Object Detection Baseline,2018 +87,Canada,VOC,voc,49.8091536,-97.13304179,University of Manitoba,edu,488fff23542ff397cdb1ced64db2c96320afc560,citation,http://www.cs.umanitoba.ca/~ywang/papers/cvpr15.pdf,Weakly supervised localization of novel objects using appearance transfer,2015 +88,United States,VOC,voc,37.43131385,-122.16936535,Stanford University,edu,032bde9da87439c781a6c81ba7933985ed95d88e,citation,https://arxiv.org/pdf/1506.02106.pdf,What's the point: Semantic segmentation with point supervision,2016 +89,United States,VOC,voc,40.4441619,-79.94272826,Carnegie Mellon University,edu,032bde9da87439c781a6c81ba7933985ed95d88e,citation,https://arxiv.org/pdf/1506.02106.pdf,What's the point: Semantic segmentation with point supervision,2016 +90,United Kingdom,VOC,voc,55.94951105,-3.19534913,University of Edinburgh,edu,032bde9da87439c781a6c81ba7933985ed95d88e,citation,https://arxiv.org/pdf/1506.02106.pdf,What's the point: Semantic segmentation with point 
supervision,2016 +91,Australia,VOC,voc,-42.902631,147.3273381,University of Tasmania,edu,c2a2093b4163616b83398e503ae9ed948f4f6a2b,citation,http://mima.sdu.edu.cn/(X(1)S(ar3myg55nqom1l55ttix5kjj))/Images/publication/Dual-CNN-ML.pdf,A Dual-CNN Model for Multi-label Classification by Leveraging Co-occurrence Dependencies Between Labels,2017 +92,China,VOC,voc,36.3693473,120.673818,Shandong University,edu,c2a2093b4163616b83398e503ae9ed948f4f6a2b,citation,http://mima.sdu.edu.cn/(X(1)S(ar3myg55nqom1l55ttix5kjj))/Images/publication/Dual-CNN-ML.pdf,A Dual-CNN Model for Multi-label Classification by Leveraging Co-occurrence Dependencies Between Labels,2017 +93,United States,VOC,voc,34.068921,-118.4451811,UCLA,edu,c4fc07072d7ebfbca471d2394b20199d8107e517,citation,https://pdfs.semanticscholar.org/c4fc/07072d7ebfbca471d2394b20199d8107e517.pdf,Active Mask Hierarchies for Object Detection,2010 +94,United States,VOC,voc,42.3583961,-71.09567788,MIT,edu,c4fc07072d7ebfbca471d2394b20199d8107e517,citation,https://pdfs.semanticscholar.org/c4fc/07072d7ebfbca471d2394b20199d8107e517.pdf,Active Mask Hierarchies for Object Detection,2010 +95,China,VOC,voc,38.88140235,121.52281098,Dalian University of Technology,edu,39afeceb57a7fde266ddd842aa23d2eea7ad5665,citation,https://arxiv.org/pdf/1802.06960.pdf,Agile Amulet: Real-Time Salient Object Detection with Contextual Attention,2018 +96,Australia,VOC,voc,-34.9189226,138.60423668,University of Adelaide,edu,39afeceb57a7fde266ddd842aa23d2eea7ad5665,citation,https://arxiv.org/pdf/1802.06960.pdf,Agile Amulet: Real-Time Salient Object Detection with Contextual Attention,2018 +97,United States,VOC,voc,42.3583961,-71.09567788,MIT,edu,732e4016225280b485c557a119ec50cffb8fee98,citation,https://arxiv.org/pdf/1311.6510.pdf,Are all training examples equally valuable?,2013 +98,Spain,VOC,voc,41.40657415,2.1945341,Universitat Oberta de Catalunya,edu,732e4016225280b485c557a119ec50cffb8fee98,citation,https://arxiv.org/pdf/1311.6510.pdf,Are all training examples equally valuable?,2013 +99,United States,VOC,voc,39.2899685,-76.62196103,University of Maryland,edu,38b4ac4a0802fdb63dea6769dd1aee075cc3f87d,citation,https://arxiv.org/pdf/1712.08675.pdf,Boundary-sensitive Network for Portrait Segmentation,2017 +100,United States,VOC,voc,37.4019735,-122.0477876,Samsung Research America,edu,38b4ac4a0802fdb63dea6769dd1aee075cc3f87d,citation,https://arxiv.org/pdf/1712.08675.pdf,Boundary-sensitive Network for Portrait Segmentation,2017 +101,Switzerland,VOC,voc,47.3764534,8.54770931,ETH Zürich,edu,10f13579084670291019c6e8ef55f5cd35c926b6,citation,https://pdfs.semanticscholar.org/7088/0e0ba2478c7250918ee9b7accc6993d13ba4.pdf,Closed-Form Approximate CRF Training for Scalable Image Segmentation,2014 +102,United Kingdom,VOC,voc,55.94951105,-3.19534913,University of Edinburgh,edu,10f13579084670291019c6e8ef55f5cd35c926b6,citation,https://pdfs.semanticscholar.org/7088/0e0ba2478c7250918ee9b7accc6993d13ba4.pdf,Closed-Form Approximate CRF Training for Scalable Image Segmentation,2014 +103,Singapore,VOC,voc,1.2962018,103.77689944,National University of Singapore,edu,5250f319cae32437489bb97b2ed9a1dc962d4d39,citation,https://arxiv.org/pdf/1411.2861.pdf,Computational Baby Learning.,2014 +104,China,VOC,voc,39.94976005,116.33629046,Beijing Jiaotong University,edu,5250f319cae32437489bb97b2ed9a1dc962d4d39,citation,https://arxiv.org/pdf/1411.2861.pdf,Computational Baby Learning.,2014 +105,Switzerland,VOC,voc,46.5190557,6.5667576,"EPFL, Lausanne 
(Switzerland)",edu,7b8ace072475a9a42d6ceb293c8b4a8c9b573284,citation,http://www.vision.ee.ethz.ch/en/publications/papers/proceedings/eth_biwi_00855.pdf,Conditional Random Fields for multi-camera object detection,2011 +106,Switzerland,VOC,voc,47.376313,8.5476699,"ETHZ, Zurich (Switzerland)",edu,7b8ace072475a9a42d6ceb293c8b4a8c9b573284,citation,http://www.vision.ee.ethz.ch/en/publications/papers/proceedings/eth_biwi_00855.pdf,Conditional Random Fields for multi-camera object detection,2011 +107,United States,VOC,voc,37.2283843,-80.4234167,Virginia Tech,edu,3d0660e18c17db305b9764bb86b21a429241309e,citation,https://arxiv.org/pdf/1604.03505.pdf,Counting Everyday Objects in Everyday Scenes,2017 +108,United States,VOC,voc,33.776033,-84.39884086,Georgia Institute of Technology,edu,3d0660e18c17db305b9764bb86b21a429241309e,citation,https://arxiv.org/pdf/1604.03505.pdf,Counting Everyday Objects in Everyday Scenes,2017 +109,United States,VOC,voc,37.3239177,-122.0129693,"NEC Labs, Cupertino, CA",company,8f76401847d3e3f0331bab24b17f76953be66220,citation,http://machinelearning.wustl.edu/mlpapers/paper_files/NIPS2010_1077.pdf,Deep Coding Network,2010 +110,United States,VOC,voc,40.47913175,-74.43168868,Rutgers University,edu,8f76401847d3e3f0331bab24b17f76953be66220,citation,http://machinelearning.wustl.edu/mlpapers/paper_files/NIPS2010_1077.pdf,Deep Coding Network,2010 +111,China,VOC,voc,40.00229045,116.32098908,Tsinghua University,edu,fe7ae13bf5fc80cf0837bacbe44905bd8749f03f,citation,http://ivg.au.tsinghua.edu.cn/paper/2017_Deep%20coupled%20metric%20learning%20for%20cross-modal%20matching.pdf,Deep Coupled Metric Learning for Cross-Modal Matching,2017 +112,Singapore,VOC,voc,1.3484104,103.68297965,Nanyang Technological University,edu,fe7ae13bf5fc80cf0837bacbe44905bd8749f03f,citation,http://ivg.au.tsinghua.edu.cn/paper/2017_Deep%20coupled%20metric%20learning%20for%20cross-modal%20matching.pdf,Deep Coupled Metric Learning for Cross-Modal Matching,2017 +113,Canada,VOC,voc,43.7743911,-79.50481085,York University,edu,cdeee5eed68e7c8eb06185f7fcb1a072af784886,citation,https://arxiv.org/pdf/1505.01173.pdf,Deep Learning for Object Saliency Detection and Image Segmentation,2015 +114,United States,VOC,voc,37.43131385,-122.16936535,Stanford University,edu,cdeee5eed68e7c8eb06185f7fcb1a072af784886,citation,https://arxiv.org/pdf/1505.01173.pdf,Deep Learning for Object Saliency Detection and Image Segmentation,2015 +115,Canada,VOC,voc,49.8091536,-97.13304179,University of Manitoba,edu,64b9675e924974fdec78a7272b27c7e7ec63a608,citation,http://www.cs.umanitoba.ca/~ywang/papers/icip17.pdf,Depth-aware object instance segmentation,2017 +116,China,VOC,voc,31.32235655,121.38400941,Shanghai University,edu,64b9675e924974fdec78a7272b27c7e7ec63a608,citation,http://www.cs.umanitoba.ca/~ywang/papers/icip17.pdf,Depth-aware object instance segmentation,2017 +117,Thailand,VOC,voc,13.65450525,100.49423171,Robotics Institute,edu,7d520f474f2fc59422d910b980f8485716ce0a3e,citation,https://pdfs.semanticscholar.org/2128/4a9310a4b4c836b8dfb6af39c682b7348128.pdf,Designing Convolutional Neural Networks for Urban Scene Understanding,2017 +118,United States,VOC,voc,40.4441619,-79.94272826,Carnegie Mellon University,edu,7d520f474f2fc59422d910b980f8485716ce0a3e,citation,https://pdfs.semanticscholar.org/2128/4a9310a4b4c836b8dfb6af39c682b7348128.pdf,Designing Convolutional Neural Networks for Urban Scene Understanding,2017 +119,India,VOC,voc,17.4450981,78.3497678,IIIT 
Hyderabad,edu,f23114073e0e513b1c1c55e8777bda503721718c,citation,https://arxiv.org/pdf/1811.10016.pdf,Dissimilarity Coefficient based Weakly Supervised Object Detection,2018 +120,United Kingdom,VOC,voc,51.7534538,-1.25400997,University of Oxford,edu,f23114073e0e513b1c1c55e8777bda503721718c,citation,https://arxiv.org/pdf/1811.10016.pdf,Dissimilarity Coefficient based Weakly Supervised Object Detection,2018 +121,United States,VOC,voc,37.43131385,-122.16936535,Stanford University,edu,280d632ef3234c5ab06018c6eaccead75bc173b3,citation,http://ai.stanford.edu/~ajoulin/article/eccv14-vidcoloc.pdf,Efficient Image and Video Co-localization with Frank-Wolfe Algorithm,2014 +122,United States,VOC,voc,37.3239177,-122.0129693,NEC,company,44a3ee0429a6d1b79d431b4d396962175c28ace6,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Yang_Exploit_All_the_CVPR_2016_paper.pdf,Exploit All the Layers: Fast and Accurate CNN Object Detector with Scale Dependent Pooling and Cascaded Rejection Classifiers,2016 +123,United States,VOC,voc,38.99203005,-76.9461029,University of Maryland College Park,edu,44a3ee0429a6d1b79d431b4d396962175c28ace6,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Yang_Exploit_All_the_CVPR_2016_paper.pdf,Exploit All the Layers: Fast and Accurate CNN Object Detector with Scale Dependent Pooling and Cascaded Rejection Classifiers,2016 +124,United States,VOC,voc,34.13710185,-118.12527487,California Institute of Technology,edu,1a54a8b0c7b3fc5a21c6d33656690585c46ca08b,citation,http://authors.library.caltech.edu/49239/7/DollarPAMI14pyramids_0.pdf,Fast Feature Pyramids for Object Detection,2014 +125,United States,VOC,voc,42.4505507,-76.4783513,Cornell University,edu,1a54a8b0c7b3fc5a21c6d33656690585c46ca08b,citation,http://authors.library.caltech.edu/49239/7/DollarPAMI14pyramids_0.pdf,Fast Feature Pyramids for Object Detection,2014 +126,United States,VOC,voc,47.6418392,-122.1407465,"Microsoft Research Redmond, Redmond, USA",company,1a54a8b0c7b3fc5a21c6d33656690585c46ca08b,citation,http://authors.library.caltech.edu/49239/7/DollarPAMI14pyramids_0.pdf,Fast Feature Pyramids for Object Detection,2014 +127,Singapore,VOC,voc,1.29500195,103.84909214,Singapore Management University,edu,742d5b4590284b632ca043a16507fb5a459dceb2,citation,https://arxiv.org/pdf/1712.00721.pdf,Feature Agglomeration Networks for Single Stage Face Detection,2017 +128,China,VOC,voc,30.19331415,120.11930822,Zhejiang University,edu,742d5b4590284b632ca043a16507fb5a459dceb2,citation,https://arxiv.org/pdf/1712.00721.pdf,Feature Agglomeration Networks for Single Stage Face Detection,2017 +129,United States,VOC,voc,42.2745754,-71.8062724,Worcester Polytechnic Institute,edu,bd433d471af50b571d7284afb5ee435654ace99f,citation,https://pdfs.semanticscholar.org/bd43/3d471af50b571d7284afb5ee435654ace99f.pdf,Going Deeper with Convolutional Neural Network for Intelligent Transportation,2016 +130,United States,VOC,voc,33.5866784,-101.87539204,Electrical and Computer Engineering,edu,bd433d471af50b571d7284afb5ee435654ace99f,citation,https://pdfs.semanticscholar.org/bd43/3d471af50b571d7284afb5ee435654ace99f.pdf,Going Deeper with Convolutional Neural Network for Intelligent Transportation,2016 +131,Israel,VOC,voc,32.76162915,35.01986304,University of Haifa,edu,fe683e48f373fa14c07851966474d15588b8c28b,citation,https://pdfs.semanticscholar.org/fe68/3e48f373fa14c07851966474d15588b8c28b.pdf,Hinge-Minimax Learner for the Ensemble of Hyperplanes,2018 +132,Israel,VOC,voc,32.7767783,35.0231271,Technion - Israel Institute of 
Technology,edu,fe683e48f373fa14c07851966474d15588b8c28b,citation,https://pdfs.semanticscholar.org/fe68/3e48f373fa14c07851966474d15588b8c28b.pdf,Hinge-Minimax Learner for the Ensemble of Hyperplanes,2018
+133,United States,VOC,voc,40.11116745,-88.22587665,"University of Illinois, Urbana-Champaign",edu,4e65c9f0a64b6a4333b12e2adc3861ad75aca83b,citation,https://pdfs.semanticscholar.org/4e65/c9f0a64b6a4333b12e2adc3861ad75aca83b.pdf,Image Classification Using Super-Vector Coding of Local Image Descriptors,2010
+134,United States,VOC,voc,40.47913175,-74.43168868,Rutgers University,edu,4e65c9f0a64b6a4333b12e2adc3861ad75aca83b,citation,https://pdfs.semanticscholar.org/4e65/c9f0a64b6a4333b12e2adc3861ad75aca83b.pdf,Image Classification Using Super-Vector Coding of Local Image Descriptors,2010
+135,United States,VOC,voc,41.7847112,-87.59260567,"Toyota Technological Institute, Chicago",edu,a1f33473ea3b8e98fee37e32ecbecabc379e07a0,citation,http://cs.brown.edu/people/ren/publications/cvpr2013/cascade_final.pdf,Image Segmentation by Cascaded Region Agglomeration,2013
+136,China,VOC,voc,30.19331415,120.11930822,Zhejiang University,edu,a1f33473ea3b8e98fee37e32ecbecabc379e07a0,citation,http://cs.brown.edu/people/ren/publications/cvpr2013/cascade_final.pdf,Image Segmentation by Cascaded Region Agglomeration,2013
+137,Canada,VOC,voc,49.8091536,-97.13304179,University of Manitoba,edu,3b60af814574ebe389856e9f7008bb83b0539abc,citation,https://arxiv.org/pdf/1703.00551.pdf,Label Refinement Network for Coarse-to-Fine Semantic Segmentation.,2017
+138,United States,VOC,voc,39.86948105,-84.87956905,Indiana University,edu,3b60af814574ebe389856e9f7008bb83b0539abc,citation,https://arxiv.org/pdf/1703.00551.pdf,Label Refinement Network for Coarse-to-Fine Semantic Segmentation.,2017
+139,United States,VOC,voc,47.6543238,-122.30800894,University of Washington,edu,214f552070a7eb5ef5efe0d6ffeaaa594a3c3535,citation,http://allenai.org/content/publications/objectNgrams_cvpr14.pdf,Learning Everything about Anything: Webly-Supervised Visual Concept Learning,2014
+140,Germany,VOC,voc,48.14955455,11.56775314,Technical University Munich,edu,472541ccd941b9b4c52e1f088cc1152de9b3430f,citation,https://arxiv.org/pdf/1612.00197.pdf,Learning in an Uncertain World: Representing Ambiguity Through Multiple Hypotheses,2017
+141,United States,VOC,voc,39.3299013,-76.6205177,Johns Hopkins University,edu,472541ccd941b9b4c52e1f088cc1152de9b3430f,citation,https://arxiv.org/pdf/1612.00197.pdf,Learning in an Uncertain World: Representing Ambiguity Through Multiple Hypotheses,2017
+142,United States,VOC,voc,40.11571585,-88.22750772,Beckman Institute,edu,0bbb40e5b9e546a3f4e7340b2980059065c99203,citation,https://arxiv.org/pdf/1712.00886.pdf,Learning Object Detectors from Scratch with Gated Recurrent Feature Pyramids,2017
+143,China,VOC,voc,31.30104395,121.50045497,Fudan University,edu,0bbb40e5b9e546a3f4e7340b2980059065c99203,citation,https://arxiv.org/pdf/1712.00886.pdf,Learning Object Detectors from Scratch with Gated Recurrent Feature Pyramids,2017
diff --git a/site/datasets/verified/yfcc_100m.csv b/site/datasets/verified/yfcc_100m.csv
index c7b3cd1f..a7625e9d 100644
--- a/site/datasets/verified/yfcc_100m.csv
+++ b/site/datasets/verified/yfcc_100m.csv
@@ -1,2 +1,104 @@
id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year
0,,YFCC100M,yfcc_100m,0.0,0.0,,,,main,,YFCC100M: the new data in multimedia research,2016
+1,United
States,YFCC100M,yfcc_100m,38.7768106,-94.9442982,Amazon,company,d2067c7d31bebf89249966c3d8ee9395dd8531b8,citation,http://skamalas.com/docs/ICPR_2016.pdf,Visual congruent ads for image search,2016 +2,Netherlands,YFCC100M,yfcc_100m,52.356678,4.95187,"Centrum Wiskunde & Informatica (CWI), The Netherlands",edu,d2067c7d31bebf89249966c3d8ee9395dd8531b8,citation,http://skamalas.com/docs/ICPR_2016.pdf,Visual congruent ads for image search,2016 +3,Spain,YFCC100M,yfcc_100m,41.3789689,2.1797941,"DTIC, Universitat Pompeu Fabra & DCC, Universidad de Chile, Chile",edu,d2067c7d31bebf89249966c3d8ee9395dd8531b8,citation,http://skamalas.com/docs/ICPR_2016.pdf,Visual congruent ads for image search,2016 +4,United States,YFCC100M,yfcc_100m,33.0723372,-96.810299,"Futurewei Technologies Inc., USA",company,d2067c7d31bebf89249966c3d8ee9395dd8531b8,citation,http://skamalas.com/docs/ICPR_2016.pdf,Visual congruent ads for image search,2016 +5,United States,YFCC100M,yfcc_100m,40.7574714,-73.9877318,Yahoo,company,d2067c7d31bebf89249966c3d8ee9395dd8531b8,citation,http://skamalas.com/docs/ICPR_2016.pdf,Visual congruent ads for image search,2016 +6,United States,YFCC100M,yfcc_100m,40.4441619,-79.94272826,Carnegie Mellon University,edu,010f0f4929e6a6644fb01f0e43820f91d0fad292,citation,,YFCC100M: the new data in multimedia research,2016 +7,United States,YFCC100M,yfcc_100m,37.4523809,-122.1797586,In-Q-Tel,mil,010f0f4929e6a6644fb01f0e43820f91d0fad292,citation,,YFCC100M: the new data in multimedia research,2016 +8,United States,YFCC100M,yfcc_100m,40.7574714,-73.9877318,Yahoo,company,010f0f4929e6a6644fb01f0e43820f91d0fad292,citation,,YFCC100M: the new data in multimedia research,2016 +9,United States,YFCC100M,yfcc_100m,39.1254938,-77.22293475,National Institute of Standards and Technology,edu,36631dcbb9452ea3d35b19b2de6ef709022531a6,citation,https://pdfs.semanticscholar.org/0109/93ae9742f7f4c40763a25ded237723de60b5.pdf,"TRECVID 2016 : Evaluating Video Search , Video Event Detection , Localization , and Hyperlinking",2016 +10,Ireland,YFCC100M,yfcc_100m,53.38522185,-6.25740874,Dublin City University,edu,36631dcbb9452ea3d35b19b2de6ef709022531a6,citation,https://pdfs.semanticscholar.org/0109/93ae9742f7f4c40763a25ded237723de60b5.pdf,"TRECVID 2016 : Evaluating Video Search , Video Event Detection , Localization , and Hyperlinking",2016 +11,Netherlands,YFCC100M,yfcc_100m,51.816701,5.865272,Radboud University,edu,36631dcbb9452ea3d35b19b2de6ef709022531a6,citation,https://pdfs.semanticscholar.org/0109/93ae9742f7f4c40763a25ded237723de60b5.pdf,"TRECVID 2016 : Evaluating Video Search , Video Event Detection , Localization , and Hyperlinking",2016 +12,Netherlands,YFCC100M,yfcc_100m,52.2380139,6.8566761,University of Twente,edu,36631dcbb9452ea3d35b19b2de6ef709022531a6,citation,https://pdfs.semanticscholar.org/0109/93ae9742f7f4c40763a25ded237723de60b5.pdf,"TRECVID 2016 : Evaluating Video Search , Video Event Detection , Localization , and Hyperlinking",2016 +13,France,YFCC100M,yfcc_100m,43.614386,7.071125,EURECOM,edu,36631dcbb9452ea3d35b19b2de6ef709022531a6,citation,https://pdfs.semanticscholar.org/0109/93ae9742f7f4c40763a25ded237723de60b5.pdf,"TRECVID 2016 : Evaluating Video Search , Video Event Detection , Localization , and Hyperlinking",2016 +14,China,YFCC100M,yfcc_100m,40.00229045,116.32098908,Tsinghua University,edu,788da403d220e2cc08dca9cffbe1f84b3c68469a,citation,https://arxiv.org/pdf/1708.06656.pdf,Causally Regularized Learning with Agnostic Data Selection Bias.,2018 +15,United 
States,YFCC100M,yfcc_100m,22.5447154,113.9357164,Tencent,company,788da403d220e2cc08dca9cffbe1f84b3c68469a,citation,https://arxiv.org/pdf/1708.06656.pdf,Causally Regularized Learning with Agnostic Data Selection Bias.,2018 +16,Italy,YFCC100M,yfcc_100m,45.069428,7.6889006,University of Turin,edu,61b17f719bab899dd50bcc3be9d55673255fe102,citation,https://arxiv.org/pdf/1608.02289.pdf,Detecting Sarcasm in Multimodal Social Platforms,2016 +17,United States,YFCC100M,yfcc_100m,40.7574714,-73.9877318,Yahoo,company,61b17f719bab899dd50bcc3be9d55673255fe102,citation,https://arxiv.org/pdf/1608.02289.pdf,Detecting Sarcasm in Multimodal Social Platforms,2016 +18,United States,YFCC100M,yfcc_100m,40.4441619,-79.94272826,Carnegie Mellon University,edu,2577211aeaaa1f2245ddc379564813bee3d46c06,citation,https://arxiv.org/pdf/1512.06974.pdf,Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels,2016 +19,United States,YFCC100M,yfcc_100m,47.6423318,-122.1369302,Microsoft,company,2577211aeaaa1f2245ddc379564813bee3d46c06,citation,https://arxiv.org/pdf/1512.06974.pdf,Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels,2016 +20,United States,YFCC100M,yfcc_100m,37.3936717,-122.0807262,Facebook,company,b6397f818f67faad6a36de8480212f6e7e82e71c,citation,,Tag Prediction at Flickr: A View from the Darkroom,2017 +21,United States,YFCC100M,yfcc_100m,47.6543238,-122.30800894,University of Washington,edu,b6397f818f67faad6a36de8480212f6e7e82e71c,citation,,Tag Prediction at Flickr: A View from the Darkroom,2017 +22,United States,YFCC100M,yfcc_100m,37.7749295,-122.4194155,"Yahoo Research, San Francisco, CA",company,b6397f818f67faad6a36de8480212f6e7e82e71c,citation,,Tag Prediction at Flickr: A View from the Darkroom,2017 +23,United States,YFCC100M,yfcc_100m,37.36883,-122.0363496,"Yahoo Research, Sunnyvale, CA, USA",edu,b6397f818f67faad6a36de8480212f6e7e82e71c,citation,,Tag Prediction at Flickr: A View from the Darkroom,2017 +24,Germany,YFCC100M,yfcc_100m,53.1474921,8.1817645,University of Oldenburg,edu,d3dae5c4f47a0457ebe2297d7e70432521c82cc6,citation,https://pdfs.semanticscholar.org/d3da/e5c4f47a0457ebe2297d7e70432521c82cc6.pdf,The Benchmarking Initiative for Multimedia Evaluation: MediaEval 2016,2017 +25,Netherlands,YFCC100M,yfcc_100m,51.816701,5.865272,Radboud University,edu,d3dae5c4f47a0457ebe2297d7e70432521c82cc6,citation,https://pdfs.semanticscholar.org/d3da/e5c4f47a0457ebe2297d7e70432521c82cc6.pdf,The Benchmarking Initiative for Multimedia Evaluation: MediaEval 2016,2017 +26,United States,YFCC100M,yfcc_100m,42.57054745,-88.55578627,University of Geneva,edu,d3dae5c4f47a0457ebe2297d7e70432521c82cc6,citation,https://pdfs.semanticscholar.org/d3da/e5c4f47a0457ebe2297d7e70432521c82cc6.pdf,The Benchmarking Initiative for Multimedia Evaluation: MediaEval 2016,2017 +27,Ireland,YFCC100M,yfcc_100m,53.38522185,-6.25740874,Dublin City University,edu,d3dae5c4f47a0457ebe2297d7e70432521c82cc6,citation,https://pdfs.semanticscholar.org/d3da/e5c4f47a0457ebe2297d7e70432521c82cc6.pdf,The Benchmarking Initiative for Multimedia Evaluation: MediaEval 2016,2017 +28,United Kingdom,YFCC100M,yfcc_100m,51.24303255,-0.59001382,University of Surrey,edu,8a5be2b370c5a1df06e1063b306b2874706c24dc,citation,http://epubs.surrey.ac.uk/814067/1/konstanz-natural-video.pdf,The Konstanz natural video database (KoNViD-1k),2017 +29,Germany,YFCC100M,yfcc_100m,47.689426,9.1868777,University of 
Konstanz,edu,8a5be2b370c5a1df06e1063b306b2874706c24dc,citation,http://epubs.surrey.ac.uk/814067/1/konstanz-natural-video.pdf,The Konstanz natural video database (KoNViD-1k),2017 +30,Hungary,YFCC100M,yfcc_100m,47.4782828,19.0521075,Hungarian Academy of Sciences,edu,8a5be2b370c5a1df06e1063b306b2874706c24dc,citation,http://epubs.surrey.ac.uk/814067/1/konstanz-natural-video.pdf,The Konstanz natural video database (KoNViD-1k),2017 +31,Australia,YFCC100M,yfcc_100m,-33.8809651,151.20107299,University of Technology Sydney,edu,062d67af7677db086ef35186dc936b4511f155d7,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Chang_They_Are_Not_CVPR_2016_paper.pdf,They are Not Equally Reliable: Semantic Event Search Using Differentiated Concept Classifiers,2016 +32,United States,YFCC100M,yfcc_100m,40.4441619,-79.94272826,Carnegie Mellon University,edu,062d67af7677db086ef35186dc936b4511f155d7,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Chang_They_Are_Not_CVPR_2016_paper.pdf,They are Not Equally Reliable: Semantic Event Search Using Differentiated Concept Classifiers,2016 +33,United States,YFCC100M,yfcc_100m,47.6543238,-122.30800894,University of Washington,edu,697f0e24f24b016cef9474db485fe61a667f07b8,citation,https://arxiv.org/pdf/1802.02568.pdf,VISER: Visual Self-Regularization,2018 +34,United States,YFCC100M,yfcc_100m,32.970001,-96.7054311,Yahoo Research,company,697f0e24f24b016cef9474db485fe61a667f07b8,citation,https://arxiv.org/pdf/1802.02568.pdf,VISER: Visual Self-Regularization,2018 +35,United States,YFCC100M,yfcc_100m,40.4439789,-79.9464634,Intel Labs,company,5f96af88dfef2bff4ed8a49ceca909efb701d1d5,citation,https://pdfs.semanticscholar.org/6d3f/b3ef83a5d5a905250a1ec986e720ae422ed4.pdf,Addressing the Dark Side of Vision Research: Storage,2017 +36,United States,YFCC100M,yfcc_100m,40.4441619,-79.94272826,Carnegie Mellon University,edu,db989600b1857cea9abd14dba9c10808030c7d33,citation,,Delving Deep into Personal Photo and Video Search,2017 +37,United States,YFCC100M,yfcc_100m,42.718568,-84.47791571,Michigan State University,edu,db989600b1857cea9abd14dba9c10808030c7d33,citation,,Delving Deep into Personal Photo and Video Search,2017 +38,United States,YFCC100M,yfcc_100m,40.7127753,-74.0059728,"Yahoo Research, New York City, NY, USA",edu,db989600b1857cea9abd14dba9c10808030c7d33,citation,,Delving Deep into Personal Photo and Video Search,2017 +39,United States,YFCC100M,yfcc_100m,37.7749295,-122.4194155,"Yahoo Research, San Francisco, CA",company,db989600b1857cea9abd14dba9c10808030c7d33,citation,,Delving Deep into Personal Photo and Video Search,2017 +40,United States,YFCC100M,yfcc_100m,37.36883,-122.0363496,"Yahoo Research, Sunnyvale, CA, USA",edu,db989600b1857cea9abd14dba9c10808030c7d33,citation,,Delving Deep into Personal Photo and Video Search,2017 +41,United States,YFCC100M,yfcc_100m,41.2097516,-73.8026467,IBM T.J. 
Watson Research Center,company,9e1b0f50417867317a8cb8fe35c6b2617ad9641e,citation,https://arxiv.org/pdf/1901.10436.pdf,Diversity in Faces,2019 +42,United States,YFCC100M,yfcc_100m,32.87935255,-117.23110049,"University of California, San Diego",edu,a9be20954e9177d8b2bc39747acdea4f5496f394,citation,http://acsweb.ucsd.edu/~yuw176/report/cvpr_2016.pdf,Event-Specific Image Importance,2016 +43,United States,YFCC100M,yfcc_100m,47.6423318,-122.1369302,Microsoft,company,9bbc952adb3e3c6091d45d800e806d3373a52bac,citation,https://pdfs.semanticscholar.org/9bbc/952adb3e3c6091d45d800e806d3373a52bac.pdf,Learning Visual Classifiers using Human-centric Annotations,2015 +44,Singapore,YFCC100M,yfcc_100m,1.29500195,103.84909214,Singapore Management University,edu,c8b4beb3dd4d6594fcad58de0394c731d112780f,citation,https://pdfs.semanticscholar.org/c8b4/beb3dd4d6594fcad58de0394c731d112780f.pdf,Leveraging Multimodal Semantics and Sentiments Information in Event Understanding and Summarization,2017 +45,Canada,YFCC100M,yfcc_100m,43.6129484,-79.5590303,Samsung Electronics,edu,c8b4beb3dd4d6594fcad58de0394c731d112780f,citation,https://pdfs.semanticscholar.org/c8b4/beb3dd4d6594fcad58de0394c731d112780f.pdf,Leveraging Multimodal Semantics and Sentiments Information in Event Understanding and Summarization,2017 +46,Singapore,YFCC100M,yfcc_100m,1.2962018,103.77689944,National University of Singapore,edu,c8b4beb3dd4d6594fcad58de0394c731d112780f,citation,https://pdfs.semanticscholar.org/c8b4/beb3dd4d6594fcad58de0394c731d112780f.pdf,Leveraging Multimodal Semantics and Sentiments Information in Event Understanding and Summarization,2017 +47,United States,YFCC100M,yfcc_100m,40.7574714,-73.9877318,Yahoo,company,f0f876b5bf3d442ef9eb017a6fa873bc5d5830c8,citation,https://arxiv.org/pdf/1604.06480.pdf,"LOH and behold: Web-scale visual search, recommendation and clustering using Locally Optimized Hashing",2016 +48,Australia,YFCC100M,yfcc_100m,-37.7963689,144.9611738,The University of Melbourne,edu,3ad6bd5c34b0866019b54f5976d644326069cb3d,citation,http://people.eng.unimelb.edu.au/limk2/2016-ICAPS-groupTourRec.pdf,Towards next generation touring: personalized group tours,2016 +49,Australia,YFCC100M,yfcc_100m,-33.917347,151.2312675,National ICT Australia,edu,3ad6bd5c34b0866019b54f5976d644326069cb3d,citation,http://people.eng.unimelb.edu.au/limk2/2016-ICAPS-groupTourRec.pdf,Towards next generation touring: personalized group tours,2016 +50,Australia,YFCC100M,yfcc_100m,-37.8087465,144.9638875,RMIT University,edu,3ad6bd5c34b0866019b54f5976d644326069cb3d,citation,http://people.eng.unimelb.edu.au/limk2/2016-ICAPS-groupTourRec.pdf,Towards next generation touring: personalized group tours,2016 +51,Denmark,YFCC100M,yfcc_100m,55.659635,12.590958,IT University of Copenhagen,edu,92fb2cb7f9a54360ea4442f902472aded5e88c74,citation,https://pure.itu.dk/portal/files/82406569/tmm_2017_blackthorn.pdf,Blackthorn: Large-Scale Interactive Multimodal Learning,2018 +52,Netherlands,YFCC100M,yfcc_100m,52.3553655,4.9501644,University of Amsterdam,edu,92fb2cb7f9a54360ea4442f902472aded5e88c74,citation,https://pure.itu.dk/portal/files/82406569/tmm_2017_blackthorn.pdf,Blackthorn: Large-Scale Interactive Multimodal Learning,2018 +53,Singapore,YFCC100M,yfcc_100m,1.2962018,103.77689944,National University of Singapore,edu,f2cbdd5f24c2d6a4f33734636cc220f0825042f0,citation,https://arxiv.org/pdf/1708.00634.pdf,Dual-Glance Model for Deciphering Social Relationships,2017 +54,United States,YFCC100M,yfcc_100m,44.97308605,-93.23708813,University of 
Minnesota,edu,f2cbdd5f24c2d6a4f33734636cc220f0825042f0,citation,https://arxiv.org/pdf/1708.00634.pdf,Dual-Glance Model for Deciphering Social Relationships,2017 +55,United States,YFCC100M,yfcc_100m,40.4441619,-79.94272826,Carnegie Mellon University,edu,d0ac9913a3b1784f94446db2f1fb4cf3afda151f,citation,https://arxiv.org/pdf/1607.04780.pdf,Exploiting Multi-modal Curriculum in Noisy Web Data for Large-scale Concept Learning,2016 +56,China,YFCC100M,yfcc_100m,34.250803,108.983693,Xi’an Jiaotong University,edu,d0ac9913a3b1784f94446db2f1fb4cf3afda151f,citation,https://arxiv.org/pdf/1607.04780.pdf,Exploiting Multi-modal Curriculum in Noisy Web Data for Large-scale Concept Learning,2016 +57,Netherlands,YFCC100M,yfcc_100m,52.0021256,4.3732982,"Delft University of Technology, Netherlands",edu,5674ace2c666f6af53a2a58279ade6ebd271e8c7,citation,https://pdfs.semanticscholar.org/5e11/24345969a536fd5fa78db05b6149ea262a69.pdf,Exploiting Visual-based Intent Classification for Diverse Social Image Retrieval,2017 +58,Netherlands,YFCC100M,yfcc_100m,51.816701,5.865272,Radboud University,edu,5674ace2c666f6af53a2a58279ade6ebd271e8c7,citation,https://pdfs.semanticscholar.org/5e11/24345969a536fd5fa78db05b6149ea262a69.pdf,Exploiting Visual-based Intent Classification for Diverse Social Image Retrieval,2017 +59,China,YFCC100M,yfcc_100m,34.2469152,108.91061982,Northwestern Polytechnical University,edu,5ed63317cdef429f77499d9de0e58402ed1f687e,citation,https://arxiv.org/pdf/1702.05878.pdf,From Photo Streams to Evolving Situations,2017 +60,Thailand,YFCC100M,yfcc_100m,13.7972777,100.3263216,Mahidol University,edu,5ed63317cdef429f77499d9de0e58402ed1f687e,citation,https://arxiv.org/pdf/1702.05878.pdf,From Photo Streams to Evolving Situations,2017 +61,United States,YFCC100M,yfcc_100m,38.0333742,-84.5017758,University of Kentucky,edu,a851f32d4a4bffd6f95ac67c2ef1b25b8c4e5480,citation,http://bmvc2018.org/contents/papers/0586.pdf,Learning Geo-Temporal Image Features.,2018 +62,United States,YFCC100M,yfcc_100m,38.6480445,-90.3099667,Washington University,edu,a851f32d4a4bffd6f95ac67c2ef1b25b8c4e5480,citation,http://bmvc2018.org/contents/papers/0586.pdf,Learning Geo-Temporal Image Features.,2018 +63,Canada,YFCC100M,yfcc_100m,48.4634067,-123.3116935,University of Victoria,edu,8a2e3453d5f88ce6ce73cc7731800cd512f95e64,citation,https://arxiv.org/pdf/1711.05971.pdf,Learning to Find Good Correspondences,2018 +64,Austria,YFCC100M,yfcc_100m,47.05821,15.46019568,Graz University of Technology,edu,8a2e3453d5f88ce6ce73cc7731800cd512f95e64,citation,https://arxiv.org/pdf/1711.05971.pdf,Learning to Find Good Correspondences,2018 +65,Netherlands,YFCC100M,yfcc_100m,52.356678,4.95187,"Centrum Wiskunde & Informatica, Amsterdam, Netherlands",edu,cbd0f4006df1b2661f2c3a711d95727d61756afe,citation,,Multimodal Classification of Moderated Online Pro-Eating Disorder Content,2017 +66,United States,YFCC100M,yfcc_100m,33.776033,-84.39884086,Georgia Institute of Technology,edu,cbd0f4006df1b2661f2c3a711d95727d61756afe,citation,,Multimodal Classification of Moderated Online Pro-Eating Disorder Content,2017 +67,United States,YFCC100M,yfcc_100m,33.7756178,-84.396285,Georgia Tech,edu,cbd0f4006df1b2661f2c3a711d95727d61756afe,citation,,Multimodal Classification of Moderated Online Pro-Eating Disorder Content,2017 +68,United States,YFCC100M,yfcc_100m,37.7749295,-122.4194155,"Yahoo Research, San Francisco, CA",company,cbd0f4006df1b2661f2c3a711d95727d61756afe,citation,,Multimodal Classification of Moderated Online Pro-Eating Disorder Content,2017 
+69,Australia,YFCC100M,yfcc_100m,-37.7963689,144.9611738,The University of Melbourne,edu,26861e41e5b44774a2801e1cd76fd56126bbe257,citation,https://pdfs.semanticscholar.org/2686/1e41e5b44774a2801e1cd76fd56126bbe257.pdf,Personalized Tour Recommendation Based on User Interests and Points of Interest Visit Durations,2015 +70,Australia,YFCC100M,yfcc_100m,-33.917347,151.2312675,National ICT Australia,edu,26861e41e5b44774a2801e1cd76fd56126bbe257,citation,https://pdfs.semanticscholar.org/2686/1e41e5b44774a2801e1cd76fd56126bbe257.pdf,Personalized Tour Recommendation Based on User Interests and Points of Interest Visit Durations,2015 +71,Australia,YFCC100M,yfcc_100m,-33.8809651,151.20107299,University of Technology Sydney,edu,fc8fb68a7e3b79c37108588671c0e1abf374f501,citation,https://cs.uwaterloo.ca/~y328yu/mypapers/pami17.pdf,Semantic Pooling for Complex Event Analysis in Untrimmed Videos,2017 +72,United States,YFCC100M,yfcc_100m,40.4441619,-79.94272826,Carnegie Mellon University,edu,fc8fb68a7e3b79c37108588671c0e1abf374f501,citation,https://cs.uwaterloo.ca/~y328yu/mypapers/pami17.pdf,Semantic Pooling for Complex Event Analysis in Untrimmed Videos,2017 +73,United States,YFCC100M,yfcc_100m,37.4585796,-122.17560525,SRI International,edu,33737f966cca541d5dbfb72906da2794c692b65b,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w28/papers/Mensink_Spotting_Audio-Visual_Inconsistencies_CVPR_2017_paper.pdf,Spotting Audio-Visual Inconsistencies (SAVI) in Manipulated Video,2017 +74,Netherlands,YFCC100M,yfcc_100m,52.3553655,4.9501644,University of Amsterdam,edu,33737f966cca541d5dbfb72906da2794c692b65b,citation,http://openaccess.thecvf.com/content_cvpr_2017_workshops/w28/papers/Mensink_Spotting_Audio-Visual_Inconsistencies_CVPR_2017_paper.pdf,Spotting Audio-Visual Inconsistencies (SAVI) in Manipulated Video,2017 +75,Australia,YFCC100M,yfcc_100m,-35.2776999,149.118527,Australian National University,edu,2ef0adfaf84def97e88ae77f887f4497ddc9ccbb,citation,https://arxiv.org/pdf/1706.09067.pdf,Structured Recommendation,2017 +76,Australia,YFCC100M,yfcc_100m,-35.2776999,149.118527,CSIRO,edu,2ef0adfaf84def97e88ae77f887f4497ddc9ccbb,citation,https://arxiv.org/pdf/1706.09067.pdf,Structured Recommendation,2017 +77,Singapore,YFCC100M,yfcc_100m,1.2962018,103.77689944,National University of Singapore,edu,6e50c32f7244e3556eb879f24b7de8410f3177f6,citation,https://arxiv.org/pdf/1812.05917.pdf,Visual Social Relationship Recognition,2018 +78,United States,YFCC100M,yfcc_100m,44.97399,-93.2277285,University of Minnesota-Twin Cities,edu,6e50c32f7244e3556eb879f24b7de8410f3177f6,citation,https://arxiv.org/pdf/1812.05917.pdf,Visual Social Relationship Recognition,2018 +79,Australia,YFCC100M,yfcc_100m,-37.7963689,144.9611738,The University of Melbourne,edu,24301df85a669c86ae58962b5645b04a66c63cb1,citation,https://arxiv.org/pdf/1808.08023.pdf,A Jointly Learned Context-Aware Place of Interest Embedding for Trip Recommendations,2018 +80,United States,YFCC100M,yfcc_100m,40.4441619,-79.94272826,Carnegie Mellon University,edu,f1b35a675017b9eabd70a4bb4ec90a61117e4ad2,citation,http://www.cs.cmu.edu/~yunwang/papers/interspeech17.pdf,A Transfer Learning Based Feature Extractor for Polyphonic Sound Event Detection Using Connectionist Temporal Classification.,2017 +81,Australia,YFCC100M,yfcc_100m,-34.9189226,138.60423668,University of Adelaide,edu,86973c8c9adef3b6a36c31c2682f2179e3013ae1,citation,https://pdfs.semanticscholar.org/8697/3c8c9adef3b6a36c31c2682f2179e3013ae1.pdf,Active Learning from Noisy Tagged Images,2018 
+82,Italy,YFCC100M,yfcc_100m,45.069428,7.6889006,University of Turin,edu,06d10f906ac9023b5566c70a2600384b8c1b24c3,citation,https://arxiv.org/pdf/1711.00536.pdf,Beautiful and Damned. Combined Effect of Content Quality and Social Ties on User Engagement,2017 +83,Japan,YFCC100M,yfcc_100m,33.5934539,130.3557837,Information Technologies Institute,edu,ea985e35b36f05156f82ac2025ad3fe8037be0cd,citation,https://pdfs.semanticscholar.org/ea98/5e35b36f05156f82ac2025ad3fe8037be0cd.pdf,CERTH/CEA LIST at MediaEval Placing Task 2015,2015 +84,Japan,YFCC100M,yfcc_100m,35.6572957,139.54255868,Tokyo Denki University,edu,666300af8ffb8c903223f32f1fcc5c4674e2430b,citation,https://arxiv.org/pdf/1703.07920.pdf,Changing Fashion Cultures,2017 +85,China,YFCC100M,yfcc_100m,24.4399419,118.09301781,Xiamen University,edu,b3e50a64709a62628105546e392cf796f95ea0fb,citation,https://arxiv.org/pdf/1804.04312.pdf,Clustering via Boundary Erosion,2018 +86,Thailand,YFCC100M,yfcc_100m,13.65450525,100.49423171,Robotics Institute,edu,b1398234454ee3c9bc5a20f6d2d00232cb79622c,citation,https://pdfs.semanticscholar.org/b139/8234454ee3c9bc5a20f6d2d00232cb79622c.pdf,Combining Low-Density Separators with CNNs,2016 +87,Switzerland,YFCC100M,yfcc_100m,46.5190557,6.5667576,EPFL,edu,e8dbdd936c132a1cfb0ecdffce05292ee282263f,citation,http://wp.internetsociety.org/ndss/wp-content/uploads/sites/25/2018/03/NDSS2018_06B-1_Olteanu_Slides.pdf,Consensual and Privacy-Preserving Sharing of Multi-Subject and Interdependent Data,2018 +88,United Kingdom,YFCC100M,yfcc_100m,51.7534538,-1.25400997,University of Oxford,edu,20a1350815c4588a2380414bc78a7e215a2e3955,citation,https://arxiv.org/pdf/1807.05636.pdf,Cross Pixel Optical Flow Similarity for Self-Supervised Learning,2018 +89,United States,YFCC100M,yfcc_100m,37.43131385,-122.16936535,Stanford University,edu,1e54025a6b399bfc210a52a8c3314e8f570c2204,citation,https://arxiv.org/pdf/1511.07571.pdf,DenseCap: Fully Convolutional Localization Networks for Dense Captioning,2016 +90,Ireland,YFCC100M,yfcc_100m,53.308244,-6.2241652,University College Dublin,edu,cc45fb67772898c36519de565c9bd0d1d11f1435,citation,https://forensicsandsecurity.com/papers/EvaluatingFacialAgeEstimation.pdf,Evaluating Automated Facial Age Estimation Techniques for Digital Forensics,2018 +91,Italy,YFCC100M,yfcc_100m,46.0658836,11.1159894,University of Trento,edu,27f8b01e628f20ebfcb58d14ea40573d351bbaad,citation,https://pdfs.semanticscholar.org/27f8/b01e628f20ebfcb58d14ea40573d351bbaad.pdf,Events based Multimedia Indexing and Retrieval,2017 +92,Germany,YFCC100M,yfcc_100m,47.689426,9.1868777,University of Konstanz,edu,da30d5e0cf214c1d86f629081493fa55e5a27efc,citation,https://www.uni-konstanz.de/mmsp/pubsys/publishedFiles/HoLiSa18.pdf,Expertise screening in crowdsourcing image quality,2018 +93,United States,YFCC100M,yfcc_100m,45.51181205,-122.68492999,Portland State University,edu,90eb833df9614da495712f4c1fbb65f8e7d9b356,citation,https://pdfs.semanticscholar.org/c12d/09f36feaa03a533d87eb3ceef5bc76989f05.pdf,Improved Scoring Models for Semantic Image Retrieval Using Scene Graphs,2017 +94,Australia,YFCC100M,yfcc_100m,-37.7963689,144.9611738,University of Melbourne,edu,c82840923eeded245a8dab2dd102d8b0cf96758a,citation,https://pdfs.semanticscholar.org/c828/40923eeded245a8dab2dd102d8b0cf96758a.pdf,KDGAN: Knowledge Distillation with Generative Adversarial Networks,2018 +95,Germany,YFCC100M,yfcc_100m,47.689426,9.1868777,University of Konstanz,edu,a1ff747cf512c8156620d9c17cb6ed8d21a76ad6,citation,https://arxiv.org/pdf/1803.08489.pdf,KonIQ-10k: Towards an 
ecologically valid and large-scale IQA database,2018 +96,United States,YFCC100M,yfcc_100m,37.8687126,-122.25586815,"University of California, Berkeley",edu,35d181da0b939bdf3bdf579969e5fe69e277e03e,citation,https://arxiv.org/pdf/1612.06370.pdf,Learning Features by Watching Objects Move,2017 +97,Thailand,YFCC100M,yfcc_100m,13.65450525,100.49423171,Robotics Institute,edu,774ae9c6b2a83c6891b5aeeb169cfd462d45f715,citation,https://pdfs.semanticscholar.org/774a/e9c6b2a83c6891b5aeeb169cfd462d45f715.pdf,Learning from Small Sample Sets by Combining Unsupervised Meta-Training with CNNs,2016 +98,Australia,YFCC100M,yfcc_100m,-35.2776999,149.118527,Australian National University,edu,2ce4e06a9fe107ff29a34ed4a8771222cbaacc9c,citation,https://arxiv.org/pdf/1608.07051.pdf,Learning Points and Routes to Recommend Trajectories,2016 +99,United States,YFCC100M,yfcc_100m,40.4441619,-79.94272826,Carnegie Mellon University,edu,6eb5f375d67dd690ec3b134de7caecde461e8c72,citation,http://ijcai.org/Proceedings/16/Papers/250.pdf,Learning to detect concepts from webly-labeled video data,2016 +100,China,YFCC100M,yfcc_100m,34.250803,108.983693,Xi’an Jiaotong University,edu,6eb5f375d67dd690ec3b134de7caecde461e8c72,citation,http://ijcai.org/Proceedings/16/Papers/250.pdf,Learning to detect concepts from webly-labeled video data,2016 +101,United States,YFCC100M,yfcc_100m,40.4441619,-79.94272826,Carnegie Mellon University,edu,22954dd92a795d7f381465d1b353bcc41901430d,citation,https://arxiv.org/pdf/1604.04279.pdf,Learning Visual Storylines with Skipping Recurrent Neural Networks,2016 +102,United States,YFCC100M,yfcc_100m,40.4441619,-79.94272826,Carnegie Mellon University,edu,ceac30061d8f7985987448f4712c49eeb98efad2,citation,https://arxiv.org/pdf/1708.01336.pdf,MemexQA: Visual Memex Question Answering,2017 diff --git a/site/datasets/verified/youtube_makeup.csv b/site/datasets/verified/youtube_makeup.csv index 9ea99ac9..e46b7489 100644 --- a/site/datasets/verified/youtube_makeup.csv +++ b/site/datasets/verified/youtube_makeup.csv @@ -1,2 +1,8 @@ id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year 0,,YMU,youtube_makeup,0.0,0.0,,,,main,,Can facial cosmetics affect the matching accuracy of face recognition systems?,2012 +1,France,YMU,youtube_makeup,43.614386,7.071125,EURECOM,edu,e1179a5746b4bf12e1c8a033192326bf7f670a4d,citation,http://www.eurecom.fr/en/publication/4494/download/mm-publi-4494.pdf,Facial makeup detection technique based on texture and shape analysis,2015 +2,France,YMU,youtube_makeup,43.6271655,7.0410917,Télécom ParisTech,edu,e1179a5746b4bf12e1c8a033192326bf7f670a4d,citation,http://www.eurecom.fr/en/publication/4494/download/mm-publi-4494.pdf,Facial makeup detection technique based on texture and shape analysis,2015 +3,United States,YMU,youtube_makeup,39.65404635,-79.96475355,West Virginia University,edu,55bc7abcef8266d76667896bbc652d081d00f797,citation,http://www.cse.msu.edu/~rossarun/pubs/ChenCosmeticsGenderAge_VISAPP2014.pdf,Impact of facial cosmetics on automatic gender and age estimation algorithms,2014 +4,United States,YMU,youtube_makeup,42.718568,-84.47791571,Michigan State University,edu,55bc7abcef8266d76667896bbc652d081d00f797,citation,http://www.cse.msu.edu/~rossarun/pubs/ChenCosmeticsGenderAge_VISAPP2014.pdf,Impact of facial cosmetics on automatic gender and age estimation algorithms,2014 +5,United States,YMU,youtube_makeup,41.70456775,-86.23822026,University of Notre 
Dame,edu,559795d3f3b096ceddc03720ba62d79d50eae300,citation,http://www3.nd.edu/~kwb/BarrBowyerFlynnTIFS_2014.pdf,Framework for Active Clustering With Ensembles,2014 +6,France,YMU,youtube_makeup,43.614386,7.071125,Eurecom Digital Security Department,edu,21bd9374c211749104232db33f0f71eab4df35d5,citation,http://www.eurecom.fr/en/publication/5184/download/sec-publi-5184.pdf,Integrating facial makeup detection into multimodal biometric user verification system,2017 diff --git a/site/public/about/attribution/index.html b/site/public/about/attribution/index.html index 34713c82..3afb30b2 100644 --- a/site/public/about/attribution/index.html +++ b/site/public/about/attribution/index.html @@ -71,9 +71,8 @@ url = {https://megapixels.cc/}, urldate = {2019-04-18} } -</pre><p>and include this license and attribution protocol within any derivative work.</p> -<p>If you publish data derived from MegaPixels, the original dataset creators should first be notified.</p> -<p>The MegaPixels dataset is made available under the Open Data Commons Attribution License (<a href="https://opendatacommons.org/licenses/by/1.0/">https://opendatacommons.org/licenses/by/1.0/</a>) and for academic use only.</p> +</pre><p>If you redistribute any data from this site, you must also include this <a href="assets/megapixels_license.pdf">license</a> in PDF format.</p> +<p>The MegaPixels dataset is made available under the Open Data Commons Attribution License (<a href="https://opendatacommons.org/licenses/by/1.0/">https://opendatacommons.org/licenses/by/1.0/</a>) and for academic use only.</p> <p>READABLE SUMMARY OF Open Data Commons Attribution License</p> <p>You are free:</p> <blockquote><p>To Share: To copy, distribute and use the dataset diff --git a/site/public/about/index.html b/site/public/about/index.html index 07c64438..2c008504 100644 --- a/site/public/about/index.html +++ b/site/public/about/index.html @@ -77,9 +77,8 @@ <p><a href="https://asdf.us/">asdf.us</a></p> </div> </div><p>MegaPixels is an art and research project first launched in 2017 for an <a href="https://ahprojects.com/megapixels-glassroom/">installation</a> at Tactical Technology Collective's <a href="https://tacticaltech.org/pages/glass-room-london-press/">GlassRoom</a> about face recognition datasets. In 2018 MegaPixels was extended to cover pedestrian analysis datasets for a <a href="https://esc.mur.at/de/node/2370">commission by Elevate Arts festival</a> in Austria. Since then MegaPixels has evolved into a large-scale interrogation of hundreds of publicly-available face and person analysis datasets, the first of which launched on this site in April 2019.</p> -<p>MegaPixels aims to provide a critical perspective on machine learning image datasets, one that might otherwise escape academia and industry funded artificial intelligence think tanks that are often supported by the several of the same technology companies who have created datasets presented on this site.</p> +<p>MegaPixels aims to provide a critical perspective on machine learning image datasets, one that might otherwise escape academia and industry funded artificial intelligence think tanks that are often supported by the same technology companies who created many of the datasets presented on this site.</p> <p>MegaPixels is an independent project, designed as a public resource for educators, students, journalists, and researchers. Each dataset presented on this site undergoes a thorough review of its images, intent, and funding sources. 
Though the goals are similar to publishing an academic paper, MegaPixels is a website-first research project, with an academic publication to follow.</p> -<p>One of the main focuses of the dataset investigations presented on this site is to uncover where funding originated. Because of our emphasis on other researcher's funding sources, it is important that we are transparent about our own. This site and the past year of research have been primarily funded by a privacy art grant from Mozilla in 2018. The original MegaPixels installation in 2017 was built as a commission for and with support from Tactical Technology Collective and Mozilla. The research into pedestrian analysis datasets was funded by a commission from Elevate Arts, and continued development in 2019 is supported in part by a 1-year Researcher-in-Residence grant from Karlsruhe HfG, as well as lecture and workshop fees.</p> </section><section><div class='columns columns-3'><div class='column'><h5>Team</h5> <ul> <li>Adam Harvey: Concept, research and analysis, design, computer vision</li> diff --git a/site/public/about/updates/index.html b/site/public/about/updates/index.html new file mode 100644 index 00000000..6796e579 --- /dev/null +++ b/site/public/about/updates/index.html @@ -0,0 +1,97 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels: MegaPixels Site Updates</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="MegaPixels Site Updates" /> + <meta property="og:title" content="MegaPixels: MegaPixels Site Updates"/> + <meta property="og:type" content="website"/> + <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg" /> + <meta property="og:url" content="https://megapixels.cc/about/"/> + <meta property="og:site_name" content="MegaPixels" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/> + <meta name="apple-mobile-web-app-status-bar-style" content="black"> + <meta name="apple-mobile-web-app-capable" content="yes"> + + <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png"> + <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png"> + <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png"> + <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png"> + <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png"> + <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png"> + <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png"> + <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png"> + <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png"> + <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png"> + <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png"> + <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png"> + <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png"> + <link rel="manifest" href="/assets/img/favicon/manifest.json"> + <meta name="msapplication-TileColor" content="#ffffff"> + <meta 
name="msapplication-TileImage" content="/ms-icon-144x144.png"> + <meta name="theme-color" content="#ffffff"> + + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> + <link rel='stylesheet' href='/assets/css/mobile.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + </div> + </header> + <div class="content content-about"> + + <section><h1>Updates and Responses</h1> +<section class="about-menu"> +<ul> +<li><a href="/about/">About</a></li> +<li><a class="current" href="/about/updates/">Updates</a></li> +<li><a href="/about/press/">Press</a></li> +<li><a href="/about/attribution/">Attribution</a></li> +<li><a href="/about/legal/">Legal / Privacy</a></li> +</ul> +</section><p>Since publishing this project, several of the datasets have disappeared. Below is a chronicle of recent events related to the datasets on this site.</p> +<p>June 2019</p> +<ul> +<li>June 2: The Duke MTMC main webpage was deactivated and the entire dataset no longer appears to be available from Duke.</li> +<li>June 2: See also: <a href="https://reid-mct.github.io/2019/">https://reid-mct.github.io/2019/</a></li> +<li>June 1: The Brainwash face/head dataset has been taken down by its author after it was featured on this site.</li> +</ul> +<p>May 2019</p> +<ul> +<li>May 31: Semantic Scholar appears to be censoring citations used in this project. Two of the citations linking the Brainwash dataset to military research in China have been intentionally disabled.</li> +<li>May 28: The Microsoft Celeb (MS Celeb) face dataset website now returns a 404 error and all of the download links have been deactivated. It appears that Microsoft Research has shuttered access to its MS Celeb dataset. Yet it remains available, as of June 2, on <a href="https://ibug.doc.ic.ac.uk/resources/lightweight-face-recognition-challenge-workshop/">Imperial College London's website</a>.</li> +</ul> +</section> + + </div> + <footer> + <ul class="footer-left"> + <li><a href="/">MegaPixels.cc</a></li> + <li><a href="/datasets/">Datasets</a></li> + <li><a href="/about/">About</a></li> + <li><a href="/about/press/">Press</a></li> + <li><a href="/about/legal/">Legal and Privacy</a></li> + </ul> + <ul class="footer-right"> + <li>MegaPixels ©2017-19 <a href="https://ahprojects.com">Adam R. Harvey</a></li> + <li>Made with support from <a href="https://mozilla.org">Mozilla</a></li> + </ul> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
\ No newline at end of file diff --git a/site/public/datasets/index.html b/site/public/datasets/index.html index beff3c97..8a92beca 100644 --- a/site/public/datasets/index.html +++ b/site/public/datasets/index.html @@ -55,8 +55,8 @@ <div class='dataset-heading'> - <section><h1>Face Recognition Datasets</h1> -<p>Explore face recognition datasets contributing to the growing crisis of authoritarian biometric surveillance technologies. This first group of 5 datasets focuses on image usage connected to foreign surveillance and defense organizations.</p> + <section><h1>Dataset Analyses</h1> +<p>Explore face and person recognition datasets contributing to the growing crisis of authoritarian biometric surveillance. This first group of 5 datasets focuses on image usage connected to foreign surveillance and defense organizations. Since publishing this project in April 2019, the <a href="https://purl.stanford.edu/sx925dc9385">Brainwash</a>, <a href="http://vision.cs.duke.edu/DukeMTMC/">Duke MTMC</a>, and <a href="http://msceleb.org/">MS Celeb</a> datasets have been taken down by their authors. The <a href="https://vast.uccs.edu/Opensetface/">UCCS</a> dataset was temporarily deactivated due to metadata exposure, and the <a href="http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html">Town Centre data</a> remains active.</p> </section> </div>
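The verified-citation CSVs added in this diff (for example site/datasets/verified/youtube_makeup.csv above) share the column layout shown in that file's header row: id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year. As a minimal sketch of how such a file could be consumed outside the site build, assuming a local checkout of the repository and using only Python's standard library (the helper name citing_institutions is hypothetical and not part of the site's tooling):

```python
import csv
from collections import Counter
from pathlib import Path

# Columns observed in the verified CSVs in this diff:
# id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year

def citing_institutions(csv_path):
    """Tally (institution, country) pairs for rows marked as citations."""
    counts = Counter()
    with Path(csv_path).open(newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Skip the dataset's own "main" paper row and rows without a location name.
            if row["paper_type"] == "citation" and row["loc"]:
                counts[(row["loc"], row["country"])] += 1
    return counts

if __name__ == "__main__":
    # Path as it appears in this diff; adjust to the local checkout.
    for (loc, country), n in citing_institutions(
        "site/datasets/verified/youtube_makeup.csv"
    ).most_common():
        print(f"{n:3d}  {loc} ({country})")
```

The yfcc_100m.csv rows added above appear to follow the same column order, so the same sketch should apply to that file unchanged.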
