| 61084a25ebe736e8f6d7a6e53b2c20d9723c4608 | |
| 61f04606528ecf4a42b49e8ac2add2e9f92c0def | Deep Deformation Network for Object Landmark
Localization NEC Laboratories America, Department of Media Analytics |
| 614a7c42aae8946c7ad4c36b53290860f6256441 | 1
Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks |
| 0d88ab0250748410a1bc990b67ab2efb370ade5d | ERROR HANDLING IN MULTIMODAL BIOMETRIC SYSTEMS USING RELIABILITY MEASURES (ThuPmOR6) (EPFL, Switzerland) Plamen Prodanov |
| 0d467adaf936b112f570970c5210bdb3c626a717 | |
| 0d6b28691e1aa2a17ffaa98b9b38ac3140fb3306 | Review of Perceptual Resemblance of Local
Plastic Surgery Facial Images using Near Sets 1,2 Department of Computer Technology, YCCE Nagpur, India |
| 0db8e6eb861ed9a70305c1839eaef34f2c85bbaf | |
| 0dbf4232fcbd52eb4599dc0760b18fcc1e9546e9 | |
| 0d760e7d762fa449737ad51431f3ff938d6803fe | LCDet: Low-Complexity Fully-Convolutional Neural Networks for
Object Detection in Embedded Systems UC San Diego ∗ Gokce Dane Qualcomm Inc. UC San Diego Qualcomm Inc. UC San Diego |
| 0dd72887465046b0f8fc655793c6eaaac9c03a3d | Real-time Head Orientation from a Monocular
Camera using Deep Neural Network KAIST, Republic of Korea |
| 0d087aaa6e2753099789cd9943495fbbd08437c0 | |
| 0d8415a56660d3969449e77095be46ef0254a448 | |
| 0d735e7552af0d1dcd856a8740401916e54b7eee | |
| 0d06b3a4132d8a2effed115a89617e0a702c957a | |
| 0d2dd4fc016cb6a517d8fb43a7cc3ff62964832e | |
| 0d33b6c8b4d1a3cb6d669b4b8c11c2a54c203d1a | Detection and Tracking of Faces in Videos: A Review of Related Work
© 2016 IJEDR | Volume 4, Issue 2 | ISSN: 2321-9939. 1Student, 2Assistant Professor, 1,2Dept. of Electronics & Comm., S S I E T, Punjab, India |
| 956317de62bd3024d4ea5a62effe8d6623a64e53 | Lighting Analysis and Texture Modification of 3D Human
Face Scans Author Zhang, Paul, Zhao, Sanqiang, Gao, Yongsheng Published 2007 Conference Title Digital Image Computing Techniques and Applications DOI https://doi.org/10.1109/DICTA.2007.4426825 Copyright Statement © 2007 IEEE. Personal use of this material is permitted. However, permission to reprint/ republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. Downloaded from http://hdl.handle.net/10072/17889 Link to published version http://www.ieee.org/ Griffith Research Online https://research-repository.griffith.edu.au |
| 956c634343e49319a5e3cba4f2bd2360bdcbc075 | IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 36, NO. 4, AUGUST 2006
873 A Novel Incremental Principal Component Analysis and Its Application for Face Recognition |
| 958c599a6f01678513849637bec5dc5dba592394 | Generalized Zero-Shot Learning for Action Recognition with Web-Scale Video Data |
| 59fc69b3bc4759eef1347161e1248e886702f8f7 | Final Report of Final Year Project
HKU-Face: A Large Scale Dataset for Deep Face Recognition 3035141841 COMP4801 Final Year Project Project Code: 17007 |
| 59bfeac0635d3f1f4891106ae0262b81841b06e4 | Face Verification Using the LARK Face
Representation |
| 590628a9584e500f3e7f349ba7e2046c8c273fcf | |
| 59eefa01c067a33a0b9bad31c882e2710748ea24 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Fast Landmark Localization with 3D Component Reconstruction and CNN for Cross-Pose Recognition |
| 5945464d47549e8dcaec37ad41471aa70001907f | Every Moment Counts: Dense Detailed Labeling of Actions in Complex Videos |
| 59c9d416f7b3d33141cc94567925a447d0662d80 | Universität des Saarlandes
Max-Planck-Institut für Informatik AG5 Matrix factorization over max-times algebra for data mining Master's Thesis in Computer Science, November 2013 |
| 59a35b63cf845ebf0ba31c290423e24eb822d245 | The FaceSketchID System: Matching Facial
Composites to Mugshots |
| 59f325e63f21b95d2b4e2700c461f0136aecc171 | 978-1-4577-1302-6/11/$26.00 ©2011 IEEE FOR FACE RECOGNITION |
| 5922e26c9eaaee92d1d70eae36275bb226ecdb2e | Boosting Classification Based Similarity
Learning by using Standard Distances Departament d’Informàtica, Universitat de València Av. de la Universitat s/n. 46100-Burjassot (Spain) |
| 59031a35b0727925f8c47c3b2194224323489d68 | Sparse Variation Dictionary Learning for Face Recognition with A Single
Training Sample Per Person ETH Zurich Switzerland |
| 926c67a611824bc5ba67db11db9c05626e79de96 | 1913
Enhancing Bilinear Subspace Learning by Element Rearrangement |
| 923ede53b0842619831e94c7150e0fc4104e62f7 | 978-1-4799-9988-0/16/$31.00 ©2016 IEEE ICASSP 2016 |
| 92b61b09d2eed4937058d0f9494d9efeddc39002 | Under review in IJCV: BoxCars: Improving Vehicle Fine-Grained Recognition using 3D Bounding Boxes in Traffic Surveillance |
| 920a92900fbff22fdaaef4b128ca3ca8e8d54c3e | LEARNING PATTERN TRANSFORMATION MANIFOLDS WITH PARAMETRIC ATOM
SELECTION École Polytechnique Fédérale de Lausanne (EPFL) Signal Processing Laboratory (LTS4) Switzerland-1015 Lausanne |
| 9207671d9e2b668c065e06d9f58f597601039e5e | Face Detection Using a 3D Model on
Face Keypoints |
| 9282239846d79a29392aa71fc24880651826af72 | Antonakos et al. EURASIP Journal on Image and Video Processing 2014, 2014:14
http://jivp.eurasipjournals.com/content/2014/1/14 RESEARCH Open Access Classification of extreme facial events in sign language videos |
| 92c2dd6b3ac9227fce0a960093ca30678bceb364 | Provided by the author(s) and NUI Galway in accordance with publisher policies. Please cite the published
version when available. Title On color texture normalization for active appearance models Author(s) Ionita, Mircea C.; Corcoran, Peter M.; Buzuloiu, Vasile Publication Date 2009-05-12 Publication Information Ionita, M. C., Corcoran, P., & Buzuloiu, V. (2009). On Color Texture Normalization for Active Appearance Models. Image Processing, IEEE Transactions on, 18(6), 1372-1378. Publisher IEEE Link to publisher's version http://dx.doi.org/10.1109/TIP.2009.2017163 Item record http://hdl.handle.net/10379/1350 Some rights reserved. For more information, please see the item record link above. Downloaded 2018-11-06T00:40:53Z |
| 92fada7564d572b72fd3be09ea3c39373df3e27c | |
| 927ad0dceacce2bb482b96f42f2fe2ad1873f37a | Interest-Point based Face Recognition System
Spain 1. Introduction Among all applications of face recognition systems, surveillance is one of the most challenging ones. In such an application, the goal is to detect known criminals in crowded environments, like airports or train stations. Some attempts have been made, like those of Tokyo (Engadget, 2006) or Mainz (Deutsche Welle, 2006), with limited success. The first task to be carried out in an automatic surveillance system involves the detection of all the faces in the images taken by the video cameras. Current face detection algorithms are highly reliable and thus, they will not be the focus of our work. Some of the best performing examples are the Viola-Jones algorithm (Viola & Jones, 2004) or the Schneiderman-Kanade algorithm (Schneiderman & Kanade, 2000). The second task to be carried out involves the comparison of all detected faces against the database of known criminals. The ideal behaviour of an automatic system performing this task would be to get a 100% correct identification rate, but this behaviour is far from the capabilities of current face recognition algorithms. Assuming that there will be false identifications, supervised surveillance systems seem to be the most realistic option: the automatic system issues an alarm whenever it detects a possible match with a criminal, and a human decides whether it is a false alarm or not. Figure 1 shows an example. However, even in a supervised scenario the requirements for the face recognition algorithm are extremely high: the false alarm rate must be low enough to allow the human operator to cope with it; and the percentage of undetected criminals must be kept to a minimum in order to ensure security. Fulfilling both requirements at the same time is the main challenge, as a reduction in false alarm rate usually implies an increase in the percentage of undetected criminals. 
We propose a novel face recognition system based on the use of interest point detectors and local descriptors. In order to assess the performance of our system, and particularly its performance in a surveillance application, we present experimental results in terms of Receiver Operating Characteristic (ROC) curves. From the experimental results, it becomes clear that our system outperforms classical appearance-based approaches. |
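The ROC evaluation this abstract relies on can be reproduced with a short threshold-sweeping routine; the following is a minimal NumPy sketch with hypothetical match scores, not the authors' code.

```python
import numpy as np

def roc_curve(scores, labels):
    """Sweep a decision threshold over match scores and return
    (false-alarm rate, detection rate) pairs for plotting a ROC curve.
    `labels` holds 1 for genuine (same-person) pairs, 0 for impostor pairs."""
    order = np.argsort(-scores)          # descending: most confident first
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)               # genuine pairs accepted so far
    fp = np.cumsum(1 - labels)           # impostor pairs accepted so far
    detection_rate = tp / labels.sum()
    false_alarm_rate = fp / (len(labels) - labels.sum())
    return false_alarm_rate, detection_rate

# Hypothetical similarity scores from an interest-point matcher
scores = np.array([0.92, 0.81, 0.40, 0.35, 0.10])
labels = np.array([1, 1, 0, 1, 0])
far, dr = roc_curve(scores, labels)
```

Each (far, dr) pair corresponds to one operating point, which is how the trade-off between false alarms and undetected criminals discussed above is visualized.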
| 929bd1d11d4f9cbc638779fbaf958f0efb82e603 | This is the author’s version of a work that was submitted/accepted for publication in the following source: Zhang, Ligang & Tjondronegoro, Dian W. (2010) Improving the performance of facial expression recognition using dynamic, subtle and regional features. In Kok Wai Wong, B. Sumudu U. Mendis, & Abdesselam Bouzerdoum (Eds.) Neural Information Processing. Models and Applications, Lecture Notes in Computer Science, Sydney, N.S.W, pp. 582-589. This file was downloaded from: http://eprints.qut.edu.au/43788/ © Copyright 2010 Springer-Verlag Conference proceedings published, by Springer Verlag, will be available via Lecture Notes in Computer Science http://www.springer.de/comp/lncs/ Notice: Changes introduced as a result of publishing processes such as copy-editing and formatting may not be reflected in this document. For a definitive version of this work, please refer to the published source: http://dx.doi.org/10.1007/978-3-642-17534-3_72 |
| 0c36c988acc9ec239953ff1b3931799af388ef70 | Face Detection Using Improved Faster RCNN
Huawei Cloud BU, China Figure 1. Face detection results of FDNet1.0 |
| 0c5ddfa02982dcad47704888b271997c4de0674b | |
| 0cccf576050f493c8b8fec9ee0238277c0cfd69a | |
| 0c069a870367b54dd06d0da63b1e3a900a257298 | Author manuscript, published in "ICANN 2011 - International Conference on Artificial Neural Networks (2011)" |
| 0c75c7c54eec85e962b1720755381cdca3f57dfb | 2212
Face Landmark Fitting via Optimized Part Mixtures and Cascaded Deformable Model |
| 0ca36ecaf4015ca4095e07f0302d28a5d9424254 | Improving Bag-of-Visual-Words Towards Effective Facial Expressive
Image Classification 1Univ. Grenoble Alpes, CNRS, Grenoble INP∗ , GIPSA-lab, 38000 Grenoble, France Keywords: BoVW, k-means++, Relative Conjunction Matrix, SIFT, Spatial Pyramids, TF.IDF. |
| 0cfca73806f443188632266513bac6aaf6923fa8 | Predictive Uncertainty in Large Scale Classification
using Dropout - Stochastic Gradient Hamiltonian Monte Carlo. Vergara, Diego∗1, Hernández, Sergio∗2, Valdenegro-Toro, Matías∗∗3 and Jorquera, Felipe∗4. ∗Laboratorio de Procesamiento de Información Geoespacial, Universidad Católica del Maule, Chile. ∗∗German Research Centre for Artificial Intelligence, Bremen, Germany. |
| 0c54e9ac43d2d3bab1543c43ee137fc47b77276e | |
| 0c5afb209b647456e99ce42a6d9d177764f9a0dd | 97
Recognizing Action Units for Facial Expression Analysis |
| 0c377fcbc3bbd35386b6ed4768beda7b5111eec6 | 258
A Unified Probabilistic Framework for Spontaneous Facial Action Modeling and Understanding |
| 0cb2dd5f178e3a297a0c33068961018659d0f443 | |
| 0cf7da0df64557a4774100f6fde898bc4a3c4840 | Shape Matching and Object Recognition using Low Distortion Correspondences
Department of Electrical Engineering and Computer Science U.C. Berkeley |
| 0c4659b35ec2518914da924e692deb37e96d6206 | 1236
Registering a MultiSensor Ensemble of Images |
| 0c53ef79bb8e5ba4e6a8ebad6d453ecf3672926d | SUBMITTED TO JOURNAL
Weakly Supervised PatchNets: Describing and Aggregating Local Patches for Scene Recognition |
| 0c60eebe10b56dbffe66bb3812793dd514865935 | |
| 6601a0906e503a6221d2e0f2ca8c3f544a4adab7 | Detection of Ancient Settlement Mounds: Archaeological Survey Based on the SRTM Terrain Model B.H. Menze, J.A. Ur, and A.G. Sherratt |
| 660b73b0f39d4e644bf13a1745d6ee74424d4a16 | |
| 66d512342355fb77a4450decc89977efe7e55fa2 | Under review as a conference paper at ICLR 2018
LEARNING NON-LINEAR TRANSFORM WITH DISCRIMINATIVE AND MINIMUM INFORMATION LOSS PRIORS Anonymous authors Paper under double-blind review |
| 6643a7feebd0479916d94fb9186e403a4e5f7cbf | Chapter 8
3D Face Recognition |
| 661ca4bbb49bb496f56311e9d4263dfac8eb96e9 | Datasheets for Datasets |
| 66d087f3dd2e19ffe340c26ef17efe0062a59290 | Dog Breed Identification
Brian Mittl Vijay Singh |
| 66a2c229ac82e38f1b7c77a786d8cf0d7e369598 | Proceedings of the 2016 Industrial and Systems Engineering Research Conference
H. Yang, Z. Kong, and MD Sarder, eds. A Probabilistic Adaptive Search System for Exploring the Face Space Escuela Superior Politecnica del Litoral (ESPOL) Guayaquil-Ecuador |
| 66886997988358847615375ba7d6e9eb0f1bb27f | |
| 66837add89caffd9c91430820f49adb5d3f40930 | |
| 66a9935e958a779a3a2267c85ecb69fbbb75b8dc | Fast and Robust Fixed-Rank Matrix Recovery Antonio Lopez |
| 66533107f9abdc7d1cb8f8795025fc7e78eb1122 | Visual Servoing for a User's Mouth with Effective Emotion Reading in a Wheelchair-based Robotic Arm KAIST, Taejon, Korea; ETRI, Taejon, Korea (abstract text corrupted in extraction; it describes the wheelchair-based robotic arm system KARES, its human-robot interaction techniques, and visual servoing with emotion reading from facial expressions to serve a beverage to the user) |
| 66810438bfb52367e3f6f62c24f5bc127cf92e56 | Face Recognition of Illumination Tolerance in 2D
Subspace Based on the Optimum Correlation Filter Xu Yi Department of Information Engineering, Hunan Industry Polytechnic, Changsha, China |
| 66af2afd4c598c2841dbfd1053bf0c386579234e | Context Assisted Face Clustering Framework with Human-in-the-Loop |
| 66e6f08873325d37e0ec20a4769ce881e04e964e | Int J Comput Vis (2014) 108:59–81
DOI 10.1007/s11263-013-0695-z The SUN Attribute Database: Beyond Categories for Deeper Scene Understanding Received: 27 February 2013 / Accepted: 28 December 2013 / Published online: 18 January 2014 © Springer Science+Business Media New York 2014 |
| 661da40b838806a7effcb42d63a9624fcd684976 | 53
An Illumination Invariant Accurate Face Recognition with Down Scaling of DCT Coefficients Department of Computer Science and Engineering, Amity School of Engineering and Technology, New Delhi, India In this paper, a novel approach for illumination normalization under varying lighting conditions is presented. Our approach utilizes the fact that discrete cosine transform (DCT) low-frequency coefficients correspond to illumination variations in a digital image. Under varying illuminations, the images captured may have low contrast; initially we apply histogram equalization on these for contrast stretching. Then the low-frequency DCT coefficients are scaled down to compensate for the illumination variations. The value of the scaling-down factor and the number of low-frequency DCT coefficients to be rescaled are obtained experimentally. The classification is done using k-nearest neighbor classification and nearest mean classification on the images obtained by inverse DCT on the processed coefficients. The correlation coefficient and Euclidean distance obtained using principal component analysis are used as distance metrics in classification. We have tested our face recognition method using the Yale Face Database B. The results show that our method performs without any error (100% face recognition performance), even on the most extreme illumination variations. There are different schemes in the literature for illumination normalization under varying lighting conditions, but none is claimed to give a 100% recognition rate under all illumination variations for this database. The proposed technique is computationally efficient and can easily be implemented for a real-time face recognition system. Keywords: discrete cosine transform, correlation coefficient, face recognition, illumination normalization, nearest neighbor classification 1. Introduction Two-dimensional pattern classification plays a crucial role in real-world applications. 
To build high-performance surveillance or information security systems, face recognition has been known as the key application attracting enormous researchers highlighting related topics [1,2]. Even though current machine recognition systems have reached a certain level of maturity, their success is limited by real-application constraints, like pose, illumination and expression. The FERET evaluation shows that the performance of a face recognition system declines seriously with changes in pose and illumination conditions [31]. To solve the variable illumination problem a variety of approaches have been proposed [3, 7-11, 26-29]. Early work in illumination invariant face recognition focused on image representations that are mostly insensitive to changes in illumination. There were approaches in which the image representations and distance measures were evaluated on a tightly controlled face database that varied the face pose, illumination, and expression. The image representations include edge maps, 2D Gabor-like filters, first and second derivatives of the gray-level image, and the logarithmic transformations of the intensity image along with these representations [4]. The different approaches to solve the problem of illumination invariant face recognition can be broadly classified into two main categories. The first category is named the passive approach, in which visual spectrum images are analyzed to overcome this problem. The approaches belonging to the other category, named active, attempt to overcome this problem by employing active imaging techniques to obtain face images captured in consistent illumination conditions, or images of illumination invariant modalities. There is a hierarchical categorization of these two approaches. An extensive review of both approaches is given in [5]. |
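The core step this abstract describes (damp the low-frequency DCT coefficients, then invert the transform) can be sketched in a few lines; this is an illustrative NumPy version, and the scale factor and coefficient count are placeholders, since the paper states those values were found experimentally.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: rows are frequencies, columns are samples.
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] /= np.sqrt(2)
    return basis * np.sqrt(2.0 / n)

def scale_low_freq_dct(img, scale=0.1, n_low=3):
    """Damp the n_low lowest 2D-DCT frequency bands (where illumination
    variation concentrates) by `scale`, then reconstruct via inverse DCT."""
    h, w = img.shape
    Dh, Dw = dct_matrix(h), dct_matrix(w)
    coeffs = Dh @ img @ Dw.T                        # forward 2D DCT
    low = np.add.outer(np.arange(h), np.arange(w)) < n_low
    coeffs[low] *= scale                            # compensate illumination
    return Dh.T @ coeffs @ Dw                       # inverse 2D DCT
```

Histogram equalization would be applied before this step, and classification (k-nearest neighbor or nearest mean, as stated above) would then run on the reconstructed images.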
| 3edb0fa2d6b0f1984e8e2c523c558cb026b2a983 | Automatic Age Estimation Based on
Facial Aging Patterns |
| 3ee7a8107a805370b296a53e355d111118e96b7c | |
| 3e4acf3f2d112fc6516abcdddbe9e17d839f5d9b | Deep Value Networks Learn to
Evaluate and Iteratively Refine Structured Outputs |
| 3ea8a6dc79d79319f7ad90d663558c664cf298d4 | |
| 3e4f84ce00027723bdfdb21156c9003168bc1c80 | © EURASIP, 2011 - ISSN 2076-1465 19th European Signal Processing Conference (EUSIPCO 2011) |
| 3e685704b140180d48142d1727080d2fb9e52163 | Single Image Action Recognition by Predicting
Space-Time Saliency |
| 3e687d5ace90c407186602de1a7727167461194a | Photo Tagging by Collection-Aware People Recognition
UFF UFF Asla S´a FGV IMPA |
| 501096cca4d0b3d1ef407844642e39cd2ff86b37 | Illumination Invariant Face Image
Representation using Quaternions Dayron Rizo-Rodríguez, Heydi Méndez-Vázquez, and Edel García-Reyes Advanced Technologies Application Center. 7a # 21812 b/ 218 and 222, Rpto. Siboney, Playa, P.C. 12200, La Habana, Cuba. |
| 501eda2d04b1db717b7834800d74dacb7df58f91 | |
| 5083c6be0f8c85815ead5368882b584e4dfab4d1 | Please do not quote. In press, Handbook of affective computing. New York, NY: Oxford
Automated Face Analysis for Affective Computing |
| 500b92578e4deff98ce20e6017124e6d2053b451 | |
| 50ff21e595e0ebe51ae808a2da3b7940549f4035 | Age Group and Gender Estimation in the Wild with Deep RoR Architecture |
| 5042b358705e8d8e8b0655d07f751be6a1565482 | International Journal of
Emerging Research in Management & Technology ISSN: 2278-9359 (Volume-4, Issue-8) Research Article August 2015 Review on Emotion Detection in Image CSE & PCET, PTU HOD, CSE & PCET, PTU Punjab, India |
| 50e47857b11bfd3d420f6eafb155199f4b41f6d7 | International Journal of Computer, Consumer and Control (IJ3C), Vol. 2, No.1 (2013)
3D Human Face Reconstruction Using a Hybrid of Photometric Stereo and Independent Component Analysis |
| 50eb75dfece76ed9119ec543e04386dfc95dfd13 | Learning Visual Entities and their Visual Attributes from Text Corpora
Dept. of Computer Science K.U.Leuven, Belgium Dept. of Computer Science K.U.Leuven, Belgium Dept. of Computer Science K.U.Leuven, Belgium |
| 50a0930cb8cc353e15a5cb4d2f41b365675b5ebf | |
| 50d15cb17144344bb1879c0a5de7207471b9ff74 | Divide, Share, and Conquer: Multi-task
Attribute Learning with Selective Sharing |
| 5028c0decfc8dd623c50b102424b93a8e9f2e390 | Published as a conference paper at ICLR 2017
REVISITING CLASSIFIER TWO-SAMPLE TESTS 1Facebook AI Research, 2WILLOW project team, Inria / ENS / CNRS |
| 505e55d0be8e48b30067fb132f05a91650666c41 | A Model of Illumination Variation for Robust Face Recognition
Institut Eur´ecom Multimedia Communications Department BP 193, 06904 Sophia Antipolis Cedex, France |
| 680d662c30739521f5c4b76845cb341dce010735 | Int J Comput Vis (2014) 108:82–96
DOI 10.1007/s11263-014-0716-6 Part and Attribute Discovery from Relative Annotations Received: 25 February 2013 / Accepted: 14 March 2014 / Published online: 26 April 2014 © Springer Science+Business Media New York 2014 |
| 68d2afd8c5c1c3a9bbda3dd209184e368e4376b9 | Representation Learning by Rotating Your Faces |
| 68a3f12382003bc714c51c85fb6d0557dcb15467 | |
| 68d08ed9470d973a54ef7806318d8894d87ba610 | Drive Video Analysis for the Detection of Traffic Near-Miss Incidents |
| 68caf5d8ef325d7ea669f3fb76eac58e0170fff0 | |
| 68d4056765c27fbcac233794857b7f5b8a6a82bf | Example-Based Face Shape Recovery Using the
Zenith Angle of the Surface Normal Mario Castelán1, Ana J. Almazán-Delfín2, Marco I. Ramírez-Sosa-Morán3, and Luz A. Torres-Méndez1 1 CINVESTAV Campus Saltillo, Ramos Arizpe 25900, Coahuila, México 2 Universidad Veracruzana, Facultad de Física e Inteligencia Artificial, Xalapa 91000, Veracruz, México 3 ITESM, Campus Saltillo, Saltillo 25270, Coahuila, México |
| 684f5166d8147b59d9e0938d627beff8c9d208dd | IEEE TRANS. NNLS, JUNE 2017
Discriminative Block-Diagonal Representation Learning for Image Recognition |
| 68cf263a17862e4dd3547f7ecc863b2dc53320d8 | |
| 68e9c837431f2ba59741b55004df60235e50994d | Detecting Faces Using Region-based Fully
Convolutional Networks Tencent AI Lab, China |
| 687e17db5043661f8921fb86f215e9ca2264d4d2 | A Robust Elastic and Partial Matching Metric for Face Recognition
Microsoft Corporate One Microsoft Way, Redmond, WA 98052 |
| 688754568623f62032820546ae3b9ca458ed0870 | bioRxiv preprint first posted online Sep. 27, 2016;
doi: http://dx.doi.org/10.1101/077784 . The copyright holder for this preprint (which was not peer-reviewed) is the author/funder. It is made available under a CC-BY-NC-ND 4.0 International license . Resting high frequency heart rate variability is not associated with the recognition of emotional facial expressions in healthy human adults. 1 Univ. Grenoble Alpes, LPNC, F-38040, Grenoble, France 2 CNRS, LPNC UMR 5105, F-38040, Grenoble, France 3 IPSY, Université Catholique de Louvain, Louvain-la-Neuve, Belgium 4 Fund for Scientific Research (FRS-FNRS), Brussels, Belgium Correspondence concerning this article should be addressed to Brice Beffara, Office E250, Institut de Recherches en Sciences Psychologiques, IPSY - Place du Cardinal Mercier, 10 bte L3.05.01 B-1348 Author note This study explores whether the myelinated vagal connection between the heart and the brain is involved in emotion recognition. The Polyvagal theory postulates that the activity of the myelinated vagus nerve underlies socio-emotional skills. It has been proposed that the perception of emotions could be one of this skills dependent on heart-brain interactions. However, this assumption was differently supported by diverging results suggesting that it could be related to confounded factors. In the current study, we recorded the resting state vagal activity (reflected by High Frequency Heart Rate Variability, HF-HRV) of 77 (68 suitable for analysis) healthy human adults and measured their ability to identify dynamic emotional facial expressions. Results show that HF-HRV is not related to the recognition of emotional facial expressions in healthy human adults. We discuss this result in the frameworks of the polyvagal theory and the neurovisceral integration model. 
Keywords: HF-HRV; autonomic flexibility; emotion identification; dynamic EFEs; Polyvagal theory; Neurovisceral integration model Word count: 9810 Introduction The behavior of an animal is said to be social when involved in interactions with other animals (Ward & Webster, 2016). These interactions imply an exchange of information, signals, between at least two animals. In humans, the face is an efficient communication channel, rapidly providing a high quantity of information. Facial expressions thus play an important role in the transmission of emotional information during social interactions. The result of the communication is the combination of transmission from the sender and decoding from the receiver (Jack & Schyns, 2015). As a consequence, the quality of the interaction depends on the ability to both produce and identify facial expressions. Emotions are therefore a core feature of social bonding (Spoor & Kelly, 2004). Health of individuals and groups depends on the quality of social bonds in many animals (Boyer, Firat, & Leeuwen, 2015; S. L. Brown & Brown, 2015; Neuberg, Kenrick, & Schaller, 2011), especially in highly social species such as humans (Singer & Klimecki, 2014). The recognition of emotional signals produced by others is not independent from its production by oneself (Niedenthal, 2007). The muscles of the face involved in the production of a facial expression are also activated during the perception of the same facial expression (Dimberg, Thunberg, & Elmehed, 2000). In other terms, the facial mimicry of the perceived emotional facial expression (EFE) triggers its sensorimotor simulation in the brain, which improves recognition abilities (Wood, Rychlowska, Korb, & Niedenthal, 2016). 
Beyond that, the emotion can be seen as the body (including brain) dynamic itself (Gallese & Caruana, 2016), which helps to understand why behavioral simulation is necessary to understand the emotion. The interplay between emotion production, emotion perception, social communication and body dynamics has been summarized in the framework of the polyvagal theory (Porges, |
| 68f9cb5ee129e2b9477faf01181cd7e3099d1824 | ALDA Algorithms for Online Feature Extraction |
| 68bf34e383092eb827dd6a61e9b362fcba36a83a | |
| 6889d649c6bbd9c0042fadec6c813f8e894ac6cc | Analysis of Robust Soft Learning Vector
Quantization and an application to Facial Expression Recognition |
| 68c17aa1ecbff0787709be74d1d98d9efd78f410 | International Journal of Optomechatronics, 6: 92–119, 2012
Copyright © Taylor & Francis Group, LLC ISSN: 1559-9612 print / 1559-9620 online DOI: 10.1080/15599612.2012.663463 GENDER CLASSIFICATION FROM FACE IMAGES USING MUTUAL INFORMATION AND FEATURE FUSION Department of Electrical Engineering and Advanced Mining Technology Center, Universidad de Chile, Santiago, Chile In this article we report a new method for gender classification from frontal face images using feature selection based on mutual information and fusion of features extracted from intensity, shape, texture, and from three different spatial scales. We compare the results of three different mutual information measures: minimum redundancy and maximal relevance (mRMR), normalized mutual information feature selection (NMIFS), and conditional mutual information feature selection (CMIFS). We also show that by fusing features extracted from six different methods we significantly improve the gender classification results relative to those previously published, yielding a 99.13% gender classification rate on the FERET database. Keywords: Feature fusion, feature selection, gender classification, mutual information, real-time gender classification 1. INTRODUCTION During the 90’s, one of the main issues addressed in the area of computer vision was face detection. Many methods and applications were developed including the face detection used in many digital cameras nowadays. Gender classification is important in many possible applications including electronic marketing. Displays at retail stores could show products and offers according to the person’s gender as the person passes in front of a camera at the store. This is not a simple task since faces are not rigid and depend on illumination, pose, gestures, facial expressions, occlusions (glasses), and other facial features (makeup, beard). The high variability in the appearance of the face directly affects their detection and classification. 
Automatic classification of gender from face images has a wide range of possible applications, ranging from human-computer interaction to applications in real-time electronic marketing in retail stores (Shan 2012; Bekios-Calfa et al. 2011; Chu et al. 2010; Perez et al. 2010a). Automatic gender classification has a wide range of possible applications for improving human-machine interaction and face identification methods (Irick et al. |
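The minimum-redundancy maximal-relevance (mRMR) criterion named in this abstract can be sketched with histogram-based mutual information estimates; this is a generic illustration rather than the authors' implementation, and the bin count is an arbitrary assumption.

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram estimate of I(X;Y), in nats, for a continuous feature x
    and non-negative integer labels y."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    joint = np.zeros((xd.max() + 1, int(y.max()) + 1))
    np.add.at(joint, (xd, y), 1)                     # joint counts
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def mrmr(X, y, k, bins=8):
    """Greedily pick k feature columns that maximize relevance to the class
    labels minus mean redundancy with the already-selected features."""
    n_feat = X.shape[1]
    disc = [np.digitize(X[:, j], np.histogram_bin_edges(X[:, j], bins)[1:-1])
            for j in range(n_feat)]
    relevance = [mutual_info(X[:, j], y, bins) for j in range(n_feat)]
    selected, rest = [], list(range(n_feat))
    while rest and len(selected) < k:
        def score(j):
            red = (np.mean([mutual_info(X[:, j], disc[s], bins)
                            for s in selected]) if selected else 0.0)
            return relevance[j] - red
        best = max(rest, key=score)
        selected.append(best)
        rest.remove(best)
    return selected
```

The NMIFS and CMIFS variants compared in the article differ only in how the redundancy term is normalized or conditioned, so they would replace the `score` function above.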
| 6888f3402039a36028d0a7e2c3df6db94f5cb9bb | Under review as a conference paper at ICLR 2018
CLASSIFIER-TO-GENERATOR ATTACK: ESTIMATION OF TRAINING DATA DISTRIBUTION FROM CLASSIFIER Anonymous authors Paper under double-blind review |
| 574751dbb53777101502419127ba8209562c4758 | |
| 57b8b28f8748d998951b5a863ff1bfd7ca4ae6a5 | |
| 57101b29680208cfedf041d13198299e2d396314 | |
| 57893403f543db75d1f4e7355283bdca11f3ab1b | |
| 57f8e1f461ab25614f5fe51a83601710142f8e88 | Region Selection for Robust Face Verification using UMACE Filters
Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia. In this paper, we investigate the verification performance of four subdivided face images with varying expressions. The objective of this study is to evaluate which part of the face image is more tolerant to facial expression and still retains its personal characteristics despite variations in the image. The Unconstrained Minimum Average Correlation Energy (UMACE) filter is implemented to perform the verification process because of its advantages such as shift-invariance and the ability to trade off between discrimination and distortion tolerance, e.g. tolerance to variations in pose, illumination and facial expression. The facial expression database of the Advanced Multimedia Processing (AMP) Lab at CMU is used in this study. Four equal-sized face regions, i.e. the bottom, top, left and right halves, are used for the purpose of this study. The results show that the bottom half of the face gives the best performance in terms of PSR values, with zero false acceptance rate (FAR) and zero false rejection rate (FRR), compared to the other three regions. 1. Introduction Face recognition is a well established field of research, and a large number of algorithms have been proposed in the literature. Various classifiers have been explored to improve the accuracy of face classification. The basic approach is to use distance-based methods, which measure the Euclidean distance between any two vectors and compare it with a preset threshold. Neural networks are often used as classifiers due to their powerful generalization ability [1]. Support Vector Machines (SVM) have been applied with encouraging results [2]. In biometric applications, one of the important tasks is the matching process between an individual's biometrics and the database that has been prepared during the enrolment stage.
For biometric systems such as face authentication that use images as personal characteristics, the biometric sensor output and image pre-processing play an important role, since the quality of a biometric input can change significantly due to illumination, noise and pose variations. Over the years, researchers have studied the role of illumination variation, pose variation, facial expression, and occlusions in affecting the performance of face verification systems [3]. Minimum Average Correlation Energy (MACE) filters have been reported to be an alternative solution to these problems because of advantages such as shift-invariance, closed-form expressions and distortion tolerance. MACE filters have been successfully applied in the field of automatic target recognition as well as in biometric verification [3][4]. Face and fingerprint verification using correlation filters have been investigated in [5] and [6], respectively. Savvides et al. performed face authentication and identification using correlation filters under illumination variation [7]. When implementing correlation filters, the number of training images used depends on the level of distortion applied to the images [5], [6]. In this study, we investigate which part of a face image is more tolerant to facial expression and retains its personal characteristics for the verification process. Four subdivided face images, i.e. the bottom, top, left and right halves, with varying expressions are investigated. By identifying only the region of the face that gives the highest verification performance, that region can be used instead of the full face to reduce storage requirements. 2. Unconstrained Minimum Average Correlation Energy (UMACE) Filter Correlation filter theory and descriptions of correlation filter design can be found in a tutorial survey paper [8].
According to [4][6], correlation filters evolved from matched filters, which are optimal for detecting a known reference image in the presence of additive white Gaussian noise. However, the detection rate of matched filters decreases significantly with even small changes in scale, rotation and pose of the reference image. In an effort to solve this problem, the Synthetic Discriminant Function (SDF) filter and the Equal Correlation Peak SDF (ECP SDF) filter were introduced, which allow several training images to be represented by a single correlation filter. The SDF filter produces pre-specified values called peak constraints; these peak values correspond to the authentic class or the impostor class when an image is tested. However, the pre-specified peak values can lead to misclassifications when the sidelobes are larger than the controlled values at the origin. Savvides et al. developed the Minimum Average Correlation Energy (MACE) filters [5]. This filter reduces the large sidelobes and produces a sharp peak when the test image is from the same class as the images used to design the filter. There are two variants that can be used to obtain a sharp peak when the test image belongs to the authentic class. The first MACE filter variant minimizes the average correlation energy of the training images while constraining the correlation output at the origin to a specific value for each training image. The second variant is the Unconstrained Minimum Average Correlation Energy (UMACE) filter, which also minimizes the average correlation energy while maximizing the correlation output at the origin [4]. Proceedings of the International Conference on Electrical Engineering and Informatics, Institut Teknologi Bandung, Indonesia, June 17-19, 2007, ISBN 978-979-16338-0-2 |
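The UMACE construction above has a compact frequency-domain form: the filter is h = D⁻¹m, where D is the diagonal average power spectrum of the training images and m is their mean spectrum, and verification is scored by the Peak-to-Sidelobe Ratio (PSR) of the correlation plane. A minimal sketch follows; the regularization constant, window sizes, and function names are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def umace_filter(train_images):
    """UMACE filter in the frequency domain: h = D^-1 m, where D is the
    average power spectrum (a diagonal matrix, stored as an array) and
    m is the mean training spectrum."""
    F = np.stack([np.fft.fft2(im) for im in train_images])
    D = np.mean(np.abs(F) ** 2, axis=0)   # average power spectrum
    m = np.mean(F, axis=0)                # mean spectrum
    return m / (D + 1e-12)                # small constant avoids division by zero

def correlate(h, image):
    """Correlation plane of a test image with the frequency-domain filter h."""
    c = np.fft.ifft2(np.fft.fft2(image) * np.conj(h))
    return np.real(np.fft.fftshift(c))

def psr(plane, peak_win=5, mask_win=20):
    """Peak-to-Sidelobe Ratio: (peak - mean(sidelobe)) / std(sidelobe),
    with a small window around the peak excluded from the sidelobe region."""
    i, j = np.unravel_index(np.argmax(plane), plane.shape)
    peak = plane[i, j]
    r0, c0 = max(0, i - mask_win), max(0, j - mask_win)
    region = plane[r0:i + mask_win + 1, c0:j + mask_win + 1].astype(float).copy()
    ci, cj = i - r0, j - c0
    region[max(0, ci - peak_win):ci + peak_win + 1,
           max(0, cj - peak_win):cj + peak_win + 1] = np.nan
    side = region[~np.isnan(region)]
    return (peak - side.mean()) / (side.std() + 1e-12)
```

An authentic test image should yield a sharp peak and a high PSR, while an impostor yields a flat plane and a low PSR; thresholding the PSR gives the accept/reject decision.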
| 57a1466c5985fe7594a91d46588d969007210581 | A Taxonomy of Face-models for System Evaluation
Motivation and Data Types
Synthetic data types:
Unverified – have no underlying physical or statistical basis.
Physics-based – based on structure and materials combined with properties formally modeled in physics.
Statistical – use statistics from real data/experiments to estimate or learn model parameters; generally have measurements of accuracy.
Guided Synthetic – individual models based on individual people, with no attempt to capture properties of large groups: a unique model per person. For faces, guided models are composed of 3D structure models and skin textures, capturing many artifacts not easily parameterized. They can be combined with physics-based rendering to generate samples under different conditions.
Semi-Synthetic – use measured data such as 2D images or 3D facial scans. These are not truly synthetic, as they are re-renderings of real measured data.
Semi- and guided-synthetic data provide higher operational relevance while maintaining a high degree of control. Generating statistically significant datasets for face matching system evaluation is both a laborious and expensive process. There is a gap in datasets that allow for evaluation of system issues including: long distance recognition, blur caused by atmospherics, various weather conditions, and end-to-end systems evaluation.
Our contributions: define a taxonomy of face-models for controlled experimentation; show how synthetic data addresses gaps in system evaluation; show a process for generating and validating synthetic models; use these models in long distance face recognition system evaluation.
Experimental Setup, Results and Conclusions, Example Models: Original PIE, Semi-Synthetic, FaceGen (http://www.facegen.com), Animetrics (http://www.animetrics.com/products/Forensica.php), Guided-Synthetic Models.
Models were generated using the well-known CMU PIE [18] dataset. Each of the 68 subjects of PIE was modeled using a right-profile and a frontal image from the lights subset.
Two modeling programs were used, FaceGen and Animetrics. Both programs create OBJ files and textures. Models are re-rendered using custom display software built with the OpenGL, GLUT and DevIL libraries, and a custom display box housing a BenQ SP820 high-powered projector rated at 4000 ANSI lumens. A Canon EOS 7D with a Sigma 800mm F5.6 EX APO DG HSM lens and a 2x adapter imaged the display from 214 meters. Normalized example captures: Real PIE 1, Animetrics, FaceGen, 81 m inside, 214 m outside, Real PIE 2. Pre-cropped images were used for the commercial core. Ground-truth eye points plus geometric/lighting normalization were applied as pre-processing before running through the implementation of the V1 recognition algorithm found in [1]. Geometric normalization highlights how the feature region of the models looks very similar to that of the real person. Each test used 3 approximately frontal gallery images NOT used to make the 3D model used as the probe; the best score over the 3 images determined the final score. Even though the PIE-3D-20100224A–D sets were imaged on the same day, the V1 core scored differently on each, highlighting the synthetic data's ability to help evaluate data capture methods and the effects of varying atmospherics. The ISO setting varied, which affects the shutter speed, with higher ISO generally yielding less blur.
Results (Dataset: Range, ISO, V1 score):
Original PIE Images: N/A, N/A, 100
FaceGen Screenshots: N/A, N/A, 47.76
Animetrics Screenshots: N/A, N/A, 100
PIE-3D-20100210B: 81 m, ISO 500, 100
PIE-3D-20100224A: 214 m, ISO 125, 58.82
PIE-3D-20100224B: 214 m, ISO 125, 45.59
PIE-3D-20100224C: 214 m, ISO 250, 81.82
PIE-3D-20100224D: 214 m, ISO 400, 79.1
Commercial core: 100 (reported for six of the datasets).
The same (100 percent) recognition rate on screenshots as on original images validates the Animetrics guided synthetic models and fails the FaceGen models. 100% recognition means the dataset is too small/easy; expanding pose and models is underway. We expanded the photohead methodology into 3D and developed a robust modeling system allowing for multiple configurations of a single real-life data set.
Gabor+SVM-based V1 [15] is significantly more impacted by atmospheric blur than the commercial algorithm.
Key References:
[6 of 21] R. Beveridge, D. Bolme, M. Teixeira, and B. Draper. The CSU Face Identification Evaluation System User's Guide: Version 5.0. Technical report, CSU, 2003.
[8 of 21] T. Boult and W. Scheirer. Long range facial image acquisition and quality. In M. Tistarelli, S. Li, and R. Chellappa.
[15 of 21] N. Pinto, J. J. DiCarlo, and D. D. Cox. How far can you get with a modern face recognition test set using only simple features? In IEEE CVPR, 2009.
[18 of 21] T. Sim, S. Baker, and M. Bsat. The CMU Pose, Illumination and Expression (PIE) Database. In Proceedings of the IEEE F&G, May 2002. |
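The ground-truth eye-point geometric normalization used as pre-processing in this entry can be sketched as a two-point similarity alignment: rotate, uniformly scale, and translate the image so that the labeled eye centers land on canonical positions. The canonical coordinates, crop size, and function names below are illustrative assumptions, not the poster's actual pipeline.

```python
import numpy as np

def eye_align_transform(left_eye, right_eye,
                        target_left=(38.0, 51.0), target_right=(90.0, 51.0)):
    """2x3 similarity transform (rotation + uniform scale + translation)
    mapping the labeled eye points onto canonical positions in the crop."""
    src = np.array([left_eye, right_eye], dtype=float)
    dst = np.array([target_left, target_right], dtype=float)
    d_src, d_dst = src[1] - src[0], dst[1] - dst[0]
    scale = np.linalg.norm(d_dst) / np.linalg.norm(d_src)
    angle = np.arctan2(d_dst[1], d_dst[0]) - np.arctan2(d_src[1], d_src[0])
    R = scale * np.array([[np.cos(angle), -np.sin(angle)],
                          [np.sin(angle),  np.cos(angle)]])
    t = dst[0] - R @ src[0]               # send left eye exactly onto its target
    return np.hstack([R, t[:, None]])     # 2x3 affine matrix

def apply_transform(M, points):
    """Apply the 2x3 affine matrix M to an (N, 2) array of points."""
    pts = np.asarray(points, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]
```

The resulting 2x3 matrix can be handed to any image-warping routine (e.g. an affine warp) so that every face, real or re-rendered, occupies the same feature region before recognition, which is what makes the model crops comparable to the real-person crops.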
| 5721216f2163d026e90d7cd9942aeb4bebc92334 | |
| 5753b2b5e442eaa3be066daa4a2ca8d8a0bb1725 | |
| 574ad7ef015995efb7338829a021776bf9daaa08 | AdaScan: Adaptive Scan Pooling in Deep Convolutional Neural Networks
for Human Action Recognition in Videos 1IIT Kanpur‡ 2SRI International 3UCSD |
| 57d37ad025b5796457eee7392d2038910988655a | |