<!doctype html><html><head><title>Institutions</title><link rel='stylesheet' href='reports.css'></head><body><h2>Institutions</h2><table border='1' cellpadding='3' cellspacing='3'><tr><td>61084a25ebe736e8f6d7a6e53b2c20d9723c4608</td><td></td></tr><tr><td>614a7c42aae8946c7ad4c36b53290860f6256441</td><td>1
<br/>Joint Face Detection and Alignment using
<br/>Multi-task Cascaded Convolutional Networks
</td></tr><tr><td>0d88ab0250748410a1bc990b67ab2efb370ade5d</td><td>Author(s) :
<br/>ERROR HANDLING IN MULTIMODAL BIOMETRIC SYSTEMS USING
<br/>RELIABILITY MEASURES (ThuPmOR6)
<br/>(EPFL, Switzerland)
<br/>Plamen Prodanov
</td></tr><tr><td>0d467adaf936b112f570970c5210bdb3c626a717</td><td></td></tr><tr><td>0d6b28691e1aa2a17ffaa98b9b38ac3140fb3306</td><td>Review of Perceptual Resemblance of Local
<br/>Plastic Surgery Facial Images using Near Sets
<br/>1,2 Department of Computer Technology,
<br/>YCCE Nagpur, India
</td></tr><tr><td>0db8e6eb861ed9a70305c1839eaef34f2c85bbaf</td><td></td></tr><tr><td>0dbf4232fcbd52eb4599dc0760b18fcc1e9546e9</td><td></td></tr><tr><td>0d760e7d762fa449737ad51431f3ff938d6803fe</td><td>LCDet: Low-Complexity Fully-Convolutional Neural Networks for
<br/>Object Detection in Embedded Systems
<br/>UC San Diego ∗
<br/>Gokce Dane
<br/>Qualcomm Inc.
<br/>UC San Diego
<br/>Qualcomm Inc.
<br/>UC San Diego
</td></tr><tr><td>0dd72887465046b0f8fc655793c6eaaac9c03a3d</td><td>Real-time Head Orientation from a Monocular
<br/>Camera using Deep Neural Network
<br/>KAIST, Republic of Korea
</td></tr><tr><td>0d087aaa6e2753099789cd9943495fbbd08437c0</td><td></td></tr><tr><td>0d8415a56660d3969449e77095be46ef0254a448</td><td></td></tr><tr><td>0d735e7552af0d1dcd856a8740401916e54b7eee</td><td></td></tr><tr><td>0d06b3a4132d8a2effed115a89617e0a702c957a</td><td></td></tr><tr><td>0d2dd4fc016cb6a517d8fb43a7cc3ff62964832e</td><td></td></tr><tr><td>956317de62bd3024d4ea5a62effe8d6623a64e53</td><td>Lighting Analysis and Texture Modification of 3D Human
<br/>Face Scans
<br/>Author
<br/>Zhang, Paul, Zhao, Sanqiang, Gao, Yongsheng
<br/>Published
<br/>2007
<br/>Conference Title
<br/>Digital Image Computing Techniques and Applications
<br/>DOI
<br/>https://doi.org/10.1109/DICTA.2007.4426825
<br/>Copyright Statement
<br/>© 2007 IEEE. Personal use of this material is permitted. However, permission to reprint/
<br/>republish this material for advertising or promotional purposes or for creating new collective
<br/>works for resale or redistribution to servers or lists, or to reuse any copyrighted component of
<br/>this work in other works must be obtained from the IEEE.
<br/>Downloaded from
<br/>http://hdl.handle.net/10072/17889
<br/>Link to published version
<br/>http://www.ieee.org/
<br/>Griffith Research Online
<br/>https://research-repository.griffith.edu.au
</td></tr><tr><td>956c634343e49319a5e3cba4f2bd2360bdcbc075</td><td>IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 36, NO. 4, AUGUST 2006
<br/>A Novel Incremental Principal Component Analysis
<br/>and Its Application for Face Recognition
</td></tr><tr><td>958c599a6f01678513849637bec5dc5dba592394</td><td>Generalized Zero-Shot Learning for Action
<br/>Recognition with Web-Scale Video Data
</td></tr><tr><td>59bfeac0635d3f1f4891106ae0262b81841b06e4</td><td>Face Verification Using the LARK Face
<br/>Representation
</td></tr><tr><td>590628a9584e500f3e7f349ba7e2046c8c273fcf</td><td></td></tr><tr><td>59eefa01c067a33a0b9bad31c882e2710748ea24</td><td>IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
<br/>Fast Landmark Localization
<br/>with 3D Component Reconstruction and CNN for
<br/>Cross-Pose Recognition
</td></tr><tr><td>59c9d416f7b3d33141cc94567925a447d0662d80</td><td>Universität des Saarlandes
<br/>Max-Planck-Institut für Informatik
<br/>AG5
<br/>Matrix factorization over max-times
<br/>algebra for data mining
<br/>Master’s Thesis in Computer Science
<br/>by
<br/>supervised by
<br/>reviewers
<br/>November 2013
<br/>UNIVERSITAS SARAVIENSIS</td></tr><tr><td>59a35b63cf845ebf0ba31c290423e24eb822d245</td><td>The FaceSketchID System: Matching Facial
<br/>Composites to Mugshots
</td></tr><tr><td>59f325e63f21b95d2b4e2700c461f0136aecc171</td><td>FOR FACE RECOGNITION
</td></tr><tr><td>5922e26c9eaaee92d1d70eae36275bb226ecdb2e</td><td>Boosting Classification Based Similarity
<br/>Learning by using Standard Distances
<br/>Departament d’Informàtica, Universitat de València
<br/>Av. de la Universitat s/n. 46100-Burjassot (Spain)
</td></tr><tr><td>59031a35b0727925f8c47c3b2194224323489d68</td><td>Sparse Variation Dictionary Learning for Face Recognition with A Single
<br/>Training Sample Per Person
<br/>ETH Zurich
<br/>Switzerland
</td></tr><tr><td>926c67a611824bc5ba67db11db9c05626e79de96</td><td>Enhancing Bilinear Subspace Learning
<br/>by Element Rearrangement
</td></tr><tr><td>923ede53b0842619831e94c7150e0fc4104e62f7</td><td>ICASSP 2016
</td></tr><tr><td>920a92900fbff22fdaaef4b128ca3ca8e8d54c3e</td><td>LEARNING PATTERN TRANSFORMATION MANIFOLDS WITH PARAMETRIC ATOM
<br/>SELECTION
<br/>École Polytechnique Fédérale de Lausanne (EPFL)
<br/>Signal Processing Laboratory (LTS4)
<br/>Switzerland-1015 Lausanne
</td></tr><tr><td>9282239846d79a29392aa71fc24880651826af72</td><td>Antonakos et al. EURASIP Journal on Image and Video Processing 2014, 2014:14
<br/>http://jivp.eurasipjournals.com/content/2014/1/14
<br/>RESEARCH
<br/>Open Access
<br/>Classification of extreme facial events in sign
<br/>language videos
</td></tr><tr><td>92c2dd6b3ac9227fce0a960093ca30678bceb364</td><td>Provided by the author(s) and NUI Galway in accordance with publisher policies. Please cite the published
<br/>version when available.
<br/>Title
<br/>On color texture normalization for active appearance models
<br/>Author(s)
<br/>Ionita, Mircea C.; Corcoran, Peter M.; Buzuloiu, Vasile
<br/>Publication Date
<br/>2009-05-12
<br/>Publication Information
<br/>Ionita, M. C., Corcoran, P., & Buzuloiu, V. (2009). On Color
<br/>Texture Normalization for Active Appearance Models. Image
<br/>Processing, IEEE Transactions on, 18(6), 1372-1378.
<br/>Publisher
<br/>IEEE
<br/>Link to publisher's version
<br/>http://dx.doi.org/10.1109/TIP.2009.2017163
<br/>Item record
<br/>http://hdl.handle.net/10379/1350
<br/>Some rights reserved. For more information, please see the item record link above.
<br/>Downloaded 2018-11-06T00:40:53Z
</td></tr><tr><td>92fada7564d572b72fd3be09ea3c39373df3e27c</td><td></td></tr><tr><td>927ad0dceacce2bb482b96f42f2fe2ad1873f37a</td><td>Interest-Point based Face Recognition System
<br/>Spain
<br/>1. Introduction
<br/>Among all applications of face recognition systems, surveillance is one of the most
<br/>challenging ones. In such an application, the goal is to detect known criminals in crowded
<br/>environments, like airports or train stations. Some attempts have been made, like those of
<br/>Tokyo (Engadget, 2006) or Mainz (Deutsche Welle, 2006), with limited success.
<br/>The first task to be carried out in an automatic surveillance system involves the detection of
<br/>all the faces in the images taken by the video cameras. Current face detection algorithms are
<br/>highly reliable and thus, they will not be the focus of our work. Some of the best performing
<br/>examples are the Viola-Jones algorithm (Viola & Jones, 2004) or the Schneiderman-Kanade
<br/>algorithm (Schneiderman & Kanade, 2000).
<br/>The second task to be carried out involves the comparison of all detected faces among the
<br/>database of known criminals. The ideal behaviour of an automatic system performing this
<br/>task would be to get a 100% correct identification rate, but this behaviour is far from the
<br/>capabilities of current face recognition algorithms. Assuming that there will be false
<br/>identifications, supervised surveillance systems seem to be the most realistic option: the
<br/>automatic system issues an alarm whenever it detects a possible match with a criminal, and
<br/>a human decides whether it is a false alarm or not. Figure 1 shows an example.
<br/>However, even in a supervised scenario the requirements for the face recognition algorithm
<br/>are extremely high: the false alarm rate must be low enough as to allow the human operator
<br/>to cope with it; and the percentage of undetected criminals must be kept to a minimum in
<br/>order to ensure security. Fulfilling both requirements at the same time is the main challenge,
<br/>as a reduction in false alarm rate usually implies an increase of the percentage of undetected
<br/>criminals.
<br/>We propose a novel face recognition system based on the use of interest point detectors and
<br/>local descriptors. To assess the performance of our system, and particularly its
<br/>performance in a surveillance application, we present experimental results in terms of
<br/>Receiver Operating Characteristic (ROC) curves. The experimental results show that our
<br/>system outperforms classical appearance-based approaches.
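<br/>As a hedged illustration of this trade-off (not part of the chapter), the curve can be computed directly from verification scores; the scores, labels, and the use of scikit-learn below are our own toy assumptions:
<pre>
import numpy as np
from sklearn.metrics import roc_curve

# Toy data: similarity of each detected face to the criminal gallery,
# and whether the probe really belongs to a known criminal.
scores = np.array([0.9, 0.4, 0.75, 0.1, 0.8, 0.3])
labels = np.array([1, 0, 1, 0, 1, 0])

fpr, tpr, thresholds = roc_curve(labels, scores)
# fpr is the false alarm rate the human operator must cope with;
# 1 - tpr is the fraction of undetected criminals.
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th:.2f}  false-alarm rate={f:.2f}  miss rate={1 - t:.2f}")
</pre>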
</td></tr><tr><td>929bd1d11d4f9cbc638779fbaf958f0efb82e603</td><td>This is the author’s version of a work that was submitted/accepted for pub-
<br/>lication in the following source:
<br/>Zhang, Ligang & Tjondronegoro, Dian W. (2010) Improving the perfor-
<br/>mance of facial expression recognition using dynamic, subtle and regional
<br/>features.
<br/>In Kok Wai Wong, B. Sumudu U. Mendis, & Abdesselam Bouzerdoum (Eds.),
<br/>Neural Information Processing. Models and Applications, Lecture Notes in
<br/>Computer Science, Sydney, N.S.W., pp. 582-589.
<br/>This file was downloaded from: http://eprints.qut.edu.au/43788/
<br/>© Copyright 2010 Springer-Verlag
<br/>Conference proceedings published, by Springer Verlag, will be available
<br/>via Lecture Notes in Computer Science http://www.springer.de/comp/lncs/
<br/>Notice: Changes introduced as a result of publishing processes such as
<br/>copy-editing and formatting may not be reflected in this document. For a
<br/>definitive version of this work, please refer to the published source:
<br/>http://dx.doi.org/10.1007/978-3-642-17534-3_72
</td></tr><tr><td>0c36c988acc9ec239953ff1b3931799af388ef70</td><td>Face Detection Using Improved Faster RCNN
<br/>Huawei Cloud BU, China
<br/>Figure 1. Face detection results of FDNet1.0
</td></tr><tr><td>0c5ddfa02982dcad47704888b271997c4de0674b</td><td></td></tr><tr><td>0cccf576050f493c8b8fec9ee0238277c0cfd69a</td><td></td></tr><tr><td>0c069a870367b54dd06d0da63b1e3a900a257298</td><td>Author manuscript, published in "ICANN 2011 - International Conference on Artificial Neural Networks (2011)"
</td></tr><tr><td>0c75c7c54eec85e962b1720755381cdca3f57dfb</td><td>Face Landmark Fitting via Optimized Part
<br/>Mixtures and Cascaded Deformable Model
</td></tr><tr><td>0c54e9ac43d2d3bab1543c43ee137fc47b77276e</td><td></td></tr><tr><td>0c5afb209b647456e99ce42a6d9d177764f9a0dd</td><td>Recognizing Action Units for
<br/>Facial Expression Analysis
</td></tr><tr><td>0c377fcbc3bbd35386b6ed4768beda7b5111eec6</td><td>A Unified Probabilistic Framework
<br/>for Spontaneous Facial Action Modeling
<br/>and Understanding
</td></tr><tr><td>0cb2dd5f178e3a297a0c33068961018659d0f443</td><td></td></tr><tr><td>0cf7da0df64557a4774100f6fde898bc4a3c4840</td><td>Shape Matching and Object Recognition using Low Distortion Correspondences
<br/>Department of Electrical Engineering and Computer Science
<br/>U.C. Berkeley
</td></tr><tr><td>0c4659b35ec2518914da924e692deb37e96d6206</td><td>Registering a MultiSensor Ensemble of Images
</td></tr><tr><td>0c53ef79bb8e5ba4e6a8ebad6d453ecf3672926d</td><td>SUBMITTED TO JOURNAL
<br/>Weakly Supervised PatchNets: Describing and
<br/>Aggregating Local Patches for Scene Recognition
</td></tr><tr><td>0c60eebe10b56dbffe66bb3812793dd514865935</td><td></td></tr><tr><td>660b73b0f39d4e644bf13a1745d6ee74424d4a16</td><td></td></tr><tr><td>66d512342355fb77a4450decc89977efe7e55fa2</td><td>Under review as a conference paper at ICLR 2018
<br/>LEARNING NON-LINEAR TRANSFORM WITH DISCRIM-
<br/>INATIVE AND MINIMUM INFORMATION LOSS PRIORS
<br/>Anonymous authors
<br/>Paper under double-blind review
</td></tr><tr><td>6643a7feebd0479916d94fb9186e403a4e5f7cbf</td><td>Chapter 8
<br/>3D Face Recognition
</td></tr><tr><td>66a2c229ac82e38f1b7c77a786d8cf0d7e369598</td><td>Proceedings of the 2016 Industrial and Systems Engineering Research Conference
<br/>H. Yang, Z. Kong, and MD Sarder, eds.
<br/>A Probabilistic Adaptive Search System
<br/>for Exploring the Face Space
<br/>Escuela Superior Politecnica del Litoral (ESPOL)
<br/>Guayaquil-Ecuador
</td></tr><tr><td>66886997988358847615375ba7d6e9eb0f1bb27f</td><td></td></tr><tr><td>66a9935e958a779a3a2267c85ecb69fbbb75b8dc</td><td>FAST AND ROBUST FIXED-RANK MATRIX RECOVERY
<br/>Fast and Robust Fixed-Rank Matrix
<br/>Recovery
<br/>Antonio Lopez
</td></tr><tr><td>66533107f9abdc7d1cb8f8795025fc7e78eb1122</td><td>Visual Servoing for a User's Mouth with Effective Intention Reading
<br/>in a Wheelchair-based Robotic Arm
<br/>Won-Kyung Song†, Dae-Jin Kim†, Jong-Sung Kim‡ and Zeungnam Bien†
<br/>† EECS, KAIST, 373-1 Kusong-Dong Yusong-Gu, Taejon 305-701, KOREA
<br/>‡ VR Center, ETRI, 161 Kajong-Dong Yusong-Gu, Taejon 305-350, KOREA
<br/>Abstract
<br/>There exists a cooperative activity between a human being and a rehabilitation robot,
<br/>because the human operates the rehabilitation robot in the same environment and has the
<br/>benefit of the rehabilitation robot, such as manipulatory and mobile functions. Intention
<br/>reading is one of the essential functions of human-friendly rehabilitation robots in order
<br/>to insure the comfort and safety of all who need them. First of all, the overall structure
<br/>of a new wheelchair-based robotic arm system, KARES, and its human-robot interaction
<br/>technologies are presented. Among the technologies, we concentrate on visual servoing,
<br/>which allows this robotic arm to operate autonomously via visual feedback. Effective
<br/>intention reading, such as recognizing the positive and negative meaning of the user, is
<br/>performed on the basis of changes of the facial expression and lips that are strongly
<br/>related to the user's intention while this robotic arm provides the user with a beverage.
<br/>For efficient visual information processing, log-polar mapped images are used to control
<br/>the stereo camera head that is located in the end-effector of the robotic arm. The visual
<br/>servoing with effective intention reading is successfully applied to serve a beverage to
<br/>the user.
<br/>Introduction
<br/>Wheelchair-based robotic systems are mainly used to assist the elderly and the disabled
<br/>who have handicaps in sensory and motor functions in their limbs. Such a system consists
<br/>of a powered wheelchair and a robotic arm, and has not only a mobile capability through
<br/>the wheelchair but also manipulatory functions via the robotic arm, and thus makes
<br/>possible the coexistence of a person and a robot in the same environment. In this case,
<br/>the user needs to interact with the robotic arm in a comfortable and safe way.
<br/>Figure 1: The wheelchair-based robotic arm and its human-robot interaction technologies.
<br/>However, it has been reported that many difficulties exist in human-robot interaction in
<br/>existing rehabilitation robots. For example, manual control of the robotic arm takes a
<br/>high cognitive load on the operator, while physically disabled persons may have
<br/>difficulties in operating a joystick dexterously or pushing buttons for delicate
<br/>movements [4]. In addition, MANUS evaluation users reported that the most difficult
<br/>things in using rehabilitation robots are too many commands for manual adjustment and too
<br/>many functions to keep in mind at the beginning [4]. Therefore, human-friendly
<br/>human-robot interaction is one of the essential techniques in a wheelchair-based robotic
<br/>arm.
<br/>In this paper, we consider the wheelchair-based robotic system KARES (KAIST
<br/>Rehabilitation Engineering Service system), which we are developing as a service robotic
<br/>system for the disabled and the elderly, and discuss its human-robot interaction
<br/>technologies (Fig. 1). Among the human-robot interaction techniques, visual servoing is
<br/>dealt with as a major topic.
</td></tr><tr><td>66810438bfb52367e3f6f62c24f5bc127cf92e56</td><td>Face Recognition of Illumination Tolerance in 2D
<br/>Subspace Based on the Optimum Correlation
<br/>Filter
<br/>Xu Yi
<br/>Department of Information Engineering, Hunan Industry Polytechnic, Changsha, China
</td></tr><tr><td>66af2afd4c598c2841dbfd1053bf0c386579234e</td><td>Context Assisted Face Clustering Framework with
<br/>Human-in-the-Loop
</td></tr><tr><td>66e6f08873325d37e0ec20a4769ce881e04e964e</td><td>Int J Comput Vis (2014) 108:59–81
<br/>DOI 10.1007/s11263-013-0695-z
<br/>The SUN Attribute Database: Beyond Categories for Deeper Scene
<br/>Understanding
<br/>Received: 27 February 2013 / Accepted: 28 December 2013 / Published online: 18 January 2014
<br/>© Springer Science+Business Media New York 2014
</td></tr><tr><td>661da40b838806a7effcb42d63a9624fcd684976</td><td>An Illumination Invariant Accurate
<br/>Face Recognition with Down Scaling
<br/>of DCT Coefficients
<br/>Department of Computer Science and Engineering, Amity School of Engineering and Technology, New Delhi, India
<br/>In this paper, a novel approach for illumination normal-
<br/>ization under varying lighting conditions is presented.
<br/>Our approach utilizes the fact that discrete cosine trans-
<br/>form (DCT) low-frequency coefficients correspond to
<br/>illumination variations in a digital image. Under varying
<br/>illuminations, the images captured may have low con-
<br/>trast; initially we apply histogram equalization on these
<br/>for contrast stretching. Then the low-frequency DCT
<br/>coefficients are scaled down to compensate the illumi-
<br/>nation variations. The value of scaling down factor and
<br/>the number of low-frequency DCT coefficients, which
<br/>are to be rescaled, are obtained experimentally. The
<br/>classification is done using k-nearest neighbor classi-
<br/>fication and nearest mean classification on the images
<br/>obtained by inverse DCT on the processed coefficients.
<br/>The correlation coefficient and Euclidean distance ob-
<br/>tained using principal component analysis are used as
<br/>distance metrics in classification. We have tested our
<br/>face recognition method using Yale Face Database B.
<br/>The results show that our method performs without any
<br/>error (100% face recognition performance), even on the
<br/>most extreme illumination variations. There are different
<br/>schemes in the literature for illumination normalization
<br/>under varying lighting conditions, but none is claimed
<br/>to give a 100% recognition rate under all illumination
<br/>variations for this database. The proposed technique is
<br/>computationally efficient and can easily be implemented
<br/>for a real-time face recognition system.
<br/>Keywords: discrete cosine transform, correlation co-
<br/>efficient, face recognition, illumination normalization,
<br/>nearest neighbor classification
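<br/>A minimal sketch of the pipeline described above; the values of the scaling factor and the number of rescaled coefficients are placeholders here (the paper obtains both experimentally), and SciPy's DCT is assumed as a stand-in:
<pre>
import numpy as np
from scipy.fft import dctn, idctn

def normalize_illumination(img, n_coeffs=20, scale=0.1):
    """img: 2D uint8 grayscale face image."""
    # 1) Histogram equalization for contrast stretching.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    eq = cdf[img]
    # 2) Scale down the low-frequency DCT block, which mostly
    #    encodes the slowly varying illumination component.
    coeffs = dctn(eq, norm='ortho')
    coeffs[:n_coeffs, :n_coeffs] *= scale
    # 3) Inverse DCT yields the illumination-compensated image,
    #    ready for PCA plus nearest neighbor / nearest mean matching.
    return idctn(coeffs, norm='ortho')
</pre>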
<br/>1. Introduction
<br/>Two-dimensional pattern classification plays a
<br/>crucial role in real-world applications. To build
<br/>high-performance surveillance or information
<br/>security systems, face recognition has been
<br/>known as the key application attracting enor-
<br/>mous researchers highlighting on related topics
<br/>[1,2]. Even though current machine recognition
<br/>systems have reached a certain level of matu-
<br/>rity, their success is limited by the real appli-
<br/>cations constraints, like pose, illumination and
<br/>expression. The FERET evaluation shows that
<br/>the performance of a face recognition system
<br/>declines seriously with the change of pose and
<br/>illumination conditions [31].
<br/>To solve the variable illumination problem a
<br/>variety of approaches have been proposed [3, 7-
<br/>11, 26-29]. Early work in illumination invariant
<br/>face recognition focused on image representa-
<br/>tions that are mostly insensitive to changes in
<br/>illumination. There were approaches in which
<br/>the image representations and distance mea-
<br/>sures were evaluated on a tightly controlled face
<br/>database that varied the face pose, illumination,
<br/>and expression. The image representations in-
<br/>clude edge maps, 2D Gabor-like filters, first and
<br/>second derivatives of the gray-level image, and
<br/>the logarithmic transformations of the intensity
<br/>image along with these representations [4].
<br/>The different approaches to solve the prob-
<br/>lem of illumination invariant face recognition
<br/>can be broadly classified into two main cate-
<br/>gories. The first category is named as passive
<br/>approach in which the visual spectrum images
<br/>are analyzed to overcome this problem. The
<br/>approaches belonging to other category named
<br/>active, attempt to overcome this problem by
<br/>employing active imaging techniques to obtain
<br/>face images captured in consistent illumina-
<br/>tion condition, or images of illumination invari-
<br/>ant modalities. There is a hierarchical catego-
<br/>rization of these two approaches. An exten-
<br/>sive review of both approaches is given in [5].
</td></tr><tr><td>3edb0fa2d6b0f1984e8e2c523c558cb026b2a983</td><td>Automatic Age Estimation Based on
<br/>Facial Aging Patterns
</td></tr><tr><td>3ee7a8107a805370b296a53e355d111118e96b7c</td><td></td></tr><tr><td>3ea8a6dc79d79319f7ad90d663558c664cf298d4</td><td></td></tr><tr><td>3e4f84ce00027723bdfdb21156c9003168bc1c80</td><td>19th European Signal Processing Conference (EUSIPCO 2011)
</td></tr><tr><td>3e685704b140180d48142d1727080d2fb9e52163</td><td>Single Image Action Recognition by Predicting
<br/>Space-Time Saliency
</td></tr><tr><td>3e687d5ace90c407186602de1a7727167461194a</td><td>Photo Tagging by Collection-Aware People Recognition
<br/>UFF
<br/>UFF
<br/>Asla Sá
<br/>FGV
<br/>IMPA
</td></tr><tr><td>501096cca4d0b3d1ef407844642e39cd2ff86b37</td><td>Illumination Invariant Face Image
<br/>Representation using Quaternions
<br/>Dayron Rizo-Rodríguez, Heydi Méndez-Vázquez, and Edel García-Reyes
<br/>Advanced Technologies Application Center. 7a # 21812 b/ 218 and 222,
<br/>Rpto. Siboney, Playa, P.C. 12200, La Habana, Cuba.
</td></tr><tr><td>501eda2d04b1db717b7834800d74dacb7df58f91</td><td></td></tr><tr><td>5083c6be0f8c85815ead5368882b584e4dfab4d1</td><td> Please do not quote. In press, Handbook of affective computing. New York, NY: Oxford
<br/>Automated Face Analysis for Affective Computing
</td></tr><tr><td>500b92578e4deff98ce20e6017124e6d2053b451</td><td></td></tr><tr><td>50ff21e595e0ebe51ae808a2da3b7940549f4035</td><td>IEEE TRANSACTIONS ON LATEX CLASS FILES, VOL. XX, NO. X, AUGUST 2017
<br/>Age Group and Gender Estimation in the Wild with
<br/>Deep RoR Architecture
</td></tr><tr><td>5042b358705e8d8e8b0655d07f751be6a1565482</td><td>International Journal of
<br/>Emerging Research in Management & Technology
<br/>ISSN: 2278-9359 (Volume-4, Issue-8)
<br/>Research Article
<br/>August 2015
<br/>Review on Emotion Detection in Image
<br/>CSE & PCET, PTU; HOD, CSE & PCET, PTU
<br/>Punjab, India
</td></tr><tr><td>50e47857b11bfd3d420f6eafb155199f4b41f6d7</td><td>International Journal of Computer, Consumer and Control (IJ3C), Vol. 2, No.1 (2013)
<br/>3D Human Face Reconstruction Using a Hybrid of Photometric
<br/>Stereo and Independent Component Analysis
</td></tr><tr><td>50eb75dfece76ed9119ec543e04386dfc95dfd13</td><td>Learning Visual Entities and their Visual Attributes from Text Corpora
<br/>Dept. of Computer Science
<br/>K.U.Leuven, Belgium
<br/>Dept. of Computer Science
<br/>K.U.Leuven, Belgium
<br/>Dept. of Computer Science
<br/>K.U.Leuven, Belgium
</td></tr><tr><td>50d15cb17144344bb1879c0a5de7207471b9ff74</td><td>Divide, Share, and Conquer: Multi-task
<br/>Attribute Learning with Selective Sharing
</td></tr><tr><td>5028c0decfc8dd623c50b102424b93a8e9f2e390</td><td>Published as a conference paper at ICLR 2017
<br/>REVISITING CLASSIFIER TWO-SAMPLE TESTS
<br/>1Facebook AI Research, 2WILLOW project team, Inria / ENS / CNRS
</td></tr><tr><td>505e55d0be8e48b30067fb132f05a91650666c41</td><td>A Model of Illumination Variation for Robust Face Recognition
<br/>Institut Eur´ecom
<br/>Multimedia Communications Department
<br/>BP 193, 06904 Sophia Antipolis Cedex, France
</td></tr><tr><td>680d662c30739521f5c4b76845cb341dce010735</td><td>Int J Comput Vis (2014) 108:82–96
<br/>DOI 10.1007/s11263-014-0716-6
<br/>Part and Attribute Discovery from Relative Annotations
<br/>Received: 25 February 2013 / Accepted: 14 March 2014 / Published online: 26 April 2014
<br/>© Springer Science+Business Media New York 2014
</td></tr><tr><td>68a3f12382003bc714c51c85fb6d0557dcb15467</td><td></td></tr><tr><td>68d4056765c27fbcac233794857b7f5b8a6a82bf</td><td>Example-Based Face Shape Recovery Using the
<br/>Zenith Angle of the Surface Normal
<br/>Mario Castelán1, Ana J. Almazán-Delfín2, Marco I. Ramírez-Sosa-Morán3,
<br/>and Luz A. Torres-Méndez1
<br/>1 CINVESTAV Campus Saltillo, Ramos Arizpe 25900, Coahuila, México
<br/>2 Universidad Veracruzana, Facultad de Física e Inteligencia Artificial, Xalapa 91000,
<br/>Veracruz, México
<br/>3 ITESM, Campus Saltillo, Saltillo 25270, Coahuila, México
</td></tr><tr><td>68cf263a17862e4dd3547f7ecc863b2dc53320d8</td><td></td></tr><tr><td>68e9c837431f2ba59741b55004df60235e50994d</td><td>Detecting Faces Using Region-based Fully
<br/>Convolutional Networks
<br/>Tencent AI Lab, China
</td></tr><tr><td>687e17db5043661f8921fb86f215e9ca2264d4d2</td><td>A Robust Elastic and Partial Matching Metric for Face Recognition
<br/>Microsoft Corporation
<br/>One Microsoft Way, Redmond, WA 98052
</td></tr><tr><td>688754568623f62032820546ae3b9ca458ed0870</td><td>bioRxiv preprint first posted online Sep. 27, 2016;
<br/>doi: http://dx.doi.org/10.1101/077784. The copyright holder for this preprint
<br/>(which was not peer-reviewed) is the author/funder. It is made available under a
<br/>CC-BY-NC-ND 4.0 International license.
<br/>Resting high frequency heart rate variability is not associated with the
<br/>recognition of emotional facial expressions in healthy human adults.
<br/>1 Univ. Grenoble Alpes, LPNC, F-38040, Grenoble, France
<br/>2 CNRS, LPNC UMR 5105, F-38040, Grenoble, France
<br/>3 IPSY, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
<br/>4 Fund for Scientific Research (FRS-FNRS), Brussels, Belgium
<br/>Correspondence concerning this article should be addressed to Brice Beffara, Office E250, Institut
<br/>de Recherches en Sciences Psychologiques, IPSY - Place du Cardinal Mercier, 10 bte L3.05.01 B-1348
<br/>Author note
<br/>This study explores whether the myelinated vagal connection between the heart and the brain
<br/>is involved in emotion recognition. The Polyvagal theory postulates that the activity of the
<br/>myelinated vagus nerve underlies socio-emotional skills. It has been proposed that the perception
<br/>of emotions could be one of this skills dependent on heart-brain interactions. However, this
<br/>assumption was differently supported by diverging results suggesting that it could be related to
<br/>confounded factors. In the current study, we recorded the resting state vagal activity (reflected by
<br/>High Frequency Heart Rate Variability, HF-HRV) of 77 (68 suitable for analysis) healthy human
<br/>adults and measured their ability to identify dynamic emotional facial expressions. Results show
<br/>that HF-HRV is not related to the recognition of emotional facial expressions in healthy human
<br/>adults. We discuss this result in the frameworks of the polyvagal theory and the neurovisceral
<br/>integration model.
<br/>Keywords: HF-HRV; autonomic flexibility; emotion identification; dynamic EFEs; Polyvagal
<br/>theory; Neurovisceral integration model
<br/>Word count: 9810
<br/>Introduction
<br/>The behavior of an animal is said to be social when involved in
<br/>interactions with other animals (Ward & Webster, 2016). These
<br/>interactions imply an exchange of information, signals, be-
<br/>tween at least two animals. In humans, the face is an efficient
<br/>communication channel, rapidly providing a high quantity of
<br/>information. Facial expressions thus play an important role
<br/>in the transmission of emotional information during social
<br/>interactions. The result of the communication is the combina-
<br/>tion of transmission from the sender and decoding from the
<br/>receiver (Jack & Schyns, 2015). As a consequence, the quality
<br/>of the interaction depends on the ability to both produce and
<br/>identify facial expressions. Emotions are therefore a core
<br/>feature of social bonding (Spoor & Kelly, 2004). Health
<br/>of individuals and groups depend on the quality of social
<br/>bonds in many animals (Boyer, Firat, & Leeuwen, 2015; S. L.
<br/>Brown & Brown, 2015; Neuberg, Kenrick, & Schaller, 2011),
<br/>especially in highly social species such as humans (Singer &
<br/>Klimecki, 2014).
<br/>The recognition of emotional signals produced by others is
<br/>not independent from its production by oneself (Niedenthal,
<br/>2007). The muscles of the face involved in the production of
<br/>a facial expression are also activated during the perception of
<br/>the same facial expressions (Dimberg, Thunberg, & Elmehed,
<br/>2000). In other terms, the facial mimicry of the perceived
<br/>emotional facial expression (EFE) triggers its sensorimotor
<br/>simulation in the brain, which improves the recognition abili-
<br/>ties (Wood, Rychlowska, Korb, & Niedenthal, 2016). Beyond
<br/>that, the emotion can be seen as the body -including brain-
<br/>dynamic itself (Gallese & Caruana, 2016) which helps to un-
<br/>derstand why behavioral simulation is necessary to understand
<br/>the emotion.
<br/>The interplay between emotion production, emotion percep-
<br/>tion, social communication and body dynamics has been sum-
<br/>marized in the framework of the polyvagal theory (Porges,
</td></tr><tr><td>68f9cb5ee129e2b9477faf01181cd7e3099d1824</td><td>ALDA Algorithms for Online Feature Extraction
</td></tr><tr><td>68bf34e383092eb827dd6a61e9b362fcba36a83a</td><td></td></tr><tr><td>6889d649c6bbd9c0042fadec6c813f8e894ac6cc</td><td>Analysis of Robust Soft Learning Vector
<br/>Quantization and an application to Facial
<br/>Expression Recognition
</td></tr><tr><td>68c17aa1ecbff0787709be74d1d98d9efd78f410</td><td>International Journal of Optomechatronics, 6: 92–119, 2012
<br/>Copyright © Taylor & Francis Group, LLC
<br/>ISSN: 1559-9612 print/1559-9620 online
<br/>DOI: 10.1080/15599612.2012.663463
<br/>GENDER CLASSIFICATION FROM FACE IMAGES
<br/>USING MUTUAL INFORMATION AND FEATURE
<br/>FUSION
<br/>Department of Electrical Engineering and Advanced Mining Technology
<br/>Center, Universidad de Chile, Santiago, Chile
<br/>In this article we report a new method for gender classification from frontal face images
<br/>using feature selection based on mutual information and fusion of features extracted from
<br/>intensity, shape, texture, and from three different spatial scales. We compare the results of
<br/>three different mutual information measures: minimum redundancy and maximal relevance
<br/>(mRMR), normalized mutual information feature selection (NMIFS), and conditional
<br/>mutual information feature selection (CMIFS). We also show that by fusing features
<br/>extracted from six different methods we significantly improve the gender classification
<br/>results relative to those previously published, yielding a 99.13% gender classification
<br/>rate on the FERET database.
<br/>Keywords: Feature fusion, feature selection, gender classification, mutual information, real-time gender
<br/>classification
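<br/>As a rough, hedged sketch of one of the three selectors compared above (greedy mRMR), assuming scikit-learn's mutual information estimators as a stand-in for the estimators used in the article:
<pre>
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, k):
    """Greedily pick k features maximizing relevance I(f; gender)
    minus mean redundancy I(f; features already selected)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    remaining = set(range(X.shape[1])) - set(selected)
    while len(selected) < k and remaining:
        def score(j):
            redundancy = np.mean([
                mutual_info_regression(X[:, [j]], X[:, s], random_state=0)[0]
                for s in selected])
            return relevance[j] - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
</pre>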
<br/>1. INTRODUCTION
<br/>During the 90’s, one of the main issues addressed in the area of computer
<br/>vision was face detection. Many methods and applications were developed including
<br/>the face detection used in many digital cameras nowadays. Gender classification is
<br/>important in many possible applications including electronic marketing. Displays
<br/>at retail stores could show products and offers according to the person's gender as
<br/>the person passes in front of a camera at the store. This is not a simple task since
<br/>faces are not rigid and depend on illumination, pose, gestures, facial expressions,
<br/>occlusions (glasses), and other facial features (makeup, beard). The high variability
<br/>in the appearance of the face directly affects their detection and classification. Auto-
<br/>matic classification of gender from face images has a wide range of possible applica-
<br/>tions, ranging from human-computer interaction to applications in real-time
<br/>electronic marketing in retail stores (Shan 2012; Bekios-Calfa et al. 2011; Chu
<br/>et al. 2010; Perez et al. 2010a).
<br/>Automatic gender classification has a wide range of possible applications for
<br/>improving human-machine interaction and face identification methods (Irick et al.
</td></tr><tr><td>6888f3402039a36028d0a7e2c3df6db94f5cb9bb</td><td>Under review as a conference paper at ICLR 2018
<br/>CLASSIFIER-TO-GENERATOR ATTACK: ESTIMATION
<br/>OF TRAINING DATA DISTRIBUTION FROM CLASSIFIER
<br/>Anonymous authors
<br/>Paper under double-blind review
</td></tr><tr><td>574751dbb53777101502419127ba8209562c4758</td><td></td></tr><tr><td>57b8b28f8748d998951b5a863ff1bfd7ca4ae6a5</td><td></td></tr><tr><td>57101b29680208cfedf041d13198299e2d396314</td><td></td></tr><tr><td>57893403f543db75d1f4e7355283bdca11f3ab1b</td><td></td></tr><tr><td>57f8e1f461ab25614f5fe51a83601710142f8e88</td><td>Region Selection for Robust Face Verification using UMACE Filters
<br/>Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering,
<br/>Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia.
<br/>In this paper, we investigate the verification performances of four subdivided face images with varying expressions. The
<br/>objective of this study is to evaluate which part of the face image is more tolerant to facial expression and still retains its personal
<br/>characteristics due to the variations of the image. The Unconstrained Minimum Average Correlation Energy (UMACE) filter is
<br/>implemented to perform the verification process because of its advantages such as shift–invariance, ability to trade-off between
<br/>discrimination and distortion tolerance, e.g. variations in pose, illumination and facial expression. The facial expression
<br/>database of the Advanced Multimedia Processing (AMP) Lab at CMU is used in this study. Four equal-sized
<br/>face regions, i.e. bottom, top, left and right halves, are used for the purpose of this study. The results show that the bottom
<br/>half of the face region gives the best performance in terms of the PSR values with zero false acceptance rate (FAR) and zero false
<br/>rejection rate (FRR) compared to the other three regions.
<br/>1. Introduction
<br/>Face recognition is a well established field of research,
<br/>and a large number of algorithms have been proposed in the
<br/>literature. Various classifiers have been explored to improve
<br/>the accuracy of face classification. The basic approach is to
<br/>use distance-based methods which measure Euclidean distance
<br/>between any two vectors and then compare it with the preset
<br/>threshold. Neural Networks are often used as classifiers due
<br/>to their powerful generalization ability [1]. Support Vector
<br/>Machines (SVM) have been applied with encouraging results
<br/>[2].
<br/>In biometric applications, one of the important tasks is the
<br/>matching process between an individual biometrics against
<br/>the database that has been prepared during the enrolment
<br/>stage. For biometrics systems such as face authentication that
<br/>use images as personal characteristics, biometrics sensor
<br/>output and image pre-processing play an important role since
<br/>the quality of a biometric input can change significantly due
<br/>to illumination, noise and pose variations. Over the years,
<br/>researchers have studied the role of illumination variation,
<br/>pose variation, facial expression, and occlusions in affecting
<br/>the performance of face verification systems [3].
<br/>The Minimum Average Correlation Energy (MACE)
<br/>filters have been reported to be an alternative solution to these
<br/>problems because of the advantages such as shift-invariance,
<br/>close-form expressions and distortion-tolerance. MACE
<br/>filters have been successfully applied in the field of automatic
<br/>target recognition as well as in biometric verification [3][4].
<br/>Face and fingerprint verification using correlation filters have
<br/>been investigated in [5] and [6], respectively. Savvides et al.
<br/>performed face authentication and identification using
<br/>correlation filters based on illumination variation [7]. In the
<br/>process of implementing correlation filters, the number of
<br/>training images used depends on the level of distortions
<br/>applied to the images [5], [6].
<br/>In this study, we investigate which part of a face image is
<br/>more tolerant to facial expression and retains its personal
<br/>characteristics for the verification process. Four subdivided
<br/>face images, i.e. bottom, top, left and right halves, with
<br/>varying expressions are investigated. By identifying only the
<br/>region of the face that gives the highest verification
<br/>performance, that region can be used instead of the full-face
<br/>to reduce storage requirements.
<br/>2. Unconstrained Minimum Average Correlation
<br/>Energy (UMACE) Filter
<br/>Correlation filter theory and the descriptions of the design
<br/>of the correlation filter can be found in a tutorial survey paper
<br/>[8]. According to [4][6], correlation filter evolves from
<br/>matched filters which are optimal for detecting a known
<br/>reference image in the presence of additive white Gaussian
<br/>noise. However, the detection rate of matched filters
<br/>decreases significantly due to even the small changes of scale,
<br/>rotation and pose of the reference image.
<br/>In an effort to solve this problem, the Synthetic
<br/>Discriminant Function (SDF) filter and the Equal Correlation
<br/>Peak SDF (ECP SDF) filter were introduced, which allowed
<br/>several training images to be represented by a single
<br/>correlation filter. The SDF filter produces pre-specified values
<br/>called peak constraints. These peak values correspond to the
<br/>authentic class or impostor class when an image is tested.
<br/>However, the pre-specified peak values lead to misclassifications when
<br/>the sidelobes are larger than the controlled values at the origin.
<br/>Savvides et al. developed
<br/>the Minimum Average
<br/>Correlation Energy (MACE) filters [5]. This filter reduces the
<br/>large sidelobes and produces a sharp peak when the test
<br/>image is from the same class as the images that have been
<br/>used to design the filter. There are two kinds of variants that
<br/>can be used in order to obtain a sharp peak when the test
<br/>image belongs to the authentic class. The first MACE filter
<br/>variant minimizes the average correlation energy of the
<br/>training images while constraining the correlation output at
<br/>the origin to a specific value for each of the training images.
<br/>The second MACE filter variant is the Unconstrained
<br/>Minimum Average Correlation Energy (UMACE) filter
<br/>which also minimizes the average correlation output while
<br/>maximizing the correlation output at the origin [4].
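<br/>A hedged numpy sketch of the closed-form UMACE construction just described, i.e. h = D^-1 m in the frequency domain, with D the average power spectrum of the training images and m their mean spectrum; the small regularization constant is ours, not from the paper:
<pre>
import numpy as np

def umace_filter(train_imgs, eps=1e-12):
    """train_imgs: iterable of same-size 2D float arrays of one class."""
    X = np.stack([np.fft.fft2(im) for im in train_imgs])
    m = X.mean(axis=0)                   # mean training spectrum
    D = (np.abs(X) ** 2).mean(axis=0)    # average power spectrum, diag(D)
    return m / (D + eps)                 # h = D^-1 m, elementwise

def correlation_plane(h, test_img):
    """A sharp peak at the origin indicates the authentic class; the
    peak-to-sidelobe ratio (PSR) of this plane drives verification."""
    return np.real(np.fft.ifft2(np.fft.fft2(test_img) * np.conj(h)))
</pre>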
<br/>Proceedings of the International Conference on Electrical Engineering and Informatics, Institut Teknologi Bandung, Indonesia, June 17-19, 2007. ISBN 978-979-16338-0-2</td></tr><tr><td>57a1466c5985fe7594a91d46588d969007210581</td><td>A Taxonomy of Face-models for System Evaluation
<br/>Motivation and Data Types
<br/>Synthetic Data Types
<br/>Unverified – Have no underlying physical or
<br/>statistical basis
<br/>Physics-Based – Based on structure and
<br/>materials combined with the properties
<br/>formally modeled in physics.
<br/>Statistical – Use statistics from real
<br/>data/experiments to estimate/learn model
<br/>parameters. Generally have measurements
<br/>of accuracy
<br/>Guided Synthetic – Individual models based
<br/>on individual people. No attempt to capture
<br/>properties of large groups, a unique model
<br/>per person. For faces, guided models are
<br/>composed of 3D structure models and skin
<br/>textures, capturing many artifacts not
<br/>easily parameterized. Can be combined with
<br/>physics-based rendering to generate samples
<br/>under different conditions.
<br/>Semi-Synthetic – Use measured data such
<br/>as 2D images or 3D facial scans. These are
<br/>not truly synthetic as they are re-renderings
<br/>of real measured data.
<br/>Semi and Guided Synthetic data provide
<br/>higher operational relevance while
<br/>maintaining a high degree of control.
<br/>Generating statistically significant size
<br/>datasets for face matching system
<br/>evaluation is both a laborious and
<br/>expensive process.
<br/>There is a gap in datasets that allow for
<br/>evaluation of system issues including:
<br/> Long distance recognition
<br/> Blur caused by atmospherics
<br/> Various weather conditions
<br/> End to end systems evaluation
<br/>Our contributions:
<br/> Define a taxonomy of face-models
<br/>for controlled experimentations
<br/> Show how Synthetic addresses gaps
<br/>in system evaluation
<br/> Show a process for generating and
<br/>validating synthetic models
<br/> Use these models in long distance
<br/>face recognition system evaluation
<br/>Experimental Setup
<br/>Results and Conclusions
<br/>Example Models
<br/>Original Pie
<br/>Semi-
<br/>Synthetic
<br/>FaceGen
<br/>Animetrics
<br/>http://www.facegen.com
<br/>http://www.animetrics.com/products/Forensica.php
<br/>Guided-
<br/>Synthetic
<br/>Models
<br/> Models generated using the well
<br/>known CMU PIE [18] dataset. Each of
<br/>the 68 subjects of PIE were modeled
<br/>using a right profile and frontal
<br/>image from the lights subset.
<br/> Two modeling programs were used,
<br/>Facegen and Animetrics. Both
<br/>programs create OBJ files and
<br/>textures
<br/> Models are re-rendered using
<br/>custom display software built with
<br/>OpenGL, GLUT and DevIL libraries
<br/> Custom Display Box housing a BENQ SP820 high
<br/>powered projector rated at 4000 ANSI Lumens
<br/> Canon EOS 7D with a Sigma 800mm F5.6 EX APO
<br/>DG HSM lens and a 2x adapter imaging the display
<br/>from 214 meters
<br/>Normalized Example Captures
<br/>Real PIE 1 Animetrics
<br/>FaceGen
<br/>81M inside 214M outside
<br/>Real PIE 2
<br/> Pre-cropped images were used for the
<br/>commercial core
<br/> Ground truth eye points + geometric/lighting
<br/>normalization preprocessing before running
<br/>through the implementation of the V1
<br/>recognition algorithm found in [1].
<br/> Geo normalization highlights how the feature
<br/>region of the models looks very similar to
<br/>that of the real person.
<br/>Each test consisted of using 3 approximately frontal gallery images NOT used to
<br/>make the 3D model as the probe; the best score over the 3 images determined the score.
<br/>Even though the PIE-3D-20100224A–D sets were imaged on the same day, the V1
<br/>core scored differently on each highlighting the synthetic data’s ability to help
<br/>evaluate data capture methods and the effects of varying atmospherics. The ISO setting
<br/>varied, which affects the shutter speed, with higher ISO generally yielding less blur.
<br/>Dataset | Range(m) | ISO | V1 | Comm.
<br/>Original PIE Images | N/A | N/A | 100 | 100
<br/>FaceGen ScreenShots | N/A | N/A | 47.76 | 100
<br/>Animetrics Screenshots | N/A | N/A | 100 | 100
<br/>PIE-3D-20100210B | 81m | 500 | 100 | 100
<br/>PIE-3D-20100224A | 214m | 125 | 58.82 | 100
<br/>PIE-3D-20100224B | 214m | 125 | 45.59 | 100
<br/>PIE-3D-20100224C | 214m | 250 | 81.82 |
<br/>PIE-3D-20100224D | 214m | 400 | 79.1 |
<br/> The same (100 percent) recognition rate on screenshots as original images
<br/>validates the Animetrics guided synthetic models and fails the FaceGen models.
<br/> 100% recognition means the dataset is too small/easy; expanding pose and models
<br/>underway.
<br/> Expanded the photohead methodology into 3D
<br/> Developed a robust modeling system allowing for multiple configurations of a
<br/>single real life data set.
<br/> Gabor+SVM based V1[15] significantly more impacted by atmospheric blur than
<br/>the commercial algorithm
<br/>Key References:
<br/>[6 of 21] R. Beveridge, D. Bolme, M. Teixeira, and B. Draper. The CSU Face Identification Evaluation System User's Guide: Version 5.0. Technical report, CSU, 2003.
<br/>[8 of 21] T. Boult and W. Scheirer. Long range facial image acquisition and quality. In M. Tisarelli, S. Li, and R. Chellappa.
<br/>[15 of 21] N. Pinto, J. J. DiCarlo, and D. D. Cox. How far can you get with a modern face recognition test set using only simple features? In IEEE CVPR, 2009.
<br/>[18 of 21] T. Sim, S. Baker, and M. Bsat. The CMU Pose, Illumination and Expression (PIE) Database. In Proceedings of the IEEE F&G, May 2002.
</td></tr><tr><td>5721216f2163d026e90d7cd9942aeb4bebc92334</td><td></td></tr><tr><td>5753b2b5e442eaa3be066daa4a2ca8d8a0bb1725</td><td></td></tr><tr><td>57d37ad025b5796457eee7392d2038910988655a</td><td>GENERATIVE POSE ESTIMATION OF
<br/>