<!doctype html><html><head><title>First pages</title><link rel='stylesheet' href='reports.css'></head><body><h2>First pages</h2><table border='1' cellpadding='3' cellspacing='3'><tr><td>611961abc4dfc02b67edd8124abb08c449f5280a</td><td>Exploiting Image-trained CNN Architectures
<br/>for Unconstrained Video Classification
<br/><b>Northwestern University</b><br/>Evanston IL USA
<br/>Raytheon BBN Technologies
<br/>Cambridge, MA USA
<br/><b>University of Toronto</b></td><td>('2815926', 'Shengxin Zha', 'shengxin zha')<br/>('1689313', 'Florian Luisier', 'florian luisier')<br/>('2996926', 'Walter Andrews', 'walter andrews')<br/>('2897313', 'Nitish Srivastava', 'nitish srivastava')<br/>('1776908', 'Ruslan Salakhutdinov', 'ruslan salakhutdinov')</td><td>szha@u.northwestern.edu
<br/>{fluisier,wandrews}@bbn.com
<br/>{nitish,rsalakhu}@cs.toronto.edu
</td></tr><tr><td>610a4451423ad7f82916c736cd8adb86a5a64c59</td><td> Volume 4, Issue 11, November 2014 ISSN: 2277 128X
<br/>International Journal of Advanced Research in
<br/> Computer Science and Software Engineering
<br/> Research Paper
<br/> Available online at: www.ijarcsse.com
<br/>A Survey on Search Based Face Annotation Using Weakly
<br/>Labelled Facial Images
<br/>Department of Computer Engg, DYPIET Pimpri,
<br/><b>Savitri Bai Phule Pune University, Maharashtra India</b></td><td>('15731441', 'Shital A. Shinde', 'shital a. shinde')<br/>('3392505', 'Archana Chaugule', 'archana chaugule')</td><td></td></tr><tr><td>6156eaad00aad74c90cbcfd822fa0c9bd4eb14c2</td><td>Complex Bingham Distribution for Facial
<br/>Feature Detection
<br/>Eslam Mostafa1,2 and Aly Farag1
<br/><b>CVIP Lab, University of Louisville, Louisville, KY, USA</b><br/><b>Alexandria University, Alexandria, Egypt</b></td><td></td><td>{eslam.mostafa,aly.farag}@louisville.edu
</td></tr><tr><td>61ffedd8a70a78332c2bbdc9feba6c3d1fd4f1b8</td><td>Greedy Feature Selection for Subspace Clustering
<br/>Greedy Feature Selection for Subspace Clustering
<br/>Department of Electrical & Computer Engineering
<br/><b>Rice University, Houston, TX, 77005, USA</b><br/>Department of Electrical & Computer Engineering
<br/><b>Carnegie Mellon University, Pittsburgh, PA, 15213, USA</b><br/>Department of Electrical & Computer Engineering
<br/><b>Rice University, Houston, TX, 77005, USA</b><br/>Editor:
</td><td>('1746363', 'Eva L. Dyer', 'eva l. dyer')<br/>('1745861', 'Aswin C. Sankaranarayanan', 'aswin c. sankaranarayanan')<br/>('1746260', 'Richard G. Baraniuk', 'richard g. baraniuk')</td><td>e.dyer@rice.edu
<br/>saswin@ece.cmu.edu
<br/>richb@rice.edu
</td></tr><tr><td>61084a25ebe736e8f6d7a6e53b2c20d9723c4608</td><td></td><td></td><td></td></tr><tr><td>61542874efb0b4c125389793d8131f9f99995671</td><td>Fair comparison of skin detection approaches on publicly available datasets
<br/>a. DISI, Università di Bologna, Via Sacchi 3, 47521 Cesena, Italy.
<br/><b>b DEI - University of Padova, Via Gradenigo, 6 - 35131- Padova, Italy</b></td><td>('1707759', 'Alessandra Lumini', 'alessandra lumini')<br/>('1804258', 'Loris Nanni', 'loris nanni')</td><td></td></tr><tr><td>61f93ed515b3bfac822deed348d9e21d5dffe373</td><td>Deep Image Set Hashing
<br/><b>Columbia University</b><br/><b>Columbia University</b></td><td>('1710567', 'Jie Feng', 'jie feng')<br/>('2602265', 'Svebor Karaman', 'svebor karaman')<br/>('9546964', 'Shih-Fu Chang', 'shih-fu chang')</td><td>jiefeng@cs.columbia.edu
<br/>svebor.karaman@columbia.edu, sfchang@ee.columbia.edu
</td></tr><tr><td>6180bc0816b1776ca4b32ced8ea45c3c9ce56b47</td><td>Fast Randomized Algorithms for Convex Optimization and
<br/>Statistical Estimation
<br/>Electrical Engineering and Computer Sciences
<br/><b>University of California at Berkeley</b><br/>Technical Report No. UCB/EECS-2016-147
<br/>http://www.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-147.html
<br/>August 14, 2016
</td><td>('3173667', 'Mert Pilanci', 'mert pilanci')</td><td></td></tr><tr><td>61f04606528ecf4a42b49e8ac2add2e9f92c0def</td><td>Deep Deformation Network for Object Landmark
<br/>Localization
<br/>NEC Laboratories America, Department of Media Analytics
</td><td>('39960064', 'Xiang Yu', 'xiang yu')<br/>('46468682', 'Feng Zhou', 'feng zhou')</td><td>{xiangyu,manu}@nec-labs.com, zhfe99@gmail.com
</td></tr><tr><td>612075999e82596f3b42a80e6996712cc52880a3</td><td>CNNs with Cross-Correlation Matching for Face Recognition in Video
<br/>Surveillance Using a Single Training Sample Per Person
<br/><b>University of Texas at Arlington, TX, USA</b><br/>2École de technologie supérieure, Université du Québec, Montreal, Canada
</td><td>('3046171', 'Mostafa Parchami', 'mostafa parchami')<br/>('2805645', 'Saman Bashbaghi', 'saman bashbaghi')<br/>('1697195', 'Eric Granger', 'eric granger')</td><td>mostafa.parchami@mavs.uta.edu, bashbaghi@livia.etsmtl.ca and eric.granger@etsmtl.ca
</td></tr><tr><td>61efeb64e8431cfbafa4b02eb76bf0c58e61a0fa</td><td>Merging Datasets Through Deep learning
<br/>IBM Research
<br/><b>Yeshiva University</b><br/>IBM Research
</td><td>('35970154', 'Kavitha Srinivas', 'kavitha srinivas')<br/>('51428397', 'Abraham Gale', 'abraham gale')<br/>('2828094', 'Julian Dolby', 'julian dolby')</td><td></td></tr><tr><td>61e9e180d3d1d8b09f1cc59bdd9f98c497707eff</td><td>Semi-supervised learning of
<br/>facial attributes in video
<br/>1INRIA, WILLOW, Laboratoire d’Informatique de l’École Normale Supérieure,
<br/>ENS/INRIA/CNRS UMR 8548
<br/><b>University of Oxford</b></td><td>('1877079', 'Neva Cherniavsky', 'neva cherniavsky')<br/>('1785596', 'Ivan Laptev', 'ivan laptev')<br/>('1782755', 'Josef Sivic', 'josef sivic')<br/>('1688869', 'Andrew Zisserman', 'andrew zisserman')</td><td></td></tr><tr><td>6193c833ad25ac27abbde1a31c1cabe56ce1515b</td><td>Trojaning Attack on Neural Networks
<br/><b>Purdue University, 2Nanjing University</b></td><td>('3347155', 'Yingqi Liu', 'yingqi liu')<br/>('2026855', 'Shiqing Ma', 'shiqing ma')<br/>('3216258', 'Yousra Aafer', 'yousra aafer')<br/>('2547748', 'Wen-Chuan Lee', 'wen-chuan lee')<br/>('3293342', 'Juan Zhai', 'juan zhai')<br/>('3155328', 'Weihang Wang', 'weihang wang')<br/>('1771551', 'Xiangyu Zhang', 'xiangyu zhang')</td><td>liu1751@purdue.edu, ma229@purdue.edu, yaafer@purdue.edu, lee1938@purdue.edu, zhaijuan@nju.edu.cn,
<br/>wang1315@cs.purdue.edu, xyzhang@cs.purdue.edu
</td></tr><tr><td>614a7c42aae8946c7ad4c36b53290860f6256441</td><td>1
<br/>Joint Face Detection and Alignment using
<br/>Multi-task Cascaded Convolutional Networks
</td><td>('3393556', 'Kaipeng Zhang', 'kaipeng zhang')<br/>('3152448', 'Zhanpeng Zhang', 'zhanpeng zhang')<br/>('32787758', 'Zhifeng Li', 'zhifeng li')<br/>('33427555', 'Yu Qiao', 'yu qiao')</td><td></td></tr><tr><td>614079f1a0d0938f9c30a1585f617fa278816d53</td><td>Automatic Detection of ADHD and ASD from Expressive Behaviour in
<br/>RGBD Data
<br/><b>School of Computer Science, The University of Nottingham</b><br/>2Nottingham City Asperger Service & ADHD Clinic
<br/><b>Institute of Mental Health, The University of Nottingham</b></td><td>('2736086', 'Shashank Jaiswal', 'shashank jaiswal')<br/>('1795528', 'Michel F. Valstar', 'michel f. valstar')<br/>('38690723', 'Alinda Gillott', 'alinda gillott')<br/>('2491166', 'David Daley', 'david daley')</td><td></td></tr><tr><td>0d746111135c2e7f91443869003d05cde3044beb</td><td>PARTIAL FACE DETECTION FOR CONTINUOUS AUTHENTICATION
<br/>*Department of Electrical and Computer Engineering and the Center for Automation Research,
<br/><b>Rutgers, The State University of New Jersey, 723 CoRE, 94 Brett Rd, Piscataway, NJ</b><br/><b>UMIACS, University of Maryland, College Park, MD</b><br/>§Google Inc., 1600 Amphitheatre Parkway, Mountain View, CA 94043
</td><td>('3152615', 'Upal Mahbub', 'upal mahbub')<br/>('1741177', 'Vishal M. Patel', 'vishal m. patel')<br/>('2406413', 'Brandon Barbello', 'brandon barbello')<br/>('9215658', 'Rama Chellappa', 'rama chellappa')</td><td>umahbub@umiacs.umd.edu, vishal.m.patel@rutgers.edu,
<br/>dchandra@google.com, bbarbello@google.com, rama@umiacs.umd.edu
</td></tr><tr><td>0da75b0d341c8f945fae1da6c77b6ec345f47f2a</td><td>121
<br/>The Effect of Computer-Generated Descriptions on Photo-
<br/>Sharing Experiences of People With Visual Impairments
<br/><b>YUHANG ZHAO, Information Science, Cornell Tech, Cornell University</b><br/>SHAOMEI WU, Facebook Inc.
<br/>LINDSAY REYNOLDS, Facebook Inc.
<br/><b>SHIRI AZENKOT, Information Science, Cornell Tech, Cornell University</b><br/>Like sighted people, visually impaired people want to share photographs on social networking services, but
<br/>find it difficult to identify and select photos from their albums. We aimed to address this problem by
<br/>incorporating state-of-the-art computer-generated descriptions into Facebook’s photo-sharing feature. We
<br/>interviewed 12 visually impaired participants to understand their photo-sharing experiences and designed
<br/>a photo description feature for the Facebook mobile application. We evaluated this feature with six
<br/>participants in a seven-day diary study. We found that participants used the descriptions to recall and
<br/>organize their photos, but they hesitated to upload photos without a sighted person’s input. In addition to
<br/>basic information about photo content, participants wanted to know more details about salient objects and
<br/>people, and whether the photos reflected their personal aesthetic. We discuss these findings from the lens
<br/>of self-disclosure and self-presentation theories and propose new computer vision research directions that
<br/>will better support visual content sharing by visually impaired people.
<br/>CCS Concepts: • Information interfaces and presentations → Multimedia and information systems; •
<br/>Computer and society → Social issues
<br/>KEYWORDS
<br/>Visual impairments; computer-generated descriptions; SNSs; photo sharing; self-disclosure; self-presentation
<br/>ACM Reference format:
<br/>2017. The Effect of Computer-Generated Descriptions On Photo-Sharing Experiences of People With Visual
<br/>Impairments. Proc. ACM Hum.-Comput. Interact. 1, 1. 121 (January 2017), 24 pages.
<br/>DOI: 10.1145/3134756
<br/>1 INTRODUCTION
<br/>Sharing memories and experiences via photos is a common way to engage with others on social
<br/>networking services (SNSs) [39,46,51]. For instance, Facebook users uploaded more than 350
<br/>million photos a day [24] and Twitter, which initially supported only text in tweets, now has
<br/>more than 28.4% of tweets containing images [39]. Visually impaired people (both blind and low
<br/>vision) have a strong presence on SNS and are interested in sharing photos [50]. They take
<br/>photos for the same reasons that sighted people do: sharing daily moments with their sighted
</td><td></td><td></td></tr><tr><td>0d88ab0250748410a1bc990b67ab2efb370ade5d</td><td>Author(s) :
<br/>ERROR HANDLING IN MULTIMODAL BIOMETRIC SYSTEMS USING
<br/>RELIABILITY MEASURES (ThuPmOR6)
<br/>(EPFL, Switzerland)
<br/>(EPFL, Switzerland)
<br/>(EPFL, Switzerland)
<br/>(EPFL, Switzerland)
<br/>Plamen Prodanov
</td><td>('1753932', 'Krzysztof Kryszczuk', 'krzysztof kryszczuk')<br/>('1994765', 'Jonas Richiardi', 'jonas richiardi')<br/>('2439888', 'Andrzej Drygajlo', 'andrzej drygajlo')</td><td></td></tr><tr><td>0db43ed25d63d801ce745fe04ca3e8b363bf3147</td><td>Kernel Principal Component Analysis and its Applications in
<br/>Face Recognition and Active Shape Models
<br/><b>Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY 12180 USA</b></td><td>('4019552', 'Quan Wang', 'quan wang')</td><td>wangq10@rpi.edu
</td></tr><tr><td>0daf696253a1b42d2c9d23f1008b32c65a9e4c1e</td><td>Unsupervised Discovery of Facial Events
<br/>CMU-RI-TR-10-10
<br/>May 2010
<br/><b>Robotics Institute</b><br/><b>Carnegie Mellon University</b><br/>Pittsburgh, Pennsylvania 15213
<br/><b>c(cid:13) Carnegie Mellon University</b></td><td>('1757386', 'Feng Zhou', 'feng zhou')</td><td></td></tr><tr><td>0d538084f664b4b7c0e11899d08da31aead87c32</td><td>Deformable Part Descriptors for
<br/>Fine-grained Recognition and Attribute Prediction
<br/>Forrest Iandola1
<br/><b>ICSI / UC Berkeley 2Brigham Young University</b></td><td>('40565777', 'Ning Zhang', 'ning zhang')<br/>('2071606', 'Ryan Farrell', 'ryan farrell')<br/>('1753210', 'Trevor Darrell', 'trevor darrell')</td><td>1{nzhang,forresti,trevor}@eecs.berkeley.edu
<br/>2farrell@cs.byu.edu
</td></tr><tr><td>0dccc881cb9b474186a01fd60eb3a3e061fa6546</td><td>Effective Face Frontalization in Unconstrained Images
<br/><b>The open University of Israel. 2Adience</b><br/>Figure 1: Frontalized faces. Top: Input photos; bottom: our frontalizations,
<br/>obtained without estimating 3D facial shapes.
<br/>“Frontalization” is the process of synthesizing frontal facing views of faces
<br/>appearing in single unconstrained photos. Recent reports have suggested
<br/>that this process may substantially boost the performance of face recogni-
<br/>tion systems. This, by transforming the challenging problem of recognizing
<br/>faces viewed from unconstrained viewpoints to the easier problem of rec-
<br/>ognizing faces in constrained, forward facing poses. Previous frontalization
<br/>methods did this by attempting to approximate 3D facial shapes for each
<br/>query image. We observe that 3D face shape estimation from unconstrained
<br/>photos may be a harder problem than frontalization and can potentially in-
<br/>troduce facial misalignments. Instead, we explore the simpler approach of
<br/>using a single, unmodified, 3D surface as an approximation to the shape of
<br/>all input faces. We show that this leads to a straightforward, efficient and
<br/>easy to implement method for frontalization. More importantly, it produces
<br/>aesthetic new frontal views and is surprisingly effective when used for face
<br/>recognition and gender estimation.
<br/>Observation 1: For frontalization, one rough estimate of the 3D facial shape
<br/>seems as good as another, demonstrated by the following example:
<br/>Figure 2: Frontalization process. (a) facial features detected on a query
<br/>face and on a reference face (b) which was produced by rendering a tex-
<br/>tured 3D, CG model (c); (d) 2D query coordinates and corresponding 3D
<br/>coordinates on the model provide an estimated projection matrix, used to
<br/>back-project query texture to the reference coordinate system; (e) estimated
<br/>self-occlusions shown overlaid on the frontalized result (warmer colors re-
<br/>flect more occlusions.) Facial appearances in these regions are borrowed
<br/>from corresponding symmetric face regions; (f) our final frontalized result.
<br/>The top row shows surfaces estimated for the same query (left) by Hass-
<br/>ner [2] (mid) and DeepFaces [6] (right). Frontalizations are shown at the
<br/>bottom using our single-3D approach (left), Hassner (mid) and DeepFaces
<br/>(right). Clearly, both surfaces are rough approximations to the facial shape.
<br/>Moreover, despite the different surfaces, all results seem qualitatively simi-
<br/>lar, calling to question the need for shape estimation for frontalization.
<br/>Result 1: A novel frontalization method using a single, unmodified 3D ref-
<br/>erence shape is described in the paper (illustrated in Fig. 2).
<br/>Observation 2: A single, unmodified 3D reference shape produces aggres-
<br/>sively aligned faces, as can be observed in Fig. 3.
<br/>Result 2: Frontalized, strongly aligned faces elevate LFW [5] verification
<br/>accuracy and gender estimation rates on the Adience benchmark [1].
<br/>Conclusion: On the role of 2D appearance vs. 3D shape in face recognition,
<br/>our results suggest that 3D shape estimation may be unnecessary.
</td><td>('1756099', 'Tal Hassner', 'tal hassner')<br/>('35840854', 'Shai Harel', 'shai harel')<br/>('1753918', 'Eran Paz', 'eran paz')<br/>('1792038', 'Roee Enbar', 'roee enbar')</td><td></td></tr><tr><td>0d467adaf936b112f570970c5210bdb3c626a717</td><td></td><td></td><td></td></tr><tr><td>0d6b28691e1aa2a17ffaa98b9b38ac3140fb3306</td><td>Review of Perceptual Resemblance of Local
<br/>Plastic Surgery Facial Images using Near Sets
<br/>1,2 Department of Computer Technology,
<br/>YCCE Nagpur, India
</td><td>('9083090', 'Prachi V. Wagde', 'prachi v. wagde')<br/>('9218400', 'Roshni Khedgaonkar', 'roshni khedgaonkar')</td><td></td></tr><tr><td>0de91641f37b0a81a892e4c914b46d05d33fd36e</td><td>RAPS: Robust and Efficient Automatic Construction of Person-Specific
<br/>Deformable Models
<br/>∗Department of Computing,
<br/><b>Imperial College London</b><br/>180 Queens Gate,
<br/>†EEMCS,
<br/><b>University of Twente</b><br/>Drienerlolaan 5,
<br/>London SW7 2AZ, U.K.
<br/>7522 NB Enschede, The Netherlands
</td><td>('3320415', 'Christos Sagonas', 'christos sagonas')<br/>('1780393', 'Yannis Panagakis', 'yannis panagakis')<br/>('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')<br/>('1694605', 'Maja Pantic', 'maja pantic')</td><td>{c.sagonas, i.panagakis, s.zafeiriou, m.pantic}@imperial.ac.uk
</td></tr><tr><td>0df0d1adea39a5bef318b74faa37de7f3e00b452</td><td>Appearance-Based Gaze Estimation in the Wild
<br/>1Perceptual User Interfaces Group, 2Scalable Learning and Perception Group
<br/><b>Max Planck Institute for Informatics, Saarbr ucken, Germany</b></td><td>('2520795', 'Xucong Zhang', 'xucong zhang')<br/>('1751242', 'Yusuke Sugano', 'yusuke sugano')<br/>('1739548', 'Mario Fritz', 'mario fritz')<br/>('3194727', 'Andreas Bulling', 'andreas bulling')</td><td>{xczhang,sugano,mfritz,bulling}@mpi-inf.mpg.de
</td></tr><tr><td>0d3bb75852098b25d90f31d2f48fd0cb4944702b</td><td>A DATA-DRIVEN APPROACH TO CLEANING LARGE FACE DATASETS
<br/><b>Advanced Digital Sciences Center (ADSC), University of Illinois at Urbana-Champaign, Singapore</b></td><td>('1702224', 'Stefan Winkler', 'stefan winkler')</td><td></td></tr><tr><td>0db8e6eb861ed9a70305c1839eaef34f2c85bbaf</td><td></td><td></td><td></td></tr><tr><td>0d0b880e2b531c45ee8227166a489bf35a528cb9</td><td>Structure Preserving Object Tracking
<br/><b>Computer Vision Lab, Delft University of Technology</b><br/>Mekelweg 4, 2628 CD Delft, The Netherlands
</td><td>('2883723', 'Lu Zhang', 'lu zhang')<br/>('1803520', 'Laurens van der Maaten', 'laurens van der maaten')</td><td>{lu.zhang, l.j.p.vandermaaten}@tudelft.nl
</td></tr><tr><td>0d3882b22da23497e5de8b7750b71f3a4b0aac6b</td><td>Research Article
<br/>Context Is Routinely Encoded
<br/>During Emotion Perception
<br/>21(4), 595–599
<br/>© The Author(s) 2010
<br/>DOI: 10.1177/0956797610363547
<br/>http://pss.sagepub.com
<br/><b>Boston College; 2Psychiatric Neuroimaging Program, Massachusetts General Hospital, Harvard Medical School; and 3Athinoula A. Martinos</b><br/>Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
</td><td>('1731779', 'Lisa Feldman Barrett', 'lisa feldman barrett')</td><td></td></tr><tr><td>0dbf4232fcbd52eb4599dc0760b18fcc1e9546e9</td><td></td><td></td><td></td></tr><tr><td>0d760e7d762fa449737ad51431f3ff938d6803fe</td><td>LCDet: Low-Complexity Fully-Convolutional Neural Networks for
<br/>Object Detection in Embedded Systems
<br/>UC San Diego ∗
<br/>Gokce Dane
<br/>Qualcomm Inc.
<br/>UC San Diego
<br/>Qualcomm Inc.
<br/>UC San Diego
</td><td>('2906509', 'Subarna Tripathi', 'subarna tripathi')<br/>('1801046', 'Byeongkeun Kang', 'byeongkeun kang')<br/>('3484765', 'Vasudev Bhaskaran', 'vasudev bhaskaran')<br/>('30518518', 'Truong Nguyen', 'truong nguyen')</td><td>stripathi@ucsd.edu
<br/>gokced@qti.qualcomm.com
<br/>bkkang@ucsd.edu
<br/>vasudevb@qti.qualcomm.com
<br/>tqn001@eng.ucsd.edu
</td></tr><tr><td>0d3068b352c3733c9e1cc75e449bf7df1f7b10a4</td><td>Context based Facial Expression Analysis in the
<br/>Wild
<br/><b>School of Computer Science, CECS, Australian National University, Australia</b><br/>http://users.cecs.anu.edu.au/∼adhall
</td><td>('1735697', 'Abhinav Dhall', 'abhinav dhall')</td><td>abhinav.dhall@anu.edu.au
</td></tr><tr><td>0dd72887465046b0f8fc655793c6eaaac9c03a3d</td><td>Real-time Head Orientation from a Monocular
<br/>Camera using Deep Neural Network
<br/>KAIST, Republic of Korea
</td><td>('3250619', 'Byungtae Ahn', 'byungtae ahn')<br/>('2870153', 'Jaesik Park', 'jaesik park')</td><td>[btahn,jspark]@rcv.kaist.ac.kr, iskweon77@kaist.ac.kr
</td></tr><tr><td>0d087aaa6e2753099789cd9943495fbbd08437c0</td><td></td><td></td><td></td></tr><tr><td>0d8415a56660d3969449e77095be46ef0254a448</td><td></td><td></td><td></td></tr><tr><td>0dfa460a35f7cab4705726b6367557b9f7842c65</td><td>Modeling Spatial-Temporal Clues in a Hybrid Deep
<br/>Learning Framework for Video Classification
<br/>School of Computer Science, Shanghai Key Lab of Intelligent Information Processing,
<br/><b>Fudan University, Shanghai, China</b></td><td>('3099139', 'Zuxuan Wu', 'zuxuan wu')<br/>('31825486', 'Xi Wang', 'xi wang')<br/>('1717861', 'Yu-Gang Jiang', 'yu-gang jiang')<br/>('1743864', 'Hao Ye', 'hao ye')<br/>('1713721', 'Xiangyang Xue', 'xiangyang xue')</td><td>{zxwu, xwang10, ygj, haoye10, xyxue}@fudan.edu.cn
</td></tr><tr><td>0d14261e69a4ad4140ce17c1d1cea76af6546056</td><td>Adding Facial Actions into 3D Model Search to Analyse
<br/>Behaviour in an Unconstrained Environment
<br/><b>Imaging Science and Biomedical Engineering, The University of Manchester, UK</b></td><td>('1753123', 'Angela Caunce', 'angela caunce')</td><td></td></tr><tr><td>0dbacb4fd069462841ebb26e1454b4d147cd8e98</td><td>Recent Advances in Discriminant Non-negative
<br/>Matrix Factorization
<br/><b>Aristotle University of Thessaloniki</b><br/>Thessaloniki, Greece, 54124
</td><td>('1793625', 'Symeon Nikitidis', 'symeon nikitidis')<br/>('1737071', 'Anastasios Tefas', 'anastasios tefas')<br/>('1698588', 'Ioannis Pitas', 'ioannis pitas')</td><td>Email: {nikitidis,tefas,pitas}@aiia.csd.auth.gr
</td></tr><tr><td>0db36bf08140d53807595b6313201a7339470cfe</td><td>Moving Vistas: Exploiting Motion for Describing Scenes
<br/>Department of Electrical and Computer Engineering
<br/><b>Center for Automation Research, UMIACS, University of Maryland, College Park, MD</b></td><td>('34711525', 'Nitesh Shroff', 'nitesh shroff')<br/>('9215658', 'Rama Chellappa', 'rama chellappa')</td><td>{nshroff,pturaga,rama}@umiacs.umd.edu
</td></tr><tr><td>0d781b943bff6a3b62a79e2c8daf7f4d4d6431ad</td><td>EmotiW 2016: Video and Group-Level Emotion
<br/>Recognition Challenges
<br/>Roland Goecke
<br/>Tom Gedeon
<br/>David R. Cheriton School of Computer Science, <b>University of Waterloo</b>, Canada
<br/>Human-Centred Technology Research Centre, <b>University of Canberra</b>, Australia
<br/>Human Centred Computing, <b>Australian National University</b>, Australia
</td><td>('1735697', 'Abhinav Dhall', 'abhinav dhall')<br/>('2942991', 'Jyoti Joshi', 'jyoti joshi')<br/>('1773895', 'Jesse Hoey', 'jesse hoey')</td><td>abhinav.dhall@uwaterloo.ca
<br/>roland.goecke@ieee.org
<br/>jyoti.joshi@uwaterloo.ca
<br/>jhoey@cs.uwaterloo.ca
<br/>tom.gedeon@anu.edu.au
</td></tr><tr><td>0d735e7552af0d1dcd856a8740401916e54b7eee</td><td></td><td></td><td></td></tr><tr><td>0d06b3a4132d8a2effed115a89617e0a702c957a</td><td></td><td></td><td></td></tr><tr><td>0d2dd4fc016cb6a517d8fb43a7cc3ff62964832e</td><td></td><td></td><td></td></tr><tr><td>0d33b6c8b4d1a3cb6d669b4b8c11c2a54c203d1a</td><td>Detection and Tracking of Faces in Videos: A Review
<br/>© 2016 IJEDR | Volume 4, Issue 2 | ISSN: 2321-9939
<br/>of Related Work
<br/>1Student, 2Assistant Professor
<br/>1, 2Dept. of Electronics & Comm., S S I E T, Punjab, India
</td><td>('48816689', 'Seema Saini', 'seema saini')</td><td></td></tr><tr><td>0d1d9a603b08649264f6e3b6d5a66bf1e1ac39d2</td><td><b>University of Nebraska - Lincoln</b><br/>US Army Research
<br/>2015
<br/>U.S. Department of Defense
<br/>Effects of emotional expressions on persuasion
<br/><b>University of Southern California</b><br/><b>University of Southern California</b><br/><b>University of Southern California</b><br/><b>University of Southern California</b><br/>Follow this and additional works at: http://digitalcommons.unl.edu/usarmyresearch
<br/>Wang, Yuqiong; Lucas, Gale; Khooshabeh, Peter; de Melo, Celso; and Gratch, Jonathan, "Effects of emotional expressions on
<br/>persuasion" (2015). US Army Research. Paper 340.
<br/>http://digitalcommons.unl.edu/usarmyresearch/340
</td><td>('2522587', 'Yuqiong Wang', 'yuqiong wang')<br/>('2419453', 'Gale Lucas', 'gale lucas')<br/>('2635945', 'Peter Khooshabeh', 'peter khooshabeh')<br/>('1977901', 'Celso de Melo', 'celso de melo')<br/>('1730824', 'Jonathan Gratch', 'jonathan gratch')</td><td>DigitalCommons@University of Nebraska - Lincoln
<br/>University of Southern California, wangyuqiong@ymail.com
</td></tr><tr><td>0da4c3d898ca2fff9e549d18f513f4898e960aca</td><td>Wang, Y., Thomas, J., Weissgerber, S. C., Kazemini, S., Ul-Haq, I., &
<br/>Quadflieg, S. (2015). The Headscarf Effect Revisited: Further Evidence for a
<br/>336. 10.1068/p7940
<br/>Peer reviewed version
<br/>Link to published version (if available):
<br/>10.1068/p7940
<br/>Link to publication record in Explore Bristol Research
<br/>PDF-document
<br/><b>University of Bristol - Explore Bristol Research</b>
<br/> </td><td></td><td>open-access@bristol.ac.uk
</td></tr><tr><td>951368a1a8b3c5cd286726050b8bdf75a80f7c37</td><td>A Family of Online Boosting Algorithms
<br/><b>University of California, San Diego</b><br/><b>University of California, Merced</b><br/><b>University of California, San Diego</b></td><td>('2490700', 'Boris Babenko', 'boris babenko')<br/>('37144787', 'Ming-Hsuan Yang', 'ming-hsuan yang')<br/>('1769406', 'Serge Belongie', 'serge belongie')</td><td>bbabenko@cs.ucsd.edu
<br/>mhyang@ucmerced.edu
<br/>sjb@cs.ucsd.edu
</td></tr><tr><td>956e9b69b3366ed3e1670609b53ba4a7088b8b7e</td><td>Semi-supervised dimensionality reduction for image retrieval
<br/><b>aIBM China Research Lab, Beijing, China</b><br/><b>bTsinghua University, Beijing, China</b></td><td></td><td></td></tr><tr><td>956317de62bd3024d4ea5a62effe8d6623a64e53</td><td>Lighting Analysis and Texture Modification of 3D Human
<br/>Face Scans
<br/>Author: Zhang, Paul; Zhao, Sanqiang; Gao, Yongsheng
<br/>Published: 2007
<br/>Conference Title: Digital Image Computing Techniques and Applications
<br/>DOI: https://doi.org/10.1109/DICTA.2007.4426825
<br/>Copyright Statement: © 2007 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
<br/>Downloaded from: http://hdl.handle.net/10072/17889
<br/>Link to published version: http://www.ieee.org/
<br/>Griffith Research Online: https://research-repository.griffith.edu.au
</td><td></td><td></td></tr><tr><td>959bcb16afdf303c34a8bfc11e9fcc9d40d76b1c</td><td>Temporal Coherency based Criteria for Predicting
<br/>Video Frames using Deep Multi-stage Generative
<br/>Adversarial Networks
<br/>Visualization and Perception Laboratory
<br/>Department of Computer Science and Engineering
<br/><b>Indian Institute of Technology Madras, Chennai, India</b></td><td>('29901316', 'Prateep Bhattacharjee', 'prateep bhattacharjee')<br/>('1680398', 'Sukhendu Das', 'sukhendu das')</td><td>1prateepb@cse.iitm.ac.in, 2sdas@iitm.ac.in
</td></tr><tr><td>951f21a5671a4cd14b1ef1728dfe305bda72366f</td><td>International Journal of Science and Research (IJSR)
<br/>ISSN (Online): 2319-7064
<br/>Impact Factor (2012): 3.358
<br/>Use of ℓ2/3-norm Sparse Representation for Facial
<br/>Expression Recognition
<br/><b>MATS University, MATS School of Engineering and Technology, Arang, Raipur, India</b><br/><b>MATS University, MATS School of Engineering and Technology, Arang, Raipur, India</b>
</td><td></td><td></td></tr><tr><td>95f26d1c80217706c00b6b4b605a448032b93b75</td><td>New Robust Face Recognition Methods Based on Linear
<br/>Regression
<br/><b>Bio-Computing Research Center, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, Guangdong Province, China, 2 Key Laboratory of Network</b><br/>Oriented Intelligent Computation, Shenzhen, Guangdong Province, China
</td><td>('2208128', 'Jian-Xun Mi', 'jian-xun mi')<br/>('2650895', 'Jin-Xing Liu', 'jin-xing liu')<br/>('40342210', 'Jiajun Wen', 'jiajun wen')</td><td></td></tr><tr><td>95f12d27c3b4914e0668a268360948bce92f7db3</td><td>Interactive Facial Feature Localization
<br/><b>University of Illinois at Urbana Champaign, Urbana, IL 61801, USA</b><br/>2 Adobe Systems Inc., San Jose, CA 95110, USA
<br/>3 Facebook Inc., Menlo Park, CA 94025, USA
</td><td>('36474335', 'Vuong Le', 'vuong le')<br/>('1721019', 'Jonathan Brandt', 'jonathan brandt')<br/>('1739208', 'Thomas S. Huang', 'thomas s. huang')</td><td></td></tr><tr><td>9547a7bce2b85ef159b2d7c1b73dea82827a449f</td><td>Facial Expression Recognition Using Gabor Motion Energy Filters
<br/>Dept. Computer Science Engineering
<br/>UC San Diego
<br/>Marian S. Bartlett
<br/><b>Institute for Neural Computation</b><br/>UC San Diego
</td><td>('4072965', 'Tingfan Wu', 'tingfan wu')<br/>('1741200', 'Javier R. Movellan', 'javier r. movellan')</td><td>tingfan@gmail.com
<br/>{marni,movellan}@mplab.ucsd.edu
</td></tr><tr><td>9513503867b29b10223f17c86e47034371b6eb4f</td><td>Comparison of optimisation algorithms for
<br/>deformable template matching
<br/><b>Link oping University, Computer Vision Laboratory</b><br/>ISY, SE-581 83 Link¨oping, SWEDEN
</td><td>('1797883', 'Vasileios Zografos', 'vasileios zografos')</td><td>zografos@isy.liu.se ⋆
</td></tr><tr><td>955e2a39f51c0b6f967199942d77625009e580f9</td><td>NAMING FACES ON THE WEB
<br/>a thesis
<br/>submitted to the department of computer engineering
<br/><b>and the institute of engineering and science</b><br/><b>of bilkent university</b><br/>in partial fulfillment of the requirements
<br/>for the degree of
<br/>master of science
<br/>By
<br/>July, 2010
</td><td>('34946851', 'Hilal Zitouni', 'hilal zitouni')</td><td></td></tr><tr><td>956c634343e49319a5e3cba4f2bd2360bdcbc075</td><td>IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 36, NO. 4, AUGUST 2006
<br/>A Novel Incremental Principal Component Analysis
<br/>and Its Application for Face Recognition
</td><td>('1776124', 'Haitao Zhao', 'haitao zhao')<br/>('1768574', 'Pong Chi Yuen', 'pong chi yuen')</td><td></td></tr><tr><td>95ea564bd983129ddb5535a6741e72bb1162c779</td><td>Multi-Task Learning by Deep Collaboration and
<br/>Application in Facial Landmark Detection
<br/><b>Laval University, Qu bec, Canada</b></td><td>('2758280', 'Ludovic Trottier', 'ludovic trottier')<br/>('2310695', 'Philippe Giguère', 'philippe giguère')<br/>('1700926', 'Brahim Chaib-draa', 'brahim chaib-draa')</td><td>ludovic.trottier.1@ulaval.ca
<br/>{philippe.giguere,brahim.chaib-draa}@ift.ulaval.ca
</td></tr><tr><td>958c599a6f01678513849637bec5dc5dba592394</td><td>Noname manuscript No.
<br/>(will be inserted by the editor)
<br/>Generalized Zero-Shot Learning for Action
<br/>Recognition with Web-Scale Video Data
<br/>Received: date / Accepted: date
</td><td>('2473509', 'Kun Liu', 'kun liu')<br/>('8984539', 'Wenbing Huang', 'wenbing huang')</td><td></td></tr><tr><td>950171acb24bb24a871ba0d02d580c09829de372</td><td>Speeding up 2D-Warping for Pose-Invariant Face Recognition
<br/><b>Human Language Technology and Pattern Recognition Group, RWTH Aachen University, Germany</b></td><td>('1804963', 'Harald Hanselmann', 'harald hanselmann')<br/>('1685956', 'Hermann Ney', 'hermann ney')</td><td>surname@cs.rwth-aachen.de
</td></tr><tr><td>59be98f54bb4ed7a2984dc6a3c84b52d1caf44eb</td><td>A Deep-Learning Approach to Facial Expression Recognition
<br/>with Candid Images
<br/><b>CUNY City College</b><br/>Alibaba. Inc
<br/><b>IBM China Research Lab</b><br/><b>CUNY Graduate Center and City College</b></td><td>('40617554', 'Wei Li', 'wei li')<br/>('1713016', 'Min Li', 'min li')<br/>('1703625', 'Zhong Su', 'zhong su')<br/>('4697712', 'Zhigang Zhu', 'zhigang zhu')</td><td>lwei000@citymail.cuny.edu
<br/>mushi.lm@alibaba.inc
<br/>suzhong@cn.ibm.com
<br/>zhu@cs.ccny.cuny.edu
</td></tr><tr><td>59fc69b3bc4759eef1347161e1248e886702f8f7</td><td>Final Report of Final Year Project
<br/>HKU-Face: A Large Scale Dataset for
<br/>Deep Face Recognition
<br/>3035141841
<br/>COMP4801 Final Year Project
<br/>Project Code: 17007
</td><td>('40456402', 'Haoyu Li', 'haoyu li')</td><td></td></tr><tr><td>591a737c158be7b131121d87d9d81b471c400dba</td><td>Affect Valence Inference From Facial Action Unit Spectrograms
<br/>MIT Media Lab
<br/>MA 02139, USA
<br/>MIT Media Lab
<br/>MA 02139, USA
<br/><b>Harvard University</b><br/>MA 02138, USA
<br/>Rosalind Picard
<br/>MIT Media Lab
<br/>MA 02139, USA
</td><td>('1801452', 'Daniel McDuff', 'daniel mcduff')<br/>('1754451', 'Rana El Kaliouby', 'rana el kaliouby')<br/>('2010950', 'Karim Kassam', 'karim kassam')</td><td>djmcduff@mit.edu
<br/>kaliouby@mit.edu
<br/>kskassam@fas.harvard.edu
<br/>picard@mit.edu
</td></tr><tr><td>59bfeac0635d3f1f4891106ae0262b81841b06e4</td><td>Face Verification Using the LARK Face
<br/>Representation
</td><td>('3326805', 'Hae Jong Seo', 'hae jong seo')<br/>('1718280', 'Peyman Milanfar', 'peyman milanfar')</td><td></td></tr><tr><td>59efb1ac77c59abc8613830787d767100387c680</td><td>DIF : Dataset of Intoxicated Faces for Drunk Person
<br/>Identification
<br/><b>Indian Institute of Technology Ropar</b><br/><b>Indian Institute of Technology Ropar</b></td><td>('46241736', 'Devendra Pratap Yadav', 'devendra pratap yadav')<br/>('1735697', 'Abhinav Dhall', 'abhinav dhall')</td><td>2014csb1010@iitrpr.ac.in
<br/>abhinav@iitrpr.ac.in
</td></tr><tr><td>590628a9584e500f3e7f349ba7e2046c8c273fcf</td><td></td><td></td><td></td></tr><tr><td>593234ba1d2e16a887207bf65d6b55bbc7ea2247</td><td>Combining Language Sources and Robust
<br/>Semantic Relatedness for Attribute-Based
<br/>Knowledge Transfer
<br/>1 Department of Computer Science, TU Darmstadt
<br/><b>Max Planck Institute for Informatics, Saarbr ucken, Germany</b></td><td>('34849128', 'Marcus Rohrbach', 'marcus rohrbach')<br/>('37718254', 'Michael Stark', 'michael stark')<br/>('1697100', 'Bernt Schiele', 'bernt schiele')</td><td></td></tr><tr><td>59eefa01c067a33a0b9bad31c882e2710748ea24</td><td>IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
<br/>Fast Landmark Localization
<br/>with 3D Component Reconstruction and CNN for
<br/>Cross-Pose Recognition
</td><td>('24020847', 'Hung-Cheng Shie', 'hung-cheng shie')<br/>('9640380', 'Cheng-Hua Hsieh', 'cheng-hua hsieh')</td><td></td></tr><tr><td>59e2037f5079794cb9128c7f0900a568ced14c2a</td><td>Clothing and People - A Social Signal Processing Perspective
<br/><b>Faculty of Mathematics and Computer Science, University of Barcelona, Barcelona, Spain</b><br/>2 Computer Vision Center, Barcelona, Spain
<br/><b>University of Verona, Verona, Italy</b></td><td>('2084534', 'Maedeh Aghaei', 'maedeh aghaei')<br/>('10724083', 'Federico Parezzan', 'federico parezzan')<br/>('2837527', 'Mariella Dimiccoli', 'mariella dimiccoli')<br/>('1724155', 'Petia Radeva', 'petia radeva')<br/>('1723008', 'Marco Cristani', 'marco cristani')</td><td></td></tr><tr><td>59dac8b460a89e03fa616749a08e6149708dcc3a</td><td>A Convergent Solution to Matrix Bidirectional Projection Based Feature
<br/>Extraction with Application to Face Recognition ∗
<br/><b>School of Computer, National University of Defense Technology</b><br/>No 137, Yanwachi Street, Kaifu District,
<br/>Changsha, Hunan Province, 410073, P.R. China
</td><td>('3144121', 'Yubin Zhan', 'yubin zhan')<br/>('1969736', 'Jianping Yin', 'jianping yin')<br/>('33793976', 'Xinwang Liu', 'xinwang liu')</td><td>E-mail: {YubinZhan,JPYin,XWLiu}@nudt.edu.cn
</td></tr><tr><td>59e9934720baf3c5df3a0e1e988202856e1f83ce</td><td>UA-DETRAC: A New Benchmark and Protocol for
<br/>Multi-Object Detection and Tracking
<br/><b>University at Albany, SUNY</b><br/>2 School of Computer and Control Engineering, UCAS
<br/>3 Department of Electrical and Computer Engineering, UCSD
<br/>4 National Laboratory of Pattern Recognition, CASIA
<br/><b>University at Albany, SUNY</b><br/><b>Division of Computer Science and Engineering, Hanyang University</b><br/>7 Electrical Engineering and Computer Science, UCM
</td><td>('39774417', 'Longyin Wen', 'longyin wen')<br/>('1910738', 'Dawei Du', 'dawei du')<br/>('1773408', 'Zhaowei Cai', 'zhaowei cai')<br/>('39643145', 'Ming-Ching Chang', 'ming-ching chang')<br/>('3245785', 'Honggang Qi', 'honggang qi')<br/>('33047058', 'Jongwoo Lim', 'jongwoo lim')<br/>('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')</td><td></td></tr><tr><td>59d225486161b43b7bf6919b4a4b4113eb50f039</td><td>Complex Event Recognition from Images with Few Training Examples
<br/>Irfan Essa∗
<br/><b>Georgia Institute of Technology</b><br/><b>University of Southern California</b></td><td>('2308598', 'Unaiza Ahsan', 'unaiza ahsan')<br/>('1726241', 'Chen Sun', 'chen sun')<br/>('1945508', 'James Hays', 'james hays')</td><td>uahsan3@gatech.edu
<br/>chensun@google.com
<br/>hays@gatech.edu
<br/>irfan@cc.gatech.edu
</td></tr><tr><td>5945464d47549e8dcaec37ad41471aa70001907f</td><td>Noname manuscript No.
<br/>(will be inserted by the editor)
<br/>Every Moment Counts: Dense Detailed Labeling of Actions in Complex
<br/>Videos
<br/>Received: date / Accepted: date
</td><td>('34149749', 'Serena Yeung', 'serena yeung')<br/>('3216322', 'Li Fei-Fei', 'li fei-fei')</td><td></td></tr><tr><td>59c9d416f7b3d33141cc94567925a447d0662d80</td><td>Universität des Saarlandes
<br/>Max-Planck-Institut für Informatik
<br/>AG5
<br/>Matrix factorization over max-times
<br/>algebra for data mining
<br/>Masterarbeit im Fach Informatik
<br/>Master’s Thesis in Computer Science
<br/>von / by
<br/>angefertigt unter der Leitung von / supervised by
<br/>begutachtet von / reviewers
<br/>November 2013
<br/>UNIVERSITASSARAVIENSIS</td><td>('2297723', 'Sanjar Karaev', 'sanjar karaev')<br/>('1804891', 'Pauli Miettinen', 'pauli miettinen')<br/>('1804891', 'Pauli Miettinen', 'pauli miettinen')<br/>('1751591', 'Gerhard Weikum', 'gerhard weikum')</td><td></td></tr><tr><td>59bece468ed98397d54865715f40af30221aa08c</td><td>Deformable Part-based Robust Face Detection
<br/>under Occlusion by Using Face Decomposition
<br/>into Face Components
<br/>Darijan Marčetić, Slobodan Ribarić
<br/><b>University of Zagreb, Faculty of Electrical Engineering and Computing, Croatia</b></td><td></td><td>{darijan.marcetic, slobodan.ribaric}@fer.hr
</td></tr><tr><td>59a35b63cf845ebf0ba31c290423e24eb822d245</td><td>The FaceSketchID System: Matching Facial
<br/>Composites to Mugshots
<br/>tedious, and may not
</td><td>('34393045', 'Hu Han', 'hu han')<br/>('6680444', 'Anil K. Jain', 'anil k. jain')</td><td></td></tr><tr><td>59f325e63f21b95d2b4e2700c461f0136aecc171</td><td>3070
<br/>FOR FACE RECOGNITION
<br/>1. INTRODUCTION
</td><td></td><td></td></tr><tr><td>59420fd595ae745ad62c26ae55a754b97170b01f</td><td>Objects as Attributes for Scene Classification
<br/><b>Stanford University</b></td><td>('33642044', 'Li-Jia Li', 'li-jia li')<br/>('2888806', 'Hao Su', 'hao su')<br/>('7892285', 'Yongwhan Lim', 'yongwhan lim')<br/>('3216322', 'Li Fei-Fei', 'li fei-fei')</td><td></td></tr><tr><td>599adc0dcd4ebcc2a868feedd243b5c3c1bd1d0a</td><td>How Robust is 3D Human Pose Estimation to Occlusion?
<br/><b>Visual Computing Institute, RWTH Aachen University</b><br/>2Robert Bosch GmbH, Corporate Research
</td><td>('2699877', 'Timm Linder', 'timm linder')<br/>('1789756', 'Bastian Leibe', 'bastian leibe')</td><td>{sarandi,leibe}@vision.rwth-aachen.de
<br/>{timm.linder,kaioliver.arras}@de.bosch.com
</td></tr><tr><td>5922e26c9eaaee92d1d70eae36275bb226ecdb2e</td><td>Boosting Classification Based Similarity
<br/>Learning by using Standard Distances
<br/>Departament d’Informàtica, Universitat de València
<br/>Av. de la Universitat s/n. 46100-Burjassot (Spain)
</td><td>('2275648', 'Emilia López-Iñesta', 'emilia lópez-iñesta')<br/>('3138833', 'Miguel Arevalillo-Herráez', 'miguel arevalillo-herráez')<br/>('2627759', 'Francisco Grimaldo', 'francisco grimaldo')</td><td>eloi@alumni.uv.es,miguel.arevalillo@uv.es
<br/>francisco.grimaldo@uv.es
</td></tr><tr><td>59d8fa6fd91cdb72cd0fa74c04016d79ef5a752b</td><td>The Menpo Facial Landmark Localisation Challenge:
<br/>A step towards the solution
<br/>Department of Computing
<br/><b>Imperial College London</b></td><td>('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')<br/>('2814229', 'George Trigeorgis', 'george trigeorgis')<br/>('1688922', 'Grigorios Chrysos', 'grigorios chrysos')<br/>('3234063', 'Jiankang Deng', 'jiankang deng')<br/>('1719912', 'Jie Shen', 'jie shen')</td><td>{s.zafeiriou, g.trigeorgis, g.chrysos, j.deng16, jie.shen07}@imperial.ac.uk
</td></tr><tr><td>59e75aad529b8001afc7e194e21668425119b864</td><td>Membrane Nonrigid Image Registration
<br/>Department of Computer Science
<br/><b>Drexel University</b><br/>Philadelphia, PA
</td><td>('1708819', 'Ko Nishino', 'ko nishino')</td><td></td></tr><tr><td>59d45281707b85a33d6f50c6ac6b148eedd71a25</td><td>Rank Minimization across Appearance and Shape for AAM Ensemble Fitting
<br/>2The Commonwealth Scientific and Industial Research Organization (CSIRO)
<br/><b>Queensland University of Technology</b></td><td>('2699730', 'Xin Cheng', 'xin cheng')<br/>('1729760', 'Sridha Sridharan', 'sridha sridharan')<br/>('1820249', 'Simon Lucey', 'simon lucey')</td><td>1{x2.cheng,s.sridharan}@qut.edu.au
<br/>2{jason.saragih,simon.lucey}@csiro.au
</td></tr><tr><td>59319c128c8ac3c88b4ab81088efe8ae9c458e07</td><td>Effective Computer Model For Recognizing
<br/>Nationality From Frontal Image
<br/>Bat-Erdene.B
<br/>Information and Communication Management School
<br/><b>The University of the Humanities</b><br/>Ulaanbaatar, Mongolia
</td><td></td><td>e-mail: basubaer@gmail.com
</td></tr><tr><td>59a6c9333c941faf2540979dcfcb5d503a49b91e</td><td>Sampling Clustering
<br/><b>School of Computer Science and Technology, Shandong University, China</b></td><td>('51016741', 'Ching Tarn', 'ching tarn')<br/>('2413471', 'Yinan Zhang', 'yinan zhang')<br/>('48260402', 'Ye Feng', 'ye feng')</td><td>∗i@ctarn.io
</td></tr><tr><td>59031a35b0727925f8c47c3b2194224323489d68</td><td>Sparse Variation Dictionary Learning for Face Recognition with A Single
<br/>Training Sample Per Person
<br/>ETH Zurich
<br/>Switzerland
</td><td>('5828998', 'Meng Yang', 'meng yang')<br/>('1681236', 'Luc Van Gool', 'luc van gool')</td><td>{yang,vangool}@vision.ee.ethz.ch
</td></tr><tr><td>926c67a611824bc5ba67db11db9c05626e79de96</td><td>1913
<br/>Enhancing Bilinear Subspace Learning
<br/>by Element Rearrangement
</td><td>('38188040', 'Dong Xu', 'dong xu')<br/>('1698982', 'Shuicheng Yan', 'shuicheng yan')<br/>('1686911', 'Stephen Lin', 'stephen lin')<br/>('1739208', 'Thomas S. Huang', 'thomas s. huang')<br/>('9546964', 'Shih-Fu Chang', 'shih-fu chang')</td><td></td></tr><tr><td>923ede53b0842619831e94c7150e0fc4104e62f7</td><td>978-1-4799-9988-0/16/$31.00 ©2016 IEEE
<br/>ICASSP 2016
</td><td></td><td></td></tr><tr><td>92b61b09d2eed4937058d0f9494d9efeddc39002</td><td>Under review in IJCV manuscript No.
<br/>(will be inserted by the editor)
<br/>BoxCars: Improving Vehicle Fine-Grained Recognition using
<br/>3D Bounding Boxes in Traffic Surveillance
<br/>Received: date / Accepted: date
</td><td>('34891870', 'Jakub Sochor', 'jakub sochor')</td><td></td></tr><tr><td>9264b390aa00521f9bd01095ba0ba4b42bf84d7e</td><td>Displacement Template with Divide-&-Conquer
<br/>Algorithm for Significantly Improving
<br/>Descriptor based Face Recognition Approaches
<br/><b>Wenzhou University, China</b><br/><b>University of Northern British Columbia, Canada</b><br/><b>Aberystwyth University, UK</b></td><td>('1692551', 'Liang Chen', 'liang chen')<br/>('33500699', 'Ling Yan', 'ling yan')<br/>('1990125', 'Yonghuai Liu', 'yonghuai liu')<br/>('39388942', 'Lixin Gao', 'lixin gao')<br/>('3779849', 'Xiaoqin Zhang', 'xiaoqin zhang')</td><td></td></tr><tr><td>92be73dffd3320fe7734258961fe5a5f2a43390e</td><td>TRANSFERRING FACE VERIFICATION NETS TO PAIN AND EXPRESSION REGRESSION
<br/>Dept. of {Computer Science1, Electrical & Computer Engineering2, Radiation Oncology3, Cognitive Science4}
<br/><b>Johns Hopkins University, 3400 N. Charles St, Baltimore, MD 21218, USA</b><br/>5Dept. of EE, UESTC, 2006 Xiyuan Ave, Chengdu, Sichuan 611731, China
<br/><b>Tsinghua University, Beijing 100084, China</b></td><td>('39369840', 'Feng Wang', 'feng wang')<br/>('40031188', 'Xiang Xiang', 'xiang xiang')<br/>('1692867', 'Chang Liu', 'chang liu')<br/>('1709073', 'Trac D. Tran', 'trac d. tran')<br/>('3207112', 'Austin Reiter', 'austin reiter')<br/>('1678633', 'Gregory D. Hager', 'gregory d. hager')<br/>('2095823', 'Harry Quon', 'harry quon')<br/>('1709439', 'Jian Cheng', 'jian cheng')<br/>('1746141', 'Alan L. Yuille', 'alan l. yuille')</td><td></td></tr><tr><td>920a92900fbff22fdaaef4b128ca3ca8e8d54c3e</td><td>LEARNING PATTERN TRANSFORMATION MANIFOLDS WITH PARAMETRIC ATOM
<br/>SELECTION
<br/>Ecole Polytechnique F´ed´erale de Lausanne (EPFL)
<br/>Signal Processing Laboratory (LTS4)
<br/>Switzerland-1015 Lausanne
</td><td>('12636684', 'Elif Vural', 'elif vural')<br/>('1703189', 'Pascal Frossard', 'pascal frossard')</td><td></td></tr><tr><td>9207671d9e2b668c065e06d9f58f597601039e5e</td><td>Face Detection Using a 3D Model on
<br/>Face Keypoints
</td><td>('2455529', 'Adrian Barbu', 'adrian barbu')<br/>('3019469', 'Gary Gramajo', 'gary gramajo')</td><td></td></tr><tr><td>924b14a9e36d0523a267293c6d149bca83e73f3b</td><td>Volume 5, Number 2, pp. 133 -164
<br/>Development and Evaluation of a Method
<br/>Employed to Identify Internal State
<br/>Utilizing Eye Movement Data
<br/>Graduate School of Media and
<br/><b>Governance, Keio University</b><br/>(JAPAN)
<br/>Faculty of Environmental
<br/><b>Information, Keio University</b><br/>(JAPAN)
</td><td>('31726964', 'Noriyuki Aoyama', 'noriyuki aoyama')<br/>('1889276', 'Tadahiko Fukuda', 'tadahiko fukuda')</td><td></td></tr><tr><td>9282239846d79a29392aa71fc24880651826af72</td><td>Antonakos et al. EURASIP Journal on Image and Video Processing 2014, 2014:14
<br/>http://jivp.eurasipjournals.com/content/2014/1/14
<br/>RESEARCH
<br/>Open Access
<br/>Classification of extreme facial events in sign
<br/>language videos
</td><td>('2788012', 'Epameinondas Antonakos', 'epameinondas antonakos')<br/>('1738119', 'Vassilis Pitsikalis', 'vassilis pitsikalis')<br/>('1750686', 'Petros Maragos', 'petros maragos')</td><td></td></tr><tr><td>92115b620c7f653c847f43b6c4ff0470c8e55dab</td><td>Training Deformable Object Models for Human
<br/>Detection Based on Alignment and Clustering
<br/>Department of Computer Science,
<br/>Centre of Biological Signalling Studies (BIOSS),
<br/><b>University of Freiburg, Germany</b></td><td>('2127987', 'Benjamin Drayer', 'benjamin drayer')<br/>('1710872', 'Thomas Brox', 'thomas brox')</td><td>{drayer,brox}@cs.uni-freiburg.de
</td></tr><tr><td>928b8eb47288a05611c140d02441660277a7ed54</td><td>Exploiting Images for Video Recognition with Hierarchical Generative
<br/>Adversarial Networks
<br/>1 Beijing Laboratory of Intelligent Information Technology, School of Computer Science,
<br/><b>Big Data Research Center, University of Electronic Science and Technology of China</b><br/><b>Beijing Institute of Technology</b></td><td>('3450614', 'Feiwu Yu', 'feiwu yu')<br/>('2125709', 'Xinxiao Wu', 'xinxiao wu')<br/>('9177510', 'Yuchao Sun', 'yuchao sun')<br/>('2055900', 'Lixin Duan', 'lixin duan')</td><td>{yufeiwu,wuxinxiao,sunyuchao}@bit.edu.cn, lxduan@uestc.edu.cn
</td></tr><tr><td>926e97d5ce2a6e070f8ec07c5aa7f91d3df90ba0</td><td>Facial Expression Recognition Using Enhanced Deep 3D Convolutional Neural
<br/>Networks
<br/>Department of Electrical and Computer Engineering
<br/><b>University of Denver, Denver, CO</b></td><td>('3093835', 'Mohammad H. Mahoor', 'mohammad h. mahoor')</td><td>behzad.hasani@du.edu and mmahoor@du.edu
</td></tr><tr><td>92c2dd6b3ac9227fce0a960093ca30678bceb364</td><td>Provided by the author(s) and NUI Galway in accordance with publisher policies. Please cite the published
<br/>version when available.
<br/>Title: On color texture normalization for active appearance models
<br/>Author(s): Ionita, Mircea C.; Corcoran, Peter M.; Buzuloiu, Vasile
<br/>Publication Date: 2009-05-12
<br/>Publication Information: Ionita, M. C., Corcoran, P., & Buzuloiu, V. (2009). On Color Texture Normalization for Active Appearance Models. Image Processing, IEEE Transactions on, 18(6), 1372-1378.
<br/>Publisher: IEEE
<br/>Link to publisher's version: http://dx.doi.org/10.1109/TIP.2009.2017163
<br/>Item record: http://hdl.handle.net/10379/1350
<br/>Some rights reserved. For more information, please see the item record link above.
</td><td></td><td></td></tr><tr><td>92e464a5a67582d5209fa75e3b29de05d82c7c86</td><td>Reconstruction for Feature Disentanglement in Pose-invariant Face Recognition
<br/><b>Rutgers University, NJ, USA</b><br/>2NEC Labs America, CA, USA
</td><td>('4340744', 'Xi Peng', 'xi peng')<br/>('39960064', 'Xiang Yu', 'xiang yu')<br/>('1729571', 'Kihyuk Sohn', 'kihyuk sohn')</td><td>{xpeng.cs, dnm}@rutgers.edu, {xiangyu, ksohn, manu}@nec-labs.com
</td></tr><tr><td>927ba64123bd4a8a31163956b3d1765eb61e4426</td><td>Customer satisfaction measuring based on the most
<br/>significant facial emotion
<br/>To cite this version:
<br/>most significant facial emotion. 15th IEEE International Multi-Conference on Systems, Signals
<br/>Devices (SSD 2018), Mar 2018, Hammamet, Tunisia. <hal-01790317>
<br/>HAL Id: hal-01790317
<br/>https://hal-upec-upem.archives-ouvertes.fr/hal-01790317
<br/>Submitted on 11 May 2018
</td><td>('50101862', 'Rostom Kachouri', 'rostom kachouri')<br/>('50101862', 'Rostom Kachouri', 'rostom kachouri')</td><td></td></tr><tr><td>922838dd98d599d1d229cc73896d55e7a769aa7c</td><td>Learning Hierarchical Representations for Face Verification
<br/>with Convolutional Deep Belief Networks
<br/>Erik Learned-Miller
<br/><b>University of Massachusetts</b><br/><b>University of Michigan</b><br/><b>University of Massachusetts</b><br/>Amherst, MA
<br/>Ann Arbor, MI
<br/>Amherst, MA
</td><td>('3219900', 'Gary B. Huang', 'gary b. huang')<br/>('1697141', 'Honglak Lee', 'honglak lee')</td><td>gbhuang@cs.umass.edu
<br/>honglak@eecs.umich.edu
<br/>elm@cs.umass.edu
</td></tr><tr><td>9294739e24e1929794330067b84f7eafd286e1c8</td><td>Expression Recognition using Elastic Graph Matching
<br/>Cairong Zhou
<br/><b>Research Center for Learning Science, Southeast University, Nanjing 210096, China</b><br/><b>Southeast University, Nanjing 210096, China</b></td><td>('40622743', 'Yujia Cao', 'yujia cao')<br/>('40608983', 'Wenming Zheng', 'wenming zheng')<br/>('1718117', 'Li Zhao', 'li zhao')</td><td>Email: yujia_cao@seu.edu.cn
</td></tr><tr><td>92fada7564d572b72fd3be09ea3c39373df3e27c</td><td></td><td></td><td></td></tr><tr><td>927ad0dceacce2bb482b96f42f2fe2ad1873f37a</td><td>Interest-Point based Face Recognition System
<br/>Spain
<br/>1. Introduction
<br/>Among all applications of face recognition systems, surveillance is one of the most
<br/>challenging ones. In such an application, the goal is to detect known criminals in crowded
<br/>environments, like airports or train stations. Some attempts have been made, like those of
<br/>Tokyo (Engadget, 2006) or Mainz (Deutsche Welle, 2006), with limited success.
<br/>The first task to be carried out in an automatic surveillance system involves the detection of
<br/>all the faces in the images taken by the video cameras. Current face detection algorithms are
<br/>highly reliable and thus, they will not be the focus of our work. Some of the best performing
<br/>examples are the Viola-Jones algorithm (Viola & Jones, 2004) or the Schneiderman-Kanade
<br/>algorithm (Schneiderman & Kanade, 2000).
<br/>The second task to be carried out involves the comparison of all detected faces among the
<br/>database of known criminals. The ideal behaviour of an automatic system performing this
<br/>task would be to get a 100% correct identification rate, but this behaviour is far from the
<br/>capabilities of current face recognition algorithms. Assuming that there will be false
<br/>identifications, supervised surveillance systems seem to be the most realistic option: the
<br/>automatic system issues an alarm whenever it detects a possible match with a criminal, and
<br/>a human decides whether it is a false alarm or not. Figure 1 shows an example.
<br/>However, even in a supervised scenario the requirements for the face recognition algorithm
<br/>are extremely high: the false alarm rate must be low enough as to allow the human operator
<br/>to cope with it; and the percentage of undetected criminals must be kept to a minimum in
<br/>order to ensure security. Fulfilling both requirements at the same time is the main challenge,
<br/>as a reduction in false alarm rate usually implies an increase of the percentage of undetected
<br/>criminals.
<br/>We propose a novel face recognition system based on the use of interest point detectors and
<br/>local descriptors. In order to check the performance of our system, and particularly its
<br/>performance in a surveillance application, we present experimental results in terms of
<br/>Receiver Operating Characteristic curves or ROC curves. From the experimental results, it
<br/>becomes clear that our system outperforms classical appearance based approaches.
</td><td>('35178717', 'Cesar Fernandez', 'cesar fernandez')<br/>('3686544', 'Maria Asuncion Vicente', 'maria asuncion vicente')<br/>('2422580', 'Miguel Hernandez', 'miguel hernandez')</td><td></td></tr><tr><td>929bd1d11d4f9cbc638779fbaf958f0efb82e603</td><td>This is the author’s version of a work that was submitted/accepted for pub-
<br/>lication in the following source:
<br/>Zhang, Ligang & Tjondronegoro, Dian W. (2010) Improving the perfor-
<br/>mance of facial expression recognition using dynamic, subtle and regional
<br/>features.
<br/>In Kok Wai Wong, B. Sumudu U. Mendis, &amp; Abdesselam
<br/>Bouzerdoum (Eds.) Neural Information Processing. Models and Applica-
<br/>tions, Lecture Notes in Computer Science, Sydney, N.S.W, pp. 582-589.
<br/>This file was downloaded from: http://eprints.qut.edu.au/43788/
<br/>© Copyright 2010 Springer-Verlag
<br/>Conference proceedings published, by Springer Verlag, will be available
<br/>via Lecture Notes in Computer Science http://www.springer.de/comp/lncs/
<br/>Notice: Changes introduced as a result of publishing processes such as
<br/>copy-editing and formatting may not be reflected in this document. For a
<br/>definitive version of this work, please refer to the published source:
<br/>http://dx.doi.org/10.1007/978-3-642-17534-3_72
</td><td></td><td></td></tr><tr><td>923ec0da8327847910e8dd71e9d801abcbc93b08</td><td>Hide-and-Seek: Forcing a Network to be Meticulous for
<br/>Weakly-supervised Object and Action Localization
<br/><b>University of California, Davis</b></td><td>('19553871', 'Krishna Kumar Singh', 'krishna kumar singh')<br/>('1883898', 'Yong Jae Lee', 'yong jae lee')</td><td></td></tr><tr><td>0c741fa0966ba3ee4fc326e919bf2f9456d0cd74</td><td>Facial Age Estimation by Learning from Label Distributions
<br/><b>School of Mathematical Sciences, Monash University, VIC 3800, Australia</b><br/><b>School of Computer Science and Engineering, Southeast University, Nanjing 210096, China</b><br/><b>National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, China</b></td><td>('1735299', 'Xin Geng', 'xin geng')<br/>('2848275', 'Kate Smith-Miles', 'kate smith-miles')<br/>('1692625', 'Zhi-Hua Zhou', 'zhi-hua zhou')</td><td></td></tr><tr><td>0c435e7f49f3e1534af0829b7461deb891cf540a</td><td>Capturing Global Semantic Relationships for Facial Action Unit Recognition
<br/><b>Rensselaer Polytechnic Institute</b><br/><b>School of Electrical Engineering and Automation, Harbin Institute of Technology</b><br/><b>School of Computer Science and Technology, University of Science and Technology of China</b></td><td>('2860279', 'Ziheng Wang', 'ziheng wang')<br/>('1830523', 'Yongqiang Li', 'yongqiang li')<br/>('1791319', 'Shangfei Wang', 'shangfei wang')<br/>('1726583', 'Qiang Ji', 'qiang ji')</td><td>{wangz10,liy23,jiq}@rpi.edu
<br/>sfwang@ustc.edu.cn
</td></tr><tr><td>0cb7e4c2f6355c73bfc8e6d5cdfad26f3fde0baf</td><td>International Journal of Artificial Intelligence & Applications (IJAIA), Vol. 5, No. 3, May 2014
<br/>FACIAL EXPRESSION RECOGNITION BASED ON
<br/><b>Computer Science, Engineering and Mathematics School, Flinders University, Australia</b><br/><b>Computer Science, Engineering and Mathematics School, Flinders University, Australia</b></td><td>('3105876', 'Humayra Binte Ali', 'humayra binte ali')<br/>('1739260', 'David M W Powers', 'david m w powers')</td><td></td></tr><tr><td>0c30f6303dc1ff6d05c7cee4f8952b74b9533928</td><td>Pareto Discriminant Analysis
<br/>Karim T. Abou–Moustafa
<br/>Centre of Intelligent Machines
<br/><b>The Robotics Institute</b><br/>Centre of Intelligent Machines
<br/><b>McGill University</b><br/><b>Carnegie Mellon University</b><br/><b>McGill University</b></td><td>('1707876', 'Fernando De la Torre', 'fernando de la torre')<br/>('1701344', 'Frank P. Ferrie', 'frank p. ferrie')</td><td>karimt@cim.mcgill.ca
<br/>ftorre@cs.cmu.edu
<br/>ferrie@cim.mcgill.ca
</td></tr><tr><td>0ccc535d12ad2142a8310d957cc468bbe4c63647</td><td>Better Exploiting OS-CNNs for Better Event Recognition in Images
<br/><b>Shenzhen Key Lab of CVPR, Shenzhen Institutes of Advanced Technology, CAS, China</b></td><td>('33345248', 'Limin Wang', 'limin wang')<br/>('1915826', 'Zhe Wang', 'zhe wang')<br/>('2072196', 'Sheng Guo', 'sheng guo')<br/>('33427555', 'Yu Qiao', 'yu qiao')</td><td>{07wanglimin, buptwangzhe2012, guosheng1001}@gmail.com, yu.qiao@siat.ac.cn
</td></tr><tr><td>0c8a0a81481ceb304bd7796e12f5d5fa869ee448</td><td>International Journal of Fuzzy Logic and Intelligent Systems, vol. 10, no. 2, June 2010, pp. 95-100
<br/>A Spatial Regularization of LDA for Face Recognition
<br/><b>Gangnung-Wonju National University</b><br/>123 Chibyun-Dong, Kangnung, 210-702, Korea
</td><td>('39845108', 'Lae-Jeong Park', 'lae-jeong park')</td><td>Tel : +82-33-640-2389, Fax : +82-33-646-0740, E-mail : ljpark@gwnu.ac.kr
</td></tr><tr><td>0c36c988acc9ec239953ff1b3931799af388ef70</td><td>Face Detection Using Improved Faster RCNN
<br/>Huawei Cloud BU, China
<br/>Figure1.Face detection results of FDNet1.0
</td><td>('2568329', 'Changzheng Zhang', 'changzheng zhang')<br/>('5084124', 'Xiang Xu', 'xiang xu')<br/>('2929196', 'Dandan Tu', 'dandan tu')</td><td>{zhangzhangzheng, xuxiang12, tudandan}@huawei.com
</td></tr><tr><td>0c5ddfa02982dcad47704888b271997c4de0674b</td><td></td><td></td><td></td></tr><tr><td>0c79a39a870d9b56dc00d5252d2a1bfeb4c295f1</td><td>Face Recognition in Videos by Label Propagation
<br/><b>International Institute of Information Technology, Hyderabad, India</b></td><td>('37956314', 'Vijay Kumar', 'vijay kumar')<br/>('3185334', 'Anoop M. Namboodiri', 'anoop m. namboodiri')</td><td>{vijaykumar.r@research., anoop@, jawahar@}iiit.ac.in
</td></tr><tr><td>0cccf576050f493c8b8fec9ee0238277c0cfd69a</td><td></td><td></td><td></td></tr><tr><td>0cdb49142f742f5edb293eb9261f8243aee36e12</td><td>Combined Learning of Salient Local Descriptors and Distance Metrics
<br/>for Image Set Face Verification
<br/>NICTA, PO Box 6020, St Lucia, QLD 4067, Australia
<br/><b>University of Queensland, School of ITEE, QLD 4072, Australia</b></td><td>('1781182', 'Conrad Sanderson', 'conrad sanderson')<br/>('3026404', 'Yongkang Wong', 'yongkang wong')<br/>('2270092', 'Brian C. Lovell', 'brian c. lovell')</td><td></td></tr><tr><td>0c069a870367b54dd06d0da63b1e3a900a257298</td><td>Author manuscript, published in "ICANN 2011 - International Conference on Artificial Neural Networks (2011)"
</td><td></td><td></td></tr><tr><td>0c75c7c54eec85e962b1720755381cdca3f57dfb</td><td>2212
<br/>Face Landmark Fitting via Optimized Part
<br/>Mixtures and Cascaded Deformable Model
</td><td>('39960064', 'Xiang Yu', 'xiang yu')<br/>('1768190', 'Junzhou Huang', 'junzhou huang')<br/>('1753384', 'Shaoting Zhang', 'shaoting zhang')<br/>('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')</td><td></td></tr><tr><td>0cf2eecf20cfbcb7f153713479e3206670ea0e9c</td><td>Privacy-Protective-GAN for Face De-identification
<br/><b>Temple University</b></td><td>('50117915', 'Yifan Wu', 'yifan wu')<br/>('46319628', 'Fan Yang', 'fan yang')<br/>('1805398', 'Haibin Ling', 'haibin ling')</td><td>{yifan.wu, fyang, hbling} @temple.edu
</td></tr><tr><td>0ca36ecaf4015ca4095e07f0302d28a5d9424254</td><td>Improving Bag-of-Visual-Words Towards Effective Facial Expressive
<br/>Image Classification
<br/>1Univ. Grenoble Alpes, CNRS, Grenoble INP∗ , GIPSA-lab, 38000 Grenoble, France
<br/>Keywords:
<br/>BoVW, k-means++, Relative Conjunction Matrix, SIFT, Spatial Pyramids, TF.IDF.
</td><td>('10762131', 'Dawood Al Chanti', 'dawood al chanti')<br/>('1788869', 'Alice Caplier', 'alice caplier')</td><td>dawood.alchanti@gmail.com
</td></tr><tr><td>0c1d85a197a1f5b7376652a485523e616a406273</td><td>Joint Registration and Representation Learning for Unconstrained Face
<br/>Identification
<br/><b>University of Canberra, Australia, Data61 - CSIRO and ANU, Australia</b><br/><b>Khalifa University, Abu Dhabi, United Arab Emirates</b></td><td>('2008898', 'Munawar Hayat', 'munawar hayat')<br/>('1802072', 'Naoufel Werghi', 'naoufel werghi')</td><td>{munawar.hayat,roland.goecke}@canberra.edu.au, salman.khan@csiro.au, naoufel.werghi@kustar.ac.ae
</td></tr><tr><td>0ca66283f4fb7dbc682f789fcf6d6732006befd5</td><td>Active Dictionary Learning for Image Representation
<br/>Department of Electrical and Computer Engineering
<br/><b>Rutgers, The State University of New Jersey, Piscataway, NJ</b></td><td>('37799945', 'Tong Wu', 'tong wu')<br/>('9208982', 'Anand D. Sarwate', 'anand d. sarwate')<br/>('2138101', 'Waheed U. Bajwa', 'waheed u. bajwa')</td><td></td></tr><tr><td>0c7f27d23a162d4f3896325d147f412c40160b52</td><td>Models and Algorithms for
<br/>Vision through the Atmosphere
<br/>Submitted in partial fulfillment of the
<br/>requirements for the degree
<br/>of Doctor of Philosophy
<br/>in the Graduate School of Arts and Sciences
<br/><b>COLUMBIA UNIVERSITY</b><br/>2003
</td><td>('1779052', 'Srinivasa G. Narasimhan', 'srinivasa g. narasimhan')</td><td></td></tr><tr><td>0cfca73806f443188632266513bac6aaf6923fa8</td><td>Predictive Uncertainty in Large Scale Classification
<br/>using Dropout - Stochastic Gradient Hamiltonian
<br/>Monte Carlo.
<br/>Vergara, Diego∗1, Hernández, Sergio∗2, Valdenegro-Toro, Matías∗∗3 and Jorquera, Felipe∗4.
<br/>∗Laboratorio de Procesamiento de Información Geoespacial, Universidad Católica del Maule, Chile.
<br/>∗∗German Research Centre for Artificial Intelligence, Bremen, Germany.
</td><td></td><td>Email: 1diego.vergara@alu.ucm.cl, 2shernandez@ucm.cl,3matias.valdenegro@dfki.de,
<br/>4f.jorquera.uribe@gmail.com
</td></tr><tr><td>0c20fd90d867fe1be2459223a3cb1a69fa3d44bf</td><td>A Monte Carlo Strategy to Integrate Detection
<br/>and Model-Based Face Analysis
<br/>Department for Mathematics and Computer Science
<br/><b>University of Basel, Switzerland</b></td><td>('2591294', 'Andreas Forster', 'andreas forster')<br/>('34460642', 'Bernhard Egger', 'bernhard egger')<br/>('1687079', 'Thomas Vetter', 'thomas vetter')</td><td>sandro.schoenborn,andreas.forster,bernhard.egger,thomas.vetter@unibas.ch
</td></tr><tr><td>0c2875bb47db3698dbbb3304aca47066978897a4</td><td>Recurrent Models for Situation Recognition
<br/><b>University of Illinois at Urbana-Champaign</b></td><td>('36508529', 'Arun Mallya', 'arun mallya')<br/>('1749609', 'Svetlana Lazebnik', 'svetlana lazebnik')</td><td>{amallya2,slazebni}@illinois.edu
</td></tr><tr><td>0c3f7272a68c8e0aa6b92d132d1bf8541c062141</td><td>Hindawi Publishing Corporation
<br/>e Scientific World Journal
<br/>Volume 2014, Article ID 672630, 6 pages
<br/>http://dx.doi.org/10.1155/2014/672630
<br/>Research Article
<br/>Kruskal-Wallis-Based Computationally Efficient Feature
<br/>Selection for Face Recognition
<br/><b>Foundation University, Rawalpindi 46000, Pakistan</b><br/><b>Shaheed Zulfikar Ali Bhutto Institute of Science and Technology Islamabad</b><br/>Islamabad 44000, Pakistan
<br/><b>International Islamic University, Islamabad 44000, Pakistan</b><br/>Received 5 December 2013; Accepted 10 February 2014; Published 21 May 2014
<br/>Academic Editors: S. Balochian, V. Bhatnagar, and Y. Zhang
<br/>which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
<br/>Face recognition and its applications attain much more importance in today’s technological world. Most of the
<br/>existing work used frontal face images to classify face images. However, these techniques fail when applied on real-world face images.
<br/>The proposed technique effectively extracts the prominent facial features. Most of the features are redundant and do not contribute
<br/>to representing the face. In order to eliminate those redundant features, a computationally efficient algorithm is used to select the more
<br/>discriminative face features. Extracted features are then passed to the classification step. In the classification step, different classifiers
<br/>are combined in an ensemble to enhance the recognition accuracy rate, as a single classifier is unable to achieve high accuracy. Experiments are
<br/>performed on standard face database images and results are compared with existing techniques.
<br/>1. Introduction
<br/>Face recognition is becoming more acceptable in the domain
<br/>of computer vision and pattern recognition. The authenti-
<br/>cation systems based on the traditional ID card and pass-
<br/>word are nowadays replaced by the techniques which are
<br/>more preferable in order to handle the security issues. The
<br/>authentication systems based on biometrics are one of the
<br/>substitutes which are independent of the user’s memory and
<br/>not subjected to loss. Among those systems, face recognition
<br/>gains special attention because of the security it provides and
<br/>because it is independent of the high accuracy equipment
<br/>unlike iris and recognition based on the fingerprints.
<br/>Feature selection in pattern recognition is specifying the
<br/>subset of significant features to decrease the data dimensions
<br/>and at the same time it provides the set of selective features.
<br/>Image is represented by set of features in methods used for
<br/>feature extraction and each feature plays a vital role in the
<br/>process of recognition. The feature selection algorithm drops
<br/>all the unrelated features with the highly acceptable precision
<br/>rate as compared to some other pattern classification problem
<br/>in which higher precision rate cannot be obtained by greater
<br/>number of feature sets [1].
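<br/>As a concrete illustration of this kind of filter-style selection (a hedged sketch only; the exact statistic, thresholds and classifier ensemble used by the authors are not reproduced here), each feature can be scored with the Kruskal-Wallis H statistic across the face classes and only the highest-scoring features kept:
<pre><code>import numpy as np
from scipy.stats import kruskal

def kruskal_wallis_selection(X, y, keep=100):
    """Rank every feature (column of X) by its Kruskal-Wallis H statistic
    computed across the class labels y, and keep the `keep` most
    discriminative columns. X: (n_samples, n_features), y: (n_samples,)."""
    classes = np.unique(y)
    h_scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        groups = [X[y == c, j] for c in classes]
        h_scores[j], _ = kruskal(*groups)
    selected = np.argsort(h_scores)[::-1][:keep]   # largest H first
    return X[:, selected], selected
</code></pre>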
<br/>The feature selected by the classifiers plays a vital role
<br/>in producing the best features that are vigorous to the
<br/>inconsistent environment, for example, change in expressions
<br/>and other barriers. Local (texture-based) and global (holistic)
<br/>approaches are the two approaches used for face recognition
<br/>[2]. Local approaches characterized the face in the form of
<br/>geometric measurements which matches the unfamiliar face
<br/>with the closest face from database. Geometric measurements
<br/>contain angles and the distance of different facial points,
<br/>for example, mouth position, nose length, and eyes. Global
<br/>features are extracted by the use of algebraic methods like
<br/>PCA (principal component analysis) and ICA (independent
<br/>component analysis) [3]. PCA shows a quick response to
<br/>light and variation as it serves inner and outer classes
<br/>fairly. In face recognition, LDA (linear discriminant analysis)
<br/>usually performs better than PCA but separable creation is
<br/>not precise in classification. Good recognition rates can be
<br/>produced by transformation techniques like DCT (discrete
<br/>cosine transform) and DWT (discrete wavelet transform) [4].
</td><td>('8652075', 'Sajid Ali Khan', 'sajid ali khan')<br/>('9955306', 'Ayyaz Hussain', 'ayyaz hussain')<br/>('1959869', 'Abdul Basit', 'abdul basit')<br/>('2388005', 'Sheeraz Akram', 'sheeraz akram')<br/>('8652075', 'Sajid Ali Khan', 'sajid ali khan')</td><td>Correspondence should be addressed to Sajid Ali Khan; sajidalibn@gmail.com
</td></tr><tr><td>0cbc4dcf2aa76191bbf641358d6cecf38f644325</td><td>Visage: A Face Interpretation Engine for
<br/>Smartphone Applications
<br/><b>Dartmouth College, 6211 Sudiko Lab, Hanover, NH 03755, USA</b><br/><b>Intel Lab, 2200 Mission College Blvd, Santa Clara, CA 95054, USA</b><br/>3 Microsoft Research Asia, No. 5 Dan Ling St., Haidian District, Beijing, China
</td><td>('1840450', 'Xiaochao Yang', 'xiaochao yang')<br/>('1702472', 'Chuang-Wen You', 'chuang-wen you')<br/>('1884089', 'Hong Lu', 'hong lu')<br/>('1816301', 'Mu Lin', 'mu lin')<br/>('2772904', 'Nicholas D. Lane', 'nicholas d. lane')<br/>('1690035', 'Andrew T. Campbell', 'andrew t. campbell')</td><td>{Xiaochao.Yang,chuang-wen.you}@dartmouth.edu,hong.lu@intel.com,
<br/>mu.lin@dartmouth.edu,niclane@microsoft.com,campbell@cs.dartmouth.edu
</td></tr><tr><td>0ce8a45a77e797e9d52604c29f4c1e227f604080</td><td>International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol.3,No. 6,December 2013
<br/>ZERNIKE MOMENT-BASED FEATURE EXTRACTION
<br/>FOR FACIAL RECOGNITION OF IDENTICAL TWINS
<br/>1Department of Electrical,Computer and Biomedical Engineering, Qazvin branch,
<br/><b>Amirkabir University of Technology, Tehran</b><br/><b>IslamicAzad University, Qazvin, Iran</b><br/>Iran
</td><td>('13302047', 'Hoda Marouf', 'hoda marouf')<br/>('1692435', 'Karim Faez', 'karim faez')</td><td></td></tr><tr><td>0ce3a786aed896d128f5efdf78733cc675970854</td><td>Learning the Face Prior
<br/>for Bayesian Face Recognition
<br/>Department of Information Engineering,
<br/><b>The Chinese University of Hong Kong, China</b></td><td>('2312486', 'Chaochao Lu', 'chaochao lu')<br/>('1741901', 'Xiaoou Tang', 'xiaoou tang')</td><td></td></tr><tr><td>0c54e9ac43d2d3bab1543c43ee137fc47b77276e</td><td></td><td></td><td></td></tr><tr><td>0c5afb209b647456e99ce42a6d9d177764f9a0dd</td><td>97
<br/>Recognizing Action Units for
<br/>Facial Expression Analysis
</td><td>('40383812', 'Ying-li Tian', 'ying-li tian')<br/>('1733113', 'Takeo Kanade', 'takeo kanade')<br/>('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')</td><td></td></tr><tr><td>0c59071ddd33849bd431165bc2d21bbe165a81e0</td><td>Person Recognition in Personal Photo Collections
<br/><b>Max Planck Institute for Informatics</b><br/>Saarbrücken, Germany
</td><td>('2390510', 'Seong Joon Oh', 'seong joon oh')<br/>('1798000', 'Rodrigo Benenson', 'rodrigo benenson')<br/>('1739548', 'Mario Fritz', 'mario fritz')<br/>('1697100', 'Bernt Schiele', 'bernt schiele')</td><td>{joon,benenson,mfritz,schiele}@mpi-inf.mpg.de
</td></tr><tr><td>0c377fcbc3bbd35386b6ed4768beda7b5111eec6</td><td>258
<br/>A Unified Probabilistic Framework
<br/>for Spontaneous Facial Action Modeling
<br/>and Understanding
</td><td>('1686235', 'Yan Tong', 'yan tong')<br/>('1713712', 'Jixu Chen', 'jixu chen')<br/>('1726583', 'Qiang Ji', 'qiang ji')</td><td></td></tr><tr><td>0c12cbb9b9740dfa2816b8e5cde69c2f5a715c58</td><td>Memory-Augmented Attribute Manipulation Networks for
<br/>Interactive Fashion Search
<br/><b>Southwest Jiaotong University</b><br/><b>National University of Singapore</b><br/><b>AI Institute</b></td><td>('33901950', 'Bo Zhao', 'bo zhao')<br/>('33221685', 'Jiashi Feng', 'jiashi feng')<br/>('1814091', 'Xiao Wu', 'xiao wu')<br/>('1698982', 'Shuicheng Yan', 'shuicheng yan')</td><td>zhaobo@my.swjtu.edu.cn, elezhf@nus.edu.sg, wuxiaohk@swjtu.edu.cn, yanshuicheng@360.cn
</td></tr><tr><td>0cb2dd5f178e3a297a0c33068961018659d0f443</td><td></td><td>('2964917', 'Cameron Whitelam', 'cameron whitelam')<br/>('1885566', 'Emma Taborsky', 'emma taborsky')<br/>('1917247', 'Austin Blanton', 'austin blanton')<br/>('8033275', 'Brianna Maze', 'brianna maze')<br/>('15282121', 'Tim Miller', 'tim miller')<br/>('6680444', 'Anil K. Jain', 'anil k. jain')<br/>('40205896', 'James A. Duncan', 'james a. duncan')<br/>('2040584', 'Kristen Allen', 'kristen allen')<br/>('39403529', 'Jordan Cheney', 'jordan cheney')<br/>('2136478', 'Patrick Grother', 'patrick grother')</td><td></td></tr><tr><td>0cd8895b4a8f16618686f622522726991ca2a324</td><td>Discrete Choice Models for Static Facial Expression
<br/>Recognition
<br/><b>Ecole Polytechnique Federale de Lausanne, Signal Processing Institute</b><br/>2 Ecole Polytechnique Federale de Lausanne, Operation Research Group
<br/>Ecublens, 1015 Lausanne, Switzerland
<br/>Ecublens, 1015 Lausanne, Switzerland
</td><td>('1794461', 'Gianluca Antonini', 'gianluca antonini')<br/>('2916630', 'Matteo Sorci', 'matteo sorci')<br/>('1690395', 'Michel Bierlaire', 'michel bierlaire')<br/>('1710257', 'Jean-Philippe Thiran', 'jean-philippe thiran')</td><td>{Matteo.Sorci,Gianluca.Antonini,JP.Thiran}@epfl.ch
<br/>Michel.Bierlaire@epfl.ch
</td></tr><tr><td>0cf7da0df64557a4774100f6fde898bc4a3c4840</td><td>Shape Matching and Object Recognition using Low Distortion Correspondences
<br/>Department of Electrical Engineering and Computer Science
<br/>U.C. Berkeley
</td><td>('39668247', 'Alexander C. Berg', 'alexander c. berg')<br/>('1689212', 'Jitendra Malik', 'jitendra malik')</td><td>faberg,millert,malikg@eecs.berkeley.edu
</td></tr><tr><td>0cbe059c181278a373292a6af1667c54911e7925</td><td>Owl and Lizard: Patterns of Head Pose and Eye
<br/>Pose in Driver Gaze Classification
<br/><b>Massachusetts Institute of Technology (MIT</b><br/><b>Chalmers University of Technology, SAFER</b></td><td>('7137846', 'Joonbum Lee', 'joonbum lee')<br/>('1901227', 'Bryan Reimer', 'bryan reimer')<br/>('35816778', 'Trent Victor', 'trent victor')</td><td></td></tr><tr><td>0c4659b35ec2518914da924e692deb37e96d6206</td><td>1236
<br/>Registering a MultiSensor Ensemble of Images
</td><td>('1822837', 'Jeff Orchard', 'jeff orchard')<br/>('6056877', 'Richard Mann', 'richard mann')</td><td></td></tr><tr><td>0c6e29d82a5a080dc1db9eeabbd7d1529e78a3dc</td><td>Learning Bayesian Network Classifiers for Facial Expression Recognition using
<br/>both Labeled and Unlabeled Data
<br/><b>Beckman Institute, University of Illinois at Urbana-Champaign, IL, USA</b><br/>iracohen, huang
<br/> Escola Politécnica, Universidade de São Paulo, São Paulo, Brazil
<br/>fgcozman, marcelo.cirelo
</td><td>('1774778', 'Ira Cohen', 'ira cohen')<br/>('1703601', 'Nicu Sebe', 'nicu sebe')<br/>('1739208', 'Thomas S. Huang', 'thomas s. huang')</td><td>@ifp.uiuc.edu
<br/> Leiden Institute of Advanced Computer Science, Leiden University, The Netherlands, nicu@liacs.nl
<br/>@usp.br
</td></tr><tr><td>0ced7b814ec3bb9aebe0fcf0cac3d78f36361eae</td><td>Available Online at www.ijcsmc.com
<br/>International Journal of Computer Science and Mobile Computing
<br/> A Monthly Journal of Computer Science and Information Technology
<br/>ISSN 2320–088X
<br/>IMPACT FACTOR: 6.017
<br/>
<br/> IJCSMC, Vol. 6, Issue. 1, January 2017, pg.221 – 227
<br/>Central Local Directional Pattern Value
<br/>Flooding Co-occurrence Matrix based
<br/>Features for Face Recognition
<br/><b>Gokaraju Rangaraju Institute of Engineering and Technology, Hyderabad</b></td><td>('40221166', 'Chandra Sekhar Reddy', 'chandra sekhar reddy')<br/>('40221166', 'Chandra Sekhar Reddy', 'chandra sekhar reddy')</td><td></td></tr><tr><td>0c53ef79bb8e5ba4e6a8ebad6d453ecf3672926d</td><td>SUBMITTED TO JOURNAL
<br/>Weakly Supervised PatchNets: Describing and
<br/>Aggregating Local Patches for Scene Recognition
</td><td>('40184588', 'Zhe Wang', 'zhe wang')<br/>('39709927', 'Limin Wang', 'limin wang')<br/>('40457196', 'Yali Wang', 'yali wang')<br/>('3047890', 'Bowen Zhang', 'bowen zhang')<br/>('40285012', 'Yu Qiao', 'yu qiao')</td><td></td></tr><tr><td>0c60eebe10b56dbffe66bb3812793dd514865935</td><td></td><td></td><td></td></tr><tr><td>0c05f60998628884a9ac60116453f1a91bcd9dda</td><td>Optimizing Open-Ended Crowdsourcing: The Next Frontier in
<br/>Crowdsourced Data Management
<br/><b>University of Illinois</b><br/><b>cid:63)Stanford University</b></td><td>('32953042', 'Akash Das Sarma', 'akash das sarma')<br/>('8336538', 'Vipul Venkataraman', 'vipul venkataraman')</td><td></td></tr><tr><td>6601a0906e503a6221d2e0f2ca8c3f544a4adab7</td><td>SRTM-2 2/9/06 3:27 PM Page 321
<br/>Detection of Ancient Settlement Mounds:
<br/>Archaeological Survey Based on the
<br/>SRTM Terrain Model
<br/>B.H. Menze, J.A. Ur, and A.G. Sherratt
</td><td></td><td></td></tr><tr><td>660b73b0f39d4e644bf13a1745d6ee74424d4a16</td><td></td><td></td><td></td></tr><tr><td>66d512342355fb77a4450decc89977efe7e55fa2</td><td>Under review as a conference paper at ICLR 2018
<br/>LEARNING NON-LINEAR TRANSFORM WITH DISCRIM-
<br/>INATIVE AND MINIMUM INFORMATION LOSS PRIORS
<br/>Anonymous authors
<br/>Paper under double-blind review
</td><td></td><td></td></tr><tr><td>66aad5b42b7dda077a492e5b2c7837a2a808c2fa</td><td>A Novel PCA-Based Bayes Classifier
<br/>and Face Analysis
<br/>1 Centre de Visió per Computador,
<br/>Universitat Autònoma de Barcelona, Barcelona, Spain
<br/>2 Department of Computer Science,
<br/><b>Nanjing University of Science and Technology</b><br/>Nanjing, People’s Republic of China
<br/>3 HEUDIASYC - CNRS Mixed Research Unit,
<br/><b>Compiègne University of Technology</b><br/>60205 Compiègne cedex, France
</td><td>('1761329', 'Zhong Jin', 'zhong jin')<br/>('1742818', 'Franck Davoine', 'franck davoine')<br/>('35428318', 'Zhen Lou', 'zhen lou')</td><td>zhong.jin@cvc.uab.es
<br/>jyyang@mail.njust.edu.cn
<br/>franck.davoine@hds.utc.fr
</td></tr><tr><td>66b9d954dd8204c3a970d86d91dd4ea0eb12db47</td><td>Evaluation of Gabor-Wavelet-Based Facial Action Unit Recognition
<br/>in Image Sequences of Increasing Complexity
<br/><b>IBM T. J. Watson Research Center, PO Box 704, Yorktown Heights, NY</b><br/><b>Robotics Institute, Carnegie Mellon University, Pittsburgh, PA</b><br/><b>University of Pittsburgh, Pittsburgh, PA</b></td><td>('40383812', 'Ying-li Tian', 'ying-li tian')<br/>('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')</td><td>Email: yltian@us.ibm.com,
<br/>tk@cs.cmu.edu
<br/>jeffcohn@pitt.edu
</td></tr><tr><td>6643a7feebd0479916d94fb9186e403a4e5f7cbf</td><td>Chapter 8
<br/>3D Face Recognition
</td><td>('1737428', 'Nick Pears', 'nick pears')</td><td></td></tr><tr><td>661ca4bbb49bb496f56311e9d4263dfac8eb96e9</td><td>Datasheets for Datasets
</td><td>('2076288', 'Timnit Gebru', 'timnit gebru')<br/>('1722360', 'Hal Daumé', 'hal daumé')</td><td></td></tr><tr><td>66dcd855a6772d2731b45cfdd75f084327b055c2</td><td>Quality Classified Image Analysis with Application
<br/>to Face Detection and Recognition
<br/>International Doctoral Innovation Centre
<br/><b>University of Nottingham Ningbo China</b><br/>School of Computer Science
<br/><b>University of Nottingham Ningbo China</b><br/><b>College of Information Engineering</b><br/><b>Shenzhen University, Shenzhen, China</b></td><td>('1684164', 'Fei Yang', 'fei yang')<br/>('1737486', 'Qian Zhang', 'qian zhang')<br/>('2155597', 'Miaohui Wang', 'miaohui wang')<br/>('1698461', 'Guoping Qiu', 'guoping qiu')</td><td></td></tr><tr><td>666939690c564641b864eed0d60a410b31e49f80</td><td>What Visual Attributes Characterize an Object Class ?
<br/><b>National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of</b><br/>Sciences, No.95, Zhongguancun East Road, Beijing, 100190, China
<br/>2Microsoft Research, No.5, Dan Ling Street, Haidian District, Beijing 10080, China
</td><td>('3247966', 'Jianlong Fu', 'jianlong fu')<br/>('1783122', 'Jinqiao Wang', 'jinqiao wang')<br/>('3349534', 'Xin-Jing Wang', 'xin-jing wang')<br/>('3663422', 'Yong Rui', 'yong rui')<br/>('1694235', 'Hanqing Lu', 'hanqing lu')</td><td>1fjlfu, jqwang, luhqg@nlpr.ia.ac.cn, 2fxjwang, yongruig@microsoft.com
</td></tr><tr><td>66330846a03dcc10f36b6db9adf3b4d32e7a3127</td><td>Polylingual Multimodal Learning
<br/><b>Institute AIFB, Karlsruhe Institute of Technology, Germany</b></td><td>('3219864', 'Aditya Mogadala', 'aditya mogadala')</td><td>{aditya.mogadala}@kit.edu
</td></tr><tr><td>66d087f3dd2e19ffe340c26ef17efe0062a59290</td><td>Dog Breed Identification
<br/>Brian Mittl
<br/>Vijay Singh
</td><td></td><td>wlarow@stanford.edu
<br/>bmittl@stanford.edu
<br/>vpsingh@stanford.edu
</td></tr><tr><td>6618cff7f2ed440a0d2fa9e74ad5469df5cdbe4c</td><td>Ordinal Regression with Multiple Output CNN for Age Estimation
<br/><b>Xidian University 2Xi an Jiaotong University 3Microsoft Research Asia</b></td><td>('1786361', 'Zhenxing Niu', 'zhenxing niu')<br/>('1745420', 'Gang Hua', 'gang hua')<br/>('10699750', 'Xinbo Gao', 'xinbo gao')<br/>('36497527', 'Mo Zhou', 'mo zhou')<br/>('40367806', 'Le Wang', 'le wang')</td><td>{zhenxingniu,cdluminate}@gmail.com, lewang@mail.xjtu.edu.cn, xinbogao@mail.xidian.edu.cn
<br/>ganghua@gmail.com
</td></tr><tr><td>666300af8ffb8c903223f32f1fcc5c4674e2430b</td><td>Changing Fashion Cultures
<br/><b>National Institute of Advanced Industrial Science and Technology (AIST</b><br/>Tsukuba, Ibaraki, Japan
<br/><b>Tokyo Denki University</b><br/>Adachi, Tokyo, Japan
</td><td>('3408038', 'Kaori Abe', 'kaori abe')<br/>('5014206', 'Teppei Suzuki', 'teppei suzuki')<br/>('9935341', 'Shunya Ueta', 'shunya ueta')<br/>('1732705', 'Yutaka Satoh', 'yutaka satoh')<br/>('1730200', 'Hirokatsu Kataoka', 'hirokatsu kataoka')<br/>('2462801', 'Akio Nakamura', 'akio nakamura')</td><td>{abe.keroko, suzuki-teppei, shunya.ueta, yu.satou, hirokatsu.kataoka}@aist.go.jp
<br/>nkmr-a@cck.dendai.ac.jp
</td></tr><tr><td>66029f1be1a5cee9a4e3e24ed8fcb65d5d293720</td><td>HWANG AND GRAUMAN: ACCOUNTING FOR IMPORTANCE IN IMAGE RETRIEVAL
<br/>Accounting for the Relative Importance of
<br/>Objects in Image Retrieval
<br/><b>The University of Texas</b><br/>Austin, TX, USA
</td><td>('35788904', 'Sung Ju Hwang', 'sung ju hwang')<br/>('1794409', 'Kristen Grauman', 'kristen grauman')</td><td>sjhwang@cs.utexas.edu
<br/>grauman@cs.utexas.edu
</td></tr><tr><td>6691dfa1a83a04fdc0177d8d70e3df79f606b10f</td><td>Illumination Modeling and Normalization for Face Recognition
<br/><b>Institute of Automation</b><br/>Chinese Academy of Sciences
<br/>Beijing, 100080, China
</td><td>('29948255', 'Haitao Wang', 'haitao wang')<br/>('34679741', 'Stan Z. Li', 'stan z. li')<br/>('1744302', 'Yangsheng Wang', 'yangsheng wang')<br/>('38248052', 'Weiwei Zhang', 'weiwei zhang')</td><td>{htwang, wys, wwzhang}@nlpr.ia.ac.cn
</td></tr><tr><td>66a2c229ac82e38f1b7c77a786d8cf0d7e369598</td><td>Proceedings of the 2016 Industrial and Systems Engineering Research Conference
<br/>H. Yang, Z. Kong, and MD Sarder, eds.
<br/>A Probabilistic Adaptive Search System
<br/>for Exploring the Face Space
<br/>Escuela Superior Politecnica del Litoral (ESPOL)
<br/>Guayaquil-Ecuador
</td><td>('3123974', 'Andres G. Abad', 'andres g. abad')<br/>('3044670', 'Luis I. Reyes Castro', 'luis i. reyes castro')</td><td></td></tr><tr><td>66886997988358847615375ba7d6e9eb0f1bb27f</td><td></td><td></td><td></td></tr><tr><td>66837add89caffd9c91430820f49adb5d3f40930</td><td></td><td></td><td></td></tr><tr><td>66a9935e958a779a3a2267c85ecb69fbbb75b8dc</td><td>FAST AND ROBUST FIXED-RANK MATRIX RECOVERY
<br/>Fast and Robust Fixed-Rank Matrix
<br/>Recovery
<br/>Antonio Lopez
</td><td>('34210410', 'Julio Guerrero', 'julio guerrero')</td><td></td></tr><tr><td>66533107f9abdc7d1cb8f8795025fc7e78eb1122</td><td>Visual Servoing for a User's Mouth with Effective Intention Reading
<br/>in a Wheelchair-based Robotic Arm
<br/>Won-Kyung Song, Dae-Jin Kim, Jong-Sung Kim and Zeungnam Bien
<br/>EECS, KAIST, 373-1 Kusong-Dong, Yusong-Gu, Taejon 305-701, KOREA
<br/>VR Center, ETRI, 161 Kajong-Dong, Yusong-Gu, Taejon 305-350, KOREA
<br/>Abstract
<br/>There exists cooperative activity between a human being and a rehabilitation robot,
<br/>because the human operates the rehabilitation robot in the same environment
<br/>and has the benefit of the rehabilitation robot, such as manipulatory or mobility
<br/>functions. Intention reading is one of the essential functions of human-friendly
<br/>rehabilitation robots in order to promise the comfort and safety of the users who
<br/>need them. First of all, the overall concept of a new wheelchair-based robotic arm
<br/>system, KARES, and its human-robot interaction technologies are presented. Among
<br/>the technologies, we concentrate on visual servoing that allows this robotic arm to
<br/>operate autonomously via visual feedback. Effective intention reading, such as
<br/>recognizing the positive and negative meaning of the user, is performed on the basis
<br/>of changes of the facial expression around the lips, which is strongly related to the
<br/>user's intention, while this robotic arm provides the user with a beverage. For the
<br/>efficient visual information processing, log-polar mapped images are used to control
<br/>the stereo camera head that is located in the end-effector of the robotic arm. The
<br/>visual servoing with effective intention reading is successfully applied to serve a
<br/>beverage for the user.
<br/>Introduction
<br/>Wheelchair-based robotic systems are mainly used to assist the elderly and the
<br/>disabled who have handicaps in sensory and limb functions. Such a system consists
<br/>of a powered wheelchair and a robotic arm, and has not only a mobile capability
<br/>through the wheelchair but also a manipulatory function via the robotic arm, and
<br/>thus makes possible the coexistence of a user and a robot in the same environment.
<br/>In this case, the user needs to interact with the robotic arm in a comfortable and
<br/>safe way.
<br/>Figure 1: The wheelchair-based robotic arm and its human-robot interaction technologies.
<br/>However, it has been reported that many difficulties exist in human-robot interaction
<br/>in existing rehabilitation robots. For example, manual control of the robotic arm
<br/>takes a high cognitive load on the user, while physically disabled users may have
<br/>difficulties in operating joysticks dexterously or pushing buttons for delicate
<br/>movements [4]. In addition, MANUS evaluation results reported that the most
<br/>difficult things in using rehabilitation robots are many commands for manual
<br/>adjustment and many functions to keep in mind at the beginning [4]. Therefore,
<br/>human-friendly human-robot interaction is one of the essential techniques in a
<br/>wheelchair-based robotic arm.
<br/>In this paper, we consider the wheelchair-based robotic system KARES (KAIST
<br/>Rehabilitation Engineering Service system), which we are developing as a service
<br/>robotic system for the disabled and the elderly, and discuss its human-robot
<br/>interaction techniques (Fig. 1). Among human-robot interaction
<br/>techniques, visual servoing is dealt with as a major topic.
</td><td></td><td>zbien@ee.kaist.ac.kr
</td></tr><tr><td>66810438bfb52367e3f6f62c24f5bc127cf92e56</td><td>Face Recognition of Illumination Tolerance in 2D
<br/>Subspace Based on the Optimum Correlation
<br/>Filter
<br/>Xu Yi
<br/>Department of Information Engineering, Hunan Industry Polytechnic, Changsha, China
<br/>images will be tested to project
</td><td></td><td></td></tr><tr><td>66af2afd4c598c2841dbfd1053bf0c386579234e</td><td>Noname manuscript No.
<br/>(will be inserted by the editor)
<br/>Context Assisted Face Clustering Framework with
<br/>Human-in-the-Loop
<br/>Received: date / Accepted: date
</td><td>('3338094', 'Liyan Zhang', 'liyan zhang')<br/>('1686199', 'Sharad Mehrotra', 'sharad mehrotra')</td><td></td></tr><tr><td>66f02fbcad13c6ee5b421be2fc72485aaaf6fcb5</td><td>The AAAI-17 Workshop on
<br/>Human-Aware Artificial Intelligence
<br/>WS-17-10
<br/>Using Co-Captured Face, Gaze and Verbal Reactions to Images of
<br/>Varying Emotional Content for Analysis and Semantic Alignment
<br/><b>Muhlenberg College</b><br/><b>Rochester Institute of Technology</b><br/><b>Rochester Institute of Technology</b></td><td>('40114708', 'Trevor Walden', 'trevor walden')<br/>('2459642', 'Preethi Vaidyanathan', 'preethi vaidyanathan')<br/>('37459359', 'Reynold Bailey', 'reynold bailey')<br/>('1695716', 'Cecilia O. Alm', 'cecilia o. alm')</td><td>ag249083@muhlenberg.edu
<br/>tjw5866@rit.edu
<br/>{pxv1621, emilypx, rjbvcs, coagla}@rit.edu
</td></tr><tr><td>66e9fb4c2860eb4a15f713096020962553696e12</td><td>A New Urban Objects Detection Framework
<br/>Using Weakly Annotated Sets
<br/><b>University of São Paulo - USP, São Paulo - Brazil</b><br/><b>New York University</b>
<br/>csilva@nyu.edu
</td></tr><tr><td>66e6f08873325d37e0ec20a4769ce881e04e964e</td><td>Int J Comput Vis (2014) 108:59–81
<br/>DOI 10.1007/s11263-013-0695-z
<br/>The SUN Attribute Database: Beyond Categories for Deeper Scene
<br/>Understanding
<br/>Received: 27 February 2013 / Accepted: 28 December 2013 / Published online: 18 January 2014
<br/>© Springer Science+Business Media New York 2014
</td><td>('40541456', 'Genevieve Patterson', 'genevieve patterson')<br/>('12532254', 'James Hays', 'james hays')</td><td></td></tr><tr><td>661da40b838806a7effcb42d63a9624fcd684976</td><td>53
<br/>An Illumination Invariant Accurate
<br/>Face Recognition with Down Scaling
<br/>of DCT Coefficients
<br/>Department of Computer Science and Engineering, Amity School of Engineering and Technology, New Delhi, India
<br/>In this paper, a novel approach for illumination normal-
<br/>ization under varying lighting conditions is presented.
<br/>Our approach utilizes the fact that discrete cosine trans-
<br/>form (DCT) low-frequency coefficients correspond to
<br/>illumination variations in a digital image. Under varying
<br/>illuminations, the images captured may have low con-
<br/>trast; initially we apply histogram equalization on these
<br/>for contrast stretching. Then the low-frequency DCT
<br/>coefficients are scaled down to compensate the illumi-
<br/>nation variations. The value of scaling down factor and
<br/>the number of low-frequency DCT coefficients, which
<br/>are to be rescaled, are obtained experimentally. The
<br/>classification is done using k−nearest neighbor classi-
<br/>fication and nearest mean classification on the images
<br/>obtained by inverse DCT on the processed coefficients.
<br/>The correlation coefficient and Euclidean distance ob-
<br/>tained using principal component analysis are used as
<br/>distance metrics in classification. We have tested our
<br/>face recognition method using Yale Face Database B.
<br/>The results show that our method performs without any
<br/>error (100% face recognition performance), even on the
<br/>most extreme illumination variations. There are different
<br/>schemes in the literature for illumination normalization
<br/>under varying lighting conditions, but no one is claimed
<br/>to give 100% recognition rate under all illumination
<br/>variations for this database. The proposed technique is
<br/>computationally efficient and can easily be implemented
<br/>for real time face recognition system.
<br/>Keywords: discrete cosine transform, correlation co-
<br/>efficient, face recognition, illumination normalization,
<br/>nearest neighbor classification
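<br/>A minimal sketch of the normalization just described, assuming a grayscale uint8 image; the scaling factor and the size of the rescaled low-frequency block are placeholders here, since the paper determines both experimentally:
<pre><code>import numpy as np
from scipy.fftpack import dct, idct

def normalize_illumination(img, k=5, scale=0.1):
    """Histogram-equalize the image, scale down the k x k block of
    low-frequency 2-D DCT coefficients to suppress illumination
    variation, and reconstruct by the inverse DCT. k and scale are
    illustrative values only."""
    # Contrast stretching via histogram equalization (256 grey levels).
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    eq = cdf[img.astype(np.uint8)]

    # 2-D DCT (type-II, orthonormal); low frequencies sit in the top-left corner.
    coeffs = dct(dct(eq, axis=0, norm='ortho'), axis=1, norm='ortho')
    coeffs[:k, :k] *= scale   # scale down the low-frequency coefficients

    # Inverse 2-D DCT gives the illumination-normalized image.
    return idct(idct(coeffs, axis=1, norm='ortho'), axis=0, norm='ortho')
</code></pre>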
<br/>1. Introduction
<br/>Two-dimensional pattern classification plays a
<br/>crucial role in real-world applications. To build
<br/>high-performance surveillance or information
<br/>security systems, face recognition has been
<br/>known as the key application attracting enor-
<br/>mous researchers highlighting on related topics
<br/>[1,2]. Even though current machine recognition
<br/>systems have reached a certain level of matu-
<br/>rity, their success is limited by the real appli-
<br/>cations constraints, like pose, illumination and
<br/>expression. The FERET evaluation shows that
<br/>the performance of a face recognition system
<br/>decline seriously with the change of pose and
<br/>illumination conditions [31].
<br/>To solve the variable illumination problem a
<br/>variety of approaches have been proposed [3, 7-
<br/>11, 26-29]. Early work in illumination invariant
<br/>face recognition focused on image representa-
<br/>tions that are mostly insensitive to changes in
<br/>illumination. There were approaches in which
<br/>the image representations and distance mea-
<br/>sures were evaluated on a tightly controlled face
<br/>database that varied the face pose, illumination,
<br/>and expression. The image representations in-
<br/>clude edge maps, 2D Gabor-like filters, first and
<br/>second derivatives of the gray-level image, and
<br/>the logarithmic transformations of the intensity
<br/>image along with these representations [4].
<br/>The different approaches to solve the prob-
<br/>lem of illumination invariant face recognition
<br/>can be broadly classified into two main cate-
<br/>gories. The first category is named as passive
<br/>approach in which the visual spectrum images
<br/>are analyzed to overcome this problem. The
<br/>approaches belonging to other category named
<br/>active, attempt to overcome this problem by
<br/>employing active imaging techniques to obtain
<br/>face images captured in consistent illumina-
<br/>tion condition, or images of illumination invari-
<br/>ant modalities. There is a hierarchical catego-
<br/>rization of these two approaches. An exten-
<br/>sive review of both approaches is given in [5].
</td><td>('2650871', 'Virendra P. Vishwakarma', 'virendra p. vishwakarma')<br/>('2100294', 'Sujata Pandey', 'sujata pandey')<br/>('11690561', 'M. N. Gupta', 'm. n. gupta')</td><td></td></tr><tr><td>66886f5af67b22d14177119520bd9c9f39cdd2e6</td><td>T. KOBAYASHI: LEARNING ADDITIVE KERNEL
<br/>Learning Additive Kernel For Feature
<br/>Transformation and Its Application to CNN
<br/>Features
<br/><b>National Institute of Advanced Industrial</b><br/>Science and Technology
<br/>Tsukuba, Japan
</td><td>('1800592', 'Takumi Kobayashi', 'takumi kobayashi')</td><td>takumi.kobayashi@aist.go.jp
</td></tr><tr><td>3edb0fa2d6b0f1984e8e2c523c558cb026b2a983</td><td>Automatic Age Estimation Based on
<br/>Facial Aging Patterns
</td><td>('1735299', 'Xin Geng', 'xin geng')<br/>('1692625', 'Zhi-Hua Zhou', 'zhi-hua zhou')<br/>('2848275', 'Kate Smith-Miles', 'kate smith-miles')</td><td></td></tr><tr><td>3e69ed088f588f6ecb30969bc6e4dbfacb35133e</td><td>ACEEE Int. J. on Information Technology, Vol. 01, No. 02, Sep 2011
<br/>Improving Performance of Texture Based Face
<br/>Recognition Systems by Segmenting Face Region
<br/><b>St. Xavier s Catholic College of Engineering, Nagercoil, India</b><br/><b>Manonmaniam Sundaranar University, Tirunelveli, India</b></td><td>('9375880', 'R. Reena Rose', 'r. reena rose')<br/>('3311251', 'A. Suruliandi', 'a. suruliandi')</td><td>mailtoreenarose@yahoo.in
<br/>suruliandi@yahoo.com
</td></tr><tr><td>3e0a1884448bfd7f416c6a45dfcdfc9f2e617268</td><td>Understanding and Controlling User Linkability in
<br/>Decentralized Learning
<br/><b>Max Planck Institute for Informatics</b><br/>Saarland Informatics Campus
<br/>Saarbrücken, Germany
</td><td>('9517443', 'Tribhuvanesh Orekondy', 'tribhuvanesh orekondy')<br/>('2390510', 'Seong Joon Oh', 'seong joon oh')<br/>('1697100', 'Bernt Schiele', 'bernt schiele')</td><td>{orekondy,joon,schiele,mfritz}@mpi-inf.mpg.de
</td></tr><tr><td>3e4b38b0574e740dcbd8f8c5dfe05dbfb2a92c07</td><td>FACIAL EXPRESSION RECOGNITION WITH LOCAL BINARY PATTERNS
<br/>AND LINEAR PROGRAMMING
<br/>Xiaoyi Feng1, 2, Matti Pietikäinen1, Abdenour Hadid1
<br/>1 Machine Vision Group, Infotech Oulu and Dept. of Electrical and Information Engineering
<br/><b>P. O. Box 4500 Fin-90014 University of Oulu, Finland</b><br/><b>College of Electronics and Information, Northwestern Polytechnic University</b><br/>710072 Xi’an, China
<br/>In this work, we propose a novel approach to recognize facial expressions from static
<br/>images. First, the Local Binary Patterns (LBP) are used to efficiently represent the facial
<br/>images and then the Linear Programming (LP) technique is adopted to classify the seven
<br/>facial expressions anger, disgust, fear, happiness, sadness, surprise and neutral.
<br/>Experimental results demonstrate an average recognition accuracy of 93.8% on the JAFFE
<br/>database, which outperforms the rates of all other reported methods on the same database.
<br/>Introduction
<br/>Facial expression recognition from static
<br/>images is a more challenging problem
<br/>than from image sequences because less
<br/>information for expression actions
<br/>is
<br/>available. However, information in a
<br/>single image is sometimes enough for
<br/>expression recognition, and
<br/>in many
<br/>applications it is also useful to recognize
<br/>single image’s facial expression.
<br/>In the recent years, numerous approaches
<br/>to facial expression analysis from static
<br/>images have been proposed [1] [2]. These
<br/>methods
<br/>face
<br/>representation and similarity measure.
<br/>For instance, Zhang [3] used two types of
<br/>features: the geometric position of 34
<br/>manually selected fiducial points and a
<br/>set of Gabor wavelet coefficients at these
<br/>points. These two types of features were
<br/>used both independently and jointly with
<br/>a multi-layer perceptron for classification.
<br/>Guo and Dyer [4] also adopted a similar
<br/>face representation, combined with linear
<br/>to carry out
<br/>programming
<br/>selection
<br/>simultaneous
<br/>and
<br/>classifier
<br/>they reported
<br/>technique
<br/>feature
<br/>training, and
<br/>differ
<br/>generally
<br/>in
<br/>a
<br/>simple
<br/>imperative question
<br/>better result. Lyons et al. used a similar face
<br/>representation with
<br/>LDA-based
<br/>classification scheme [5]. All the above methods
<br/>required the manual selection of fiducial points.
<br/>Buciu et al. used ICA and Gabor representation for
<br/>facial expression recognition and reported good result
<br/>on the same database [6]. However, a suitable
<br/>combination of feature extraction and classification is
<br/>still one
<br/>for expression
<br/>recognition.
<br/>In this paper, we propose a novel method for facial
<br/>expression recognition. In the feature extraction step,
<br/>the Local Binary Pattern (LBP) operator is used to
<br/>describe facial expressions. In the classification step,
<br/>seven expressions (anger, disgust, fear, happiness,
<br/>sadness, surprise and neutral) are decomposed into 21
<br/>expression pairs such as anger-fear, happiness-
<br/>sadness etc. 21 classifiers are produced by the Linear
<br/>Programming (LP) technique, each corresponding to
<br/>one of the 21 expression pairs. A simple binary tree
<br/>tournament scheme with pairwise comparisons is
<br/>used for classifying unknown expressions.
<br/>Face Representation with Local Binary Patterns
<br/>
<br/>Fig.1 shows the basic LBP operator [7], in which the
<br/>original 3×3 neighbourhood at the left is thresholded
<br/>by the value of the centre pixel, and a binary pattern
</td><td></td><td>{xiaoyi,mkp,hadid}@ee.oulu.fi
<br/>fengxiao@nwpu.edu.cn
</td></tr><tr><td>3ee7a8107a805370b296a53e355d111118e96b7c</td><td></td><td></td><td></td></tr><tr><td>3ebce6710135d1f9b652815e59323858a7c60025</td><td>Component-based Face Detection
<br/>(cid:1)Center for Biological and Computational Learning, M.I.T., Cambridge, MA, USA
<br/><b>cid:2)Honda RandD Americas, Inc., Boston, MA, USA</b><br/><b>University of Siena, Siena, Italy</b></td><td>('1684626', 'Bernd Heisele', 'bernd heisele')</td><td>(cid:1)heisele, serre, tp(cid:2) @ai.mit.edu pontil@dii.unisi.it
</td></tr><tr><td>3e4acf3f2d112fc6516abcdddbe9e17d839f5d9b</td><td>Deep Value Networks Learn to
<br/>Evaluate and Iteratively Refine Structured Outputs
</td><td>('3037160', 'Michael Gygli', 'michael gygli')</td><td></td></tr><tr><td>3e3f305dac4fbb813e60ac778d6929012b4b745a</td><td>Feature sampling and partitioning for visual vocabulary
<br/>generation on large action classification datasets.
<br/><b>Oxford Brookes University</b><br/><b>University of Oxford</b></td><td>('3019396', 'Michael Sapienza', 'michael sapienza')<br/>('1754181', 'Fabio Cuzzolin', 'fabio cuzzolin')</td><td></td></tr><tr><td>3ea8a6dc79d79319f7ad90d663558c664cf298d4</td><td></td><td>('40253814', 'IRA COHEN', 'ira cohen')</td><td></td></tr><tr><td>3e4f84ce00027723bdfdb21156c9003168bc1c80</td><td>1979
<br/>© EURASIP, 2011 - ISSN 2076-1465
<br/>19th European Signal Processing Conference (EUSIPCO 2011)
<br/>INTRODUCTION
</td><td></td><td></td></tr><tr><td>3e04feb0b6392f94554f6d18e24fadba1a28b65f</td><td>14
<br/>Subspace Image Representation for Facial
<br/>Expression Analysis and Face Recognition
<br/>and its Relation to the Human Visual System
<br/><b>Aristotle University of Thessaloniki GR</b><br/>Thessaloniki, Box 451, Greece.
<br/>2 Electronics Department, Faculty of Electrical Engineering and Information
<br/><b>Technology, University of Oradea 410087, Universitatii 1, Romania</b><br/>Summary. Two main theories exist with respect to face encoding and representa-
<br/>tion in the human visual system (HVS). The first one refers to the dense (holistic)
<br/>representation of the face, where faces have “holon”-like appearance. The second one
<br/>claims that a more appropriate face representation is given by a sparse code, where
<br/>only a small fraction of the neural cells corresponding to face encoding is activated.
<br/>Theoretical and experimental evidence suggest that the HVS performs face analysis
<br/>(encoding, storing, face recognition, facial expression recognition) in a structured
<br/>and hierarchical way, where both representations have their own contribution and
<br/>goal. According to neuropsychological experiments, it seems that encoding for face
<br/>recognition, relies on holistic image representation, while a sparse image represen-
<br/>tation is used for facial expression analysis and classification. From the computer
<br/>vision perspective, the techniques developed for automatic face and facial expres-
<br/>sion recognition fall into the same two representation types. Like in Neuroscience,
<br/>the techniques which perform better for face recognition yield a holistic image rep-
<br/>resentation, while those techniques suitable for facial expression recognition use a
<br/>sparse or local image representation. The proposed mathematical models of image
<br/>formation and encoding try to simulate the efficient storing, organization and coding
<br/>of data in the human cortex. This is equivalent with embedding constraints in the
<br/>model design regarding dimensionality reduction, redundant information minimiza-
<br/>tion, mutual information minimization, non-negativity constraints, class informa-
<br/>tion, etc. The presented techniques are applied as a feature extraction step followed
<br/>by a classification method, which also heavily influences the recognition results.
<br/>Key words: Human Visual System; Dense, Sparse and Local Image Repre-
<br/>sentation and Encoding, Face and Facial Expression Analysis and Recogni-
<br/>tion.
<br/>R.P. W¨urtz (ed.), Organic Computing. Understanding Complex Systems,
<br/>doi: 10.1007/978-3-540-77657-4 14, © Springer-Verlag Berlin Heidelberg 2008
</td><td>('2336758', 'Ioan Buciu', 'ioan buciu')<br/>('1698588', 'Ioannis Pitas', 'ioannis pitas')</td><td>pitas@zeus.csd.auth.gr
<br/>ibuciu@uoradea.ro
</td></tr><tr><td>3e685704b140180d48142d1727080d2fb9e52163</td><td>Single Image Action Recognition by Predicting
<br/>Space-Time Saliency
</td><td>('32998919', 'Marjaneh Safaei', 'marjaneh safaei')<br/>('1691260', 'Hassan Foroosh', 'hassan foroosh')</td><td></td></tr><tr><td>3e51d634faacf58e7903750f17111d0d172a0bf1</td><td>A COMPRESSIBLE TEMPLATE PROTECTION SCHEME
<br/>FOR FACE RECOGNITION BASED ON SPARSE REPRESENTATION
<br/><b>Tokyo Metropolitan University</b><br/>6–6 Asahigaoka, Hino-shi, Tokyo 191–0065, Japan
<br/>† NTT Network Innovation Laboratories, Japan
</td><td>('32403098', 'Yuichi Muraki', 'yuichi muraki')<br/>('11129971', 'Masakazu Furukawa', 'masakazu furukawa')<br/>('1728060', 'Masaaki Fujiyoshi', 'masaaki fujiyoshi')<br/>('34638424', 'Yoshihide Tonomura', 'yoshihide tonomura')<br/>('1737217', 'Hitoshi Kiya', 'hitoshi kiya')</td><td></td></tr><tr><td>3e40991ab1daa2a4906eb85a5d6a01a958b6e674</td><td>LIPNET: END-TO-END SENTENCE-LEVEL LIPREADING
<br/><b>University of Oxford, Oxford, UK</b><br/>Google DeepMind, London, UK 2
<br/>CIFAR, Canada 3
<br/>{yannis.assael,brendan.shillingford,
</td><td>('3365565', 'Yannis M. Assael', 'yannis m. assael')<br/>('3144580', 'Brendan Shillingford', 'brendan shillingford')<br/>('1766767', 'Shimon Whiteson', 'shimon whiteson')</td><td>shimon.whiteson,nando.de.freitas}@cs.ox.ac.uk
</td></tr><tr><td>3e687d5ace90c407186602de1a7727167461194a</td><td>Photo Tagging by Collection-Aware People Recognition
<br/>UFF
<br/>UFF
<br/>Asla S´a
<br/>FGV
<br/>IMPA
</td><td>('2901520', 'Cristina Nader Vasconcelos', 'cristina nader vasconcelos')<br/>('19264449', 'Vinicius Jardim', 'vinicius jardim')<br/>('1746637', 'Paulo Cezar Carvalho', 'paulo cezar carvalho')</td><td>crisnv@ic.uff.br
<br/>vinicius@id.uff.br
<br/>asla.sa@fgv.br
<br/>pcezar@impa.br
</td></tr><tr><td>3e3a87eb24628ab075a3d2bde3abfd185591aa4c</td><td>Effects of sparseness and randomness of
<br/>pairwise distance matrix on t-SNE results
<br/><b>BECS, Aalto University, Helsinki, Finland</b></td><td>('32430508', 'Eli Parviainen', 'eli parviainen')</td><td></td></tr><tr><td>3e207c05f438a8cef7dd30b62d9e2c997ddc0d3f</td><td>Objects as context for detecting their semantic parts
<br/><b>University of Edinburgh</b></td><td>('20758701', 'Abel Gonzalez-Garcia', 'abel gonzalez-garcia')<br/>('1996209', 'Davide Modolo', 'davide modolo')<br/>('1749692', 'Vittorio Ferrari', 'vittorio ferrari')</td><td>a.gonzalez-garcia@sms.ed.ac.uk
<br/>davide.modolo@gmail.com
<br/>vferrari@staffmail.ed.ac.uk
</td></tr><tr><td>5040f7f261872a30eec88788f98326395a44db03</td><td>PAPAMAKARIOS, PANAGAKIS, ZAFEIRIOU: GENERALISED SCALABLE ROBUST PCA
<br/>Generalised Scalable Robust Principal
<br/>Component Analysis
<br/>Department of Computing
<br/><b>Imperial College London</b><br/>London, UK
</td><td>('2369138', 'Georgios Papamakarios', 'georgios papamakarios')<br/>('1780393', 'Yannis Panagakis', 'yannis panagakis')<br/>('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')</td><td>georgios.papamakarios13@imperial.ac.uk
<br/>i.panagakis@imperial.ac.uk
<br/>s.zafeiriou@imperial.ac.uk
</td></tr><tr><td>50f0c495a214b8d57892d43110728e54e413d47d</td><td>Submitted 8/11; Revised 3/12; Published 8/12
<br/>Pairwise Support Vector Machines and their Application to Large
<br/>Scale Problems
<br/><b>Institute for Numerical Mathematics</b><br/>Technische Universit¨at Dresden
<br/>01062 Dresden, Germany
<br/>Cognitec Systems GmbH
<br/>Grossenhainer Str. 101
<br/>01127 Dresden, Germany
<br/>Editor: Corinna Cortes
</td><td>('25796572', 'Carl Brunner', 'carl brunner')<br/>('1833903', 'Andreas Fischer', 'andreas fischer')<br/>('2201239', 'Klaus Luig', 'klaus luig')<br/>('2439730', 'Thorsten Thies', 'thorsten thies')</td><td>C.BRUNNER@GMX.NET
<br/>ANDREAS.FISCHER@TU-DRESDEN.DE
<br/>LUIG@COGNITEC.COM
<br/>THIES@COGNITEC.COM
</td></tr><tr><td>501096cca4d0b3d1ef407844642e39cd2ff86b37</td><td>Illumination Invariant Face Image
<br/>Representation using Quaternions
<br/>Dayron Rizo-Rodríguez, Heydi Méndez-Vázquez, and Edel García-Reyes
<br/>Advanced Technologies Application Center. 7a # 21812 b/ 218 and 222,
<br/>Rpto. Siboney, Playa, P.C. 12200, La Habana, Cuba.
</td><td></td><td>{drizo,hmendez,egarcia}@cenatav.co.cu
</td></tr><tr><td>500fbe18afd44312738cab91b4689c12b4e0eeee</td><td>ChaLearn Looking at People 2015 new competitions:
<br/>Age Estimation and Cultural Event Recognition
<br/><b>University of Barcelona</b><br/>Computer Vision Center, UAB
<br/>Jordi Gonzàlez
<br/>Xavier Baró
<br/>Univ. Autònoma de Barcelona
<br/>Computer Vision Center, UAB
<br/>Universitat Oberta de Catalunya
<br/>Computer Vision Center, UAB
<br/><b>University of Barcelona</b><br/>Univ. Aut`onoma de Barcelona
<br/>Computer Vision Center, UAB
<br/><b>University of Barcelona</b><br/>Computer Vision Center, UAB
<br/>INAOE
<br/>Ivan Huerta
<br/><b>University of Venezia</b><br/>Clopinet, Berkeley
</td><td>('7855312', 'Sergio Escalera', 'sergio escalera')<br/>('40378482', 'Pablo Pardo', 'pablo pardo')<br/>('37811966', 'Junior Fabian', 'junior fabian')<br/>('3305641', 'Marc Oliu', 'marc oliu')<br/>('1742688', 'Hugo Jair Escalante', 'hugo jair escalante')<br/>('1743797', 'Isabelle Guyon', 'isabelle guyon')</td><td>Email: sergio@maia.ub.es
<br/>Email: ppardoga7@gmail.com
<br/>Email: poal@cvc.uab.es
<br/>Email: xbaro@uoc.edu
<br/>Email: jfabian@cvc.uab.es
<br/>Email: moliusimon@gmail.com
<br/>Email: hugo.jair@gmail.com
<br/>Email: huertacasado@iuav.it
<br/>Email: guyon@chalearn.org
</td></tr><tr><td>501eda2d04b1db717b7834800d74dacb7df58f91</td><td></td><td>('3846862', 'Pedro Miguel Neves Marques', 'pedro miguel neves marques')</td><td></td></tr><tr><td>5083c6be0f8c85815ead5368882b584e4dfab4d1</td><td> Please do not quote. In press, Handbook of affective computing. New York, NY: Oxford
<br/>Automated Face Analysis for Affective Computing
</td><td>('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')</td><td></td></tr><tr><td>506c2fbfa9d16037d50d650547ad3366bb1e1cde</td><td>Convolutional Channel Features: Tailoring CNN to Diverse Tasks
<br/>Junjie Yan
<br/>Zhen Lei
<br/>Center for Biometrics and Security Research & National Laboratory of Pattern Recognition
<br/><b>Institute of Automation, Chinese Academy of Sciences, China</b></td><td>('1716231', 'Bin Yang', 'bin yang')<br/>('34679741', 'Stan Z. Li', 'stan z. li')</td><td>{zlei, szli}@nlpr.ia.ac.cn
<br/>{yb.derek, yanjjie}@gmail.com
</td></tr><tr><td>500b92578e4deff98ce20e6017124e6d2053b451</td><td></td><td></td><td></td></tr><tr><td>504028218290d68859f45ec686f435f473aa326c</td><td>Multi-Fiber Networks for Video Recognition
<br/><b>National University of Singapore</b><br/>2 Facebook Research
<br/><b>Qihoo 360 AI Institute</b></td><td>('1713312', 'Yunpeng Chen', 'yunpeng chen')<br/>('1944225', 'Yannis Kalantidis', 'yannis kalantidis')<br/>('2757639', 'Jianshu Li', 'jianshu li')<br/>('1698982', 'Shuicheng Yan', 'shuicheng yan')<br/>('33221685', 'Jiashi Feng', 'jiashi feng')</td><td>{chenyunpeng, jianshu}@u.nus.edu, yannisk@fb.com,
<br/>{eleyans, elefjia}@nus.edu.sg
</td></tr><tr><td>5058a7ec68c32984c33f357ebaee96c59e269425</td><td>A Comparative Evaluation of Regression Learning
<br/>Algorithms for Facial Age Estimation
<br/>1 Herta Security
<br/>Pau Claris 165 4-B, 08037 Barcelona, Spain
<br/><b>DPDCE, University IUAV</b><br/>Santa Croce 1957, 30135 Venice, Italy
</td><td>('1733945', 'Andrea Prati', 'andrea prati')</td><td>carles.fernandez@hertasecurity.com
<br/>huertacasado@iuav.it, aprati@iuav.it
</td></tr><tr><td>50ff21e595e0ebe51ae808a2da3b7940549f4035</td><td>IEEE TRANSACTIONS ON LATEX CLASS FILES, VOL. XX, NO. X, AUGUST 2017
<br/>Age Group and Gender Estimation in the Wild with
<br/>Deep RoR Architecture
</td><td>('32164792', 'Ke Zhang', 'ke zhang')<br/>('35038034', 'Ce Gao', 'ce gao')<br/>('3451321', 'Liru Guo', 'liru guo')<br/>('2598874', 'Miao Sun', 'miao sun')<br/>('3451660', 'Xingfang Yuan', 'xingfang yuan')<br/>('3244463', 'Tony X. Han', 'tony x. han')<br/>('2626320', 'Zhenbing Zhao', 'zhenbing zhao')<br/>('2047712', 'Baogang Li', 'baogang li')</td><td></td></tr><tr><td>5042b358705e8d8e8b0655d07f751be6a1565482</td><td>International Journal of
<br/>Emerging Research in Management & Technology
<br/>ISSN: 2278-9359 (Volume-4, Issue-8)
<br/>Research Article
<br/> August
<br/> 2015
<br/>Review on Emotion Detection in Image
<br/>CSE & PCET, PTU; HOD, CSE & PCET, PTU
<br/>Punjab, India
</td><td></td><td></td></tr><tr><td>50e47857b11bfd3d420f6eafb155199f4b41f6d7</td><td>International Journal of Computer, Consumer and Control (IJ3C), Vol. 2, No.1 (2013)
<br/>3D Human Face Reconstruction Using a Hybrid of Photometric
<br/>Stereo and Independent Component Analysis
</td><td>('1734467', 'Cheng-Jian Lin', 'cheng-jian lin')<br/>('3318507', 'Shyi-Shiun Kuo', 'shyi-shiun kuo')<br/>('18305737', 'Hsueh-Yi Lin', 'hsueh-yi lin')<br/>('2911354', 'Cheng-Yi Yu', 'cheng-yi yu')</td><td></td></tr><tr><td>50eb75dfece76ed9119ec543e04386dfc95dfd13</td><td>Learning Visual Entities and their Visual Attributes from Text Corpora
<br/>Dept. of Computer Science
<br/>K.U.Leuven, Belgium
<br/>Dept. of Computer Science
<br/>K.U.Leuven, Belgium
<br/>Dept. of Computer Science
<br/>K.U.Leuven, Belgium
</td><td>('2955093', 'Erik Boiy', 'erik boiy')<br/>('1797588', 'Koen Deschacht', 'koen deschacht')<br/>('1802161', 'Marie-Francine Moens', 'marie-francine moens')</td><td>erik.boiy@cs.kuleuven.be
<br/>koen.deschacht@cs.kuleuven.be
<br/>sien.moens@cs.kuleuven.be
</td></tr><tr><td>5050807e90a925120cbc3a9cd13431b98965f4b9</td><td>To appear in the ECCV Workshop on Parts and Attributes, Oct. 2012.
<br/>Unsupervised Learning of Discriminative
<br/>Relative Visual Attributes
<br/><b>Boston University</b><br/><b>Hacettepe University</b></td><td>('2863531', 'Shugao Ma', 'shugao ma')<br/>('2011587', 'Nazli Ikizler-Cinbis', 'nazli ikizler-cinbis')</td><td></td></tr><tr><td>50a0930cb8cc353e15a5cb4d2f41b365675b5ebf</td><td></td><td></td><td></td></tr><tr><td>508702ed2bf7d1b0655ea7857dd8e52d6537e765</td><td>ZUO, ORGANISCIAK, SHUM, YANG: SST-VLAD AND SST-FV FOR VAR
<br/>Saliency-Informed Spatio-Temporal Vector
<br/>of Locally Aggregated Descriptors and
<br/>Fisher Vectors for Visual Action Recognition
<br/>Department of Computer and
<br/>Information Sciences
<br/><b>Northumbria University</b><br/>Newcastle upon Tyne, NE1 8ST, UK
</td><td>('40760781', 'Zheming Zuo', 'zheming zuo')<br/>('34975328', 'Daniel Organisciak', 'daniel organisciak')<br/>('2840036', 'Hubert P. H. Shum', 'hubert p. h. shum')<br/>('1706028', 'Longzhi Yang', 'longzhi yang')</td><td>zheming.zuo@northumbria.ac.uk
<br/>daniel.organisciak@northumbria.ac.uk
<br/>hubert.shum@northumbria.ac.uk
<br/>longzhi.yang@northumbria.ac.uk
</td></tr><tr><td>50eb2ee977f0f53ab4b39edc4be6b760a2b05f96</td><td>Australian Journal of Basic and Applied Sciences, 11(5) April 2017, Pages: 1-11
<br/>AUSTRALIAN JOURNAL OF BASIC AND
<br/>APPLIED SCIENCES
<br/>ISSN:1991-8178 EISSN: 2309-8414
<br/>Journal home page: www.ajbasweb.com
<br/>Emotion Recognition Based on Texture Analysis of Facial Expressions
<br/>Using Wavelets Transform
<br/>1Suhaila N. Mohammed and 2Loay E. George
<br/><b>Assistant Lecturer, College of Science, Baghdad University, Baghdad, Iraq</b><br/><b>College of Science, Baghdad University, Baghdad, Iraq</b><br/>Address For Correspondence:
<br/><b>Suhaila N. Mohammed, Baghdad University, College of Science, Baghdad, Iraq</b><br/>A R T I C L E I N F O
<br/>Article history:
<br/>Received 18 January 2017
<br/>Accepted 28 March 2017
<br/>Available online 15 April 2017
<br/>Keywords:
<br/>Facial Emotion, Face Detection,
<br/>Template Based Methods, Texture
<br/>Based Features, Haar Wavelets
<br/>Transform, Image Blocking, Neural
<br/>Network.
<br/>A B S T R A C T
<br/>Background: Interest in developing accurate automatic facial emotion
<br/>recognition methodologies is growing rapidly, and the topic remains an active research field in
<br/>computer vision, artificial intelligence, and automation. Automatic emotion
<br/>detection systems are in demand in various fields such as medicine, education, driver
<br/>safety, and games. Despite the importance of this issue, it still remains an unsolved
<br/>problem. Objective: In this paper a facial-based emotion recognition system is
<br/>introduced. A template-based method is used for face region extraction by exploiting
<br/>human knowledge about face components and the corresponding symmetry property.
<br/>The system relies on texture features as the identifying feature vector. These
<br/>features are extracted from the face region using the Haar wavelets transform and a
<br/>blocking scheme that computes the energy of each block. A feed-forward neural network
<br/>classifier is used for the classification task. The network is trained using a training set of
<br/>samples, and the generated weights are then used to test the recognition ability of the
<br/>system. Results: The public JAFFE dataset is used for system evaluation; it holds
<br/>213 facial samples covering seven basic emotions. The conducted tests on the developed
<br/>system gave an accuracy of around 90.05% when the number of blocks is set to 4x4.
<br/>Conclusion: This result is the highest when compared with the results of
<br/>other recently published works, especially those based on texture features, since the
<br/>blocking scheme allows statistical features to be extracted according to the local energy of
<br/>each block, which lets more features contribute effectively.
<br/>INTRODUCTION
<br/>Due to the rapid development of technologies, it has become necessary to build smart systems for understanding
<br/>human emotion (Ruivo et al., 2016). There are different ways to recognize a person's emotions, such as the facial
<br/>image, voice, body shape, and others. Mehrabian explained that a person's impression is conveyed 7% through
<br/>words (the verbal part) and 38% through tone of voice (the vocal part), while the facial image carries the
<br/>largest share, reaching 55% (Rani and Garg, 2014). He also indicated that one of the most important ways
<br/>to display emotions is through facial expressions; the facial image contains much information (such as a
<br/>person's identity as well as mood and state of mind) that can be used to infer human
<br/>emotion (Saini and Rana, 2014).
<br/>Facial emotion recognition is an active area of research with several fields of application. Some of the
<br/>significant applications are: feedback systems for e-learning, alert systems for driving, social-robot emotion
<br/>recognition systems, medical practice, etc. (Dubey and Singh, 2016).
<br/>Human emotion comprises thousands of expressions, but in the last decade the focus has been on analyzing only
<br/>seven basic facial expressions such as happiness, sadness, surprise, disgust, fear, neutral, and anger (Singh and
<br/>Open Access Journal
<br/>Published BY AENSI Publication
<br/>© 2017 AENSI Publisher All rights reserved
<br/>This work is licensed under the Creative Commons Attribution International License (CC BY).
<br/>http://creativecommons.org/licenses/by/4.0/
<br/>To Cite This Article: Suhaila N. Mohammed and Loay E. George., Emotion Recognition Based on Texture Analysis of Facial Expressions
<br/>Using Wavelets Transform. Aust. J. Basic & Appl. Sci., 11(5): 1-11, 2017
</td><td></td><td></td></tr><tr><td>50e45e9c55c9e79aaae43aff7d9e2f079a2d787b</td><td>Hindawi Publishing Corporation
<br/>e Scientific World Journal
<br/>Volume 2015, Article ID 471371, 18 pages
<br/>http://dx.doi.org/10.1155/2015/471371
<br/>Research Article
<br/>Unbiased Feature Selection in Learning Random Forests for
<br/>High-Dimensional Data
<br/><b>Shenzhen Key Laboratory of High Performance Data Mining, Shenzhen Institutes of Advanced Technology</b><br/>Chinese Academy of Sciences, Shenzhen 518055, China
<br/><b>University of Chinese Academy of Sciences, Beijing 100049, China</b><br/><b>School of Computer Science and Engineering, Water Resources University, Hanoi 10000, Vietnam</b><br/><b>College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China</b><br/><b>Faculty of Information Technology, Vietnam National University of Agriculture, Hanoi 10000, Vietnam</b><br/>Received 20 June 2014; Accepted 20 August 2014
<br/>Academic Editor: Shifei Ding
<br/>License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
<br/>cited.
<br/>Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging
<br/>samples and feature selection, the trees in the forest tend to select uninformative features for node splitting. This makes RFs
<br/>have poor accuracy when working with high-dimensional data. Besides that, RFs have bias in the feature selection process where
<br/>multivalued features are favored. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select
<br/>good features in learning RFs for high-dimensional data. We first remove the uninformative features using 𝑝-value assessment,
<br/>and the subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into
<br/>two subsets. A feature weighting sampling technique is used to sample features from these two subsets for building trees. This
<br/>approach enables one to generate more accurate trees, while allowing one to reduce dimensionality and the amount of data needed
<br/>for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets including image
<br/>datasets. The experimental results have shown that RFs with the proposed approach outperformed the existing random forests in
<br/>increasing the accuracy and the AUC measures.
<br/>1. Introduction
<br/>Random forests (RFs) [1] are a nonparametric method that
<br/>builds an ensemble model of decision trees from random
<br/>subsets of features and bagged samples of the training data.
<br/>RFs have shown excellent performance for both clas-
<br/>sification and regression problems. RF model works well
<br/>even when predictive features contain irrelevant features
<br/>(or noise); it can be used when the number of features is
<br/>much larger than the number of samples. However, with the
<br/>randomizing mechanism in both bagging samples and feature
<br/>selection, RFs can give poor accuracy when applied to high-
<br/>dimensional data. The main cause is that, in the process of
<br/>growing a tree from the bagged sample data, the subspace
<br/>of features randomly sampled from thousands of features to
<br/>split a node of the tree is often dominated by uninformative
<br/>features (or noise), and the tree grown from such bagged
<br/>subspace of features will have a low accuracy in prediction
<br/>which affects the final prediction of the RFs. Furthermore,
<br/>Breiman et al. noted that feature selection is biased in the
<br/>classification and regression tree (CART) model because it is
<br/>based on an information criterion, known as the multivalue problem
<br/>[2]. It tends to favor features containing more values, even if
<br/>these features have lower importance than other ones or have
<br/>no relationship with the response feature (e.g., features containing
<br/>fewer missing values, or many categorical or distinct numerical
<br/>values) [3, 4].
<br/>In this paper, we propose a new random forests algo-
<br/>rithm using an unbiased feature sampling method to build
<br/>a good subspace of unbiased features for growing trees.
</td><td>('40538635', 'Thanh-Tung Nguyen', 'thanh-tung nguyen')<br/>('8192216', 'Joshua Zhexue Huang', 'joshua zhexue huang')<br/>('39340373', 'Thuy Thi Nguyen', 'thuy thi nguyen')<br/>('40538635', 'Thanh-Tung Nguyen', 'thanh-tung nguyen')</td><td>Correspondence should be addressed to Thanh-Tung Nguyen; tungnt@wru.vn
</td></tr><tr><td>5003754070f3a87ab94a2abb077c899fcaf936a6</td><td>Evaluation of LC-KSVD on UCF101 Action Dataset
<br/><b>University of Maryland, College Park</b><br/>2Noah’s Ark Lab, Huawei Technologies
</td><td>('3146162', 'Hyunjong Cho', 'hyunjong cho')<br/>('2445131', 'Hyungtae Lee', 'hyungtae lee')<br/>('34145947', 'Zhuolin Jiang', 'zhuolin jiang')</td><td>cho@cs.umd.edu, htlee@umd.edu, zhuolin.jiang@huawei.com
</td></tr><tr><td>503db524b9a99220d430e741c44cd9c91ce1ddf8</td><td>Who’s Better, Who’s Best: Skill Determination in Video using Deep Ranking
<br/><b>University of Bristol, Bristol, UK</b><br/>Walterio Mayol-Cuevas
</td><td>('28798386', 'Hazel Doughty', 'hazel doughty')<br/>('1728459', 'Dima Damen', 'dima damen')</td><td><Firstname>.<Surname>@bristol.ac.uk
</td></tr><tr><td>50d15cb17144344bb1879c0a5de7207471b9ff74</td><td>Divide, Share, and Conquer: Multi-task
<br/>Attribute Learning with Selective Sharing
</td><td>('3197570', 'Chao-Yeh Chen', 'chao-yeh chen')<br/>('2228235', 'Dinesh Jayaraman', 'dinesh jayaraman')<br/>('1693054', 'Fei Sha', 'fei sha')<br/>('1794409', 'Kristen Grauman', 'kristen grauman')</td><td></td></tr><tr><td>50d961508ec192197f78b898ff5d44dc004ef26d</td><td>International Journal of Computer science & Information Technology (IJCSIT), Vol 1, No 2, November 2009
<br/>A LOW INDEXED CONTENT BASED
<br/>NEURAL NETWORK APPROACH FOR
<br/>NATURAL OBJECTS RECOGNITION
<br/>1Research Scholar, JNTUH, Hyderabad, AP, India
<br/><b>Principal, JNTUH College of Engineering, Jagitial, Karimnagar, AP, India</b><br/><b>Principal, Chaithanya Institute of Engineering and Technology, Kakinada, AP, India</b></td><td></td><td> shyam_gunda2002@yahoo.co.in
<br/>govardhan_cse@yahoo.co.in
<br/>tv_venkat@yahoo.com
</td></tr><tr><td>50ccc98d9ce06160cdf92aaf470b8f4edbd8b899</td><td>Towards Robust Cascaded Regression for Face Alignment in the Wild
<br/>Jürgen Beyerer2,1
<br/><b>Vision and Fusion Laboratory (IES), Karlsruhe Institute of Technology (KIT)</b><br/><b>Fraunhofer Institute of Optronics, System Technologies and Image Exploitation (Fraunhofer IOSB)</b><br/>3Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne (EPFL)
</td><td>('1797975', 'Chengchao Qu', 'chengchao qu')<br/>('1697965', 'Hua Gao', 'hua gao')<br/>('2233872', 'Eduardo Monari', 'eduardo monari')<br/>('1710257', 'Jean-Philippe Thiran', 'jean-philippe thiran')</td><td>firstname.lastname@iosb.fraunhofer.de
<br/>firstname.lastname@epfl.ch
</td></tr><tr><td>5028c0decfc8dd623c50b102424b93a8e9f2e390</td><td>Published as a conference paper at ICLR 2017
<br/>REVISITING CLASSIFIER TWO-SAMPLE TESTS
<br/>1Facebook AI Research, 2WILLOW project team, Inria / ENS / CNRS
</td><td>('3016461', 'David Lopez-Paz', 'david lopez-paz')<br/>('2093491', 'Maxime Oquab', 'maxime oquab')</td><td>dlp@fb.com, maxime.oquab@inria.fr
</td></tr><tr><td>505e55d0be8e48b30067fb132f05a91650666c41</td><td>A Model of Illumination Variation for Robust Face Recognition
<br/>Institut Eurécom
<br/>Multimedia Communications Department
<br/>BP 193, 06904 Sophia Antipolis Cedex, France
</td><td>('1723883', 'Florent Perronnin', 'florent perronnin')<br/>('1709849', 'Jean-Luc Dugelay', 'jean-luc dugelay')</td><td>fflorent.perronnin, jean-luc.dugelayg@eurecom.fr
</td></tr><tr><td>507c9672e3673ed419075848b4b85899623ea4b0</td><td>Faculty of Informatics
<br/><b>Institute for Anthropomatics</b><br/>Chair Prof. Dr.-Ing. R. Stiefelhagen
<br/>Facial Image Processing and Analysis Group
<br/>Multi-View Facial Expression
<br/>Classification
<br/>ADVISORS
<br/>MARCH 2011
<br/><b>KIT University of the State of Baden-Württemberg and National Laboratory of the Helmholtz Association</b><br/>www.kit.edu
</td><td>('33357889', 'Nikolas Hesse', 'nikolas hesse')<br/>('38113750', 'Hua Gao', 'hua gao')<br/>('40303076', 'Tobias Gehrig', 'tobias gehrig')</td><td></td></tr><tr><td>50c0de2cccf7084a81debad5fdb34a9139496da0</td><td>ORIGINAL RESEARCH
<br/>published: 30 November 2016
<br/>doi: 10.3389/fict.2016.00027
<br/>The Influence of Annotation, Corpus
<br/>Design, and Evaluation on the
<br/>Outcome of Automatic Classification
<br/>of Human Emotions
<br/><b>Institute of Neural Information Processing, Ulm University, Ulm, Germany</b><br/>The integration of emotions into human–computer interaction applications promises a
<br/>more natural dialog between the user and the technical system operators. In order
<br/>to construct such machinery, continuous measuring of the affective state of the user
<br/>becomes essential. While basic research that is aimed to capture and classify affective
<br/>signals has progressed, many issues are still prevailing that hinder easy integration
<br/>of affective signals into human–computer interaction. In this paper, we identify and
<br/>investigate pitfalls in three steps of the work-flow of affective classification studies. It starts
<br/>with the process of collecting affective data for the purpose of training suitable classifiers.
<br/>Emotional data have to be created in which the target emotions are present. Therefore,
<br/>human participants have to be stimulated suitably. We discuss the nature of these stimuli,
<br/>their relevance to human–computer interaction, and the repeatability of the data recording
<br/>setting. Second, aspects of annotation procedures are investigated, which include the
<br/>variances of individual raters, annotation delay, the impact of the used annotation
<br/>tool, and how individual ratings are combined into a unified label. Finally, the evaluation
<br/>protocol is examined, which includes, among others, the impact of the performance
<br/>measure on the accuracy of a classification model. We hereby focus especially on the
<br/>evaluation of classifier outputs against continuously annotated dimensions. Together with
<br/>the discussed problems and pitfalls and the ways how they affect the outcome, we
<br/>provide solutions and alternatives to overcome these issues. As the final part of the paper,
<br/>we sketch a recording scenario and a set of supporting technologies that can contribute
<br/>to solve many of the issues mentioned above.
<br/>Keywords: affective computing, affective labeling, human–computer interaction, performance measures, machine
<br/>guided labeling
<br/>1. INTRODUCTION
<br/>The integration of affective signals into human–computer interaction (HCI) is generally considered
<br/>beneficial to improve the interaction process (Picard, 2000). The analysis of affective data in HCI
<br/>can be considered both cumbersome and prone to errors. The main reason for this is that the
<br/>important steps in affective classification are particularly difficult. This includes difficulties that arise
<br/>in the recording of suitable data collections comprising episodes of affective HCI, in the uncertainty
<br/>and subjectivity of the annotations of these data, and finally in the evaluation protocol that should
<br/>account for the continuous nature of the application.
<br/>Edited by:
<br/>Anna Esposito,
<br/>Seconda Università degli Studi di
<br/>Napoli, Italy
<br/>Reviewed by:
<br/>Anna Pribilova,
<br/><b>Slovak University of Technology in</b><br/>Bratislava, Slovakia
<br/>Alda Troncone,
<br/>Seconda Università degli Studi di
<br/>Napoli, Italy
<br/>*Correspondence:
<br/>contributed equally to this work.
<br/>Specialty section:
<br/>This article was submitted to
<br/>Human-Media Interaction, a section
<br/>of the journal Frontiers in ICT
<br/>Received: 15 May 2016
<br/>Accepted: 26 October 2016
<br/>Published: 30 November 2016
<br/>Citation:
<br/>Kächele M, Schels M and
<br/>Schwenker F (2016) The Influence of
<br/>Annotation, Corpus Design, and
<br/>Evaluation on the Outcome of
<br/>Automatic Classification of Human
<br/>Emotions.
<br/>doi: 10.3389/fict.2016.00027
<br/>Frontiers in ICT | www.frontiersin.org
<br/>November 2016 | Volume 3 | Article 27
</td><td>('2144395', 'Markus Kächele', 'markus kächele')<br/>('3037635', 'Martin Schels', 'martin schels')<br/>('1685857', 'Friedhelm Schwenker', 'friedhelm schwenker')<br/>('2144395', 'Markus Kächele', 'markus kächele')<br/>('2144395', 'Markus Kächele', 'markus kächele')<br/>('3037635', 'Martin Schels', 'martin schels')</td><td>markus.kaechele@uni-ulm.de
</td></tr><tr><td>680d662c30739521f5c4b76845cb341dce010735</td><td>Int J Comput Vis (2014) 108:82–96
<br/>DOI 10.1007/s11263-014-0716-6
<br/>Part and Attribute Discovery from Relative Annotations
<br/>Received: 25 February 2013 / Accepted: 14 March 2014 / Published online: 26 April 2014
<br/>© Springer Science+Business Media New York 2014
</td><td>('35208858', 'Subhransu Maji', 'subhransu maji')</td><td></td></tr><tr><td>68f89c1ee75a018c8eff86e15b1d2383c250529b</td><td>Final Report for Project Localizing Objects and
<br/>Actions in Videos Using Accompanying Text
<br/><b>Johns Hopkins University, Center for Speech and Language Processing</b><br/>Summer Workshop 2010
<br/>J. Neumann, StreamSage/Comcast
<br/><b>F. Ferraro, University of Rochester</b><br/><b>H. He, Hong Kong Polytechnic University</b><br/><b>Y. Li, University of Maryland</b><br/><b>C.L. Teo, University of Maryland</b><br/>November 4, 2010
</td><td>('3167986', 'C. Fermueller', 'c. fermueller')<br/>('1743020', 'J. Kosecka', 'j. kosecka')<br/>('2601166', 'E. Tzoukermann', 'e. tzoukermann')<br/>('2995090', 'R. Chaudhry', 'r. chaudhry')<br/>('1937619', 'I. Perera', 'i. perera')<br/>('9133363', 'B. Sapp', 'b. sapp')<br/>('38873583', 'G. Singh', 'g. singh')<br/>('1870728', 'X. Yi', 'x. yi')</td><td></td></tr><tr><td>68a2ee5c5b76b6feeb3170aaff09b1566ec2cdf5</td><td>AGE CLASSIFICATION BASED ON
<br/>SIMPLE LBP TRANSITIONS
<br/><b>Aditya Institute of Technology and Management, Tekkalli-532 201, A.P</b><br/>2Dr. V. Vijaya Kumar
<br/>3A. Obulesu
<br/>2Dean-Computer Sciences (CSE & IT), Anurag Group of Institutions, Hyderabad – 500088, A.P., India.,
<br/> 3Asst. Professor, Dept. Of CSE, Anurag Group of Institutions, Hyderabad – 500088, A.P., India.
</td><td>('34964075', 'Satyanarayana Murty', 'satyanarayana murty')</td><td>India, 1gsn_73@yahoo.co.in
<br/>2drvvk144@gmail.com
<br/>3obulesh.a@gmail.com
</td></tr><tr><td>68d2afd8c5c1c3a9bbda3dd209184e368e4376b9</td><td>Representation Learning by Rotating Your Faces
</td><td>('1849929', 'Luan Tran', 'luan tran')<br/>('2399004', 'Xi Yin', 'xi yin')<br/>('1759169', 'Xiaoming Liu', 'xiaoming liu')</td><td></td></tr><tr><td>68a3f12382003bc714c51c85fb6d0557dcb15467</td><td></td><td></td><td></td></tr><tr><td>6859b891a079a30ef16f01ba8b85dc45bd22c352</td><td>International Journal of Emerging Technology and Advanced Engineering
<br/>Website: www.ijetae.com (ISSN 2250-2459, ISO 9001:2008 Certified Journal, Volume 4, Issue 10, October 2014)
<br/>2D Face Recognition Based on PCA & Comparison of
<br/>Manhattan Distance, Euclidean Distance & Chebychev
<br/>Distance
<br/><b>RCC Institute of Information Technology, Kolkata, India</b></td><td>('2467416', 'Rajib Saha', 'rajib saha')<br/>('2144187', 'Sayan Barman', 'sayan barman')</td><td></td></tr><tr><td>68d08ed9470d973a54ef7806318d8894d87ba610</td><td>Drive Video Analysis for the Detection of Traffic Near-Miss Incidents
</td><td>('1730200', 'Hirokatsu Kataoka', 'hirokatsu kataoka')<br/>('5014206', 'Teppei Suzuki', 'teppei suzuki')<br/>('6881850', 'Shoko Oikawa', 'shoko oikawa')<br/>('1720770', 'Yasuhiro Matsui', 'yasuhiro matsui')<br/>('1732705', 'Yutaka Satoh', 'yutaka satoh')</td><td></td></tr><tr><td>68caf5d8ef325d7ea669f3fb76eac58e0170fff0</td><td></td><td></td><td></td></tr><tr><td>68003e92a41d12647806d477dd7d20e4dcde1354</td><td>ISSN: 0976-9102 (ONLINE)
<br/>DOI: 10.21917/ijivp.2013.0101
<br/> ICTACT JOURNAL ON IMAGE AND VIDEO PROCESSING, NOVEMBER 2013, VOLUME: 04, ISSUE: 02
<br/>FUZZY BASED IMAGE DIMENSIONALITY REDUCTION USING SHAPE
<br/>PRIMITIVES FOR EFFICIENT FACE RECOGNITION
<br/>1Department of Computer Science and Engineering, Nalla Narasimha Reddy Education Society’s Group of Institutions, India
<br/><b>Department of Computer Science and Engineering, JNTUA College of Engineering, India</b><br/>3Department of Computer Science and Engineering, Anurag Group of Institutions, India
</td><td>('2086540', 'P. Chandra', 'p. chandra')<br/>('2803943', 'B. Eswara Reddy', 'b. eswara reddy')<br/>('36754879', 'Vijaya Kumar', 'vijaya kumar')</td><td>E-Mail: pchandureddy@yahoo.com
<br/>E-mail: eswarcsejntu@gmail.com
<br/>E-mail: vijayvakula@yahoo.com
</td></tr><tr><td>68d4056765c27fbcac233794857b7f5b8a6a82bf</td><td>Example-Based Face Shape Recovery Using the
<br/>Zenith Angle of the Surface Normal
<br/>Mario Castelán1, Ana J. Almazán-Delfín2, Marco I. Ramírez-Sosa-Morán3,
<br/>and Luz A. Torres-Méndez1
<br/>1 CINVESTAV Campus Saltillo, Ramos Arizpe 25900, Coahuila, México
<br/>2 Universidad Veracruzana, Facultad de Física e Inteligencia Artificial, Xalapa 91000,
<br/>3 ITESM, Campus Saltillo, Saltillo 25270, Coahuila, México
<br/>Veracruz, México
</td><td></td><td>mario.castelan@cinvestav.edu.mx
</td></tr><tr><td>684f5166d8147b59d9e0938d627beff8c9d208dd</td><td>IEEE TRANS. NNLS, JUNE 2017
<br/>Discriminative Block-Diagonal Representation
<br/>Learning for Image Recognition
</td><td>('38448016', 'Zheng Zhang', 'zheng zhang')<br/>('40065614', 'Yong Xu', 'yong xu')<br/>('40799321', 'Ling Shao', 'ling shao')<br/>('49500178', 'Jian Yang', 'jian yang')</td><td></td></tr><tr><td>68c5238994e3f654adea0ccd8bca29f2a24087fc</td><td>PLSA-BASED ZERO-SHOT LEARNING
<br/>Centre of Image and Signal Processing
<br/>Faculty of Computer Science & Information Technology
<br/><b>University of Malaya, 50603 Kuala Lumpur, Malaysia</b></td><td>('2800072', 'Wai Lam Hoo', 'wai lam hoo')<br/>('2863960', 'Chee Seng Chan', 'chee seng chan')</td><td>{wailam88@siswa.um.edu.my; cs.chan@um.edu.my}
</td></tr><tr><td>68cf263a17862e4dd3547f7ecc863b2dc53320d8</td><td></td><td></td><td></td></tr><tr><td>68e9c837431f2ba59741b55004df60235e50994d</td><td>Detecting Faces Using Region-based Fully
<br/>Convolutional Networks
<br/>Tencent AI Lab, China
</td><td>('1996677', 'Yitong Wang', 'yitong wang')</td><td>{yitongwang,denisji,encorezhou,hawelwang,michaelzfli}@tencent.com
</td></tr><tr><td>685f8df14776457c1c324b0619c39b3872df617b</td><td>Master of Science Thesis in Electrical Engineering
<br/><b>Linköping University</b><br/>Face Recognition with
<br/>Preprocessing and Neural
<br/>Networks
</td><td></td><td></td></tr><tr><td>68484ae8a042904a95a8d284a7f85a4e28e37513</td><td>Spoofing Deep Face Recognition with Custom Silicone Masks
<br/>S´ebastien Marcel
<br/><b>Idiap Research Institute. Centre du Parc, Rue Marconi 19, Martigny (VS), Switzerland</b></td><td>('1952348', 'Sushil Bhattacharjee', 'sushil bhattacharjee')</td><td>{sushil.bhattacharjee; amir.mohammadi; sebastien.marcel}@idiap.ch
</td></tr><tr><td>687e17db5043661f8921fb86f215e9ca2264d4d2</td><td>A Robust Elastic and Partial Matching Metric for Face Recognition
<br/>Microsoft Corporation
<br/>One Microsoft Way, Redmond, WA 98052
</td><td>('1745420', 'Gang Hua', 'gang hua')<br/>('33474090', 'Amir Akbarzadeh', 'amir akbarzadeh')</td><td>{ganghua, amir}@microsoft.com
</td></tr><tr><td>688754568623f62032820546ae3b9ca458ed0870</td><td>bioRxiv preprint first posted online Sep. 27, 2016;
<br/>doi:
<br/>http://dx.doi.org/10.1101/077784
<br/>.
<br/>The copyright holder for this preprint (which was not
<br/>peer-reviewed) is the author/funder. It is made available under a
<br/>CC-BY-NC-ND 4.0 International license
<br/>.
<br/>Resting high frequency heart rate variability is not associated with the
<br/>recognition of emotional facial expressions in healthy human adults.
<br/>1 Univ. Grenoble Alpes, LPNC, F-38040, Grenoble, France
<br/>2 CNRS, LPNC UMR 5105, F-38040, Grenoble, France
<br/>3 IPSY, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
<br/>4 Fund for Scientific Research (FRS-FNRS), Brussels, Belgium
<br/>Correspondence concerning this article should be addressed to Brice Beffara, Office E250, Institut
<br/>de Recherches en Sciences Psychologiques, IPSY - Place du Cardinal Mercier, 10 bte L3.05.01 B-1348
<br/>Author note
<br/>This study explores whether the myelinated vagal connection between the heart and the brain
<br/>is involved in emotion recognition. The Polyvagal theory postulates that the activity of the
<br/>myelinated vagus nerve underlies socio-emotional skills. It has been proposed that the perception
<br/>of emotions could be one of these skills dependent on heart-brain interactions. However, this
<br/>assumption has received mixed support from diverging results, suggesting that it could be related to
<br/>confounded factors. In the current study, we recorded the resting state vagal activity (reflected by
<br/>High Frequency Heart Rate Variability, HF-HRV) of 77 (68 suitable for analysis) healthy human
<br/>adults and measured their ability to identify dynamic emotional facial expressions. Results show
<br/>that HF-HRV is not related to the recognition of emotional facial expressions in healthy human
<br/>adults. We discuss this result in the frameworks of the polyvagal theory and the neurovisceral
<br/>integration model.
<br/>Keywords: HF-HRV; autonomic flexibility; emotion identification; dynamic EFEs; Polyvagal
<br/>theory; Neurovisceral integration model
<br/>Word count: 9810
<br/>The behavior of an animal is said to be social when it is involved in in-
<br/>teractions with other animals (Ward & Webster, 2016). These
<br/>interactions imply an exchange of information, signals, be-
<br/>tween at least two animals. In humans, the face is an efficient
<br/>communication channel, rapidly providing a high quantity of
<br/>information. Facial expressions thus play an important role
<br/>in the transmission of emotional information during social
<br/>interactions. The result of the communication is the combina-
<br/>tion of transmission from the sender and decoding from the
<br/>receiver (Jack & Schyns, 2015). As a consequence, the quality
<br/>of the interaction depends on the ability to both produce and
<br/>identify facial expressions. Emotions are therefore a core
<br/>feature of social bonding (Spoor & Kelly, 2004). Health
<br/>of individuals and groups depend on the quality of social
<br/>bonds in many animals (Boyer, Firat, & Leeuwen, 2015; S. L.
<br/>Brown & Brown, 2015; Neuberg, Kenrick, & Schaller, 2011),
<br/>especially in highly social species such as humans (Singer &
<br/>Klimecki, 2014).
<br/>The recognition of emotional signals produced by others is
<br/>not independent from its production by oneself (Niedenthal,
<br/>2007). The muscles of the face involved in the production of
<br/>a facial expressions are also activated during the perception of
<br/>the same facial expressions (Dimberg, Thunberg, & Elmehed,
<br/>2000). In other terms, the facial mimicry of the perceived
<br/>emotional facial expression (EFE) triggers its sensorimotor
<br/>simulation in the brain, which improves the recognition abili-
<br/>ties (Wood, Rychlowska, Korb, & Niedenthal, 2016). Beyond
<br/>that, the emotion can be seen as the body -including brain-
<br/>dynamic itself (Gallese & Caruana, 2016) which helps to un-
<br/>derstand why behavioral simulation is necessary to understand
<br/>the emotion.
<br/>The interplay between emotion production, emotion percep-
<br/>tion, social communication and body dynamics has been sum-
<br/>marized in the framework of the polyvagal theory (Porges,
</td><td>('37799937', 'Nicolas Vermeulen', 'nicolas vermeulen')<br/>('2634712', 'Martial Mermillod', 'martial mermillod')</td><td>Louvain-la-Neuve, Belgium. E-mail: brice.beffara@univ-grenoble-alpes.fr
</td></tr><tr><td>68f9cb5ee129e2b9477faf01181cd7e3099d1824</td><td>ALDA Algorithms for Online Feature Extraction
</td><td>('2784763', 'Youness Aliyari Ghassabeh', 'youness aliyari ghassabeh')<br/>('2060085', 'Hamid Abrishami Moghaddam', 'hamid abrishami moghaddam')</td><td></td></tr><tr><td>68bf34e383092eb827dd6a61e9b362fcba36a83a</td><td></td><td></td><td></td></tr><tr><td>68d40176e878ebffbc01ffb0556e8cb2756dd9e9</td><td>International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622
<br/>International Conference on Humming Bird ( 01st March 2014)
<br/>RESEARCH ARTICLE
<br/> OPEN ACCESS
<br/>Locality Repulsion Projection and Minutia Extraction Based
<br/>Similarity Measure for Face Recognition
<br/><b>Agnel Anushya P. is currently pursuing M.E. (Computer Science and Engineering) at Vins Christian college of</b><br/>2Ramya P. is currently working as an Asst. Professor in the dept. of Information Technology at Vins Christian
<br/><b>college of Engineering</b></td><td></td><td>Engineering. e-mail:anushyase@gmail.com.
</td></tr><tr><td>68c4a1d438ea1c6dfba92e3aee08d48f8e7f7090</td><td>AgeNet: Deeply Learned Regressor and Classifier for
<br/>Robust Apparent Age Estimation
<br/>1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
<br/><b>Institute of Computing Technology, CAS, Beijing, 100190, China</b><br/>2Tencent BestImage Team, Shanghai, 100080, China
</td><td>('1731144', 'Xin Liu', 'xin liu')<br/>('1688086', 'Shaoxin Li', 'shaoxin li')<br/>('1693589', 'Meina Kan', 'meina kan')<br/>('1698586', 'Jie Zhang', 'jie zhang')<br/>('3126238', 'Shuzhe Wu', 'shuzhe wu')<br/>('13323391', 'Wenxian Liu', 'wenxian liu')<br/>('34393045', 'Hu Han', 'hu han')<br/>('1685914', 'Shiguang Shan', 'shiguang shan')<br/>('1710220', 'Xilin Chen', 'xilin chen')</td><td>{xin.liu, meina.kan, jie.zhang, shuzhe.wu, wenxian.liu, hu.han}@vipl.ict.ac.cn
<br/>{darwinli}@tencent.com, {sgshan, xlchen}@ict.ac.cn
</td></tr><tr><td>6889d649c6bbd9c0042fadec6c813f8e894ac6cc</td><td>Analysis of Robust Soft Learning Vector
<br/>Quantization and an application to Facial
<br/>Expression Recognition
</td><td></td><td></td></tr><tr><td>68f69e6c6c66cfde3d02237a6918c9d1ee678e1b</td><td>Enhancing Concept Detection by Pruning Data with MCA-based Transaction
<br/>Weights
<br/>Department of Electrical and
<br/>Computer Engineering
<br/><b>University of Miami</b><br/>Coral Gables, FL 33124, USA
<br/>School of Computing and
<br/>Information Sciences
<br/><b>Florida International University</b><br/>Miami, FL 33199, USA
</td><td>('1685202', 'Lin Lin', 'lin lin')<br/>('1693826', 'Mei-Ling Shyu', 'mei-ling shyu')<br/>('1705664', 'Shu-Ching Chen', 'shu-ching chen')</td><td>Email: l.lin2@umiami.edu, shyu@miami.edu
<br/>Email: chens@cs.fiu.edu
</td></tr><tr><td>682760f2f767fb47e1e2ca35db3becbb6153756f</td><td>The Effect of Pets on Happiness: A Large-scale Multi-Factor
<br/>Analysis using Social Multimedia
<br/>From reducing stress and loneliness, to boosting productivity and overall well-being, pets are believed to play
<br/>a significant role in people’s daily lives. Many traditional studies have identified that frequent interactions
<br/>with pets could make individuals become healthier and more optimistic, and ultimately enjoy a happier life.
<br/>However, most of those studies are not only restricted in scale, but also may carry biases by using subjective
<br/>self-reports, interviews, and questionnaires as the major approaches. In this paper, we leverage large-scale
<br/>data collected from social media and the state-of-the-art deep learning technologies to study this phenomenon
<br/>in depth and breadth. Our study includes five major steps: 1) collecting timeline posts from around 20,000
<br/>Instagram users; 2) using face detection and recognition on 2-million photos to infer users’ demographics,
<br/>relationship status, and whether they have children; 3) analyzing a user’s degree of happiness based on images
<br/>and captions via smiling classification and textual sentiment analysis; 4) applying transfer learning techniques
<br/>to retrain the final layer of the Inception v3 model for pet classification; and 5) analyzing the effects of pets
<br/>on happiness in terms of multiple factors of user demographics. Our main results have demonstrated the
<br/>efficacy of our proposed method with many new insights. We believe this method is also applicable to other
<br/>domains as a scalable, efficient, and effective methodology for modeling and analyzing social behaviors and
<br/>psychological well-being. In addition, to facilitate the research involving human faces, we also release our
<br/>dataset of 700K analyzed faces.
<br/>CCS Concepts: • Human-centered computing → Social media;
<br/>Additional Key Words and Phrases: Happiness analysis, happiness, user demographics, pet and happiness,
<br/>social multimedia, social media.
<br/>ACM Reference format:
<br/>Analysis using Social Multimedia. ACM Trans. Intell. Syst. Technol. 9, 4, Article 39 (June 2017), 15 pages.
<br/>https://doi.org/0000001.0000001
<br/>1 INTRODUCTION
<br/>Happiness has always been a subjective and multidimensional matter; its definition varies individu-
<br/>ally, and the factors impacting our feeling of happiness are diverse. A study in [21] has constructed
<br/><b>We thank the support of New York State through the Goergen Institute for Data Science, our corporate research sponsors</b><br/>Xerox and VisualDX, and NSF Award #1704309.
<br/><b>Authors' addresses: X. Peng, University of Rochester; L. Chi</b><br/><b>University of Rochester and J. Luo, University of Rochester</b>Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee
<br/>provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the
<br/>full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored.
</td><td>('1901094', 'Xuefeng Peng', 'xuefeng peng')<br/>('35678395', 'Li-Kai Chi', 'li-kai chi')<br/>('33642939', 'Jiebo Luo', 'jiebo luo')<br/>('1901094', 'Xuefeng Peng', 'xuefeng peng')<br/>('35678395', 'Li-Kai Chi', 'li-kai chi')<br/>('33642939', 'Jiebo Luo', 'jiebo luo')</td><td></td></tr><tr><td>683ec608442617d11200cfbcd816e86ce9ec0899</td><td>Dual Linear Regression Based Classification for Face Cluster Recognition
<br/><b>University of Northern British Columbia</b><br/>Prince George, BC, Canada V2N 4Z9
</td><td>('1692551', 'Liang Chen', 'liang chen')</td><td>chen.liang.97@gmail.com
</td></tr><tr><td>68c17aa1ecbff0787709be74d1d98d9efd78f410</td><td>International Journal of Optomechatronics, 6: 92–119, 2012
<br/>Copyright # Taylor & Francis Group, LLC
<br/>ISSN: 1559-9612 print=1559-9620 online
<br/>DOI: 10.1080/15599612.2012.663463
<br/>GENDER CLASSIFICATION FROM FACE IMAGES
<br/>USING MUTUAL INFORMATION AND FEATURE
<br/>FUSION
<br/>Department of Electrical Engineering and Advanced Mining Technology
<br/>Center, Universidad de Chile, Santiago, Chile
<br/>In this article we report a new method for gender classification from frontal face images
<br/>using feature selection based on mutual information and fusion of features extracted from
<br/>intensity, shape, texture, and from three different spatial scales. We compare the results of
<br/>three different mutual information measures: minimum redundancy and maximal relevance
<br/>(mRMR), normalized mutual information feature selection (NMIFS), and conditional
<br/>mutual information feature selection (CMIFS). We also show that by fusing features
<br/>extracted from six different methods we significantly improve the gender classification
<br/>results relative to those previously published, yielding 99.13% of the gender classification
<br/>rate on the FERET database.
<br/>Keywords: Feature fusion, feature selection, gender classification, mutual information, real-time gender
<br/>classification
<br/>1. INTRODUCTION
<br/>During the 90’s, one of the main issues addressed in the area of computer
<br/>vision was face detection. Many methods and applications were developed including
<br/>the face detection used in many digital cameras nowadays. Gender classification is
<br/>important in many possible applications including electronic marketing. Displays
<br/>at retail stores could show products and offers according to the person's gender as
<br/>the person passes in front of a camera at the store. This is not a simple task since
<br/>faces are not rigid and depend on illumination, pose, gestures, facial expressions,
<br/>occlusions (glasses), and other facial features (makeup, beard). The high variability
<br/>in the appearance of the face directly affects their detection and classification. Auto-
<br/>matic classification of gender from face images has a wide range of possible applica-
<br/>tions, ranging from human-computer interaction to applications in real-time
<br/>electronic marketing in retail stores (Shan 2012; Bekios-Calfa et al. 2011; Chu
<br/>et al. 2010; Perez et al. 2010a).
<br/>Automatic gender classification has a wide range of possible applications for
<br/>improving human-machine interaction and face identification methods (Irick et al.
<br/>ing.uchile.cl
<br/>92
</td><td>('32271973', 'Claudio Perez', 'claudio perez')<br/>('40333310', 'Juan Tapia', 'juan tapia')<br/>('32723983', 'Claudio Held', 'claudio held')<br/>('32271973', 'Claudio Perez', 'claudio perez')<br/>('32271973', 'Claudio Perez', 'claudio perez')</td><td>Engineering, Universidad de Chile Casilla 412-3, Av. Tupper 2007, Santiago, Chile. E-mail: clperez@
</td></tr><tr><td>68f61154a0080c4aae9322110c8827978f01ac2e</td><td>Research Article
<br/>Journal of the Optical Society of America A
<br/>Recognizing blurred, non-frontal, illumination and
<br/>expression variant partially occluded faces
<br/><b>Indian Institute of Technology Madras, Chennai 600036, India</b><br/>Compiled June 26, 2016
<br/>The focus of this paper is on the problem of recognizing faces across space-varying motion blur, changes
<br/>in pose, illumination, and expression, as well as partial occlusion, when only a single image per subject
<br/>is available in the gallery. We show how the blur incurred due to relative motion between the camera and
<br/>the subject during exposure can be estimated from the alpha matte of pixels that straddle the boundary
<br/>between the face and the background. We also devise a strategy to automatically generate the trimap re-
<br/>quired for matte estimation. Having computed the motion via the matte of the probe, we account for pose
<br/>variations by synthesizing from the intensity image of the frontal gallery, a face image that matches the
<br/>pose of the probe. To handle illumination and expression variations, and partial occlusion, we model the
<br/>probe as a linear combination of nine blurred illumination basis images in the synthesized non-frontal
<br/>pose, plus a sparse occlusion. We also advocate a recognition metric that capitalizes on the sparsity of the
<br/>occluded pixels. The performance of our method is extensively validated on synthetic as well as real face
<br/>data. © 2016 Optical Society of America
<br/>OCIS codes: (100.0100) Image processing; (100.5010) Pattern recognition; (100.3008) Image recognition, algorithms and filters;
<br/>(150.0150) Machine vision.
<br/>http://dx.doi.org/10.1364/ao.XX.XXXXXX
<br/>1. INTRODUCTION
<br/>State-of-the-art face recognition (FR) systems can outperform
<br/>even humans when presented with images captured under con-
<br/>trolled environments. However, their performance drops quite
<br/>rapidly in unconstrained settings due to image degradations
<br/>arising from blur, variations in pose, illumination, and expres-
<br/>sion, partial occlusion etc. Motion blur is commonplace today
<br/>owing to the exponential rise in the use and popularity of light-
<br/>weight and cheap hand-held imaging devices, and the ubiquity
<br/>of mobile phones equipped with cameras. Photographs cap-
<br/>tured using a hand-held device usually contain blur when the
<br/>illumination is poor because larger exposure times are needed
<br/>to compensate for the lack of light, and this increases the possi-
<br/>bility of camera shake. On the other hand, reducing the shutter
<br/>speed results in noisy images while tripods inevitably restrict
<br/>mobility. Even for a well-lit scene, the face might be blurred if
<br/>the subject is in motion. The problem is further compounded
<br/>in the case of poorly-lit dynamic scenes since the blur observed
<br/>on the face is due to the combined effects of the blur induced
<br/>by the motion of the camera and the independent motion of
<br/>the subject. In addition to blur and illumination, practical face
<br/>recognition algorithms must also possess the ability to recognize
<br/>faces across reasonable variations in pose. Partial occlusion and
<br/>facial expression changes, common in real-world applications,
<br/>escalate the challenges further. Yet another factor that governs
<br/>the performance of face recognition algorithms is the number
<br/>of images per subject available for training. In many practical
<br/>application scenarios such as law enforcement, driver license or
<br/>passport identification, where there is usually only one training
<br/>sample per subject in the database, techniques that rely on the
<br/>size and representation of the training set suffer a serious perfor-
<br/>mance drop or even fail to work. Face recognition algorithms
<br/>can broadly be classified into either discriminative or genera-
<br/>tive approaches. While the availability of large labeled datasets
<br/>and greater computing power has boosted the performance of
<br/>discriminative methods [1, 2] recently, generative approaches
<br/>continue to remain very popular [3, 4], and there is concurrent
<br/>research in both directions. The model we present in this paper
<br/>falls into the latter category. In fact, generative models are even
<br/>useful for producing training samples for learning algorithms.
<br/>Literature on face recognition from blurred images can be
<br/>broadly classified into four categories. It is important to note
<br/>that all of them (except our own earlier work in [4]) are restricted
<br/>to the convolution model for uniform blur. In the first approach
<br/>[5, 6], the blurred probe image is first deblurred using standard
<br/>deconvolution algorithms before performing recognition. How-
</td><td></td><td>*Corresponding author: jithuthatswho@gmail.com
</td></tr><tr><td>6821113166b030d2123c3cd793dd63d2c909a110</td><td>STUDIA INFORMATICA
<br/>Volume 36
<br/>2015
<br/>Number 1 (119)
<br/><b>Gdansk University of Technology, Faculty of Electronics, Telecommunication</b><br/>and Informatics
<br/>ACQUISITION AND INDEXING OF RGB-D RECORDINGS FOR
<br/>FACIAL EXPRESSIONS AND EMOTION RECOGNITION1
<br/>Summary. In this paper the KinectRecorder comprehensive tool is described, which
<br/>provides for convenient and fast acquisition, indexing and storing of RGB-D video
<br/>streams from Microsoft Kinect sensor. The application is especially useful as a sup-
<br/>porting tool for creation of fully indexed databases of facial expressions and emotions
<br/>that can be further used for learning and testing of emotion recognition algorithms for
<br/>affect-aware applications. KinectRecorder was successfully exploited for creation of
<br/>Facial Expression and Emotion Database (FEEDB) significantly reducing the time of
<br/>the whole project consisting of data acquisition, indexing and validation. FEEDB has
<br/>already been used as a learning and testing dataset for a few emotion recognition al-
<br/>gorithms, which proved the utility of the database and the KinectRecorder tool.
<br/>Keywords: RGB-D data acquisition and indexing, facial expression recognition,
<br/>emotion recognition
<br/>ACQUISITION AND INDEXING OF RGB-D RECORDINGS FOR
<br/>Summary. This paper presents a comprehensive tool that allows convenient and fast
<br/>acquisition, indexing and storage of RGB-D stream recordings from the Microsoft Kinect
<br/>sensor. The application is particularly useful as a [...] can then be used for learning and
<br/>testing of user emotion recognition algorithms for affect-aware applications. KinectRecorder
<br/>was [...] shortening the time of the whole process covering acquisition, indexing and
<br/>validation of the recordings. The FEEDB database has already been successfully used as a
<br/>learning and testing [...]
<br/>
<br/>1 The research leading to these results has received funding from the Polish-Norwegian Research Programme
<br/>operated by the National Centre for Research and Development under the Norwegian Financial Mechanism
<br/>2009-2014 in the frame of Project Contract No Pol-Nor/210629/51/2013.
</td><td>('3271448', 'Mariusz SZWOCH', 'mariusz szwoch')</td><td></td></tr><tr><td>68a04a3ae2086986877fee2c82ae68e3631d0356</td><td>THERMAL & REFLECTANCE BASED IDENTIFICATION IN CHALLENGING VARIABLE ILLUMINATIONS
<br/>Thermal and Reflectance Based Personal
<br/>Identification Methodology in Challenging
<br/>Variable Illuminations
<br/>†Department of Engineering, <b>University of Cambridge</b>, Cambridge, CB2 1PZ, UK
<br/>‡Delphi Corporation, Delphi Electronics and Safety, Kokomo, IN 46901-9005, USA
<br/>February 15, 2007
<br/>DRAFT
</td><td>('2214319', 'Riad Hammoud', 'riad hammoud')</td><td>{oa214,cipolla}@eng.cam.ac.uk
<br/>riad.hammoud@delphi.com
</td></tr><tr><td>6888f3402039a36028d0a7e2c3df6db94f5cb9bb</td><td>Under review as a conference paper at ICLR 2018
<br/>CLASSIFIER-TO-GENERATOR ATTACK: ESTIMATION
<br/>OF TRAINING DATA DISTRIBUTION FROM CLASSIFIER
<br/>Anonymous authors
<br/>Paper under double-blind review
</td><td></td><td></td></tr><tr><td>57f5711ca7ee5c7110b7d6d12c611d27af37875f</td><td>Illumination Invariance for Face Verification
<br/>Submitted for the Degree of
<br/>Doctor of Philosophy
<br/>from the
<br/><b>University of Surrey</b><br/>Centre for Vision, Speech and Signal Processing
<br/>School of Electronics and Physical Sciences
<br/><b>University of Surrey</b><br/>Guildford, Surrey GU2 7XH, U.K.
<br/>August 2006
</td><td>('28467739', 'J. Short', 'j. short')<br/>('28467739', 'J. Short', 'j. short')</td><td></td></tr><tr><td>570308801ff9614191cfbfd7da88d41fb441b423</td><td>Unsupervised Synchrony Discovery in Human Interaction
<br/><b>Robotics Institute, Carnegie Mellon University 3University of Pittsburgh, USA</b><br/><b>Beihang University, Beijing, China</b><br/><b>University of Miami, USA</b></td><td>('39336289', 'Wen-Sheng Chu', 'wen-sheng chu')<br/>('1874236', 'Daniel S. Messinger', 'daniel s. messinger')</td><td></td></tr><tr><td>57bf9888f0dfcc41c5ed5d4b1c2787afab72145a</td><td>Robust Facial Expression Recognition Based on
<br/>Local Directional Pattern
<br/>Automatic facial expression recognition has many
<br/>potential applications
<br/>in different areas of human
<br/>computer interaction. However, they are not yet fully
<br/>realized due to the lack of an effective facial feature
<br/>descriptor. In this paper, we present a new appearance-
<br/>based feature descriptor, the local directional pattern
<br/>(LDP), to represent facial geometry and analyze its
<br/>performance in expression recognition. An LDP feature is
<br/>obtained by computing the edge response values in 8
<br/>directions at each pixel and encoding them into an 8 bit
<br/>binary number using the relative strength of these edge
<br/>responses. The LDP descriptor, a distribution of LDP
<br/>codes within an image or image patch, is used to describe
<br/>each expression image. The effectiveness of dimensionality
<br/>reduction techniques, such as principal component
<br/>analysis and AdaBoost, is also analyzed in terms of
<br/>computational cost saving and classification accuracy. Two
<br/>well-known machine learning methods, template
<br/>matching and support vector machine, are used for
<br/>classification using the Cohn-Kanade and Japanese
<br/>female facial expression databases. Better classification
<br/>accuracy shows the superiority of LDP descriptor against
<br/>other appearance-based feature descriptors.
<br/>Keywords: Image representation, facial expression
<br/>recognition, local directional pattern, features extraction,
<br/>principal component analysis, support vector machine.
<br/>
<br/>Manuscript received Mar. 15, 2010; revised July 15, 2010; accepted Aug. 2, 2010.
<br/>This work was supported by the Korea Research Foundation Grant funded by the Korean
<br/>Government (KRF-2010-0015908).
<br/><b>Kyung Hee University, Yongin, Rep. of Korea</b><br/>doi:10.4218/etrij.10.1510.0132
<br/>I. Introduction
<br/>Facial expression provides the most natural and immediate
<br/>indication about a person’s emotions and intentions [1], [2].
<br/>Therefore, automatic facial expression analysis is an important
<br/>and challenging task that has had great impact in such areas as
<br/>human-computer
<br/>interaction and data-driven animation.
<br/>Furthermore, video cameras have recently become an integral
<br/>part of many consumer devices [3] and can be used for
<br/>capturing facial images for recognition of people and their
<br/>emotions. This ability to recognize emotions can enable
<br/>customized applications [4], [5]. Even though much work has
<br/>already been done on automatic facial expression recognition
<br/>[6], [7], higher accuracy with reasonable speed still remains a
<br/>great challenge [8]. Consequently, a fast but robust facial
<br/>expression recognition system is very much needed to support
<br/>these applications.
<br/>The most critical aspect for any successful facial expression
<br/>recognition system is to find an efficient facial feature
<br/>representation [9]. An extracted facial feature can be considered
<br/>an efficient representation if it can fulfill three criteria: first, it
<br/>minimizes within-class variations of expressions while
<br/>maximizes between-class variations; second, it can be easily
<br/>extracted from the raw face image; and third, it can be
<br/>described in a low-dimensional feature space to ensure
<br/>computational speed during the classification step [10], [11].
<br/>The goal of the facial feature extraction is thus to find an
<br/>efficient and effective representation of the facial images which
<br/>would provide robustness during recognition process. Two
<br/>types of approaches have been proposed to extract facial
<br/>features for expression recognition: a geometric feature-based
<br/>system and an appearance-based system [12].
<br/>In the geometric feature extraction system, the shape and
<br/>© 2010
<br/> ETRI Journal, Volume 32, Number 5, October 2010
</td><td>('3182680', 'Taskeed Jabid', 'taskeed jabid')<br/>('9408912', 'Hasanul Kabir', 'hasanul kabir')<br/>('1685505', 'Oksam Chae', 'oksam chae')<br/>('3182680', 'Taskeed Jabid', 'taskeed jabid')</td><td>Taskeed Jabid (phone: +82 31 201 2948, email: taskeed@khu.ac.kr), Md. Hasanul Kabir
<br/>(email: hasanul@khu.ac.kr), and Oksam Chae (email: oschae@khu.ac.kr) are with the
</td></tr><tr><td>57ebeff9273dea933e2a75c306849baf43081a8c</td><td>Deep Convolutional Network Cascade for Facial Point Detection
<br/><b>The Chinese University of Hong Kong</b><br/><b>The Chinese University of Hong Kong</b><br/><b>Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences</b></td><td>('1681656', 'Yi Sun', 'yi sun')<br/>('31843833', 'Xiaogang Wang', 'xiaogang wang')<br/>('1741901', 'Xiaoou Tang', 'xiaoou tang')</td><td>sy011@ie.cuhk.edu.hk
<br/>xgwang@ee.cuhk.edu.hk
<br/>xtang@ie.cuhk.edu.hk
</td></tr><tr><td>574751dbb53777101502419127ba8209562c4758</td><td></td><td></td><td></td></tr><tr><td>5778d49c8d8d127351eee35047b8d0dc90defe85</td><td>Probabilistic Subpixel Temporal Registration
<br/>for Facial Expression Analysis
<br/><b>Queen Mary University of London</b><br/>Centre for Intelligent Sensing
</td><td>('1781916', 'Hatice Gunes', 'hatice gunes')<br/>('1713138', 'Andrea Cavallaro', 'andrea cavallaro')</td><td>fe.sariyanidi, h.gunes, a.cavallarog@qmul.ac.uk
</td></tr><tr><td>57ee3a8b0cafe211d1e9b477d210bb78b9d43bc1</td><td>Modeling the joint density of two images under a variety of transformations
<br/>Joshua Susskind
<br/><b>Institute for Neural Computation</b><br/><b>University of California, San Diego</b><br/>United States
<br/>Department of Computer Science
<br/><b>University of Frankfurt</b><br/>Germany
<br/>Department of Computer Science
<br/>Department of Computer Science
<br/>ETH Zurich
<br/>Switzerland
<br/>Geoffrey Hinton
<br/><b>University of Toronto</b><br/>Canada
</td><td>('1710604', 'Roland Memisevic', 'roland memisevic')<br/>('1742208', 'Marc Pollefeys', 'marc pollefeys')</td><td>josh@mplab.ucsd.edu
<br/>ro@cs.uni-frankfurt.de
<br/>hinton@cs.toronto.edu
<br/>marc.pollefeys@inf.ethz.ch
</td></tr><tr><td>57fd229097e4822292d19329a17ceb013b2cb648</td><td>Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)
<br/>Fast Structural Binary Coding
<br/><b>University of California, San Diego</b><br/><b>University of California, San Diego</b></td><td>('2451800', 'Dongjin Song', 'dongjin song')<br/>('1722649', 'Wei Liu', 'wei liu')<br/>('3520515', 'David A. Meyer', 'david a. meyer')</td><td>La Jolla, USA, 92093-0409. Email: dosong@ucsd.edu
<br/>Didi Research, Didi Kuaidi, Beijing, China. Email: wliu@ee.columbia.edu
<br/>La Jolla, USA, 92093-0112. Email: dmeyer@math.ucsd.edu
</td></tr><tr><td>57c59011614c43f51a509e10717e47505c776389</td><td>Unsupervised Human Action Detection by Action Matching
<br/><b>The Australian National University Queensland University of Technology</b></td><td>('1688071', 'Basura Fernando', 'basura fernando')</td><td>firstname.lastname@anu.edu.au
<br/>s.shirazi@qut.edu.au
</td></tr><tr><td>57b8b28f8748d998951b5a863ff1bfd7ca4ae6a5</td><td></td><td></td><td></td></tr><tr><td>57101b29680208cfedf041d13198299e2d396314</td><td></td><td></td><td></td></tr><tr><td>57893403f543db75d1f4e7355283bdca11f3ab1b</td><td></td><td></td><td></td></tr><tr><td>571f493c0ade12bbe960cfefc04b0e4607d8d4b2</td><td>International Journal of Research Studies in Science, Engineering and Technology
<br/>Volume 3, Issue 2, February 2016, PP 18-41
<br/>ISSN 2349-4751 (Print) & ISSN 2349-476X (Online)
<br/>Review on Content Based Image Retrieval: From Its Origin to the
<br/>New Age
<br/>Assistant Professor, ECE
<br/>Dr. B. L. Malleswari
<br/>Principal
<br/><b>Mahatma Gandhi Institute of Technology</b><br/><b>Sridevi Women's Engineering College</b><br/>Hyderabad, India
<br/>Hyderabad, India
</td><td></td><td>pasumarthinalini@gmil.com
<br/>blmalleswari@gmail.com
</td></tr><tr><td>57f8e1f461ab25614f5fe51a83601710142f8e88</td><td>Region Selection for Robust Face Verification using UMACE Filters
<br/>Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering,
<br/>Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia.
<br/>In this paper, we investigate the verification performance of four subdivided face images with varying expressions. The
<br/>objective of this study is to evaluate which part of the face image is more tolerant to facial expression and still retains its personal
<br/>characteristics despite variations in the image. The Unconstrained Minimum Average Correlation Energy (UMACE) filter is
<br/>implemented to perform the verification process because of its advantages, such as shift invariance and the ability to trade off between
<br/>discrimination and distortion tolerance, e.g. under variations in pose, illumination and facial expression. The facial expression database
<br/>of the Advanced Multimedia Processing (AMP) Lab at CMU is used in this study. Four equal-sized
<br/>face regions, i.e. the bottom, top, left and right halves, are used for the purpose of this study. The results show that the bottom
<br/>half of the face region gives the best performance in terms of PSR values, with zero false acceptance rate (FAR) and zero false
<br/>rejection rate (FRR), compared to the other three regions.
<br/>1. Introduction
<br/>Face recognition is a well established field of research,
<br/>and a large number of algorithms have been proposed in the
<br/>literature. Various classifiers have been explored to improve
<br/>the accuracy of face classification. The basic approach is to
<br/>use distance-base methods which measure Euclidean distance
<br/>between any two vectors and then compare it with the preset
<br/>threshold. Neural Networks are often used as classifiers due
<br/>to their powerful generation ability [1]. Support Vector
<br/>Machines (SVM) have been applied with encouraging results
<br/>[2].
<br/>In biometric applications, one of the important tasks is the
<br/>matching of an individual's biometrics against the database
<br/>that has been prepared during the enrolment stage. For
<br/>biometric systems such as face authentication that use
<br/>images as personal characteristics, the biometric sensor
<br/>output and image pre-processing play an important role, since
<br/>the quality of a biometric input can change significantly due
<br/>to illumination, noise and pose variations. Over the years,
<br/>researchers have studied the role of illumination variation,
<br/>pose variation, facial expression, and occlusions in affecting
<br/>the performance of face verification systems [3].
<br/>The Minimum Average Correlation Energy (MACE)
<br/>filters have been reported to be an alternative solution to these
<br/>problems because of advantages such as shift invariance,
<br/>closed-form expressions and distortion tolerance. MACE
<br/>filters have been successfully applied in the field of automatic
<br/>target recognition as well as in biometric verification [3][4].
<br/>Face and fingerprint verification using correlation filters have
<br/>been investigated in [5] and [6], respectively. Savvides et al.
<br/>performed face authentication and identification using
<br/>correlation filters based on illumination variation [7]. In the
<br/>process of implementing correlation filters, the number of
<br/>training images used depends on the level of distortions
<br/>applied to the images [5], [6].
<br/>In this study, we investigate which part of a face image is
<br/>more tolerant to facial expression and retains its personal
<br/>characteristics for the verification process. Four subdivided
<br/>face images, i.e. bottom, top, left and right halves, with
<br/>varying expressions are investigated. By identifying only the
<br/>region of the face that gives the highest verification
<br/>performance, that region can be used instead of the full-face
<br/>to reduce storage requirements.
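<br/>The four half-face regions used in this study can be obtained by simple array slicing; the sketch below is a minimal NumPy illustration of the subdivision (the image size in the usage example is an assumption), not the authors' code.
<pre>
import numpy as np

def face_halves(face):
    # Split a cropped face image into the four half regions compared in this study.
    h, w = face.shape[:2]
    return {
        "top":    face[: h // 2, :],
        "bottom": face[h // 2 :, :],
        "left":   face[:, : w // 2],
        "right":  face[:, w // 2 :],
    }

# Example with a dummy 64x64 grayscale face crop (size is an assumption).
regions = face_halves(np.zeros((64, 64)))
</pre>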
<br/>2. Unconstrained Minimum Average Correlation
<br/>Energy (UMACE) Filter
<br/>Correlation filter theory and the descriptions of the design
<br/>of the correlation filter can be found in a tutorial survey paper
<br/>[8]. According to [4], [6], correlation filters evolved from
<br/>matched filters which are optimal for detecting a known
<br/>reference image in the presence of additive white Gaussian
<br/>noise. However, the detection rate of matched filters
<br/>decreases significantly with even small changes in the scale, rotation and pose of the reference image.
<br/>In an effort to solve this problem, the Synthetic
<br/>Discriminant Function (SDF) filter and the Equal Correlation
<br/>Peak SDF (ECP SDF) filter were introduced, which allowed
<br/>several training images to be represented by a single
<br/>correlation filter. The SDF filter produces pre-specified values
<br/>called peak constraints. These peak values correspond to the
<br/>authentic class or impostor class when an image is tested.
<br/>However, the pre-specified peak values lead to misclassifications when the sidelobes are larger than the controlled values at the origin.
<br/>Savvides et al. developed the Minimum Average
<br/>Correlation Energy (MACE) filter [5]. This filter reduces the
<br/>large sidelobes and produces a sharp peak when the test
<br/>image is from the same class as the images that have been
<br/>used to design the filter. There are two kinds of variants that
<br/>can be used in order to obtain a sharp peak when the test
<br/>image belongs to the authentic class. The first MACE filter
<br/>variant minimizes the average correlation energy of the
<br/>training images while constraining the correlation output at
<br/>the origin to a specific value for each of the training images.
<br/>The second MACE filter variant is the Unconstrained
<br/>Minimum Average Correlation Energy (UMACE) filter
<br/>which also minimizes the average correlation energy while
<br/>maximizing the correlation output at the origin [4].
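<br/>In the frequency domain the UMACE filter has the standard closed-form solution h = D^(-1) m, where D is the diagonal matrix holding the average power spectrum of the training images and m is the mean of their Fourier transforms; verification is then commonly scored with the peak-to-sidelobe ratio (PSR) of the correlation output. The NumPy sketch below follows this textbook formulation for illustration only and is not the authors' implementation (the sidelobe exclusion window is an assumption).
<pre>
import numpy as np

def umace_filter(train_imgs):
    # Frequency-domain UMACE filter from same-class training images: h = D^(-1) m.
    specs = np.stack([np.fft.fft2(im.astype(float)) for im in train_imgs])
    d = np.mean(np.abs(specs) ** 2, axis=0)   # average power spectrum (diagonal of D)
    m = np.mean(specs, axis=0)                # mean of the training spectra
    return m / (d + 1e-12)

def psr(test_img, h, exclude=5):
    # Peak-to-sidelobe ratio of the correlation plane for a test image.
    corr = np.real(np.fft.ifft2(np.fft.fft2(test_img.astype(float)) * np.conj(h)))
    corr = np.fft.fftshift(corr)
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    peak = corr[r, c]
    side = np.ones_like(corr, dtype=bool)     # mask selecting the sidelobe region
    side[max(r - exclude, 0): r + exclude + 1, max(c - exclude, 0): c + exclude + 1] = False
    return (peak - corr[side].mean()) / (corr[side].std() + 1e-12)
</pre>
<br/>A test image is accepted as authentic when its PSR exceeds a chosen threshold; a sharp, high correlation peak (large PSR) is expected only when the test image comes from the same class as the training images used to design the filter.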
<br/>Proceedings of the International Conference on Electrical Engineering and Informatics, Institut Teknologi Bandung, Indonesia, June 17-19, 2007. ISBN 978-979-16338-0-2</td><td>('5461819', 'Salina Abdul Samad', 'salina abdul samad')<br/>('2864147', 'Dzati Athiar Ramli', 'dzati athiar ramli')<br/>('2573778', 'Aini Hussain', 'aini hussain')</td><td>* E-mail: salina@vlsi.eng.ukm.my
</td></tr><tr><td>57a1466c5985fe7594a91d46588d969007210581</td><td>A Taxonomy of Face-models for System Evaluation
<br/>Motivation and Data Types
<br/>Synthetic Data Types
<br/>Unverified – Have no underlying physical or
<br/>statistical basis
<br/>Physics-Based – Based on structure and
<br/>materials combined with the properties
<br/>formally modeled in physics.
<br/>Statistical – Use statistics from real
<br/>data/experiments to estimate/learn model
<br/>parameters. Generally have measurements
<br/>of accuracy
<br/>Guided Synthetic – Individual models based
<br/>on individual people. No attempt to capture
<br/>properties of large groups, a unique model
<br/>per person. For faces, guided models are
<br/>composed of 3D structure models and skin
<br/>textures, capturing many artifacts not
<br/>easily parameterized. Can be combined with
<br/>physics-based rendering to generate samples
<br/>under different conditions.
<br/>Semi-Synthetic – Use measured data such as 2D images or 3D facial scans. These are not truly synthetic, as they are re-renderings of real measured data.
<br/>Semi and Guided Synthetic data provide
<br/>higher operational relevance while
<br/>maintaining a high degree of control.
<br/>Generating datasets of statistically significant size for face-matching system evaluation is both a laborious and expensive process.
<br/>There is a gap in datasets that allow for
<br/>evaluation of system issues including:
<br/>- Long distance recognition
<br/>- Blur caused by atmospherics
<br/>- Various weather conditions
<br/>- End-to-end systems evaluation
<br/>Our contributions:
<br/>- Define a taxonomy of face-models for controlled experimentation
<br/>- Show how synthetic data addresses gaps in system evaluation
<br/>- Show a process for generating and validating synthetic models
<br/>- Use these models in long distance face recognition system evaluation
<br/>Experimental Setup
<br/>Results and Conclusions
<br/>Example Models
<br/>[Figure: original PIE image, a semi-synthetic model, and guided-synthetic models built with FaceGen (http://www.facegen.com) and Animetrics (http://www.animetrics.com/products/Forensica.php)]
<br/>Models
<br/>- Models generated using the well-known CMU PIE [18] dataset. Each of the 68 subjects of PIE was modeled using a right-profile and a frontal image from the lights subset.
<br/>- Two modeling programs were used, FaceGen and Animetrics. Both programs create OBJ files and textures.
<br/>- Models are re-rendered using custom display software built with the OpenGL, GLUT and DevIL libraries.
<br/>- Custom display box housing a BenQ SP820 high-powered projector rated at 4000 ANSI lumens.
<br/>- Canon EOS 7D with a Sigma 800mm F5.6 EX APO DG HSM lens and a 2x adapter imaging the display from 214 meters.
<br/>Normalized Example Captures
<br/>[Figure: normalized captures of Real PIE 1, Real PIE 2, Animetrics and FaceGen models, imaged at 81 m inside and 214 m outside]
<br/>- Pre-cropped images were used for the commercial core.
<br/>- Ground-truth eye points plus geometric/lighting normalization were used as pre-processing before running through the implementation of the V1 recognition algorithm found in [1].
<br/>- Geometric normalization highlights how the feature region of the models looks very similar to that of the real person.
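<br/>Geometric normalization from ground-truth eye points is typically a similarity transform that maps the two eye centers to fixed canonical positions; the sketch below is a hypothetical NumPy illustration of such an alignment (the canonical eye positions and output size are assumptions), not the code used in this evaluation.
<pre>
import numpy as np

def eye_alignment_matrix(left_eye, right_eye, out_size=(64, 64),
                         canon_left=(0.3, 0.35), canon_right=(0.7, 0.35)):
    # 2x3 similarity transform mapping the two eye centers to canonical positions.
    w, h = out_size
    src = np.array([left_eye, right_eye], dtype=float)
    dst = np.array([[canon_left[0] * w, canon_left[1] * h],
                    [canon_right[0] * w, canon_right[1] * h]], dtype=float)
    d_src, d_dst = src[1] - src[0], dst[1] - dst[0]
    scale = np.linalg.norm(d_dst) / np.linalg.norm(d_src)
    angle = np.arctan2(d_dst[1], d_dst[0]) - np.arctan2(d_src[1], d_src[0])
    cos, sin = scale * np.cos(angle), scale * np.sin(angle)
    rot = np.array([[cos, -sin], [sin, cos]])
    trans = dst[0] - rot @ src[0]
    # The 2x3 matrix can be passed to any affine warping routine to produce the aligned crop.
    return np.hstack([rot, trans[:, None]])
</pre>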
<br/>Each test used three approximately frontal gallery images that were NOT used to build the 3D model serving as the probe; the best score over the three images determined the final score.
<br/>Even though the PIE-3D-20100224A–D sets were imaged on the same day, the V1 core scored differently on each, highlighting the synthetic data's ability to help evaluate data capture methods and the effects of varying atmospherics. The ISO setting varied, which affects the shutter speed; higher ISO generally yields less blur.
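<br/>The best-of-gallery protocol described above amounts to keeping the maximum match score over the three gallery images; a minimal sketch (the scoring function itself is left abstract) is shown below for illustration.
<pre>
def best_gallery_score(score_fn, probe, gallery_imgs):
    # Score the probe against each gallery image and keep the best (highest) score.
    return max(score_fn(probe, g) for g in gallery_imgs)
</pre>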
<br/>Dataset | Range (m) | ISO | V1 | Comm.
<br/>Original PIE Images | N/A | N/A | 100 | 100
<br/>FaceGen Screenshots | N/A | N/A | 47.76 | 100
<br/>Animetrics Screenshots | N/A | N/A | 100 | 100
<br/>PIE-3D-20100210B | 81 | 500 | 100 | 100
<br/>PIE-3D-20100224A | 214 | 125 | 58.82 | 100
<br/>PIE-3D-20100224B | 214 | 125 | 45.59 | 100
<br/>PIE-3D-20100224C | 214 | 250 | 81.82 | –
<br/>PIE-3D-20100224D | 214 | 400 | 79.1 | –
<br/>- The same (100 percent) recognition rate on screenshots as on the original images validates the Animetrics guided synthetic models and fails the FaceGen models.
<br/>- 100% recognition means the dataset is too small/easy; expanding the poses and models is underway.
<br/>- Expanded the photohead methodology into 3D.
<br/>- Developed a robust modeling system allowing for multiple configurations of a single real-life data set.
<br/>- The Gabor+SVM-based V1 [15] is significantly more impacted by atmospheric blur than the commercial algorithm.
<br/>Key References:
<br/>[6 of 21] R. Beveridge, D. Bolme, M. Teixeira, and B. Draper. The CSU Face Identification Evaluation System User's Guide: Version 5.0. Technical report, CSU, 2003.
<br/>[8 of 21] T. Boult and W. Scheirer. Long range facial image acquisition and quality. In M. Tisarelli, S. Li, and R. Chellappa.
<br/>[15 of 21] N. Pinto, J. J. DiCarlo, and D. D. Cox. How far can you get with a modern face recognition test set using only simple features? In IEEE CVPR, 2009.
<br/>[18 of 21] T. Sim, S. Baker, and M. Bsat. The CMU Pose, Illumination and Expression (PIE) Database. In Proceedings of the IEEE F&G, May 2002.
</td><td>('31552290', 'Brian C. Parks', 'brian c. parks')<br/>('2613438', 'Walter J. Scheirer', 'walter j. scheirer')</td><td>{viyer,skirkbride,bparks,wscheirer,tboult}@vast.uccs.edu
</td></tr><tr><td>574b62c845809fd54cc168492424c5fac145bc83</td><td>Learning Warped Guidance for Blind Face
<br/>Restoration
<br/><b>School of Computer Science and Technology, Harbin Institute of Technology, China</b><br/><b>School of Data and Computer Science, Sun Yat-sen University, China</b><br/><b>University of Kentucky, USA</b></td><td>('21515518', 'Xiaoming Li', 'xiaoming li')<br/>('40508248', 'Yuting Ye', 'yuting ye')<br/>('1724520', 'Wangmeng Zuo', 'wangmeng zuo')<br/>('1737218', 'Liang Lin', 'liang lin')<br/>('38958903', 'Ruigang Yang', 'ruigang yang')</td><td>csxmli@hit.edu.cn, csmliu@outlook.com, yeyuting.jlu@gmail.com,
<br/>wmzuo@hit.edu.cn
<br/>linliang@ieee.org
<br/>ryang@cs.uky.edu
</td></tr><tr><td>57246142814d7010d3592e3a39a1ed819dd01f3b</td><td><b>MITSUBISHI ELECTRIC RESEARCH LABORATORIES</b><br/>http://www.merl.com
<br/>Verification of Very Low-Resolution Faces Using An
<br/>Identity-Preserving Deep Face Super-resolution Network
<br/>TR2018-116 August 24, 2018
</td><td></td><td></td></tr><tr><td>5721216f2163d026e90d7cd9942aeb4bebc92334</td><td></td><td></td><td></td></tr><tr><td>575141e42740564f64d9be8ab88d495192f5b3bc</td><td>Age Estimation based on Multi-Region
<br/>Convolutional Neural Network
<br/>1Center for Biometrics and Security Research & National Laboratory of Pattern
<br/><b>Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China</b><br/><b>University of Chinese Academy of Sciences</b></td><td>('40282288', 'Ting Liu', 'ting liu')<br/>('1756538', 'Jun Wan', 'jun wan')<br/>('39974958', 'Tingzhao Yu', 'tingzhao yu')<br/>('1718623', 'Zhen Lei', 'zhen lei')<br/>('34679741', 'Stan Z. Li', 'stan z. li')</td><td>{ting.liu,jun.wan,zlei,szli}@nlpr.ia.ac.cn,yutingzhao2013@ia.ac.cn
</td></tr><tr><td>5789f8420d8f15e7772580ec373112f864627c4b</td><td>Efficient Global Illumination for Morphable Models
<br/><b>University of Basel, Switzerland</b></td><td>('1801001', 'Andreas Schneider', 'andreas schneider')<br/>('34460642', 'Bernhard Egger', 'bernhard egger')<br/>('32013053', 'Lavrenti Frobeen', 'lavrenti frobeen')<br/>('1687079', 'Thomas Vetter', 'thomas vetter')</td><td>{andreas.schneider,sandro.schoenborn,bernhard.egger,l.frobeen,thomas.vetter}@unibas.ch
</td></tr><tr><td>574705812f7c0e776ad5006ae5e61d9b071eebdb</td><td>Available Online at www.ijcsmc.com
<br/>International Journal of Computer Science and Mobile Computing
<br/>A Monthly Journal of Computer Science and Information Technology
<br/>ISSN 2320–088X
<br/>IJCSMC, Vol. 3, Issue. 5, May 2014, pg.780 – 787
<br/> RESEARCH ARTICLE
<br/>A Novel Approach for Face Recognition
<br/>Using PCA and Artificial Neural Network
<br/><b>Dayananda Sagar College of Engg., India</b><br/><b>Dayananda Sagar College of Engg., India</b></td><td>('9856026', 'Karthik G', 'karthik g')<br/>('9856026', 'Karthik G', 'karthik g')</td><td>1 email : karthik.knocks@gmail.com; 2 email : hcsateesh@gmail.com
</td></tr><tr><td>5753b2b5e442eaa3be066daa4a2ca8d8a0bb1725</td><td></td><td></td><td></td></tr><tr><td>571b83f7fc01163383e6ca6a9791aea79cafa7dd</td><td>SeqFace: Make full use of sequence information for face recognition
<br/><b>College of Information Science and Technology</b><br/><b>Beijing University of Chemical Technology, China</b><br/>YUNSHITU Corp., China
</td><td>('48594708', 'Wei Hu', 'wei hu')<br/>('7524887', 'Yangyu Huang', 'yangyu huang')<br/>('8451319', 'Guodong Yuan', 'guodong yuan')<br/>('47191084', 'Fan Zhang', 'fan zhang')<br/>('50391855', 'Ruirui Li', 'ruirui li')<br/>('47113208', 'Wei Li', 'wei li')</td><td></td></tr><tr><td>574ad7ef015995efb7338829a021776bf9daaa08</td><td>AdaScan: Adaptive Scan Pooling in Deep Convolutional Neural Networks
<br/>for Human Action Recognition in Videos
<br/>1IIT Kanpur‡
<br/>2SRI International
<br/>3UCSD
</td><td>('24899770', 'Amlan Kar', 'amlan kar')<br/>('12692625', 'Nishant Rai', 'nishant rai')<br/>('39707211', 'Karan Sikka', 'karan sikka')<br/>('39396475', 'Gaurav Sharma', 'gaurav sharma')</td><td></td></tr><tr><td>57a14a65e8ae15176c9afae874854e8b0f23dca7</td><td>UvA-DARE (Digital Academic Repository)
<br/>Seeing mixed emotions: The specificity of emotion perception from static and dynamic
<br/>facial expressions across cultures
<br/>Fang, X.; Sauter, D.A.; van Kleef, G.A.
<br/>Published in:
<br/>Journal of Cross-Cultural Psychology
<br/>DOI:
<br/>10.1177/0022022117736270
<br/>Link to publication
<br/>Citation for published version (APA):
<br/>Fang, X., Sauter, D. A., & van Kleef, G. A. (2018). Seeing mixed emotions: The specificity of emotion perception
<br/>from static and dynamic facial expressions across cultures. Journal of Cross-Cultural Psychology, 49(1), 130-
<br/>148. DOI: 10.1177/0022022117736270
<br/><b>UvA-DARE is a service provided by the library of the University of Amsterdam (http://dare.uva.nl</b></td><td></td><td></td></tr><tr><td>57b052cf826b24739cd7749b632f85f4b7bcf90b</td><td>Fast Fashion Guided Clothing Image Retrieval:
<br/>Delving Deeper into What Feature Makes
<br/>Fashion
<br/><b>School of Data and Computer Science, Sun Yat-sen University</b><br/>Guangzhou, P.R China
</td><td>('3079146', 'Yuhang He', 'yuhang he')<br/>('40451106', 'Long Chen', 'long chen')</td><td>*Corresponding Author: chenl46@mail.sysu.edu.cn
</td></tr><tr><td>57d37ad025b5796457eee7392d2038910988655a</td><td>GEERATVEEETATF
<br/>