First pages

611961abc4dfc02b67edd8124abb08c449f5280a
Exploiting Image-trained CNN Architectures
for Unconstrained Video Classification
Northwestern University
Evanston IL USA
Raytheon BBN Technologies
Cambridge, MA USA
University of Toronto
('2815926', 'Shengxin Zha', 'shengxin zha')
('1689313', 'Florian Luisier', 'florian luisier')
('2996926', 'Walter Andrews', 'walter andrews')
('2897313', 'Nitish Srivastava', 'nitish srivastava')
('1776908', 'Ruslan Salakhutdinov', 'ruslan salakhutdinov')
szha@u.northwestern.edu
{fluisier,wandrews}@bbn.com
{nitish,rsalakhu}@cs.toronto.edu
610a4451423ad7f82916c736cd8adb86a5a64c59
Volume 4, Issue 11, November 2014 ISSN: 2277 128X
International Journal of Advanced Research in
Computer Science and Software Engineering
Research Paper
Available online at: www.ijarcsse.com
A Survey on Search Based Face Annotation Using Weakly
Labelled Facial Images
Department of Computer Engg, DYPIET Pimpri,
Savitri Bai Phule Pune University, Maharashtra India
('15731441', 'Shital A. Shinde', 'shital a. shinde')
('3392505', 'Archana Chaugule', 'archana chaugule')
6156eaad00aad74c90cbcfd822fa0c9bd4eb14c2
Complex Bingham Distribution for Facial
Feature Detection
Eslam Mostafa1,2 and Aly Farag1
CVIP Lab, University of Louisville, Louisville, KY, USA
Alexandria University, Alexandria, Egypt
{eslam.mostafa,aly.farag}@louisville.edu
61ffedd8a70a78332c2bbdc9feba6c3d1fd4f1b8
Greedy Feature Selection for Subspace Clustering
Department of Electrical & Computer Engineering
Rice University, Houston, TX, 77005, USA
Department of Electrical & Computer Engineering
Carnegie Mellon University, Pittsburgh, PA, 15213, USA
Department of Electrical & Computer Engineering
Rice University, Houston, TX, 77005, USA
Editor:
('1746363', 'Eva L. Dyer', 'eva l. dyer')
('1745861', 'Aswin C. Sankaranarayanan', 'aswin c. sankaranarayanan')
('1746260', 'Richard G. Baraniuk', 'richard g. baraniuk')
e.dyer@rice.edu
saswin@ece.cmu.edu
richb@rice.edu
61084a25ebe736e8f6d7a6e53b2c20d9723c4608
61542874efb0b4c125389793d8131f9f99995671
Fair comparison of skin detection approaches on publicly available datasets
a. DISI, Università di Bologna, Via Sacchi 3, 47521 Cesena, Italy.
b DEI - University of Padova, Via Gradenigo, 6 - 35131- Padova, Italy
('1707759', 'Alessandra Lumini', 'alessandra lumini')
('1804258', 'Loris Nanni', 'loris nanni')
61f93ed515b3bfac822deed348d9e21d5dffe373
Deep Image Set Hashing
Columbia University
Columbia University
('1710567', 'Jie Feng', 'jie feng')
('2602265', 'Svebor Karaman', 'svebor karaman')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
jiefeng@cs.columbia.edu
svebor.karaman@columbia.edu, sfchang@ee.columbia.edu
6180bc0816b1776ca4b32ced8ea45c3c9ce56b47
Fast Randomized Algorithms for Convex Optimization and
Statistical Estimation
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2016-147
http://www.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-147.html
August 14, 2016
('3173667', 'Mert Pilanci', 'mert pilanci')
61f04606528ecf4a42b49e8ac2add2e9f92c0def
Deep Deformation Network for Object Landmark
Localization
NEC Laboratories America, Department of Media Analytics
('39960064', 'Xiang Yu', 'xiang yu')
('46468682', 'Feng Zhou', 'feng zhou')
{xiangyu,manu}@nec-labs.com, zhfe99@gmail.com
612075999e82596f3b42a80e6996712cc52880a3
CNNs with Cross-Correlation Matching for Face Recognition in Video
Surveillance Using a Single Training Sample Per Person
University of Texas at Arlington, TX, USA
2École de technologie supérieure, Université du Québec, Montreal, Canada
('3046171', 'Mostafa Parchami', 'mostafa parchami')
('2805645', 'Saman Bashbaghi', 'saman bashbaghi')
('1697195', 'Eric Granger', 'eric granger')
mostafa.parchami@mavs.uta.edu, bashbaghi@livia.etsmtl.ca and eric.granger@etsmtl.ca
61efeb64e8431cfbafa4b02eb76bf0c58e61a0fa
Merging Datasets Through Deep learning
IBM Research
Yeshiva University
IBM Research
('35970154', 'Kavitha Srinivas', 'kavitha srinivas')
('51428397', 'Abraham Gale', 'abraham gale')
('2828094', 'Julian Dolby', 'julian dolby')
61e9e180d3d1d8b09f1cc59bdd9f98c497707eff
Semi-supervised learning of
facial attributes in video
1INRIA, WILLOW, Laboratoire d’Informatique de l’École Normale Supérieure,
ENS/INRIA/CNRS UMR 8548
University of Oxford
('1877079', 'Neva Cherniavsky', 'neva cherniavsky')
('1785596', 'Ivan Laptev', 'ivan laptev')
('1782755', 'Josef Sivic', 'josef sivic')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
6193c833ad25ac27abbde1a31c1cabe56ce1515b
Trojaning Attack on Neural Networks
1Purdue University, 2Nanjing University
('3347155', 'Yingqi Liu', 'yingqi liu')
('2026855', 'Shiqing Ma', 'shiqing ma')
('3216258', 'Yousra Aafer', 'yousra aafer')
('2547748', 'Wen-Chuan Lee', 'wen-chuan lee')
('3293342', 'Juan Zhai', 'juan zhai')
('3155328', 'Weihang Wang', 'weihang wang')
('1771551', 'Xiangyu Zhang', 'xiangyu zhang')
liu1751@purdue.edu, ma229@purdue.edu, yaafer@purdue.edu, lee1938@purdue.edu, zhaijuan@nju.edu.cn,
wang1315@cs.purdue.edu, xyzhang@cs.purdue.edu
614a7c42aae8946c7ad4c36b53290860f62564411
Joint Face Detection and Alignment using
Multi-task Cascaded Convolutional Networks
('3393556', 'Kaipeng Zhang', 'kaipeng zhang')
('3152448', 'Zhanpeng Zhang', 'zhanpeng zhang')
('32787758', 'Zhifeng Li', 'zhifeng li')
('33427555', 'Yu Qiao', 'yu qiao')
614079f1a0d0938f9c30a1585f617fa278816d53
Automatic Detection of ADHD and ASD from Expressive Behaviour in
RGBD Data
School of Computer Science, The University of Nottingham
2Nottingham City Asperger Service & ADHD Clinic
Institute of Mental Health, The University of Nottingham
('2736086', 'Shashank Jaiswal', 'shashank jaiswal')
('1795528', 'Michel F. Valstar', 'michel f. valstar')
('38690723', 'Alinda Gillott', 'alinda gillott')
('2491166', 'David Daley', 'david daley')
0d746111135c2e7f91443869003d05cde3044beb
PARTIAL FACE DETECTION FOR CONTINUOUS AUTHENTICATION
⋆Department of Electrical and Computer Engineering and the Center for Automation Research,
Rutgers, The State University of New Jersey, 723 CoRE, 94 Brett Rd, Piscataway, NJ
UMIACS, University of Maryland, College Park, MD
§Google Inc., 1600 Amphitheatre Parkway, Mountain View, CA 94043
('3152615', 'Upal Mahbub', 'upal mahbub')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
('2406413', 'Brandon Barbello', 'brandon barbello')
('9215658', 'Rama Chellappa', 'rama chellappa')
umahbub@umiacs.umd.edu, vishal.m.patel@rutgers.edu,
dchandra@google.com, bbarbello@google.com, rama@umiacs.umd.edu
0da75b0d341c8f945fae1da6c77b6ec345f47f2a121
The Effect of Computer-Generated Descriptions on Photo-Sharing Experiences of People With Visual Impairments
YUHANG ZHAO, Information Science, Cornell Tech, Cornell University
SHAOMEI WU, Facebook Inc.
LINDSAY REYNOLDS, Facebook Inc.
SHIRI AZENKOT, Information Science, Cornell Tech, Cornell University
Like sighted people, visually impaired people want to share photographs on social networking services, but
find it difficult to identify and select photos from their albums. We aimed to address this problem by
incorporating state-of-the-art computer-generated descriptions into Facebook’s photo-sharing feature. We
interviewed 12 visually impaired participants to understand their photo-sharing experiences and designed
a photo description feature for the Facebook mobile application. We evaluated this feature with six
participants in a seven-day diary study. We found that participants used the descriptions to recall and
organize their photos, but they hesitated to upload photos without a sighted person’s input. In addition to
basic information about photo content, participants wanted to know more details about salient objects and
people, and whether the photos reflected their personal aesthetic. We discuss these findings from the lens
of self-disclosure and self-presentation theories and propose new computer vision research directions that
will better support visual content sharing by visually impaired people.
CCS Concepts: • Information interfaces and presentations → Multimedia and information systems; •
Computers and society → Social issues
KEYWORDS
Visual impairments; computer-generated descriptions; SNSs; photo sharing; self-disclosure; self-presentation
ACM Reference format:
2017. The Effect of Computer-Generated Descriptions On Photo-Sharing Experiences of People With Visual
Impairments. Proc. ACM Hum.-Comput. Interact. 1, 1. 121 (January 2017), 24 pages.
DOI: 10.1145/3134756
1 INTRODUCTION
Sharing memories and experiences via photos is a common way to engage with others on social
networking services (SNSs) [39,46,51]. For instance, Facebook users upload more than 350
million photos a day [24], and Twitter, which initially supported only text in tweets, now sees
more than 28.4% of tweets containing images [39]. Visually impaired people (both blind and low
vision) have a strong presence on SNSs and are interested in sharing photos [50]. They take
photos for the same reasons that sighted people do: sharing daily moments with their sighted

0d88ab0250748410a1bc990b67ab2efb370ade5d
Author(s):
ERROR HANDLING IN MULTIMODAL BIOMETRIC SYSTEMS USING
RELIABILITY MEASURES (ThuPmOR6)
(EPFL, Switzerland)
(EPFL, Switzerland)
(EPFL, Switzerland)
(EPFL, Switzerland)
Plamen Prodanov
('1753932', 'Krzysztof Kryszczuk', 'krzysztof kryszczuk')
('1994765', 'Jonas Richiardi', 'jonas richiardi')
('2439888', 'Andrzej Drygajlo', 'andrzej drygajlo')
0db43ed25d63d801ce745fe04ca3e8b363bf3147
Kernel Principal Component Analysis and its Applications in
Face Recognition and Active Shape Models
Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY 12180 USA
('4019552', 'Quan Wang', 'quan wang')
wangq10@rpi.edu
0daf696253a1b42d2c9d23f1008b32c65a9e4c1e
Unsupervised Discovery of Facial Events
CMU-RI-TR-10-10
May 2010
Robotics Institute
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213
© Carnegie Mellon University
('1757386', 'Feng Zhou', 'feng zhou')
0d538084f664b4b7c0e11899d08da31aead87c32
Deformable Part Descriptors for
Fine-grained Recognition and Attribute Prediction
Forrest Iandola1
ICSI / UC Berkeley 2Brigham Young University
('40565777', 'Ning Zhang', 'ning zhang')
('2071606', 'Ryan Farrell', 'ryan farrell')
('1753210', 'Trevor Darrell', 'trevor darrell')
1{nzhang,forresti,trevor}@eecs.berkeley.edu
2farrell@cs.byu.edu
0dccc881cb9b474186a01fd60eb3a3e061fa6546
Effective Face Frontalization in Unconstrained Images
The Open University of Israel. 2Adience
Figure 1: Frontalized faces. Top: Input photos; bottom: our frontalizations,
obtained without estimating 3D facial shapes.
“Frontalization” is the process of synthesizing frontal-facing views of faces appearing in single unconstrained photos. Recent reports have suggested that this process may substantially boost the performance of face recognition systems by transforming the challenging problem of recognizing faces viewed from unconstrained viewpoints into the easier problem of recognizing faces in constrained, forward-facing poses. Previous frontalization methods did this by attempting to approximate 3D facial shapes for each query image. We observe that 3D face shape estimation from unconstrained photos may be a harder problem than frontalization and can potentially introduce facial misalignments. Instead, we explore the simpler approach of using a single, unmodified 3D surface as an approximation to the shape of all input faces. We show that this leads to a straightforward, efficient and easy-to-implement method for frontalization. More importantly, it produces aesthetic new frontal views and is surprisingly effective when used for face recognition and gender estimation.
Observation 1: For frontalization, one rough estimate of the 3D facial shape
seems as good as another, demonstrated by the following example:
Figure 2: Frontalization process. (a) facial features detected on a query face and on a reference face (b) which was produced by rendering a textured 3D, CG model (c); (d) 2D query coordinates and corresponding 3D coordinates on the model provide an estimated projection matrix, used to back-project query texture to the reference coordinate system; (e) estimated self-occlusions shown overlaid on the frontalized result (warmer colors reflect more occlusions). Facial appearances in these regions are borrowed from corresponding symmetric face regions; (f) our final frontalized result.
The top row shows surfaces estimated for the same query (left) by Hassner [2] (mid) and DeepFaces [6] (right). Frontalizations are shown at the bottom using our single-3D approach (left), Hassner (mid) and DeepFaces (right). Clearly, both surfaces are rough approximations to the facial shape. Moreover, despite the different surfaces, all results seem qualitatively similar, calling into question the need for shape estimation for frontalization.
Result 1: A novel frontalization method using a single, unmodified 3D reference shape is described in the paper (illustrated in Fig. 2).
Observation 2: A single, unmodified 3D reference shape produces aggressively aligned faces, as can be observed in Fig. 3.
Result 2: Frontalized, strongly aligned faces elevate LFW [5] verification
accuracy and gender estimation rates on the Adience benchmark [1].
Conclusion: On the role of 2D appearance vs. 3D shape in face recognition,
our results suggest that 3D shape estimation may be unnecessary.
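The one concretely algorithmic step in this abstract is Fig. 2(d): fitting a projection matrix from 2D query landmarks and their corresponding 3D coordinates on the reference model. A minimal sketch of that fit via the standard Direct Linear Transform (DLT), assuming NumPy; the synthetic camera and landmark arrays below are illustrative stand-ins, not the authors' data or code:

```python
import numpy as np

def estimate_projection_matrix(pts3d, pts2d):
    """Fit a 3x4 camera projection matrix P (up to scale) from n >= 6
    2D-3D correspondences using the Direct Linear Transform.

    pts3d: (n, 3) 3D reference-model coordinates.
    pts2d: (n, 2) matching 2D image coordinates.
    """
    n = pts3d.shape[0]
    A = np.zeros((2 * n, 12))
    for i in range(n):
        X = np.append(pts3d[i], 1.0)   # homogeneous 3D point
        u, v = pts2d[i]
        # u * (p3 . X) = p1 . X  and  v * (p3 . X) = p2 . X
        A[2 * i, 0:4] = X
        A[2 * i, 8:12] = -u * X
        A[2 * i + 1, 4:8] = X
        A[2 * i + 1, 8:12] = -v * X
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)

def project(P, pts3d):
    """Project 3D points with P; returns (n, 2) pixel coordinates."""
    Xh = np.hstack([pts3d, np.ones((pts3d.shape[0], 1))])
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]

# Illustrative round trip with a synthetic camera (hypothetical values):
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (10, 3))            # stand-in 3D landmarks
P_true = np.array([[800.0, 0.0, 320.0, 0.0],
                   [0.0, 800.0, 240.0, 0.0],
                   [0.0, 0.0, 1.0, 5.0]])
x = project(P_true, X)                          # stand-in 2D detections
P_est = estimate_projection_matrix(X, x)
```

With the matrix estimated this way, each pixel of the frontal reference view can be mapped to a query-image location, which is the back-projection of texture described in step (d).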
('1756099', 'Tal Hassner', 'tal hassner')
('35840854', 'Shai Harel', 'shai harel')
('1753918', 'Eran Paz', 'eran paz')
('1792038', 'Roee Enbar', 'roee enbar')
0d467adaf936b112f570970c5210bdb3c626a717
0d6b28691e1aa2a17ffaa98b9b38ac3140fb3306
Review of Perceptual Resemblance of Local
Plastic Surgery Facial Images using Near Sets
1,2 Department of Computer Technology,
YCCE Nagpur, India
('9083090', 'Prachi V. Wagde', 'prachi v. wagde')
('9218400', 'Roshni Khedgaonkar', 'roshni khedgaonkar')
0de91641f37b0a81a892e4c914b46d05d33fd36e
RAPS: Robust and Efficient Automatic Construction of Person-Specific
Deformable Models
∗Department of Computing,
Imperial College London
180 Queens Gate,
†EEMCS,
University of Twente
Drienerlolaan 5,
London SW7 2AZ, U.K.
7522 NB Enschede, The Netherlands
('3320415', 'Christos Sagonas', 'christos sagonas')
('1780393', 'Yannis Panagakis', 'yannis panagakis')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1694605', 'Maja Pantic', 'maja pantic')
{c.sagonas, i.panagakis, s.zafeiriou, m.pantic}@imperial.ac.uk
0df0d1adea39a5bef318b74faa37de7f3e00b452
Appearance-Based Gaze Estimation in the Wild
1Perceptual User Interfaces Group, 2Scalable Learning and Perception Group
Max Planck Institute for Informatics, Saarbrücken, Germany
('2520795', 'Xucong Zhang', 'xucong zhang')
('1751242', 'Yusuke Sugano', 'yusuke sugano')
('1739548', 'Mario Fritz', 'mario fritz')
('3194727', 'Andreas Bulling', 'andreas bulling')
{xczhang,sugano,mfritz,bulling}@mpi-inf.mpg.de
0d3bb75852098b25d90f31d2f48fd0cb4944702b
A DATA-DRIVEN APPROACH TO CLEANING LARGE FACE DATASETS
Advanced Digital Sciences Center (ADSC), University of Illinois at Urbana-Champaign, Singapore
('1702224', 'Stefan Winkler', 'stefan winkler')
0db8e6eb861ed9a70305c1839eaef34f2c85bbaf
0d0b880e2b531c45ee8227166a489bf35a528cb9
Structure Preserving Object Tracking
Computer Vision Lab, Delft University of Technology
Mekelweg 4, 2628 CD Delft, The Netherlands
('2883723', 'Lu Zhang', 'lu zhang')
('1803520', 'Laurens van der Maaten', 'laurens van der maaten')
{lu.zhang, l.j.p.vandermaaten}@tudelft.nl
0d3882b22da23497e5de8b7750b71f3a4b0aac6b
Research Article
Context Is Routinely Encoded
During Emotion Perception
21(4), 595–599
© The Author(s) 2010
Reprints and permission:
sagepub.com/journalsPermissions.nav
DOI: 10.1177/0956797610363547
http://pss.sagepub.com
Boston College; 2Psychiatric Neuroimaging Program, Massachusetts General Hospital, Harvard Medical School; and 3Athinoula A. Martinos
Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
('1731779', 'Lisa Feldman Barrett', 'lisa feldman barrett')
0dbf4232fcbd52eb4599dc0760b18fcc1e9546e9
0d760e7d762fa449737ad51431f3ff938d6803fe
LCDet: Low-Complexity Fully-Convolutional Neural Networks for
Object Detection in Embedded Systems
UC San Diego ∗
Gokce Dane
Qualcomm Inc.
UC San Diego
Qualcomm Inc.
UC San Diego
('2906509', 'Subarna Tripathi', 'subarna tripathi')
('1801046', 'Byeongkeun Kang', 'byeongkeun kang')
('3484765', 'Vasudev Bhaskaran', 'vasudev bhaskaran')
('30518518', 'Truong Nguyen', 'truong nguyen')
stripathi@ucsd.edu
gokced@qti.qualcomm.com
bkkang@ucsd.edu
vasudevb@qti.qualcomm.com
tqn001@eng.ucsd.edu
0d3068b352c3733c9e1cc75e449bf7df1f7b10a4
Context based Facial Expression Analysis in the
Wild
School of Computer Science, CECS, Australian National University, Australia
http://users.cecs.anu.edu.au/∼adhall
('1735697', 'Abhinav Dhall', 'abhinav dhall')
abhinav.dhall@anu.edu.au
0dd72887465046b0f8fc655793c6eaaac9c03a3d
Real-time Head Orientation from a Monocular
Camera using Deep Neural Network
KAIST, Republic of Korea
('3250619', 'Byungtae Ahn', 'byungtae ahn')
('2870153', 'Jaesik Park', 'jaesik park')
[btahn,jspark]@rcv.kaist.ac.kr, iskweon77@kaist.ac.kr
0d087aaa6e2753099789cd9943495fbbd08437c0
0d8415a56660d3969449e77095be46ef0254a448
0dfa460a35f7cab4705726b6367557b9f7842c65
Modeling Spatial-Temporal Clues in a Hybrid Deep
Learning Framework for Video Classification
School of Computer Science, Shanghai Key Lab of Intelligent Information Processing,
Fudan University, Shanghai, China
('3099139', 'Zuxuan Wu', 'zuxuan wu')
('31825486', 'Xi Wang', 'xi wang')
('1717861', 'Yu-Gang Jiang', 'yu-gang jiang')
('1743864', 'Hao Ye', 'hao ye')
('1713721', 'Xiangyang Xue', 'xiangyang xue')
{zxwu, xwang10, ygj, haoye10, xyxue}@fudan.edu.cn
0d14261e69a4ad4140ce17c1d1cea76af6546056
Adding Facial Actions into 3D Model Search to Analyse
Behaviour in an Unconstrained Environment
Imaging Science and Biomedical Engineering, The University of Manchester, UK
('1753123', 'Angela Caunce', 'angela caunce')
0dbacb4fd069462841ebb26e1454b4d147cd8e98
Recent Advances in Discriminant Non-negative
Matrix Factorization
Aristotle University of Thessaloniki
Thessaloniki, Greece, 54124
('1793625', 'Symeon Nikitidis', 'symeon nikitidis')
('1737071', 'Anastasios Tefas', 'anastasios tefas')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
Email: {nikitidis,tefas,pitas}@aiia.csd.auth.gr
0db36bf08140d53807595b6313201a7339470cfe
Moving Vistas: Exploiting Motion for Describing Scenes
Department of Electrical and Computer Engineering
Center for Automation Research, UMIACS, University of Maryland, College Park, MD
('34711525', 'Nitesh Shroff', 'nitesh shroff')
('9215658', 'Rama Chellappa', 'rama chellappa')
{nshroff,pturaga,rama}@umiacs.umd.edu
0d781b943bff6a3b62a79e2c8daf7f4d4d6431ad
EmotiW 2016: Video and Group-Level Emotion
Recognition Challenges
David R. Cheriton School of Computer Science, University of Waterloo, Canada
Roland Goecke
Human-Centred Technology Research Centre, University of Canberra, Australia
Tom Gedeon
Information Human Centred Computing, Australian National University, Australia
('1735697', 'Abhinav Dhall', 'abhinav dhall')
('2942991', 'Jyoti Joshi', 'jyoti joshi')
('1773895', 'Jesse Hoey', 'jesse hoey')
abhinav.dhall@uwaterloo.ca
roland.goecke@ieee.org
jyoti.joshi@uwaterloo.ca
jhoey@cs.uwaterloo.ca
tom.gedeon@anu.edu.au
0d735e7552af0d1dcd856a8740401916e54b7eee
0d06b3a4132d8a2effed115a89617e0a702c957a
0d2dd4fc016cb6a517d8fb43a7cc3ff62964832e
0d33b6c8b4d1a3cb6d669b4b8c11c2a54c203d1a
Detection and Tracking of Faces in Videos: A Review of Related Work
© 2016 IJEDR | Volume 4, Issue 2 | ISSN: 2321-9939
1Student, 2Assistant Professor
1, 2Dept. of Electronics & Comm., S S I E T, Punjab, India
('48816689', 'Seema Saini', 'seema saini')
0d1d9a603b08649264f6e3b6d5a66bf1e1ac39d2
University of Nebraska - Lincoln
US Army Research
2015
U.S. Department of Defense
Effects of emotional expressions on persuasion
University of Southern California
University of Southern California
University of Southern California
University of Southern California
Follow this and additional works at: http://digitalcommons.unl.edu/usarmyresearch
Wang, Yuqiong; Lucas, Gale; Khooshabeh, Peter; de Melo, Celso; and Gratch, Jonathan, "Effects of emotional expressions on
persuasion" (2015). US Army Research. Paper 340.
http://digitalcommons.unl.edu/usarmyresearch/340
('2522587', 'Yuqiong Wang', 'yuqiong wang')
('2419453', 'Gale Lucas', 'gale lucas')
('2635945', 'Peter Khooshabeh', 'peter khooshabeh')
('1977901', 'Celso de Melo', 'celso de melo')
('1730824', 'Jonathan Gratch', 'jonathan gratch')
DigitalCommons@University of Nebraska - Lincoln
University of Southern California, wangyuqiong@ymail.com
This Article is brought to you for free and open access by the U.S. Department of Defense at DigitalCommons@University of Nebraska - Lincoln. It has
been accepted for inclusion in US Army Research by an authorized administrator of DigitalCommons@University of Nebraska - Lincoln.
0da4c3d898ca2fff9e549d18f513f4898e960aca
Wang, Y., Thomas, J., Weissgerber, S. C., Kazemini, S., Ul-Haq, I., &
Quadflieg, S. (2015). The Headscarf Effect Revisited: Further Evidence for a
336. 10.1068/p7940
Peer reviewed version
Link to published version (if available):
10.1068/p7940
Link to publication record in Explore Bristol Research
PDF-document
University of Bristol - Explore Bristol Research
951368a1a8b3c5cd286726050b8bdf75a80f7c37
A Family of Online Boosting Algorithms
University of California, San Diego
University of California, Merced
University of California, San Diego
('2490700', 'Boris Babenko', 'boris babenko')
('37144787', 'Ming-Hsuan Yang', 'ming-hsuan yang')
('1769406', 'Serge Belongie', 'serge belongie')
bbabenko@cs.ucsd.edu
mhyang@ucmerced.edu
sjb@cs.ucsd.edu
956e9b69b3366ed3e1670609b53ba4a7088b8b7e
Semi-supervised dimensionality reduction for image retrieval
aIBM China Research Lab, Beijing, China
bTsinghua University, Beijing, China
956317de62bd3024d4ea5a62effe8d6623a64e53
Lighting Analysis and Texture Modification of 3D Human
Face Scans
Author
Zhang, Paul, Zhao, Sanqiang, Gao, Yongsheng
Published
2007
Conference Title
Digital Image Computing Techniques and Applications
DOI
https://doi.org/10.1109/DICTA.2007.4426825
Copyright Statement
© 2007 IEEE. Personal use of this material is permitted. However, permission to reprint/
republish this material for advertising or promotional purposes or for creating new collective
works for resale or redistribution to servers or lists, or to reuse any copyrighted component of
this work in other works must be obtained from the IEEE.
Downloaded from
http://hdl.handle.net/10072/17889
Link to published version
http://www.ieee.org/
Griffith Research Online
https://research-repository.griffith.edu.au
959bcb16afdf303c34a8bfc11e9fcc9d40d76b1c
Temporal Coherency based Criteria for Predicting
Video Frames using Deep Multi-stage Generative
Adversarial Networks
Visualization and Perception Laboratory
Department of Computer Science and Engineering
Indian Institute of Technology Madras, Chennai, India
('29901316', 'Prateep Bhattacharjee', 'prateep bhattacharjee')
('1680398', 'Sukhendu Das', 'sukhendu das')
1prateepb@cse.iitm.ac.in, 2sdas@iitm.ac.in
951f21a5671a4cd14b1ef1728dfe305bda72366f
International Journal of Science and Research (IJSR)
ISSN (Online): 2319-7064
Impact Factor (2012): 3.358
Use of ℓ2/3-norm Sparse Representation for Facial
Expression Recognition
MATS University, MATS School of Engineering and Technology, Arang, Raipur, India
MATS University, MATS School of Engineering and Technology, Arang, Raipur, India
95f26d1c80217706c00b6b4b605a448032b93b75
New Robust Face Recognition Methods Based on Linear
Regression
Bio-Computing Research Center, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, Guangdong Province, China, 2 Key Laboratory of Network
Oriented Intelligent Computation, Shenzhen, Guangdong Province, China
('2208128', 'Jian-Xun Mi', 'jian-xun mi')
('2650895', 'Jin-Xing Liu', 'jin-xing liu')
('40342210', 'Jiajun Wen', 'jiajun wen')
95f12d27c3b4914e0668a268360948bce92f7db3
Interactive Facial Feature Localization
University of Illinois at Urbana Champaign, Urbana, IL 61801, USA
2 Adobe Systems Inc., San Jose, CA 95110, USA
3 Facebook Inc., Menlo Park, CA 94025, USA
('36474335', 'Vuong Le', 'vuong le')
('1721019', 'Jonathan Brandt', 'jonathan brandt')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
9547a7bce2b85ef159b2d7c1b73dea82827a449f
Facial Expression Recognition Using Gabor Motion Energy Filters
Dept. Computer Science Engineering
UC San Diego
Marian S. Bartlett
Institute for Neural Computation
UC San Diego
('4072965', 'Tingfan Wu', 'tingfan wu')
('1741200', 'Javier R. Movellan', 'javier r. movellan')
tingfan@gmail.com
{marni,movellan}@mplab.ucsd.edu
9513503867b29b10223f17c86e47034371b6eb4f
Comparison of optimisation algorithms for
deformable template matching
Linköping University, Computer Vision Laboratory
ISY, SE-581 83 Linköping, SWEDEN
('1797883', 'Vasileios Zografos', 'vasileios zografos')
zografos@isy.liu.se
955e2a39f51c0b6f967199942d77625009e580f9
NAMING FACES ON THE WEB
a thesis
submitted to the department of computer engineering
and the institute of engineering and science
of bilkent university
in partial fulfillment of the requirements
for the degree of
master of science
By
July, 2010
('34946851', 'Hilal Zitouni', 'hilal zitouni')
956c634343e49319a5e3cba4f2bd2360bdcbc075
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 36, NO. 4, AUGUST 2006
873
A Novel Incremental Principal Component Analysis
and Its Application for Face Recognition
('1776124', 'Haitao Zhao', 'haitao zhao')
('1768574', 'Pong Chi Yuen', 'pong chi yuen')
95ea564bd983129ddb5535a6741e72bb1162c779
Multi-Task Learning by Deep Collaboration and
Application in Facial Landmark Detection
Laval University, Québec, Canada
('2758280', 'Ludovic Trottier', 'ludovic trottier')
('2310695', 'Philippe Giguère', 'philippe giguère')
('1700926', 'Brahim Chaib-draa', 'brahim chaib-draa')
ludovic.trottier.1@ulaval.ca
{philippe.giguere,brahim.chaib-draa}@ift.ulaval.ca
958c599a6f01678513849637bec5dc5dba592394
Generalized Zero-Shot Learning for Action
Recognition with Web-Scale Video Data
('2473509', 'Kun Liu', 'kun liu')
('8984539', 'Wenbing Huang', 'wenbing huang')
950171acb24bb24a871ba0d02d580c09829de372
Speeding up 2D-Warping for Pose-Invariant Face Recognition
Human Language Technology and Pattern Recognition Group, RWTH Aachen University, Germany
('1804963', 'Harald Hanselmann', 'harald hanselmann')
('1685956', 'Hermann Ney', 'hermann ney')
surname@cs.rwth-aachen.de
59be98f54bb4ed7a2984dc6a3c84b52d1caf44eb
A Deep-Learning Approach to Facial Expression Recognition
with Candid Images
CUNY City College
Alibaba. Inc
IBM China Research Lab
CUNY Graduate Center and City College
('40617554', 'Wei Li', 'wei li')
('1713016', 'Min Li', 'min li')
('1703625', 'Zhong Su', 'zhong su')
('4697712', 'Zhigang Zhu', 'zhigang zhu')
lwei000@citymail.cuny.edu
mushi.lm@alibaba.inc
suzhong@cn.ibm.com
zhu@cs.ccny.cuny.edu
59fc69b3bc4759eef1347161e1248e886702f8f7
Final Report of Final Year Project
HKU-Face: A Large Scale Dataset for
Deep Face Recognition
3035141841
COMP4801 Final Year Project
Project Code: 17007
('40456402', 'Haoyu Li', 'haoyu li')
591a737c158be7b131121d87d9d81b471c400dba
Affect Valence Inference From Facial Action Unit Spectrograms
MIT Media Lab
MA 02139, USA
MIT Media Lab
MA 02139, USA
Harvard University
MA 02138, USA
Rosalind Picard
MIT Media Lab
MA 02139, USA
('1801452', 'Daniel McDuff', 'daniel mcduff')
('1754451', 'Rana El Kaliouby', 'rana el kaliouby')
('2010950', 'Karim Kassam', 'karim kassam')
djmcduff@mit.edu
kaliouby@mit.edu
kskassam@fas.harvard.edu
picard@mit.edu
59bfeac0635d3f1f4891106ae0262b81841b06e4
Face Verification Using the LARK Face
Representation
('3326805', 'Hae Jong Seo', 'hae jong seo')
('1718280', 'Peyman Milanfar', 'peyman milanfar')
59efb1ac77c59abc8613830787d767100387c680
DIF: Dataset of Intoxicated Faces for Drunk Person
Identification
Indian Institute of Technology Ropar
Indian Institute of Technology Ropar
('46241736', 'Devendra Pratap Yadav', 'devendra pratap yadav')
('1735697', 'Abhinav Dhall', 'abhinav dhall')
2014csb1010@iitrpr.ac.in
abhinav@iitrpr.ac.in
590628a9584e500f3e7f349ba7e2046c8c273fcf
593234ba1d2e16a887207bf65d6b55bbc7ea2247
Combining Language Sources and Robust
Semantic Relatedness for Attribute-Based
Knowledge Transfer
1 Department of Computer Science, TU Darmstadt
Max Planck Institute for Informatics, Saarbrücken, Germany
('34849128', 'Marcus Rohrbach', 'marcus rohrbach')
('37718254', 'Michael Stark', 'michael stark')
('1697100', 'Bernt Schiele', 'bernt schiele')
59eefa01c067a33a0b9bad31c882e2710748ea24
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Fast Landmark Localization
with 3D Component Reconstruction and CNN for
Cross-Pose Recognition
('24020847', 'Hung-Cheng Shie', 'hung-cheng shie')
('9640380', 'Cheng-Hua Hsieh', 'cheng-hua hsieh')
59e2037f5079794cb9128c7f0900a568ced14c2a
Clothing and People - A Social Signal Processing Perspective
Faculty of Mathematics and Computer Science, University of Barcelona, Barcelona, Spain
2 Computer Vision Center, Barcelona, Spain
University of Verona, Verona, Italy
('2084534', 'Maedeh Aghaei', 'maedeh aghaei')
('10724083', 'Federico Parezzan', 'federico parezzan')
('2837527', 'Mariella Dimiccoli', 'mariella dimiccoli')
('1724155', 'Petia Radeva', 'petia radeva')
('1723008', 'Marco Cristani', 'marco cristani')
59dac8b460a89e03fa616749a08e6149708dcc3a
A Convergent Solution to Matrix Bidirectional Projection Based Feature
Extraction with Application to Face Recognition ∗
School of Computer, National University of Defense Technology
No 137, Yanwachi Street, Kaifu District,
Changsha, Hunan Province, 410073, P.R. China
('3144121', 'Yubin Zhan', 'yubin zhan')
('1969736', 'Jianping Yin', 'jianping yin')
('33793976', 'Xinwang Liu', 'xinwang liu')
E-mail: {YubinZhan,JPYin,XWLiu}@nudt.edu.cn
59e9934720baf3c5df3a0e1e988202856e1f83ce
UA-DETRAC: A New Benchmark and Protocol for
Multi-Object Detection and Tracking
University at Albany, SUNY
2 School of Computer and Control Engineering, UCAS
3 Department of Electrical and Computer Engineering, UCSD
4 National Laboratory of Pattern Recognition, CASIA
University at Albany, SUNY
Division of Computer Science and Engineering, Hanyang University
7 Electrical Engineering and Computer Science, UCM
('39774417', 'Longyin Wen', 'longyin wen')
('1910738', 'Dawei Du', 'dawei du')
('1773408', 'Zhaowei Cai', 'zhaowei cai')
('39643145', 'Ming-Ching Chang', 'ming-ching chang')
('3245785', 'Honggang Qi', 'honggang qi')
('33047058', 'Jongwoo Lim', 'jongwoo lim')
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
59d225486161b43b7bf6919b4a4b4113eb50f039Complex Event Recognition from Images with Few Training Examples
Irfan Essa∗
Georgia Institute of Technology
University of Southern California
('2308598', 'Unaiza Ahsan', 'unaiza ahsan')
('1726241', 'Chen Sun', 'chen sun')
('1945508', 'James Hays', 'james hays')
uahsan3@gatech.edu
chensun@google.com
hays@gatech.edu
irfan@cc.gatech.edu
5945464d47549e8dcaec37ad41471aa70001907fEvery Moment Counts: Dense Detailed Labeling of Actions in Complex Videos
('34149749', 'Serena Yeung', 'serena yeung')
('3216322', 'Li Fei-Fei', 'li fei-fei')
59c9d416f7b3d33141cc94567925a447d0662d80Universität des Saarlandes
Max-Planck-Institut für Informatik
AG5
Matrix factorization over max-times
algebra for data mining
Master’s Thesis in Computer Science
by
supervised by
reviewers
November 2013
('2297723', 'Sanjar Karaev', 'sanjar karaev')
('1804891', 'Pauli Miettinen', 'pauli miettinen')
('1804891', 'Pauli Miettinen', 'pauli miettinen')
('1751591', 'Gerhard Weikum', 'gerhard weikum')
59bece468ed98397d54865715f40af30221aa08cDeformable Part-based Robust Face Detection
under Occlusion by Using Face Decomposition
into Face Components
Darijan Marčetić, Slobodan Ribarić
University of Zagreb, Faculty of Electrical Engineering and Computing, Croatia
{darijan.marcetic, slobodan.ribaric}@fer.hr
59a35b63cf845ebf0ba31c290423e24eb822d245The FaceSketchID System: Matching Facial
Composites to Mugshots
('34393045', 'Hu Han', 'hu han')
('6680444', 'Anil K. Jain', 'anil k. jain')
59f325e63f21b95d2b4e2700c461f0136aecc171
FOR FACE RECOGNITION
1. INTRODUCTION
59420fd595ae745ad62c26ae55a754b97170b01fObjects as Attributes for Scene Classification
Stanford University
('33642044', 'Li-Jia Li', 'li-jia li')
('2888806', 'Hao Su', 'hao su')
('7892285', 'Yongwhan Lim', 'yongwhan lim')
('3216322', 'Li Fei-Fei', 'li fei-fei')
599adc0dcd4ebcc2a868feedd243b5c3c1bd1d0aHow Robust is 3D Human Pose Estimation to Occlusion?
Visual Computing Institute, RWTH Aachen University
2Robert Bosch GmbH, Corporate Research
('2699877', 'Timm Linder', 'timm linder')
('1789756', 'Bastian Leibe', 'bastian leibe')
{sarandi,leibe}@vision.rwth-aachen.de
{timm.linder,kaioliver.arras}@de.bosch.com
5922e26c9eaaee92d1d70eae36275bb226ecdb2eBoosting Classification Based Similarity
Learning by using Standard Distances
Departament d’Informàtica, Universitat de València
Av. de la Universitat s/n. 46100-Burjassot (Spain)
('2275648', 'Emilia López-Iñesta', 'emilia lópez-iñesta')
('3138833', 'Miguel Arevalillo-Herráez', 'miguel arevalillo-herráez')
('2627759', 'Francisco Grimaldo', 'francisco grimaldo')
eloi@alumni.uv.es,miguel.arevalillo@uv.es
francisco.grimaldo@uv.es
59d8fa6fd91cdb72cd0fa74c04016d79ef5a752bThe Menpo Facial Landmark Localisation Challenge:
A step towards the solution
Department of Computing
Imperial College London
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('2814229', 'George Trigeorgis', 'george trigeorgis')
('1688922', 'Grigorios Chrysos', 'grigorios chrysos')
('3234063', 'Jiankang Deng', 'jiankang deng')
('1719912', 'Jie Shen', 'jie shen')
{s.zafeiriou, g.trigeorgis, g.chrysos, j.deng16, jie.shen07}@imperial.ac.uk
59e75aad529b8001afc7e194e21668425119b864Membrane Nonrigid Image Registration
Department of Computer Science
Drexel University
Philadelphia, PA
('1708819', 'Ko Nishino', 'ko nishino')
59d45281707b85a33d6f50c6ac6b148eedd71a25Rank Minimization across Appearance and Shape for AAM Ensemble Fitting
2The Commonwealth Scientific and Industial Research Organization (CSIRO)
Queensland University of Technology
('2699730', 'Xin Cheng', 'xin cheng')
('1729760', 'Sridha Sridharan', 'sridha sridharan')
('1820249', 'Simon Lucey', 'simon lucey')
1{x2.cheng,s.sridharan}@qut.edu.au
2{jason.saragih,simon.lucey}@csiro.au
59319c128c8ac3c88b4ab81088efe8ae9c458e07Effective Computer Model For Recognizing
Nationality From Frontal Image
Bat-Erdene.B
Information and Communication Management School
The University of the Humanities
Ulaanbaatar, Mongolia
e-mail: basubaer@gmail.com
59a6c9333c941faf2540979dcfcb5d503a49b91eSampling Clustering
School of Computer Science and Technology, Shandong University, China
('51016741', 'Ching Tarn', 'ching tarn')
('2413471', 'Yinan Zhang', 'yinan zhang')
('48260402', 'Ye Feng', 'ye feng')
∗i@ctarn.io
59031a35b0727925f8c47c3b2194224323489d68Sparse Variation Dictionary Learning for Face Recognition with A Single
Training Sample Per Person
ETH Zurich
Switzerland
('5828998', 'Meng Yang', 'meng yang')
('1681236', 'Luc Van Gool', 'luc van gool')
{yang,vangool}@vision.ee.ethz.ch
926c67a611824bc5ba67db11db9c05626e79de96
Enhancing Bilinear Subspace Learning
by Element Rearrangement
('38188040', 'Dong Xu', 'dong xu')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('1686911', 'Stephen Lin', 'stephen lin')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
923ede53b0842619831e94c7150e0fc4104e62f7
ICASSP 2016
92b61b09d2eed4937058d0f9494d9efeddc39002Under review in IJCV
BoxCars: Improving Vehicle Fine-Grained Recognition using
3D Bounding Boxes in Traffic Surveillance
('34891870', 'Jakub Sochor', 'jakub sochor')
9264b390aa00521f9bd01095ba0ba4b42bf84d7eDisplacement Template with Divide-&-Conquer
Algorithm for Significantly Improving
Descriptor based Face Recognition Approaches
Wenzhou University, China
University of Northern British Columbia, Canada
Aberystwyth University, UK
('1692551', 'Liang Chen', 'liang chen')
('33500699', 'Ling Yan', 'ling yan')
('1990125', 'Yonghuai Liu', 'yonghuai liu')
('39388942', 'Lixin Gao', 'lixin gao')
('3779849', 'Xiaoqin Zhang', 'xiaoqin zhang')
92be73dffd3320fe7734258961fe5a5f2a43390eTRANSFERRING FACE VERIFICATION NETS TO PAIN AND EXPRESSION REGRESSION
Dept. of {Computer Science1, Electrical & Computer Engineering2, Radiation Oncology3, Cognitive Science4}
Johns Hopkins University, 3400 N. Charles St, Baltimore, MD 21218, USA
5Dept. of EE, UESTC, 2006 Xiyuan Ave, Chengdu, Sichuan 611731, China
Tsinghua University, Beijing 100084, China
('39369840', 'Feng Wang', 'feng wang')
('40031188', 'Xiang Xiang', 'xiang xiang')
('1692867', 'Chang Liu', 'chang liu')
('1709073', 'Trac D. Tran', 'trac d. tran')
('3207112', 'Austin Reiter', 'austin reiter')
('1678633', 'Gregory D. Hager', 'gregory d. hager')
('2095823', 'Harry Quon', 'harry quon')
('1709439', 'Jian Cheng', 'jian cheng')
('1746141', 'Alan L. Yuille', 'alan l. yuille')
920a92900fbff22fdaaef4b128ca3ca8e8d54c3eLEARNING PATTERN TRANSFORMATION MANIFOLDS WITH PARAMETRIC ATOM
SELECTION
École Polytechnique Fédérale de Lausanne (EPFL)
Signal Processing Laboratory (LTS4)
Switzerland-1015 Lausanne
('12636684', 'Elif Vural', 'elif vural')
('1703189', 'Pascal Frossard', 'pascal frossard')
9207671d9e2b668c065e06d9f58f597601039e5eFace Detection Using a 3D Model on
Face Keypoints
('2455529', 'Adrian Barbu', 'adrian barbu')
('3019469', 'Gary Gramajo', 'gary gramajo')
924b14a9e36d0523a267293c6d149bca83e73f3bVolume 5, Number 2, pp. 133-164
Development and Evaluation of a Method
Employed to Identify Internal State
Utilizing Eye Movement Data
Graduate School of Media and Governance, Keio University (JAPAN)
Faculty of Environmental Information, Keio University (JAPAN)
('31726964', 'Noriyuki Aoyama', 'noriyuki aoyama')
('1889276', 'Tadahiko Fukuda', 'tadahiko fukuda')
9282239846d79a29392aa71fc24880651826af72Antonakos et al. EURASIP Journal on Image and Video Processing 2014, 2014:14
http://jivp.eurasipjournals.com/content/2014/1/14
RESEARCH
Open Access
Classification of extreme facial events in sign
language videos
('2788012', 'Epameinondas Antonakos', 'epameinondas antonakos')
('1738119', 'Vassilis Pitsikalis', 'vassilis pitsikalis')
('1750686', 'Petros Maragos', 'petros maragos')
92115b620c7f653c847f43b6c4ff0470c8e55dabTraining Deformable Object Models for Human
Detection Based on Alignment and Clustering
Department of Computer Science,
Centre of Biological Signalling Studies (BIOSS),
University of Freiburg, Germany
('2127987', 'Benjamin Drayer', 'benjamin drayer')
('1710872', 'Thomas Brox', 'thomas brox')
{drayer,brox}@cs.uni-freiburg.de
928b8eb47288a05611c140d02441660277a7ed54Exploiting Images for Video Recognition with Hierarchical Generative
Adversarial Networks
1 Beijing Laboratory of Intelligent Information Technology, School of Computer Science,
Big Data Research Center, University of Electronic Science and Technology of China
Beijing Institute of Technology
('3450614', 'Feiwu Yu', 'feiwu yu')
('2125709', 'Xinxiao Wu', 'xinxiao wu')
('9177510', 'Yuchao Sun', 'yuchao sun')
('2055900', 'Lixin Duan', 'lixin duan')
{yufeiwu,wuxinxiao,sunyuchao}@bit.edu.cn, lxduan@uestc.edu.cn
926e97d5ce2a6e070f8ec07c5aa7f91d3df90ba0Facial Expression Recognition Using Enhanced Deep 3D Convolutional Neural
Networks
Department of Electrical and Computer Engineering
University of Denver, Denver, CO
('3093835', 'Mohammad H. Mahoor', 'mohammad h. mahoor')
behzad.hasani@du.edu and mmahoor@du.edu
92c2dd6b3ac9227fce0a960093ca30678bceb364Provided by the author(s) and NUI Galway in accordance with publisher policies. Please cite the published version when available.
Title: On color texture normalization for active appearance models
Author(s): Ionita, Mircea C.; Corcoran, Peter M.; Buzuloiu, Vasile
Publication Date: 2009-05-12
Publication Information: Ionita, M. C., Corcoran, P., & Buzuloiu, V. (2009). On Color Texture Normalization for Active Appearance Models. IEEE Transactions on Image Processing, 18(6), 1372-1378.
Publisher: IEEE
Link to publisher's version: http://dx.doi.org/10.1109/TIP.2009.2017163
Item record: http://hdl.handle.net/10379/1350
Some rights reserved. For more information, please see the item record link above.
92e464a5a67582d5209fa75e3b29de05d82c7c86Reconstruction for Feature Disentanglement in Pose-invariant Face Recognition
Rutgers University, NJ, USA
2NEC Labs America, CA, USA
('4340744', 'Xi Peng', 'xi peng')
('39960064', 'Xiang Yu', 'xiang yu')
('1729571', 'Kihyuk Sohn', 'kihyuk sohn')
{xpeng.cs, dnm}@rutgers.edu, {xiangyu, ksohn, manu}@nec-labs.com
927ba64123bd4a8a31163956b3d1765eb61e4426Customer satisfaction measuring based on the most
significant facial emotion
To cite this version:
Customer satisfaction measuring based on the most significant facial emotion. 15th IEEE International Multi-Conference on Systems, Signals and Devices (SSD 2018), Mar 2018, Hammamet, Tunisia.
HAL Id: hal-01790317
https://hal-upec-upem.archives-ouvertes.fr/hal-01790317
Submitted on 11 May 2018
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
('50101862', 'Rostom Kachouri', 'rostom kachouri')
922838dd98d599d1d229cc73896d55e7a769aa7cLearning Hierarchical Representations for Face Verification
with Convolutional Deep Belief Networks
Erik Learned-Miller
University of Massachusetts
University of Michigan
University of Massachusetts
Amherst, MA
Ann Arbor, MI
Amherst, MA
('3219900', 'Gary B. Huang', 'gary b. huang')
('1697141', 'Honglak Lee', 'honglak lee')
gbhuang@cs.umass.edu
honglak@eecs.umich.edu
elm@cs.umass.edu
9294739e24e1929794330067b84f7eafd286e1c8Expression Recognition using Elastic Graph Matching
Cairong Zhou
Research Center for Learning Science, Southeast University, Nanjing 210096, China
Southeast University, Nanjing 210096, China
('40622743', 'Yujia Cao', 'yujia cao')
('40608983', 'Wenming Zheng', 'wenming zheng')
('1718117', 'Li Zhao', 'li zhao')
Email: yujia_cao@seu.edu.cn
92fada7564d572b72fd3be09ea3c39373df3e27c
927ad0dceacce2bb482b96f42f2fe2ad1873f37aInterest-Point based Face Recognition System
Spain
1. Introduction
Among all applications of face recognition systems, surveillance is one of the most challenging. In such an application, the goal is to detect known criminals in crowded environments, such as airports or train stations. Some attempts have been made, like those of Tokyo (Engadget, 2006) or Mainz (Deutsche Welle, 2006), with limited success.
The first task to be carried out in an automatic surveillance system is the detection of all the faces in the images taken by the video cameras. Current face detection algorithms are highly reliable and will therefore not be the focus of our work. Some of the best performing examples are the Viola-Jones algorithm (Viola & Jones, 2004) and the Schneiderman-Kanade algorithm (Schneiderman & Kanade, 2000).
The second task involves comparing all detected faces against the database of known criminals. Ideally, an automatic system performing this task would achieve a 100% correct identification rate, but this is far beyond the capabilities of current face recognition algorithms. Since there will be false identifications, supervised surveillance systems seem to be the most realistic option: the automatic system issues an alarm whenever it detects a possible match with a criminal, and a human decides whether it is a false alarm. Figure 1 shows an example.
However, even in a supervised scenario the requirements on the face recognition algorithm are extremely high: the false alarm rate must be low enough for the human operator to cope with, and the percentage of undetected criminals must be kept to a minimum in order to ensure security. Fulfilling both requirements at the same time is the main challenge, as a reduction in the false alarm rate usually implies an increase in the percentage of undetected criminals.
We propose a novel face recognition system based on the use of interest point detectors and local descriptors. To assess the performance of our system, particularly in a surveillance application, we present experimental results in terms of Receiver Operating Characteristic (ROC) curves. The experimental results make it clear that our system outperforms classical appearance-based approaches.
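To make that evaluation concrete, here is a small, self-contained sketch of how a ROC curve is obtained from similarity scores by sweeping a decision threshold. The Gaussian toy scores and function name are invented for illustration; they are not the chapter's data or exact procedure.

```python
import random

def roc_curve(genuine, impostor, n_thresholds=100):
    """Sweep an acceptance threshold over similarity scores and return the
    (false-alarm rate, detection rate) pairs that trace the ROC curve."""
    scores = genuine + impostor
    lo, hi = min(scores), max(scores)
    thresholds = [lo + (hi - lo) * i / (n_thresholds - 1) for i in range(n_thresholds)]
    # A pair is "accepted" (alarm raised) when its score reaches the threshold.
    far = [sum(s >= t for s in impostor) / len(impostor) for t in thresholds]
    dr = [sum(s >= t for s in genuine) / len(genuine) for t in thresholds]
    return far, dr

# Toy scores: genuine pairs (criminal really in the database) score higher on average.
random.seed(0)
genuine = [random.gauss(0.7, 0.1) for _ in range(500)]
impostor = [random.gauss(0.4, 0.1) for _ in range(5000)]
far, dr = roc_curve(genuine, impostor)
# Lowering the threshold raises detections but also raises false alarms.
```

The curve makes the trade-off discussed above explicit: each threshold choice is one operating point, and a system is better when it reaches a higher detection rate at the same false alarm rate.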
('35178717', 'Cesar Fernandez', 'cesar fernandez')
('3686544', 'Maria Asuncion Vicente', 'maria asuncion vicente')
('2422580', 'Miguel Hernandez', 'miguel hernandez')
929bd1d11d4f9cbc638779fbaf958f0efb82e603This is the author’s version of a work that was submitted/accepted for pub-
lication in the following source:
Zhang, Ligang & Tjondronegoro, Dian W. (2010) Improving the perfor-
mance of facial expression recognition using dynamic, subtle and regional
features.
In Kok Wai Wong, B. Sumudu U. Mendis, & Abdesselam Bouzerdoum (Eds.) Neural Information Processing. Models and Applications, Lecture Notes in Computer Science, Sydney, N.S.W, pp. 582-589.
This file was downloaded from: http://eprints.qut.edu.au/43788/
© Copyright 2010 Springer-Verlag
Conference proceedings published, by Springer Verlag, will be available
via Lecture Notes in Computer Science http://www.springer.de/comp/lncs/
Notice: Changes introduced as a result of publishing processes such as
copy-editing and formatting may not be reflected in this document. For a
definitive version of this work, please refer to the published source:
http://dx.doi.org/10.1007/978-3-642-17534-3_72
923ec0da8327847910e8dd71e9d801abcbc93b08Hide-and-Seek: Forcing a Network to be Meticulous for
Weakly-supervised Object and Action Localization
University of California, Davis
('19553871', 'Krishna Kumar Singh', 'krishna kumar singh')
('1883898', 'Yong Jae Lee', 'yong jae lee')
0c741fa0966ba3ee4fc326e919bf2f9456d0cd74Facial Age Estimation by Learning from Label Distributions
School of Mathematical Sciences, Monash University, VIC 3800, Australia
School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, China
('1735299', 'Xin Geng', 'xin geng')
('2848275', 'Kate Smith-Miles', 'kate smith-miles')
('1692625', 'Zhi-Hua Zhou', 'zhi-hua zhou')
0c435e7f49f3e1534af0829b7461deb891cf540aCapturing Global Semantic Relationships for Facial Action Unit Recognition
Rensselaer Polytechnic Institute
School of Electrical Engineering and Automation, Harbin Institute of Technology
School of Computer Science and Technology, University of Science and Technology of China
('2860279', 'Ziheng Wang', 'ziheng wang')
('1830523', 'Yongqiang Li', 'yongqiang li')
('1791319', 'Shangfei Wang', 'shangfei wang')
('1726583', 'Qiang Ji', 'qiang ji')
{wangz10,liy23,jiq}@rpi.edu
sfwang@ustc.edu.cn
0cb7e4c2f6355c73bfc8e6d5cdfad26f3fde0bafInternational Journal of Artificial Intelligence & Applications (IJAIA), Vol. 5, No. 3, May 2014
FACIAL EXPRESSION RECOGNITION BASED ON
Computer Science, Engineering and Mathematics School, Flinders University, Australia
Computer Science, Engineering and Mathematics School, Flinders University, Australia
('3105876', 'Humayra Binte Ali', 'humayra binte ali')
('1739260', 'David M W Powers', 'david m w powers')
0c30f6303dc1ff6d05c7cee4f8952b74b9533928Pareto Discriminant Analysis
Karim T. Abou–Moustafa
Centre of Intelligent Machines
The Robotics Institute
Centre of Intelligent Machines
McGill University
Carnegie Mellon University
McGill University
('1707876', 'Fernando De la Torre', 'fernando de la torre')
('1701344', 'Frank P. Ferrie', 'frank p. ferrie')
karimt@cim.mcgill.ca
ftorre@cs.cmu.edu
ferrie@cim.mcgill.ca
0ccc535d12ad2142a8310d957cc468bbe4c63647Better Exploiting OS-CNNs for Better Event Recognition in Images
Shenzhen Key Lab of CVPR, Shenzhen Institutes of Advanced Technology, CAS, China
('33345248', 'Limin Wang', 'limin wang')
('1915826', 'Zhe Wang', 'zhe wang')
('2072196', 'Sheng Guo', 'sheng guo')
('33427555', 'Yu Qiao', 'yu qiao')
{07wanglimin, buptwangzhe2012, guosheng1001}@gmail.com, yu.qiao@siat.ac.cn
0c8a0a81481ceb304bd7796e12f5d5fa869ee448International Journal of Fuzzy Logic and Intelligent Systems, vol. 10, no. 2, June 2010, pp. 95-100
A Spatial Regularization of LDA for Face Recognition
Gangnung-Wonju National University
123 Chibyun-Dong, Kangnung, 210-702, Korea
('39845108', 'Lae-Jeong Park', 'lae-jeong park')
Tel : +82-33-640-2389, Fax : +82-33-646-0740, E-mail : ljpark@gwnu.ac.kr
0c36c988acc9ec239953ff1b3931799af388ef70Face Detection Using Improved Faster RCNN
Huawei Cloud BU, China
Figure 1. Face detection results of FDNet1.0
('2568329', 'Changzheng Zhang', 'changzheng zhang')
('5084124', 'Xiang Xu', 'xiang xu')
('2929196', 'Dandan Tu', 'dandan tu')
{zhangzhangzheng, xuxiang12, tudandan}@huawei.com
0c5ddfa02982dcad47704888b271997c4de0674b
0c79a39a870d9b56dc00d5252d2a1bfeb4c295f1Face Recognition in Videos by Label Propagation
International Institute of Information Technology, Hyderabad, India
('37956314', 'Vijay Kumar', 'vijay kumar')
('3185334', 'Anoop M. Namboodiri', 'anoop m. namboodiri')
{vijaykumar.r@research., anoop@, jawahar@}iiit.ac.in
0cccf576050f493c8b8fec9ee0238277c0cfd69a
0cdb49142f742f5edb293eb9261f8243aee36e12Combined Learning of Salient Local Descriptors and Distance Metrics
for Image Set Face Verification
NICTA, PO Box 6020, St Lucia, QLD 4067, Australia
University of Queensland, School of ITEE, QLD 4072, Australia
('1781182', 'Conrad Sanderson', 'conrad sanderson')
('3026404', 'Yongkang Wong', 'yongkang wong')
('2270092', 'Brian C. Lovell', 'brian c. lovell')
0c069a870367b54dd06d0da63b1e3a900a257298Author manuscript, published in "ICANN 2011 - International Conference on Artificial Neural Networks (2011)"
0c75c7c54eec85e962b1720755381cdca3f57dfb
Face Landmark Fitting via Optimized Part
Mixtures and Cascaded Deformable Model
('39960064', 'Xiang Yu', 'xiang yu')
('1768190', 'Junzhou Huang', 'junzhou huang')
('1753384', 'Shaoting Zhang', 'shaoting zhang')
('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')
0cf2eecf20cfbcb7f153713479e3206670ea0e9cPrivacy-Protective-GAN for Face De-identification
Temple University
('50117915', 'Yifan Wu', 'yifan wu')
('46319628', 'Fan Yang', 'fan yang')
('1805398', 'Haibin Ling', 'haibin ling')
{yifan.wu, fyang, hbling} @temple.edu
0ca36ecaf4015ca4095e07f0302d28a5d9424254Improving Bag-of-Visual-Words Towards Effective Facial Expressive
Image Classification
1Univ. Grenoble Alpes, CNRS, Grenoble INP∗ , GIPSA-lab, 38000 Grenoble, France
Keywords:
BoVW, k-means++, Relative Conjunction Matrix, SIFT, Spatial Pyramids, TF.IDF.
('10762131', 'Dawood Al Chanti', 'dawood al chanti')
('1788869', 'Alice Caplier', 'alice caplier')
dawood.alchanti@gmail.com
0c1d85a197a1f5b7376652a485523e616a406273Joint Registration and Representation Learning for Unconstrained Face
Identification
University of Canberra, Australia, Data61 - CSIRO and ANU, Australia
Khalifa University, Abu Dhabi, United Arab Emirates
('2008898', 'Munawar Hayat', 'munawar hayat')
('1802072', 'Naoufel Werghi', 'naoufel werghi')
{munawar.hayat,roland.goecke}@canberra.edu.au, salman.khan@csiro.au, naoufel.werghi@kustar.ac.ae
0ca66283f4fb7dbc682f789fcf6d6732006befd5Active Dictionary Learning for Image Representation
Department of Electrical and Computer Engineering
Rutgers, The State University of New Jersey, Piscataway, NJ
('37799945', 'Tong Wu', 'tong wu')
('9208982', 'Anand D. Sarwate', 'anand d. sarwate')
('2138101', 'Waheed U. Bajwa', 'waheed u. bajwa')
0c7f27d23a162d4f3896325d147f412c40160b52Models and Algorithms for
Vision through the Atmosphere
Submitted in partial fulfillment of the
requirements for the degree
of Doctor of Philosophy
in the Graduate School of Arts and Sciences
COLUMBIA UNIVERSITY
2003
('1779052', 'Srinivasa G. Narasimhan', 'srinivasa g. narasimhan')
0cfca73806f443188632266513bac6aaf6923fa8Predictive Uncertainty in Large Scale Classification
using Dropout - Stochastic Gradient Hamiltonian
Monte Carlo.
Vergara, Diego∗1, Hernández, Sergio∗2, Valdenegro-Toro, Matías∗∗3 and Jorquera, Felipe∗4.
∗Laboratorio de Procesamiento de Informaci´on Geoespacial, Universidad Cat´olica del Maule, Chile.
∗∗German Research Centre for Artificial Intelligence, Bremen, Germany.
Email: 1diego.vergara@alu.ucm.cl, 2shernandez@ucm.cl,3matias.valdenegro@dfki.de,
4f.jorquera.uribe@gmail.com
0c20fd90d867fe1be2459223a3cb1a69fa3d44bfA Monte Carlo Strategy to Integrate Detection
and Model-Based Face Analysis
Department for Mathematics and Computer Science
University of Basel, Switzerland
('2591294', 'Andreas Forster', 'andreas forster')
('34460642', 'Bernhard Egger', 'bernhard egger')
('1687079', 'Thomas Vetter', 'thomas vetter')
sandro.schoenborn,andreas.forster,bernhard.egger,thomas.vetter@unibas.ch
0c2875bb47db3698dbbb3304aca47066978897a4Recurrent Models for Situation Recognition
University of Illinois at Urbana-Champaign
('36508529', 'Arun Mallya', 'arun mallya')
('1749609', 'Svetlana Lazebnik', 'svetlana lazebnik')
{amallya2,slazebni}@illinois.edu
0c3f7272a68c8e0aa6b92d132d1bf8541c062141Hindawi Publishing Corporation
e Scientific World Journal
Volume 2014, Article ID 672630, 6 pages
http://dx.doi.org/10.1155/2014/672630
Research Article
Kruskal-Wallis-Based Computationally Efficient Feature
Selection for Face Recognition
Foundation University, Rawalpindi 46000, Pakistan
Shaheed Zulfikar Ali Bhutto Institute of Science and Technology Islamabad
Islamabad 44000, Pakistan
International Islamic University, Islamabad 44000, Pakistan
Received 5 December 2013; Accepted 10 February 2014; Published 21 May 2014
Academic Editors: S. Balochian, V. Bhatnagar, and Y. Zhang
In today’s technological world, face recognition and its applications are attaining ever greater importance. Most of the existing work uses frontal face images to classify a face image; however, these techniques fail when applied to real-world face images. The proposed technique effectively extracts the prominent facial features. Many of these features are redundant and do not contribute to representing the face, so a computationally efficient algorithm is used to select the more discriminative features. The selected features are then passed to the classification step, in which different classifiers are ensembled to enhance the recognition accuracy rate, since a single classifier is unable to achieve high accuracy. Experiments are performed on standard face database images and the results are compared with existing techniques.
1. Introduction
Face recognition is becoming more accepted in the domains of computer vision and pattern recognition. Authentication systems based on the traditional ID card and password are nowadays being replaced by techniques that are preferable for handling security issues. Authentication systems based on biometrics are one such substitute, being independent of the user’s memory and not subject to loss. Among those systems, face recognition gains special attention because of the security it provides and because, unlike iris and fingerprint recognition, it does not depend on high-accuracy acquisition equipment.
Feature selection in pattern recognition means specifying the subset of significant features that decreases the data dimensionality while still providing a set of discriminative features. In feature extraction methods, an image is represented by a set of features, and each feature plays a vital role in the recognition process. A feature selection algorithm drops all the unrelated features while keeping a highly acceptable precision rate, in contrast to some other pattern classification problems in which a higher precision rate cannot be obtained without a greater number of features [1].
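Since the paper's criterion is the Kruskal-Wallis test, the idea behind such a selection step can be sketched as follows: score each feature by its Kruskal-Wallis H statistic across the classes and keep only the top-scoring features. This is an illustrative sketch in plain Python, not the authors' exact pipeline; the toy data, the function names, and the choice of k are invented for the example.

```python
def kruskal_h(groups):
    """Kruskal-Wallis H statistic: rank all observations jointly (averaging
    ranks over ties), then compare rank sums across the class groups.
    A higher H means the feature separates the classes better."""
    values = sorted(v for g in groups for v in g)
    ranks = {}
    i = 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        ranks[values[i]] = (i + 1 + j) / 2  # average 1-based rank of the tie run
        i = j
    n = len(values)
    s = sum(sum(ranks[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * s - 3 * (n + 1)

def select_features(X, y, k):
    """Keep the indices of the k features with the largest H statistic."""
    classes = sorted(set(y))
    scored = []
    for j in range(len(X[0])):
        groups = [[X[i][j] for i in range(len(X)) if y[i] == c] for c in classes]
        scored.append((kruskal_h(groups), j))
    return [j for _, j in sorted(scored, reverse=True)[:k]]

# Toy data: feature 0 separates the two classes, feature 1 is uninformative.
X = [[0.1, 5.0], [0.2, 3.0], [0.3, 4.0], [0.9, 5.0], [1.0, 3.0], [1.1, 4.0]]
y = [0, 0, 0, 1, 1, 1]
selected = select_features(X, y, k=1)  # → [0]
```

Because the test is rank-based, the score is insensitive to monotone rescaling of a feature, which is one reason such nonparametric criteria are attractive for heterogeneous face features.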
The features selected for the classifiers play a vital role in producing the best features, namely those that are robust to inconsistent environments, for example, changes in expression and other obstructions. Local (texture-based) and global (holistic) approaches are the two approaches used for face recognition [2]. Local approaches characterize the face in the form of geometric measurements and match an unfamiliar face with the closest face from the database. Geometric measurements include angles and distances between different facial points, for example, mouth position, nose length, and eyes. Global features are extracted by algebraic methods such as PCA (principal component analysis) and ICA (independent component analysis) [3]. PCA is sensitive to lighting and other variation, since it treats within-class and between-class variation equally. In face recognition, LDA (linear discriminant analysis) usually performs better than PCA, although the class separation it creates is not always precise. Good recognition rates can also be produced by transform techniques such as the DCT (discrete cosine transform) and DWT (discrete wavelet transform) [4].
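As a concrete illustration of such a transform technique, here is a minimal, unnormalized 1-D DCT-II in plain Python; the toy signal stands in for a flattened image row, and keeping only the first few coefficients is what makes the representation compact. Real systems apply a 2-D DCT to the whole image, so this sketch only shows the principle, and the example values are invented.

```python
import math

def dct2(x):
    """Unnormalized 1-D DCT-II: project the signal onto cosine bases.
    Low-frequency coefficients (small k) capture the coarse appearance,
    which is why truncating the DCT yields a compact global feature."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi / n * (i + 0.5) * k) for i in range(n))
            for k in range(n)]

# A flattened toy "image row"; keep the 4 lowest-frequency coefficients.
signal = [10, 12, 11, 13, 40, 42, 41, 43]
features = dct2(signal)[:4]
```

Coefficient 0 is simply the sum of the signal (its DC level), and a constant signal has all higher coefficients equal to zero, which is the energy-compaction property that transform-based face features exploit.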
('8652075', 'Sajid Ali Khan', 'sajid ali khan')
('9955306', 'Ayyaz Hussain', 'ayyaz hussain')
('1959869', 'Abdul Basit', 'abdul basit')
('2388005', 'Sheeraz Akram', 'sheeraz akram')
('8652075', 'Sajid Ali Khan', 'sajid ali khan')
Correspondence should be addressed to Sajid Ali Khan; sajidalibn@gmail.com
0cbc4dcf2aa76191bbf641358d6cecf38f644325Visage: A Face Interpretation Engine for
Smartphone Applications
Dartmouth College, 6211 Sudikoff Lab, Hanover, NH 03755, USA
Intel Lab, 2200 Mission College Blvd, Santa Clara, CA 95054, USA
3 Microsoft Research Asia, No. 5 Dan Ling St., Haidian District, Beijing, China
('1840450', 'Xiaochao Yang', 'xiaochao yang')
('1702472', 'Chuang-Wen You', 'chuang-wen you')
('1884089', 'Hong Lu', 'hong lu')
('1816301', 'Mu Lin', 'mu lin')
('2772904', 'Nicholas D. Lane', 'nicholas d. lane')
('1690035', 'Andrew T. Campbell', 'andrew t. campbell')
{Xiaochao.Yang,chuang-wen.you}@dartmouth.edu,hong.lu@intel.com,
mu.lin@dartmouth.edu,niclane@microsoft.com,campbell@cs.dartmouth.edu
0ce8a45a77e797e9d52604c29f4c1e227f604080International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol.3,No. 6,December 2013
ZERNIKE MOMENT-BASED FEATURE EXTRACTION
FOR FACIAL RECOGNITION OF IDENTICAL TWINS
1Department of Electrical,Computer and Biomedical Engineering, Qazvin branch,
Amirkabir University of Technology, Tehran
IslamicAzad University, Qazvin, Iran
Iran
('13302047', 'Hoda Marouf', 'hoda marouf')
('1692435', 'Karim Faez', 'karim faez')
0ce3a786aed896d128f5efdf78733cc675970854Learning the Face Prior
for Bayesian Face Recognition
Department of Information Engineering,
The Chinese University of Hong Kong, China
('2312486', 'Chaochao Lu', 'chaochao lu')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
0c54e9ac43d2d3bab1543c43ee137fc47b77276e
0c5afb209b647456e99ce42a6d9d177764f9a0dd
Recognizing Action Units for
Facial Expression Analysis
('40383812', 'Ying-li Tian', 'ying-li tian')
('1733113', 'Takeo Kanade', 'takeo kanade')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
0c59071ddd33849bd431165bc2d21bbe165a81e0Person Recognition in Personal Photo Collections
Max Planck Institute for Informatics
Saarbrücken, Germany
('2390510', 'Seong Joon Oh', 'seong joon oh')
('1798000', 'Rodrigo Benenson', 'rodrigo benenson')
('1739548', 'Mario Fritz', 'mario fritz')
('1697100', 'Bernt Schiele', 'bernt schiele')
{joon,benenson,mfritz,schiele}@mpi-inf.mpg.de
0c377fcbc3bbd35386b6ed4768beda7b5111eec6
A Unified Probabilistic Framework
for Spontaneous Facial Action Modeling
and Understanding
('1686235', 'Yan Tong', 'yan tong')
('1713712', 'Jixu Chen', 'jixu chen')
('1726583', 'Qiang Ji', 'qiang ji')
0c12cbb9b9740dfa2816b8e5cde69c2f5a715c58Memory-Augmented Attribute Manipulation Networks for
Interactive Fashion Search
Southwest Jiaotong University
National University of Singapore
AI Institute
('33901950', 'Bo Zhao', 'bo zhao')
('33221685', 'Jiashi Feng', 'jiashi feng')
('1814091', 'Xiao Wu', 'xiao wu')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
zhaobo@my.swjtu.edu.cn, elezhf@nus.edu.sg, wuxiaohk@swjtu.edu.cn, yanshuicheng@360.cn
0cb2dd5f178e3a297a0c33068961018659d0f443('2964917', 'Cameron Whitelam', 'cameron whitelam')
('1885566', 'Emma Taborsky', 'emma taborsky')
('1917247', 'Austin Blanton', 'austin blanton')
('8033275', 'Brianna Maze', 'brianna maze')
('15282121', 'Tim Miller', 'tim miller')
('6680444', 'Anil K. Jain', 'anil k. jain')
('40205896', 'James A. Duncan', 'james a. duncan')
('2040584', 'Kristen Allen', 'kristen allen')
('39403529', 'Jordan Cheney', 'jordan cheney')
('2136478', 'Patrick Grother', 'patrick grother')
0cd8895b4a8f16618686f622522726991ca2a324Discrete Choice Models for Static Facial Expression
Recognition
Ecole Polytechnique Federale de Lausanne, Signal Processing Institute
2 Ecole Polytechnique Federale de Lausanne, Operation Research Group
Ecublens, 1015 Lausanne, Switzerland
Ecublens, 1015 Lausanne, Switzerland
('1794461', 'Gianluca Antonini', 'gianluca antonini')
('2916630', 'Matteo Sorci', 'matteo sorci')
('1690395', 'Michel Bierlaire', 'michel bierlaire')
('1710257', 'Jean-Philippe Thiran', 'jean-philippe thiran')
{Matteo.Sorci,Gianluca.Antonini,JP.Thiran}@epfl.ch
Michel.Bierlaire@epfl.ch
0cf7da0df64557a4774100f6fde898bc4a3c4840Shape Matching and Object Recognition using Low Distortion Correspondences
Department of Electrical Engineering and Computer Science
U.C. Berkeley
('39668247', 'Alexander C. Berg', 'alexander c. berg')
('1689212', 'Jitendra Malik', 'jitendra malik')
{aberg,millert,malik}@eecs.berkeley.edu
0cbe059c181278a373292a6af1667c54911e7925Owl and Lizard: Patterns of Head Pose and Eye
Pose in Driver Gaze Classification
Massachusetts Institute of Technology (MIT)
Chalmers University of Technology, SAFER
('7137846', 'Joonbum Lee', 'joonbum lee')
('1901227', 'Bryan Reimer', 'bryan reimer')
('35816778', 'Trent Victor', 'trent victor')
0c4659b35ec2518914da924e692deb37e96d6206
Registering a MultiSensor Ensemble of Images
('1822837', 'Jeff Orchard', 'jeff orchard')
('6056877', 'Richard Mann', 'richard mann')
0c6e29d82a5a080dc1db9eeabbd7d1529e78a3dcLearning Bayesian Network Classifiers for Facial Expression Recognition using
both Labeled and Unlabeled Data
Beckman Institute, University of Illinois at Urbana-Champaign, IL, USA
iracohen, huang
Escola Politécnica, Universidade de São Paulo, São Paulo, Brazil
fgcozman, marcelo.cirelo
('1774778', 'Ira Cohen', 'ira cohen')
('1703601', 'Nicu Sebe', 'nicu sebe')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
@ifp.uiuc.edu
 Leiden Institute of Advanced Computer Science, Leiden University, The Netherlands, nicu@liacs.nl
@usp.br
0ced7b814ec3bb9aebe0fcf0cac3d78f36361eaeAvailable Online at www.ijcsmc.com
International Journal of Computer Science and Mobile Computing
A Monthly Journal of Computer Science and Information Technology
ISSN 2320–088X
IMPACT FACTOR: 6.017
IJCSMC, Vol. 6, Issue. 1, January 2017, pg.221 – 227
Central Local Directional Pattern Value
Flooding Co-occurrence Matrix based
Features for Face Recognition
Gokaraju Rangaraju Institute of Engineering and Technology, Hyderabad
('40221166', 'Chandra Sekhar Reddy', 'chandra sekhar reddy')
0c53ef79bb8e5ba4e6a8ebad6d453ecf3672926dSUBMITTED TO JOURNAL
Weakly Supervised PatchNets: Describing and
Aggregating Local Patches for Scene Recognition
('40184588', 'Zhe Wang', 'zhe wang')
('39709927', 'Limin Wang', 'limin wang')
('40457196', 'Yali Wang', 'yali wang')
('3047890', 'Bowen Zhang', 'bowen zhang')
('40285012', 'Yu Qiao', 'yu qiao')
0c60eebe10b56dbffe66bb3812793dd514865935
0c05f60998628884a9ac60116453f1a91bcd9ddaOptimizing Open-Ended Crowdsourcing: The Next Frontier in
Crowdsourced Data Management
University of Illinois
Stanford University
('32953042', 'Akash Das Sarma', 'akash das sarma')
('8336538', 'Vipul Venkataraman', 'vipul venkataraman')
6601a0906e503a6221d2e0f2ca8c3f544a4adab7
Detection of Ancient Settlement Mounds:
Archaeological Survey Based on the
SRTM Terrain Model
B.H. Menze, J.A. Ur, and A.G. Sherratt
660b73b0f39d4e644bf13a1745d6ee74424d4a16Chapter from the book Reviews, Refinements and New Ideas in Face Recognition, published by InTech Open: http://www.intechopen.com/books/reviews-refinements-and-new-ideas-in-face-recognition
66d512342355fb77a4450decc89977efe7e55fa2Under review as a conference paper at ICLR 2018
LEARNING NON-LINEAR TRANSFORM WITH DISCRIM-
INATIVE AND MINIMUM INFORMATION LOSS PRIORS
Anonymous authors
Paper under double-blind review
66aad5b42b7dda077a492e5b2c7837a2a808c2faA Novel PCA-Based Bayes Classifier
and Face Analysis
1 Centre de Visió per Computador,
Universitat Autònoma de Barcelona, Barcelona, Spain
2 Department of Computer Science,
Nanjing University of Science and Technology
Nanjing, People's Republic of China
3 HEUDIASYC - CNRS Mixed Research Unit,
Compiègne University of Technology
60205 Compiègne cedex, France
('1761329', 'Zhong Jin', 'zhong jin')
('1742818', 'Franck Davoine', 'franck davoine')
('35428318', 'Zhen Lou', 'zhen lou')
zhong.jin@cvc.uab.es
jyyang@mail.njust.edu.cn
franck.davoine@hds.utc.fr
66b9d954dd8204c3a970d86d91dd4ea0eb12db47Evaluation of Gabor-Wavelet-Based Facial Action Unit Recognition
in Image Sequences of Increasing Complexity
IBM T. J. Watson Research Center, PO Box 704, Yorktown Heights, NY
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA
University of Pittsburgh, Pittsburgh, PA
('40383812', 'Ying-li Tian', 'ying-li tian')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
Email: yltian@us.ibm.com,
tk@cs.cmu.edu
jeffcohn@pitt.edu
6643a7feebd0479916d94fb9186e403a4e5f7cbfChapter 8
3D Face Recognition
('1737428', 'Nick Pears', 'nick pears')
661ca4bbb49bb496f56311e9d4263dfac8eb96e9Datasheets for Datasets ('2076288', 'Timnit Gebru', 'timnit gebru')
('1722360', 'Hal Daumé', 'hal daumé')
66dcd855a6772d2731b45cfdd75f084327b055c2Quality Classified Image Analysis with Application
to Face Detection and Recognition
International Doctoral Innovation Centre
University of Nottingham Ningbo China
School of Computer Science
University of Nottingham Ningbo China
College of Information Engineering
Shenzhen University, Shenzhen, China
('1684164', 'Fei Yang', 'fei yang')
('1737486', 'Qian Zhang', 'qian zhang')
('2155597', 'Miaohui Wang', 'miaohui wang')
('1698461', 'Guoping Qiu', 'guoping qiu')
666939690c564641b864eed0d60a410b31e49f80What Visual Attributes Characterize an Object Class ?
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of
Sciences, No.95, Zhongguancun East Road, Beijing, 100190, China
2Microsoft Research, No.5, Dan Ling Street, Haidian District, Beijing 10080, China
('3247966', 'Jianlong Fu', 'jianlong fu')
('1783122', 'Jinqiao Wang', 'jinqiao wang')
('3349534', 'Xin-Jing Wang', 'xin-jing wang')
('3663422', 'Yong Rui', 'yong rui')
('1694235', 'Hanqing Lu', 'hanqing lu')
1{jlfu, jqwang, luhq}@nlpr.ia.ac.cn, 2{xjwang, yongrui}@microsoft.com
66330846a03dcc10f36b6db9adf3b4d32e7a3127Polylingual Multimodal Learning
Institute AIFB, Karlsruhe Institute of Technology, Germany
('3219864', 'Aditya Mogadala', 'aditya mogadala'){aditya.mogadala}@kit.edu
66d087f3dd2e19ffe340c26ef17efe0062a59290Dog Breed Identification
Brian Mittl
Vijay Singh
wlarow@stanford.edu
bmittl@stanford.edu
vpsingh@stanford.edu
6618cff7f2ed440a0d2fa9e74ad5469df5cdbe4cOrdinal Regression with Multiple Output CNN for Age Estimation
1Xidian University, 2Xi'an Jiaotong University, 3Microsoft Research Asia
('1786361', 'Zhenxing Niu', 'zhenxing niu')
('1745420', 'Gang Hua', 'gang hua')
('10699750', 'Xinbo Gao', 'xinbo gao')
('36497527', 'Mo Zhou', 'mo zhou')
('40367806', 'Le Wang', 'le wang')
{zhenxingniu,cdluminate}@gmail.com, lewang@mail.xjtu.edu.cn, xinbogao@mail.xidian.edu.cn
ganghua@gmail.com
666300af8ffb8c903223f32f1fcc5c4674e2430bChanging Fashion Cultures
National Institute of Advanced Industrial Science and Technology (AIST)
Tsukuba, Ibaraki, Japan
Tokyo Denki University
Adachi, Tokyo, Japan
('3408038', 'Kaori Abe', 'kaori abe')
('5014206', 'Teppei Suzuki', 'teppei suzuki')
('9935341', 'Shunya Ueta', 'shunya ueta')
('1732705', 'Yutaka Satoh', 'yutaka satoh')
('1730200', 'Hirokatsu Kataoka', 'hirokatsu kataoka')
('2462801', 'Akio Nakamura', 'akio nakamura')
{abe.keroko, suzuki-teppei, shunya.ueta, yu.satou, hirokatsu.kataoka}@aist.go.jp
nkmr-a@cck.dendai.ac.jp
66029f1be1a5cee9a4e3e24ed8fcb65d5d293720HWANG AND GRAUMAN: ACCOUNTING FOR IMPORTANCE IN IMAGE RETRIEVAL
Accounting for the Relative Importance of
Objects in Image Retrieval
The University of Texas
Austin, TX, USA
('35788904', 'Sung Ju Hwang', 'sung ju hwang')
('1794409', 'Kristen Grauman', 'kristen grauman')
sjhwang@cs.utexas.edu
grauman@cs.utexas.edu
6691dfa1a83a04fdc0177d8d70e3df79f606b10fIllumination Modeling and Normalization for Face Recognition
Institute of Automation
Chinese Academy of Sciences
Beijing, 100080, China
('29948255', 'Haitao Wang', 'haitao wang')
('34679741', 'Stan Z. Li', 'stan z. li')
('1744302', 'Yangsheng Wang', 'yangsheng wang')
('38248052', 'Weiwei Zhang', 'weiwei zhang')
{htwang, wys, wwzhang}@nlpr.ia.ac.cn
66a2c229ac82e38f1b7c77a786d8cf0d7e369598Proceedings of the 2016 Industrial and Systems Engineering Research Conference
H. Yang, Z. Kong, and MD Sarder, eds.
A Probabilistic Adaptive Search System
for Exploring the Face Space
Escuela Superior Politecnica del Litoral (ESPOL)
Guayaquil-Ecuador
('3123974', 'Andres G. Abad', 'andres g. abad')
('3044670', 'Luis I. Reyes Castro', 'luis i. reyes castro')
66886997988358847615375ba7d6e9eb0f1bb27f
66837add89caffd9c91430820f49adb5d3f40930
66a9935e958a779a3a2267c85ecb69fbbb75b8dcFAST AND ROBUST FIXED-RANK MATRIX RECOVERY
Fast and Robust Fixed-Rank Matrix
Recovery
Antonio Lopez
('34210410', 'Julio Guerrero', 'julio guerrero')
66533107f9abdc7d1cb8f8795025fc7e78eb1122Visual Servoing for a User's Mouth with Effective
Intention Reading in a Wheelchair-based Robotic Arm
Won-Kyung Song, Dae-Jin Kim, Jong-Sung Kim, and Zeungnam Bien
Dept. of EECS, KAIST, 373-1 Kusong-Dong, Yusong-Gu, Taejon 305-701, KOREA
VR Center, ETRI, 161 Kajong-Dong, Yusong-Gu, Taejon 305-350, KOREA
Abstract
There exists a cooperative activity between a human being and
rehabilitation robots, because the human operates rehabilitation
robots in the same environment and has the benefit of rehabilitation
robots such as manipulatory or mobile functions. Intention reading
is one of the essential functions of human-friendly rehabilitation
robots in order to improve the comfort and safety of a person who
needs them. First of all, the overall structure of a new
wheelchair-based robotic arm system (KARES II) and its human-robot
interaction technologies are presented. Among the technologies, we
concentrate on visual servoing that allows this robotic arm to
operate autonomously via visual feedback. Effective intention
reading, such as recognizing the positive and negative meaning of
the user, is performed on the basis of changes of the facial
expression around the lips, since it is strongly related to the
user's intention while this robotic arm provides the user with a
beverage. For efficient visual information processing, log-polar
mapped images are used to control the stereo camera head that is
located in the end-effector of the robotic arm. The visual servoing
with effective intention reading is successfully applied to serve a
beverage for the user.
Introduction
Wheelchair-based robotic systems are mainly used to assist the
elderly and the disabled who have handicaps in sensory and motor
functions in limbs. Such a system consists of a powered wheelchair
and a robotic arm, and has not only a mobile capability through the
wheelchair but also a manipulatory function via the robotic arm,
and thus makes possible the co-existence of a user and a robot in
the same environment. In this case, the user needs to interact with
the robotic arm in a comfortable and safe way.
Figure 1: The wheelchair-based robotic arm and its
human-robot interaction technologies.
However, it has been reported that many difficulties exist in
human-robot interaction in existing rehabilitation robots. For
example, manual control of the robotic arm imposes a high cognitive
load on the user, while physically disabled users may have
difficulties in operating a joystick dexterously or pushing buttons
for delicate movements [4]. In addition, MANUS evaluation results
reported that the most difficult things about using rehabilitation
robots are too many commands for manual adjustment and too many
functions to keep in mind at the beginning [4]. Therefore,
human-friendly human-robot interaction is one of the essential
techniques in a wheelchair-based robotic arm.
In this paper, we consider the wheelchair-based robotic system
KARES II (KAIST Rehabilitation Engineering Service system II),
which we are developing as a service robotic system for the
disabled and the elderly, and discuss its human-robot interaction
techniques (Fig. 1). Among the human-robot interaction techniques,
visual servoing is dealt with as a major topic.
zbien@ee.kaist.ac.kr
66810438bfb52367e3f6f62c24f5bc127cf92e56Face Recognition of Illumination Tolerance in 2D
Subspace Based on the Optimum Correlation
Filter
Xu Yi
Department of Information Engineering, Hunan Industry Polytechnic, Changsha, China
66af2afd4c598c2841dbfd1053bf0c386579234eNoname manuscript No.
(will be inserted by the editor)
Context Assisted Face Clustering Framework with
Human-in-the-Loop
Received: date / Accepted: date
('3338094', 'Liyan Zhang', 'liyan zhang')
('1686199', 'Sharad Mehrotra', 'sharad mehrotra')
66f02fbcad13c6ee5b421be2fc72485aaaf6fcb5The AAAI-17 Workshop on
Human-Aware Artificial Intelligence
WS-17-10
Using Co-Captured Face, Gaze and Verbal Reactions to Images of
Varying Emotional Content for Analysis and Semantic Alignment
Muhlenberg College
Rochester Institute of Technology
Rochester Institute of Technology
('40114708', 'Trevor Walden', 'trevor walden')
('2459642', 'Preethi Vaidyanathan', 'preethi vaidyanathan')
('37459359', 'Reynold Bailey', 'reynold bailey')
('1695716', 'Cecilia O. Alm', 'cecilia o. alm')
ag249083@muhlenberg.edu
tjw5866@rit.edu
{pxv1621, emilypx, rjbvcs, coagla}@rit.edu
66e9fb4c2860eb4a15f713096020962553696e12A New Urban Objects Detection Framework
Using Weakly Annotated Sets
University of São Paulo - USP, São Paulo - Brazil
New York University
('40014199', 'Claudio Silva', 'claudio silva')
('1748049', 'Roberto M. Cesar', 'roberto m. cesar')
{keiji, gabriel.augusto.ferreira, rmcesar}@usp.br
csilva@nyu.edu
66e6f08873325d37e0ec20a4769ce881e04e964eInt J Comput Vis (2014) 108:59–81
DOI 10.1007/s11263-013-0695-z
The SUN Attribute Database: Beyond Categories for Deeper Scene
Understanding
Received: 27 February 2013 / Accepted: 28 December 2013 / Published online: 18 January 2014
© Springer Science+Business Media New York 2014
('40541456', 'Genevieve Patterson', 'genevieve patterson')
('12532254', 'James Hays', 'james hays')
661da40b838806a7effcb42d63a9624fcd684976
An Illumination Invariant Accurate
Face Recognition with Down Scaling
of DCT Coefficients
Department of Computer Science and Engineering, Amity School of Engineering and Technology, New Delhi, India
In this paper, a novel approach for illumination normal-
ization under varying lighting conditions is presented.
Our approach utilizes the fact that discrete cosine trans-
form (DCT) low-frequency coefficients correspond to
illumination variations in a digital image. Under varying
illuminations, the images captured may have low con-
trast; initially we apply histogram equalization on these
for contrast stretching. Then the low-frequency DCT
coefficients are scaled down to compensate for the
illumination variations. The value of the scaling-down factor and
the number of low-frequency DCT coefficients, which
are to be rescaled, are obtained experimentally. The
classification is done using k-nearest neighbor
classification and nearest mean classification on the images
obtained by inverse DCT on the processed coefficients.
The correlation coefficient and Euclidean distance ob-
tained using principal component analysis are used as
distance metrics in classification. We have tested our
face recognition method using Yale Face Database B.
The results show that our method performs without any
error (100% face recognition performance), even on the
most extreme illumination variations. There are different
schemes in the literature for illumination normalization
under varying lighting conditions, but none is claimed
to give a 100% recognition rate under all illumination
variations for this database. The proposed technique is
computationally efficient and can easily be implemented
in a real-time face recognition system.
Keywords: discrete cosine transform, correlation
coefficient, face recognition, illumination normalization,
nearest neighbor classification
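The pipeline described above (histogram equalization, 2-D DCT, scaling down of the low-frequency coefficients, inverse DCT) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the triangular low-frequency region, the choice to leave the DC coefficient untouched, and the default values of `num_low` and `scale` are all assumptions here; the paper determines the number of rescaled coefficients and the scaling-down factor experimentally.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: rows are cosine basis vectors, so
    # C @ x is the 1-D DCT of x and C.T @ X is the inverse DCT.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2.0 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def normalize_illumination(img, num_low=3, scale=0.1):
    # 2-D DCT of the (already histogram-equalized) image.
    n, m = img.shape
    Cr, Cc = dct_matrix(n), dct_matrix(m)
    D = Cr @ img @ Cc.T
    # Scale down a triangular block of low-frequency coefficients,
    # keeping the DC term so mean brightness is preserved (assumption).
    for u in range(num_low):
        for v in range(num_low - u):
            if (u, v) != (0, 0):
                D[u, v] *= scale
    # Inverse 2-D DCT reconstructs the normalized image.
    return Cr.T @ D @ Cc
```

The reconstructed images would then be fed to the k-nearest neighbor or nearest mean classifiers described above.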
1. Introduction
Two-dimensional pattern classification plays a
crucial role in real-world applications. To build
high-performance surveillance or information
security systems, face recognition has been
known as a key application, attracting enormous
research attention on related topics [1,2].
Even though current machine recognition
systems have reached a certain level of maturity,
their success is limited by real-world application
constraints, such as pose, illumination and
expression. The FERET evaluation shows that
the performance of a face recognition system
declines seriously with changes in pose and
illumination conditions [31].
To solve the variable illumination problem a
variety of approaches have been proposed [3, 7-
11, 26-29]. Early work in illumination invariant
face recognition focused on image representa-
tions that are mostly insensitive to changes in
illumination. There were approaches in which
the image representations and distance mea-
sures were evaluated on a tightly controlled face
database that varied the face pose, illumination,
and expression. The image representations in-
clude edge maps, 2D Gabor-like filters, first and
second derivatives of the gray-level image, and
the logarithmic transformations of the intensity
image along with these representations [4].
The different approaches to solving the problem
of illumination invariant face recognition can
be broadly classified into two main categories.
The first category, named the passive approach,
analyzes visual spectrum images to overcome
this problem. The approaches belonging to the
other category, named active, attempt to
overcome this problem by employing active
imaging techniques to obtain face images
captured in consistent illumination conditions,
or images of illumination invariant modalities.
There is a hierarchical categorization of these
two approaches. An extensive review of both
approaches is given in [5].
('2650871', 'Virendra P. Vishwakarma', 'virendra p. vishwakarma')
('2100294', 'Sujata Pandey', 'sujata pandey')
('11690561', 'M. N. Gupta', 'm. n. gupta')
66886f5af67b22d14177119520bd9c9f39cdd2e6T. KOBAYASHI: LEARNING ADDITIVE KERNEL
Learning Additive Kernel For Feature
Transformation and Its Application to CNN
Features
National Institute of Advanced Industrial
Science and Technology
Tsukuba, Japan
('1800592', 'Takumi Kobayashi', 'takumi kobayashi')takumi.kobayashi@aist.go.jp
3edb0fa2d6b0f1984e8e2c523c558cb026b2a983Automatic Age Estimation Based on
Facial Aging Patterns
('1735299', 'Xin Geng', 'xin geng')
('1692625', 'Zhi-Hua Zhou', 'zhi-hua zhou')
('2848275', 'Kate Smith-Miles', 'kate smith-miles')
3e69ed088f588f6ecb30969bc6e4dbfacb35133eACEEE Int. J. on Information Technology, Vol. 01, No. 02, Sep 2011
Improving Performance of Texture Based Face
Recognition Systems by Segmenting Face Region
St. Xavier s Catholic College of Engineering, Nagercoil, India
Manonmaniam Sundaranar University, Tirunelveli, India
('9375880', 'R. Reena Rose', 'r. reena rose')
('3311251', 'A. Suruliandi', 'a. suruliandi')
mailtoreenarose@yahoo.in
suruliandi@yahoo.com
3e0a1884448bfd7f416c6a45dfcdfc9f2e617268Understanding and Controlling User Linkability in
Decentralized Learning
Max Planck Institute for Informatics
Saarland Informatics Campus
Saarbrücken, Germany
('9517443', 'Tribhuvanesh Orekondy', 'tribhuvanesh orekondy')
('2390510', 'Seong Joon Oh', 'seong joon oh')
('1697100', 'Bernt Schiele', 'bernt schiele')
{orekondy,joon,schiele,mfritz}@mpi-inf.mpg.de
3e4b38b0574e740dcbd8f8c5dfe05dbfb2a92c07FACIAL EXPRESSION RECOGNITION WITH LOCAL BINARY PATTERNS
AND LINEAR PROGRAMMING
Xiaoyi Feng1, 2, Matti Pietikäinen1, Abdenour Hadid1
1 Machine Vision Group, Infotech Oulu and Dept. of Electrical and Information Engineering
P. O. Box 4500 Fin-90014 University of Oulu, Finland
College of Electronics and Information, Northwestern Polytechnic University
710072 Xi’an, China
In this work, we propose a novel approach to recognize facial expressions from static
images. First, the Local Binary Patterns (LBP) are used to efficiently represent the facial
images and then the Linear Programming (LP) technique is adopted to classify the seven
facial expressions anger, disgust, fear, happiness, sadness, surprise and neutral.
Experimental results demonstrate an average recognition accuracy of 93.8% on the JAFFE
database, which outperforms the rates of all other reported methods on the same database.
Introduction
Facial expression recognition from static images is a more
challenging problem than recognition from image sequences,
because less information about expression actions is available.
However, the information in a single image is sometimes enough
for expression recognition, and in many applications it is also
useful to recognize the facial expression in a single image.
In recent years, numerous approaches to facial expression
analysis from static images have been proposed [1][2]. These
methods differ generally in face representation and similarity
measure. For instance, Zhang [3] used two types of features:
the geometric positions of 34 manually selected fiducial points
and a set of Gabor wavelet coefficients at these points. These
two types of features were used both independently and jointly
with a multi-layer perceptron for classification. Guo and
Dyer [4] also adopted a similar face representation, combined
with a linear programming technique to carry out simultaneous
feature selection and classifier training, and they reported a
better result. Lyons et al. used a similar face representation
with a simple LDA-based classification scheme [5]. All the
above methods required the manual selection of fiducial points.
Buciu et al. used ICA and Gabor representations for facial
expression recognition and reported good results on the same
database [6]. However, a suitable combination of feature
extraction and classification is still an imperative question
for expression recognition.
In this paper, we propose a novel method for facial
expression recognition. In the feature extraction step,
the Local Binary Pattern (LBP) operator is used to
describe facial expressions. In the classification step,
seven expressions (anger, disgust, fear, happiness,
sadness, surprise and neutral) are decomposed into 21
expression pairs such as anger-fear, happiness-
sadness etc. 21 classifiers are produced by the Linear
Programming (LP) technique, each corresponding to
one of the 21 expression pairs. A simple binary tree
tournament scheme with pairwise comparisons is
used for classifying unknown expressions.
Face Representation with Local Binary Patterns
Fig.1 shows the basic LBP operator [7], in which the
original 3×3 neighbourhood at the left is thresholded
by the value of the centre pixel, and a binary pattern
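The thresholding operation described above can be sketched as follows. This is an illustrative sketch rather than the paper's code: the clockwise bit ordering starting at the top-left neighbour, the `>=` comparison, and the function name are assumed conventions.

```python
import numpy as np

def lbp_code(patch):
    # Basic LBP operator on a 3x3 patch: threshold the 8 neighbours
    # by the centre pixel and read the results as an 8-bit number.
    c = patch[1, 1]
    # Neighbours in clockwise order starting at the top-left corner.
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[r, q] >= c else 0 for r, q in coords]
    # Most significant bit first.
    return sum(b << (7 - i) for i, b in enumerate(bits))
```

Histograms of these 8-bit codes over the face (or its sub-regions) can then form the feature vector passed to the pairwise LP classifiers.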
{xiaoyi,mkp,hadid}@ee.oulu.fi
fengxiao@nwpu.edu.cn
3ee7a8107a805370b296a53e355d111118e96b7c
3ebce6710135d1f9b652815e59323858a7c60025Component-based Face Detection
Center for Biological and Computational Learning, M.I.T., Cambridge, MA, USA
Honda R&D Americas, Inc., Boston, MA, USA
University of Siena, Siena, Italy
('1684626', 'Bernd Heisele', 'bernd heisele'){heisele, serre, tp}@ai.mit.edu, pontil@dii.unisi.it
3e4acf3f2d112fc6516abcdddbe9e17d839f5d9bDeep Value Networks Learn to
Evaluate and Iteratively Refine Structured Outputs
('3037160', 'Michael Gygli', 'michael gygli')
3e3f305dac4fbb813e60ac778d6929012b4b745aFeature sampling and partitioning for visual vocabulary
generation on large action classification datasets.
Oxford Brookes University
University of Oxford
('3019396', 'Michael Sapienza', 'michael sapienza')
('1754181', 'Fabio Cuzzolin', 'fabio cuzzolin')
3ea8a6dc79d79319f7ad90d663558c664cf298d4('40253814', 'IRA COHEN', 'ira cohen')
3e4f84ce00027723bdfdb21156c9003168bc1c80
© EURASIP, 2011 - ISSN 2076-1465
19th European Signal Processing Conference (EUSIPCO 2011)
INTRODUCTION
3e04feb0b6392f94554f6d18e24fadba1a28b65f
Subspace Image Representation for Facial
Expression Analysis and Face Recognition
and its Relation to the Human Visual System
Aristotle University of Thessaloniki GR
Thessaloniki, Box 451, Greece.
2 Electronics Department, Faculty of Electrical Engineering and Information
Technology, University of Oradea 410087, Universitatii 1, Romania
Summary. Two main theories exist with respect to face encoding and representa-
tion in the human visual system (HVS). The first one refers to the dense (holistic)
representation of the face, where faces have “holon”-like appearance. The second one
claims that a more appropriate face representation is given by a sparse code, where
only a small fraction of the neural cells corresponding to face encoding is activated.
Theoretical and experimental evidence suggest that the HVS performs face analysis
(encoding, storing, face recognition, facial expression recognition) in a structured
and hierarchical way, where both representations have their own contribution and
goal. According to neuropsychological experiments, it seems that encoding for face
recognition, relies on holistic image representation, while a sparse image represen-
tation is used for facial expression analysis and classification. From the computer
vision perspective, the techniques developed for automatic face and facial expres-
sion recognition fall into the same two representation types. Like in Neuroscience,
the techniques which perform better for face recognition yield a holistic image rep-
resentation, while those techniques suitable for facial expression recognition use a
sparse or local image representation. The proposed mathematical models of image
formation and encoding try to simulate the efficient storing, organization and coding
of data in the human cortex. This is equivalent with embedding constraints in the
model design regarding dimensionality reduction, redundant information minimiza-
tion, mutual information minimization, non-negativity constraints, class informa-
tion, etc. The presented techniques are applied as a feature extraction step followed
by a classification method, which also heavily influences the recognition results.
Key words: Human Visual System; Dense, Sparse and Local Image Repre-
sentation and Encoding, Face and Facial Expression Analysis and Recogni-
tion.
R.P. Würtz (ed.), Organic Computing. Understanding Complex Systems,
doi: 10.1007/978-3-540-77657-4 14, © Springer-Verlag Berlin Heidelberg 2008
('2336758', 'Ioan Buciu', 'ioan buciu')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
pitas@zeus.csd.auth.gr
ibuciu@uoradea.ro
3e685704b140180d48142d1727080d2fb9e52163Single Image Action Recognition by Predicting
Space-Time Saliency
('32998919', 'Marjaneh Safaei', 'marjaneh safaei')
('1691260', 'Hassan Foroosh', 'hassan foroosh')
3e51d634faacf58e7903750f17111d0d172a0bf1A COMPRESSIBLE TEMPLATE PROTECTION SCHEME
FOR FACE RECOGNITION BASED ON SPARSE REPRESENTATION
Tokyo Metropolitan University
6–6 Asahigaoka, Hino-shi, Tokyo 191–0065, Japan
† NTT Network Innovation Laboratories, Japan
('32403098', 'Yuichi Muraki', 'yuichi muraki')
('11129971', 'Masakazu Furukawa', 'masakazu furukawa')
('1728060', 'Masaaki Fujiyoshi', 'masaaki fujiyoshi')
('34638424', 'Yoshihide Tonomura', 'yoshihide tonomura')
('1737217', 'Hitoshi Kiya', 'hitoshi kiya')
3e40991ab1daa2a4906eb85a5d6a01a958b6e674LIPNET: END-TO-END SENTENCE-LEVEL LIPREADING
University of Oxford, Oxford, UK
Google DeepMind, London, UK 2
CIFAR, Canada 3
{yannis.assael,brendan.shillingford,
('3365565', 'Yannis M. Assael', 'yannis m. assael')
('3144580', 'Brendan Shillingford', 'brendan shillingford')
('1766767', 'Shimon Whiteson', 'shimon whiteson')
shimon.whiteson,nando.de.freitas}@cs.ox.ac.uk
3e687d5ace90c407186602de1a7727167461194aPhoto Tagging by Collection-Aware People Recognition
UFF
UFF
Asla Sá
FGV
IMPA
('2901520', 'Cristina Nader Vasconcelos', 'cristina nader vasconcelos')
('19264449', 'Vinicius Jardim', 'vinicius jardim')
('1746637', 'Paulo Cezar Carvalho', 'paulo cezar carvalho')
crisnv@ic.uff.br
vinicius@id.uff.br
asla.sa@fgv.br
pcezar@impa.br
3e3a87eb24628ab075a3d2bde3abfd185591aa4cEffects of sparseness and randomness of
pairwise distance matrix on t-SNE results
BECS, Aalto University, Helsinki, Finland
('32430508', 'Eli Parviainen', 'eli parviainen')
3e207c05f438a8cef7dd30b62d9e2c997ddc0d3fObjects as context for detecting their semantic parts
University of Edinburgh
('20758701', 'Abel Gonzalez-Garcia', 'abel gonzalez-garcia')
('1996209', 'Davide Modolo', 'davide modolo')
('1749692', 'Vittorio Ferrari', 'vittorio ferrari')
a.gonzalez-garcia@sms.ed.ac.uk
davide.modolo@gmail.com
vferrari@staffmail.ed.ac.uk
5040f7f261872a30eec88788f98326395a44db03PAPAMAKARIOS, PANAGAKIS, ZAFEIRIOU: GENERALISED SCALABLE ROBUST PCA
Generalised Scalable Robust Principal
Component Analysis
Department of Computing
Imperial College London
London, UK
('2369138', 'Georgios Papamakarios', 'georgios papamakarios')
('1780393', 'Yannis Panagakis', 'yannis panagakis')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
georgios.papamakarios13@imperial.ac.uk
i.panagakis@imperial.ac.uk
s.zafeiriou@imperial.ac.uk
50f0c495a214b8d57892d43110728e54e413d47dSubmitted 8/11; Revised 3/12; Published 8/12
Pairwise Support Vector Machines and their Application to Large
Scale Problems
Institute for Numerical Mathematics
Technische Universität Dresden
01062 Dresden, Germany
Cognitec Systems GmbH
Grossenhainer Str. 101
01127 Dresden, Germany
Editor: Corinna Cortes
('25796572', 'Carl Brunner', 'carl brunner')
('1833903', 'Andreas Fischer', 'andreas fischer')
('2201239', 'Klaus Luig', 'klaus luig')
('2439730', 'Thorsten Thies', 'thorsten thies')
C.BRUNNER@GMX.NET
ANDREAS.FISCHER@TU-DRESDEN.DE
LUIG@COGNITEC.COM
THIES@COGNITEC.COM
501096cca4d0b3d1ef407844642e39cd2ff86b37Illumination Invariant Face Image
Representation using Quaternions
Dayron Rizo-Rodríguez, Heydi Méndez-Vázquez, and Edel García-Reyes
Advanced Technologies Application Center. 7a # 21812 b/ 218 and 222,
Rpto. Siboney, Playa, P.C. 12200, La Habana, Cuba.
{drizo,hmendez,egarcia}@cenatav.co.cu
500fbe18afd44312738cab91b4689c12b4e0eeeeChaLearn Looking at People 2015 new competitions:
Age Estimation and Cultural Event Recognition
University of Barcelona
Computer Vision Center, UAB
Jordi Gonzàlez
Xavier Baró
Univ. Autònoma de Barcelona
Computer Vision Center, UAB
Universitat Oberta de Catalunya
Computer Vision Center, UAB
University of Barcelona
Univ. Autònoma de Barcelona
Computer Vision Center, UAB
University of Barcelona
Computer Vision Center, UAB
INAOE
Ivan Huerta
University of Venezia
Clopinet, Berkeley
('7855312', 'Sergio Escalera', 'sergio escalera')
('40378482', 'Pablo Pardo', 'pablo pardo')
('37811966', 'Junior Fabian', 'junior fabian')
('3305641', 'Marc Oliu', 'marc oliu')
('1742688', 'Hugo Jair Escalante', 'hugo jair escalante')
('1743797', 'Isabelle Guyon', 'isabelle guyon')
Email: sergio@maia.ub.es
Email: ppardoga7@gmail.com
Email: poal@cvc.uab.es
Email: xbaro@uoc.edu
Email: jfabian@cvc.uab.es
Email: moliusimon@gmail.com
Email: hugo.jair@gmail.com
Email: huertacasado@iuav.it
Email: guyon@chalearn.org
501eda2d04b1db717b7834800d74dacb7df58f91('3846862', 'Pedro Miguel Neves Marques', 'pedro miguel neves marques')
5083c6be0f8c85815ead5368882b584e4dfab4d1 Please do not quote. In press, Handbook of affective computing. New York, NY: Oxford
Automated Face Analysis for Affective Computing
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
506c2fbfa9d16037d50d650547ad3366bb1e1cdeConvolutional Channel Features: Tailoring CNN to Diverse Tasks
Junjie Yan
Zhen Lei
Center for Biometrics and Security Research & National Laboratory of Pattern Recognition
Institute of Automation, Chinese Academy of Sciences, China
('1716231', 'Bin Yang', 'bin yang')
('34679741', 'Stan Z. Li', 'stan z. li')
{zlei, szli}@nlpr.ia.ac.cn
{yb.derek, yanjjie}@gmail.com
500b92578e4deff98ce20e6017124e6d2053b451
504028218290d68859f45ec686f435f473aa326cMulti-Fiber Networks for Video Recognition
National University of Singapore
2 Facebook Research
Qihoo 360 AI Institute
('1713312', 'Yunpeng Chen', 'yunpeng chen')
('1944225', 'Yannis Kalantidis', 'yannis kalantidis')
('2757639', 'Jianshu Li', 'jianshu li')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('33221685', 'Jiashi Feng', 'jiashi feng')
{chenyunpeng, jianshu}@u.nus.edu, yannisk@fb.com,
{eleyans, elefjia}@nus.edu.sg
5058a7ec68c32984c33f357ebaee96c59e269425A Comparative Evaluation of Regression Learning
Algorithms for Facial Age Estimation
1 Herta Security
Pau Claris 165 4-B, 08037 Barcelona, Spain
DPDCE, University IUAV
Santa Croce 1957, 30135 Venice, Italy
('1733945', 'Andrea Prati', 'andrea prati')carles.fernandez@hertasecurity.com
huertacasado@iuav.it, aprati@iuav.it
50ff21e595e0ebe51ae808a2da3b7940549f4035IEEE TRANSACTIONS ON LATEX CLASS FILES, VOL. XX, NO. X, AUGUST 2017
Age Group and Gender Estimation in the Wild with
Deep RoR Architecture
('32164792', 'Ke Zhang', 'ke zhang')
('35038034', 'Ce Gao', 'ce gao')
('3451321', 'Liru Guo', 'liru guo')
('2598874', 'Miao Sun', 'miao sun')
('3451660', 'Xingfang Yuan', 'xingfang yuan')
('3244463', 'Tony X. Han', 'tony x. han')
('2626320', 'Zhenbing Zhao', 'zhenbing zhao')
('2047712', 'Baogang Li', 'baogang li')
5042b358705e8d8e8b0655d07f751be6a1565482International Journal of
Emerging Research in Management & Technology
ISSN: 2278-9359 (Volume-4, Issue-8)
Research Article
August
2015
Review on Emotion Detection in Image
CSE & PCET, PTU, Punjab, India
HOD, CSE & PCET, PTU, Punjab, India
50e47857b11bfd3d420f6eafb155199f4b41f6d7International Journal of Computer, Consumer and Control (IJ3C), Vol. 2, No.1 (2013)
3D Human Face Reconstruction Using a Hybrid of Photometric
Stereo and Independent Component Analysis
('1734467', 'Cheng-Jian Lin', 'cheng-jian lin')
('3318507', 'Shyi-Shiun Kuo', 'shyi-shiun kuo')
('18305737', 'Hsueh-Yi Lin', 'hsueh-yi lin')
('2911354', 'Cheng-Yi Yu', 'cheng-yi yu')
50eb75dfece76ed9119ec543e04386dfc95dfd13Learning Visual Entities and their Visual Attributes from Text Corpora
Dept. of Computer Science
K.U.Leuven, Belgium
Dept. of Computer Science
K.U.Leuven, Belgium
Dept. of Computer Science
K.U.Leuven, Belgium
('2955093', 'Erik Boiy', 'erik boiy')
('1797588', 'Koen Deschacht', 'koen deschacht')
('1802161', 'Marie-Francine Moens', 'marie-francine moens')
erik.boiy@cs.kuleuven.be
koen.deschacht@cs.kuleuven.be
sien.moens@cs.kuleuven.be
5050807e90a925120cbc3a9cd13431b98965f4b9To appear in the ECCV Workshop on Parts and Attributes, Oct. 2012.
Unsupervised Learning of Discriminative
Relative Visual Attributes
Boston University
Hacettepe University
('2863531', 'Shugao Ma', 'shugao ma')
('2011587', 'Nazli Ikizler-Cinbis', 'nazli ikizler-cinbis')
50a0930cb8cc353e15a5cb4d2f41b365675b5ebf
508702ed2bf7d1b0655ea7857dd8e52d6537e765ZUO, ORGANISCIAK, SHUM, YANG: SST-VLAD AND SST-FV FOR VAR
Saliency-Informed Spatio-Temporal Vector
of Locally Aggregated Descriptors and
Fisher Vectors for Visual Action Recognition
Department of Computer and
Information Sciences
Northumbria University
Newcastle upon Tyne, NE1 8ST, UK
('40760781', 'Zheming Zuo', 'zheming zuo')
('34975328', 'Daniel Organisciak', 'daniel organisciak')
('2840036', 'Hubert P. H. Shum', 'hubert p. h. shum')
('1706028', 'Longzhi Yang', 'longzhi yang')
zheming.zuo@northumbria.ac.uk
daniel.organisciak@northumbria.ac.uk
hubert.shum@northumbria.ac.uk
longzhi.yang@northumbria.ac.uk
50eb2ee977f0f53ab4b39edc4be6b760a2b05f96Australian Journal of Basic and Applied Sciences, 11(5) April 2017, Pages: 1-11
AUSTRALIAN JOURNAL OF BASIC AND
APPLIED SCIENCES
ISSN:1991-8178 EISSN: 2309-8414
Journal home page: www.ajbasweb.com
Emotion Recognition Based on Texture Analysis of Facial Expressions
Using Wavelets Transform
1Suhaila N. Mohammed and 2Loay E. George
Assistant Lecturer, College of Science, Baghdad University, Baghdad, Iraq
College of Science, Baghdad University, Baghdad, Iraq
Address For Correspondence:
Suhaila N. Mohammed, Baghdad University, College of Science, Baghdad, Iraq
ARTICLE INFO
Article history:
Received 18 January 2017
Accepted 28 March 2017
Available online 15 April 2017
Keywords:
Facial Emotion, Face Detection,
Template Based Methods, Texture
Based Features, Haar Wavelets
Transform, Image Blocking, Neural
Network.
ABSTRACT
Background: Interest in developing accurate automatic facial emotion recognition
methodologies is growing rapidly, and the topic remains an active research field in
computer vision, artificial intelligence, and automation. Automatic emotion detection
systems are in demand in various fields such as medicine, education, driver safety, and
games. Despite the importance of this issue, it remains an unsolved problem.
Objective: In this paper, a facial-based emotion recognition system is introduced. A
template-based method is used for face region extraction, exploiting human knowledge
about face components and their symmetry properties. The system uses texture features
as the identifying feature vector. These features are extracted from the face region
using the Haar wavelet transform together with a blocking scheme, by calculating the
energy of each block. A feed-forward neural network classifier is used for the
classification task. The network is trained on a training set of samples, and the
resulting weights are then used to test the recognition ability of the system.
Results: The public JAFFE dataset, which holds 213 facial samples covering seven basic
emotions, is used to evaluate the system. Tests of the developed system gave an
accuracy of around 90.05% when the number of blocks is set to 4x4. Conclusion: This
result is among the highest when compared with recently published works, especially
those based on texture features: the blocking idea allows statistical features to be
extracted according to the local energy of each block, which lets more features
contribute effectively.
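The block-energy feature construction described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the single-level Haar transform, the 4x4 grid, and the random stand-in for a cropped face region are all assumptions.

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar wavelet transform (assumes even dimensions)."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0   # horizontal averages
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0   # horizontal details
    rows = np.hstack([lo, hi])
    lo = (rows[0::2, :] + rows[1::2, :]) / 2.0  # vertical averages
    hi = (rows[0::2, :] - rows[1::2, :]) / 2.0  # vertical details
    return np.vstack([lo, hi])

def block_energy_features(face, grid=(4, 4)):
    """Divide the transformed face into a grid and use each block's
    energy (sum of squared coefficients) as one feature."""
    coeffs = haar2d(face.astype(float))
    h, w = coeffs.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = [np.sum(coeffs[i*bh:(i+1)*bh, j*bw:(j+1)*bw] ** 2)
             for i in range(grid[0]) for j in range(grid[1])]
    return np.array(feats)

face = np.random.default_rng(0).random((64, 64))  # stand-in face region
print(block_energy_features(face).shape)  # (16,): one energy per block
```

The resulting 16-dimensional vector is what would be fed to the feed-forward network; a real system would first extract the face region with the template-based method described above.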
INTRODUCTION
Due to the rapid development of technologies, there is a growing need to build smart systems
that understand human emotion (Ruivo et al., 2016). There are different ways to distinguish a
person's emotions, such as facial images, voice, body shape, and others. Mehrabian explained
that a person's impression is conveyed 7% through words (the verbal part) and 38% through tone
of voice (the vocal part), while the facial image gives the largest share, reaching 55% (Rani
and Garg, 2014). He also indicated that facial expressions are among the most important ways
to display emotions: a facial image contains much information (such as a person's identity as
well as mood and state of mind) that can be used to infer human feelings (Saini and Rana, 2014).
Facial emotion recognition is an active area of research with several fields of application,
including feedback systems for e-learning, alert systems for driving, emotion recognition for
social robots, and medical practice (Dubey and Singh, 2016).
Human emotion comprises thousands of expressions, but in the last decade research has focused
on analyzing seven basic facial expressions: happiness, sadness, surprise, disgust, fear,
neutral, and anger (Singh and
Open Access Journal
Published BY AENSI Publication
© 2017 AENSI Publisher All rights reserved
This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/
To Cite This Article: Suhaila N. Mohammed and Loay E. George, Emotion Recognition Based on Texture Analysis of Facial Expressions
Using Wavelets Transform. Aust. J. Basic & Appl. Sci., 11(5): 1-11, 2017
50e45e9c55c9e79aaae43aff7d9e2f079a2d787bHindawi Publishing Corporation
The Scientific World Journal
Volume 2015, Article ID 471371, 18 pages
http://dx.doi.org/10.1155/2015/471371
Research Article
Unbiased Feature Selection in Learning Random Forests for
High-Dimensional Data
Shenzhen Key Laboratory of High Performance Data Mining, Shenzhen Institutes of Advanced Technology
Chinese Academy of Sciences, Shenzhen 518055, China
University of Chinese Academy of Sciences, Beijing 100049, China
School of Computer Science and Engineering, Water Resources University, Hanoi 10000, Vietnam
College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
Faculty of Information Technology, Vietnam National University of Agriculture, Hanoi 10000, Vietnam
Received 20 June 2014; Accepted 20 August 2014
Academic Editor: Shifei Ding
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging
samples and feature selection, the trees in the forest tend to select uninformative features for node splitting, which gives RFs
poor accuracy on high-dimensional data. In addition, RFs exhibit bias in the feature selection process, where
multivalued features are favored. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select
good features when learning RFs for high-dimensional data. We first remove the uninformative features using p-value assessment,
and a subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into
two subsets. A feature-weighting sampling technique is used to sample features from these two subsets for building trees. This
approach generates more accurate trees, while reducing the dimensionality and the amount of data needed
for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets, including image
datasets. The experimental results show that RFs with the proposed approach outperform existing random forests in
both accuracy and AUC.
1. Introduction
Random forests (RFs) [1] are a nonparametric method that
builds an ensemble model of decision trees from random
subsets of features and bagged samples of the training data.
RFs have shown excellent performance for both classification and regression problems. The RF
model works well even when the predictive features contain irrelevant features (noise), and it
can be used when the number of features is much larger than the number of samples. However,
because of the randomizing mechanism in both bagging samples and feature selection, RFs can
give poor accuracy when applied to high-dimensional data. The main cause is that, in the
process of growing a tree from the bagged sample data, the subspace of features randomly
sampled from thousands of features to split a node of the tree is often dominated by
uninformative features (noise), and a tree grown from such a subspace has low predictive
accuracy, which affects the final prediction of the RF. Furthermore, Breiman et al. noted that
feature selection is biased in the classification and regression tree (CART) model because it
is based on an information criterion; this is called the multivalue problem [2]. Selection
tends to favor features containing more values (i.e., features with fewer missing values or
with many categorical or distinct numerical values), even if these features have lower
importance than others or no relationship with the response feature [3, 4].
In this paper, we propose a new random forest algorithm that uses an unbiased feature sampling
method to build a good subspace of unbiased features for growing trees.
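The two-stage idea summarized above (p-value screening of uninformative features, then weighted sampling from the screened features when growing each tree) could be sketched along the following lines. This is not the xRF implementation: the per-feature two-sample z-test, the 0.05 threshold, and the z-score weighting are illustrative assumptions.

```python
import math
import numpy as np

def screen_and_weight(X, y, alpha=0.05):
    """Screen out uninformative features with a per-feature two-sample
    z-test (normal approximation) and weight survivors by their z-score."""
    keep, weights = [], []
    for j in range(X.shape[1]):
        a, b = X[y == 0, j], X[y == 1, j]
        se = math.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)) + 1e-12
        z = abs(a.mean() - b.mean()) / se
        p = math.erfc(z / math.sqrt(2.0))  # two-sided p-value
        if p < alpha:
            keep.append(j)
            weights.append(z)
    w = np.array(weights)
    return np.array(keep), w / w.sum()

def sample_subspace(keep, weights, mtry, rng):
    """Weighted sampling of the feature subspace used to split a node."""
    k = min(mtry, len(keep))
    return rng.choice(keep, size=k, replace=False, p=weights)

# Toy data: only the first 3 of 50 features carry class signal.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 100)
X = rng.normal(size=(200, 50))
X[y == 1, :3] += 2.0
keep, w = screen_and_weight(X, y)
print(keep[:3].tolist())  # [0, 1, 2]: informative features survive screening
```

Each tree in the forest would then call `sample_subspace` at every node instead of sampling uniformly over all features, which is the debiasing effect the abstract describes.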
('40538635', 'Thanh-Tung Nguyen', 'thanh-tung nguyen')
('8192216', 'Joshua Zhexue Huang', 'joshua zhexue huang')
('39340373', 'Thuy Thi Nguyen', 'thuy thi nguyen')
('40538635', 'Thanh-Tung Nguyen', 'thanh-tung nguyen')
Correspondence should be addressed to Thanh-Tung Nguyen; tungnt@wru.vn
5003754070f3a87ab94a2abb077c899fcaf936a6Evaluation of LC-KSVD on UCF101 Action Dataset
University of Maryland, College Park
2Noah’s Ark Lab, Huawei Technologies
('3146162', 'Hyunjong Cho', 'hyunjong cho')
('2445131', 'Hyungtae Lee', 'hyungtae lee')
('34145947', 'Zhuolin Jiang', 'zhuolin jiang')
cho@cs.umd.edu, htlee@umd.edu, zhuolin.jiang@huawei.com
503db524b9a99220d430e741c44cd9c91ce1ddf8Who’s Better, Who’s Best: Skill Determination in Video using Deep Ranking
University of Bristol, Bristol, UK
Walterio Mayol-Cuevas
('28798386', 'Hazel Doughty', 'hazel doughty')
('1728459', 'Dima Damen', 'dima damen')
.@bristol.ac.uk
50d15cb17144344bb1879c0a5de7207471b9ff74Divide, Share, and Conquer: Multi-task
Attribute Learning with Selective Sharing
('3197570', 'Chao-Yeh Chen', 'chao-yeh chen')
('2228235', 'Dinesh Jayaraman', 'dinesh jayaraman')
('1693054', 'Fei Sha', 'fei sha')
('1794409', 'Kristen Grauman', 'kristen grauman')
50d961508ec192197f78b898ff5d44dc004ef26dInternational Journal of Computer science & Information Technology (IJCSIT), Vol 1, No 2, November 2009
A LOW INDEXED CONTENT BASED
NEURAL NETWORK APPROACH FOR
NATURAL OBJECTS RECOGNITION
1Research Scholar, JNTUH, Hyderabad, AP. India
Principal, JNTUH College of Engineering, jagitial, Karimnagar, AP, India
Principal, Chaithanya Institute of Engineering and Technology, Kakinada, AP, India
shyam_gunda2002@yahoo.co.in
govardhan_cse@yahoo.co.in
tv_venkat@yahoo.com
50ccc98d9ce06160cdf92aaf470b8f4edbd8b899Towards Robust Cascaded Regression for Face Alignment in the Wild
Jürgen Beyerer2,1
Vision and Fusion Laboratory (IES), Karlsruhe Institute of Technology (KIT)
Fraunhofer Institute of Optronics, System Technologies and Image Exploitation (Fraunhofer IOSB)
3Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne (EPFL)
('1797975', 'Chengchao Qu', 'chengchao qu')
('1697965', 'Hua Gao', 'hua gao')
('2233872', 'Eduardo Monari', 'eduardo monari')
('1710257', 'Jean-Philippe Thiran', 'jean-philippe thiran')
firstname.lastname@iosb.fraunhofer.de
firstname.lastname@epfl.ch
5028c0decfc8dd623c50b102424b93a8e9f2e390Published as a conference paper at ICLR 2017
REVISITING CLASSIFIER TWO-SAMPLE TESTS
1Facebook AI Research, 2WILLOW project team, Inria / ENS / CNRS
('3016461', 'David Lopez-Paz', 'david lopez-paz')
('2093491', 'Maxime Oquab', 'maxime oquab')
dlp@fb.com, maxime.oquab@inria.fr
505e55d0be8e48b30067fb132f05a91650666c41A Model of Illumination Variation for Robust Face Recognition
Institut Eur´ecom
Multimedia Communications Department
BP 193, 06904 Sophia Antipolis Cedex, France
('1723883', 'Florent Perronnin', 'florent perronnin')
('1709849', 'Jean-Luc Dugelay', 'jean-luc dugelay')
{florent.perronnin, jean-luc.dugelay}@eurecom.fr
507c9672e3673ed419075848b4b85899623ea4b0Faculty of Informatics
Institute for Anthropomatics
Chair Prof. Dr.-Ing. R. Stiefelhagen
Facial Image Processing and Analysis Group
Multi-View Facial Expression
Classification
ADVISORS
MARCH 2011
KIT University of the State of Baden-Württemberg and National Laboratory of the Helmholtz Association
www.kit.edu
('33357889', 'Nikolas Hesse', 'nikolas hesse')
('38113750', 'Hua Gao', 'hua gao')
('40303076', 'Tobias Gehrig', 'tobias gehrig')
50c0de2cccf7084a81debad5fdb34a9139496da0ORIGINAL RESEARCH
published: 30 November 2016
doi: 10.3389/fict.2016.00027
The Influence of Annotation, Corpus
Design, and Evaluation on the
Outcome of Automatic Classification
of Human Emotions
Institute of Neural Information Processing, Ulm University, Ulm, Germany
The integration of emotions into human–computer interaction applications promises a
more natural dialog between the user and the technical system operators. In order
to construct such machinery, continuous measuring of the affective state of the user
becomes essential. While basic research that is aimed to capture and classify affective
signals has progressed, many issues are still prevailing that hinder easy integration
of affective signals into human–computer interaction. In this paper, we identify and
investigate pitfalls in three steps of the work-flow of affective classification studies. It starts
with the process of collecting affective data for the purpose of training suitable classifiers.
Emotional data have to be created in which the target emotions are present. Therefore,
human participants have to be stimulated suitably. We discuss the nature of these stimuli,
their relevance to human–computer interaction, and the repeatability of the data recording
setting. Second, aspects of annotation procedures are investigated, which include the
variances of individual raters, annotation delay, the impact of the used annotation
tool, and how individual ratings are combined into a unified label. Finally, the
evaluation protocol is examined, which includes, among others, the impact of the performance
measure on the accuracy of a classification model. We hereby focus especially on the
evaluation of classifier outputs against continuously annotated dimensions. Together with
the discussed problems and pitfalls and the ways how they affect the outcome, we
provide solutions and alternatives to overcome these issues. As the final part of the paper,
we sketch a recording scenario and a set of supporting technologies that can contribute
to solve many of the issues mentioned above.
Keywords: affective computing, affective labeling, human–computer interaction, performance measures, machine
guided labeling
1. INTRODUCTION
The integration of affective signals into human–computer interaction (HCI) is generally considered
beneficial to improve the interaction process (Picard, 2000). The analysis of affective data in HCI
can be considered both cumbersome and prone to errors. The main reason for this is that the
important steps in affective classification are particularly difficult. This includes difficulties that arise
in the recording of suitable data collections comprising episodes of affective HCI, in the uncertainty
and subjectivity of the annotations of these data, and finally in the evaluation protocol that should
account for the continuous nature of the application.
Edited by:
Anna Esposito,
Seconda Università degli Studi di
Napoli, Italy
Reviewed by:
Anna Pribilova,
Slovak University of Technology in
Bratislava, Slovakia
Alda Troncone,
Seconda Università degli Studi di
Napoli, Italy
*Correspondence:
contributed equally to this work.
Specialty section:
This article was submitted to
Human-Media Interaction, a section
of the journal Frontiers in ICT
Received: 15 May 2016
Accepted: 26 October 2016
Published: 30 November 2016
Citation:
Kächele M, Schels M and
Schwenker F (2016) The Influence of
Annotation, Corpus Design, and
Evaluation on the Outcome of
Automatic Classification of Human
Emotions.
doi: 10.3389/fict.2016.00027
Frontiers in ICT | www.frontiersin.org
November 2016 | Volume 3 | Article 27
('2144395', 'Markus Kächele', 'markus kächele')
('3037635', 'Martin Schels', 'martin schels')
('1685857', 'Friedhelm Schwenker', 'friedhelm schwenker')
('2144395', 'Markus Kächele', 'markus kächele')
('2144395', 'Markus Kächele', 'markus kächele')
('3037635', 'Martin Schels', 'martin schels')
markus.kaechele@uni-ulm.de
680d662c30739521f5c4b76845cb341dce010735Int J Comput Vis (2014) 108:82–96
DOI 10.1007/s11263-014-0716-6
Part and Attribute Discovery from Relative Annotations
Received: 25 February 2013 / Accepted: 14 March 2014 / Published online: 26 April 2014
© Springer Science+Business Media New York 2014
('35208858', 'Subhransu Maji', 'subhransu maji')
68f89c1ee75a018c8eff86e15b1d2383c250529bFinal Report for Project Localizing Objects and
Actions in Videos Using Accompanying Text
Johns Hopkins University, Center for Speech and Language Processing
Summer Workshop 2010
J. Neumann, StreamSage/Comcast
F.Ferraro, University of Rochester
H. He, Hong Kong Polytechnic University
Y. Li, University of Maryland
C.L. Teo, University of Maryland
November 4, 2010
('3167986', 'C. Fermueller', 'c. fermueller')
('1743020', 'J. Kosecka', 'j. kosecka')
('2601166', 'E. Tzoukermann', 'e. tzoukermann')
('2995090', 'R. Chaudhry', 'r. chaudhry')
('1937619', 'I. Perera', 'i. perera')
('9133363', 'B. Sapp', 'b. sapp')
('38873583', 'G. Singh', 'g. singh')
('1870728', 'X. Yi', 'x. yi')
68a2ee5c5b76b6feeb3170aaff09b1566ec2cdf5AGE CLASSIFICATION BASED ON
SIMPLE LBP TRANSITIONS
Aditya institute of Technology and Management, Tekkalli-532 201, A.P
2Dr. V.Vijaya Kumar
3A. Obulesu
2Dean-Computer Sciences (CSE & IT), Anurag Group of Institutions, Hyderabad – 500088, A.P., India.,
3Asst. Professor, Dept. Of CSE, Anurag Group of Institutions, Hyderabad – 500088, A.P., India.
('34964075', 'Satyanarayana Murty', 'satyanarayana murty')India, 1gsn_73@yahoo.co.in
2drvvk144@gmail.com
3obulesh.a@gmail.com
68d2afd8c5c1c3a9bbda3dd209184e368e4376b9Representation Learning by Rotating Your Faces ('1849929', 'Luan Tran', 'luan tran')
('2399004', 'Xi Yin', 'xi yin')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
68a3f12382003bc714c51c85fb6d0557dcb15467
6859b891a079a30ef16f01ba8b85dc45bd22c352International Journal of Emerging Technology and Advanced Engineering
Website: www.ijetae.com (ISSN 2250-2459, ISO 9001:2008 Certified Journal, Volume 4, Issue 10, October 2014)
2D Face Recognition Based on PCA & Comparison of
Manhattan Distance, Euclidean Distance & Chebychev
Distance
RCC Institute of Information Technology, Kolkata, India
('2467416', 'Rajib Saha', 'rajib saha')
('2144187', 'Sayan Barman', 'sayan barman')
68d08ed9470d973a54ef7806318d8894d87ba610Drive Video Analysis for the Detection of Traffic Near-Miss Incidents ('1730200', 'Hirokatsu Kataoka', 'hirokatsu kataoka')
('5014206', 'Teppei Suzuki', 'teppei suzuki')
('6881850', 'Shoko Oikawa', 'shoko oikawa')
('1720770', 'Yasuhiro Matsui', 'yasuhiro matsui')
('1732705', 'Yutaka Satoh', 'yutaka satoh')
68caf5d8ef325d7ea669f3fb76eac58e0170fff0
68003e92a41d12647806d477dd7d20e4dcde1354ISSN: 0976-9102 (ONLINE)
DOI: 10.21917/ijivp.2013.0101
ICTACT JOURNAL ON IMAGE AND VIDEO PROCESSING, NOVEMBER 2013, VOLUME: 04, ISSUE: 02
FUZZY BASED IMAGE DIMENSIONALITY REDUCTION USING SHAPE
PRIMITIVES FOR EFFICIENT FACE RECOGNITION
1Department of Computer Science and Engineering, Nalla Narasimha Reddy Education Society’s Group of Institutions, India
2Department of Computer Science and Engineering, JNTUA College of Engineering, India
3Department of Computer Science and Engineering, Anurag Group of Institutions, India
('2086540', 'P. Chandra', 'p. chandra')
('2803943', 'B. Eswara Reddy', 'b. eswara reddy')
('36754879', 'Vijaya Kumar', 'vijaya kumar')
E-Mail: pchandureddy@yahoo.com
E-mail: eswarcsejntu@gmail.com
E-mail: vijayvakula@yahoo.com
68d4056765c27fbcac233794857b7f5b8a6a82bfExample-Based Face Shape Recovery Using the
Zenith Angle of the Surface Normal
Mario Castelán1, Ana J. Almazán-Delfín2, Marco I. Ramírez-Sosa-Morán3,
and Luz A. Torres-Méndez1
1 CINVESTAV Campus Saltillo, Ramos Arizpe 25900, Coahuila, México
2 Universidad Veracruzana, Facultad de Física e Inteligencia Artificial, Xalapa 91000, Veracruz, México
3 ITESM, Campus Saltillo, Saltillo 25270, Coahuila, México
mario.castelan@cinvestav.edu.mx
684f5166d8147b59d9e0938d627beff8c9d208ddIEEE TRANS. NNLS, JUNE 2017
Discriminative Block-Diagonal Representation
Learning for Image Recognition
('38448016', 'Zheng Zhang', 'zheng zhang')
('40065614', 'Yong Xu', 'yong xu')
('40799321', 'Ling Shao', 'ling shao')
('49500178', 'Jian Yang', 'jian yang')
68c5238994e3f654adea0ccd8bca29f2a24087fcPLSA-BASED ZERO-SHOT LEARNING
Centre of Image and Signal Processing
Faculty of Computer Science & Information Technology
University of Malaya, 50603 Kuala Lumpur, Malaysia
('2800072', 'Wai Lam Hoo', 'wai lam hoo')
('2863960', 'Chee Seng Chan', 'chee seng chan')
{wailam88@siswa.um.edu.my; cs.chan@um.edu.my}
68cf263a17862e4dd3547f7ecc863b2dc53320d8
68e9c837431f2ba59741b55004df60235e50994dDetecting Faces Using Region-based Fully
Convolutional Networks
Tencent AI Lab, China
('1996677', 'Yitong Wang', 'yitong wang'){yitongwang,denisji,encorezhou,hawelwang,michaelzfli}@tencent.com
685f8df14776457c1c324b0619c39b3872df617bMaster of Science Thesis in Electrical Engineering
Linköping University
Face Recognition with
Preprocessing and Neural
Networks
68484ae8a042904a95a8d284a7f85a4e28e37513Spoofing Deep Face Recognition with Custom Silicone Masks
S´ebastien Marcel
Idiap Research Institute. Centre du Parc, Rue Marconi 19, Martigny (VS), Switzerland
('1952348', 'Sushil Bhattacharjee', 'sushil bhattacharjee'){sushil.bhattacharjee; amir.mohammadi; sebastien.marcel}@idiap.ch
687e17db5043661f8921fb86f215e9ca2264d4d2A Robust Elastic and Partial Matching Metric for Face Recognition
Microsoft Corporate
One Microsoft Way, Redmond, WA 98052
('1745420', 'Gang Hua', 'gang hua')
('33474090', 'Amir Akbarzadeh', 'amir akbarzadeh')
{ganghua, amir}@microsoft.com
688754568623f62032820546ae3b9ca458ed0870bioRxiv preprint first posted online Sep. 27, 2016; doi: http://dx.doi.org/10.1101/077784.
The copyright holder for this preprint (which was not peer-reviewed) is the author/funder. It is made available under a CC-BY-NC-ND 4.0 International license.
Resting high frequency heart rate variability is not associated with the
recognition of emotional facial expressions in healthy human adults.
1 Univ. Grenoble Alpes, LPNC, F-38040, Grenoble, France
2 CNRS, LPNC UMR 5105, F-38040, Grenoble, France
3 IPSY, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
4 Fund for Scientific Research (FRS-FNRS), Brussels, Belgium
Correspondence concerning this article should be addressed to Brice Beffara, Office E250, Institut
de Recherches en Sciences Psychologiques, IPSY - Place du Cardinal Mercier, 10 bte L3.05.01 B-1348
Author note
This study explores whether the myelinated vagal connection between the heart and the brain
is involved in emotion recognition. The Polyvagal theory postulates that the activity of the
myelinated vagus nerve underlies socio-emotional skills. It has been proposed that the perception
of emotions could be one of these skills dependent on heart-brain interactions. However, this
assumption has received mixed support, with diverging results suggesting that it could be related to
confounding factors. In the current study, we recorded the resting-state vagal activity (reflected by
High Frequency Heart Rate Variability, HF-HRV) of 77 (68 suitable for analysis) healthy human
adults and measured their ability to identify dynamic emotional facial expressions. Results show
that HF-HRV is not related to the recognition of emotional facial expressions in healthy human
adults. We discuss this result in the frameworks of the polyvagal theory and the neurovisceral
integration model.
Keywords: HF-HRV; autonomic flexibility; emotion identification; dynamic EFEs; Polyvagal
theory; Neurovisceral integration model
Word count: 9810
Introduction
The behavior of an animal is said to be social when it is involved in interactions with
other animals (Ward & Webster, 2016). These interactions imply an exchange of
information, or signals, between at least two animals. In humans, the face is an
efficient communication channel, rapidly providing a large quantity of information.
Facial expressions thus play an important role in the transmission of emotional
information during social interactions. The outcome of the communication is the
combination of transmission by the sender and decoding by the receiver (Jack & Schyns,
2015). As a consequence, the quality of the interaction depends on the ability to both
produce and identify facial expressions. Emotions are therefore a core feature of
social bonding (Spoor & Kelly, 2004). The health of individuals and groups depends on
the quality of social bonds in many animals (Boyer, Firat, & Leeuwen, 2015; S. L.
Brown & Brown, 2015; Neuberg, Kenrick, & Schaller, 2011),
especially in highly social species such as humans (Singer &
Klimecki, 2014).
The recognition of emotional signals produced by others is not independent of their
production by oneself (Niedenthal, 2007). The muscles of the face involved in the
production of a facial expression are also activated during the perception of the same
facial expression (Dimberg, Thunberg, & Elmehed, 2000). In other words, the facial
mimicry of the perceived emotional facial expression (EFE) triggers its sensorimotor
simulation in the brain, which improves recognition abilities (Wood, Rychlowska, Korb,
& Niedenthal, 2016). Beyond that, the emotion can be seen as the dynamics of the body
(including the brain) itself (Gallese & Caruana, 2016), which helps to understand why
behavioral simulation is necessary to understand the emotion.
The interplay between emotion production, emotion perception, social communication,
and body dynamics has been summarized in the framework of the polyvagal theory (Porges,
('37799937', 'Nicolas Vermeulen', 'nicolas vermeulen')
('2634712', 'Martial Mermillod', 'martial mermillod')
Louvain-la-Neuve, Belgium. E-mail: brice.beffara@univ-grenoble-alpes.fr
68f9cb5ee129e2b9477faf01181cd7e3099d1824ALDA Algorithms for Online Feature Extraction ('2784763', 'Youness Aliyari Ghassabeh', 'youness aliyari ghassabeh')
('2060085', 'Hamid Abrishami Moghaddam', 'hamid abrishami moghaddam')
68bf34e383092eb827dd6a61e9b362fcba36a83a
68d40176e878ebffbc01ffb0556e8cb2756dd9e9International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622
International Conference on Humming Bird ( 01st March 2014)
RESEARCH ARTICLE
OPEN ACCESS
Locality Repulsion Projection and Minutia Extraction Based
Similarity Measure for Face Recognition
AgnelAnushya P. is currently pursuing M.E (Computer Science and Engineering) at Vins Christian College of Engineering. E-mail: anushyase@gmail.com.
2Ramya P. is currently working as an Asst. Professor in the Dept. of Information Technology at Vins Christian College of Engineering.
68c4a1d438ea1c6dfba92e3aee08d48f8e7f7090AgeNet: Deeply Learned Regressor and Classifier for
Robust Apparent Age Estimation
1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing, 100190, China
2Tencent BestImage Team, Shanghai, 100080, China
('1731144', 'Xin Liu', 'xin liu')
('1688086', 'Shaoxin Li', 'shaoxin li')
('1693589', 'Meina Kan', 'meina kan')
('1698586', 'Jie Zhang', 'jie zhang')
('3126238', 'Shuzhe Wu', 'shuzhe wu')
('13323391', 'Wenxian Liu', 'wenxian liu')
('34393045', 'Hu Han', 'hu han')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
{xin.liu, meina.kan, jie.zhang, shuzhe.wu, wenxian.liu, hu.han}@vipl.ict.ac.cn
{darwinli}@tencent.com, {sgshan, xlchen}@ict.ac.cn
6889d649c6bbd9c0042fadec6c813f8e894ac6ccAnalysis of Robust Soft Learning Vector
Quantization and an application to Facial
Expression Recognition
68f69e6c6c66cfde3d02237a6918c9d1ee678e1bEnhancing Concept Detection by Pruning Data with MCA-based Transaction
Weights
Department of Electrical and
Computer Engineering
University of Miami
Coral Gables, FL 33124, USA
School of Computing and
Information Sciences
Florida International University
Miami, FL 33199, USA
('1685202', 'Lin Lin', 'lin lin')
('1693826', 'Mei-Ling Shyu', 'mei-ling shyu')
('1705664', 'Shu-Ching Chen', 'shu-ching chen')
Email: l.lin2@umiami.edu, shyu@miami.edu
Email: chens@cs.fiu.edu
682760f2f767fb47e1e2ca35db3becbb6153756fThe Effect of Pets on Happiness: A Large-scale Multi-Factor
Analysis using Social Multimedia
From reducing stress and loneliness, to boosting productivity and overall well-being, pets are believed to play
a significant role in people’s daily lives. Many traditional studies have identified that frequent interactions
with pets could make individuals healthier and more optimistic, and ultimately help them enjoy a happier life.
However, most of those studies are not only restricted in scale, but also may carry biases by using subjective
self-reports, interviews, and questionnaires as the major approaches. In this paper, we leverage large-scale
data collected from social media and the state-of-the-art deep learning technologies to study this phenomenon
in depth and breadth. Our study includes five major steps: 1) collecting timeline posts from around 20,000
Instagram users; 2) using face detection and recognition on 2 million photos to infer users’ demographics,
relationship status, and whether they have children; 3) analyzing a user’s degree of happiness based on images
and captions via smiling classification and textual sentiment analysis; 4) applying transfer learning techniques
to retrain the final layer of the Inception v3 model for pet classification; and 5) analyzing the effects of pets
on happiness in terms of multiple factors of user demographics. Our main results demonstrate the
efficacy of our proposed method with many new insights. We believe this method is also applicable to other
domains as a scalable, efficient, and effective methodology for modeling and analyzing social behaviors and
psychological well-being. In addition, to facilitate the research involving human faces, we also release our
dataset of 700K analyzed faces.
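The transfer-learning step mentioned above (freezing a pretrained network and retraining only its final layer) can be sketched as below. The frozen Inception v3 backbone is stood in for by a fixed random projection, and the toy data, embedding size, and learning rate are all assumptions for illustration, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen Inception v3 feature extractor: a fixed random
# projection from "pixels" (64-d here) to a 2048-d embedding with ReLU.
W_frozen = rng.normal(size=(2048, 64)) / 8.0
def backbone(x):
    return np.maximum(W_frozen @ x, 0.0)  # weights never updated

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary task (pet / no-pet stand-in): class-1 "images" are shifted.
X = rng.normal(size=(200, 64))
y = np.repeat([0, 1], 100)
X[y == 1] += 0.5
feats = np.array([backbone(x) for x in X])

# Only the final layer (w, b) is trained, via gradient descent on the
# logistic loss -- the "retrain the last layer" part of transfer learning.
w, b, lr = np.zeros(2048), 0.0, 0.01
for _ in range(300):
    g = sigmoid(feats @ w + b) - y
    w -= lr * (feats.T @ g) / len(y)
    b -= lr * g.mean()

acc = ((sigmoid(feats @ w + b) > 0.5) == y).mean()
print(acc)
```

Because the backbone is fixed, each image only needs its embedding computed once, which is what makes this approach scale to the 2-million-photo setting described in the abstract.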
CCS Concepts: • Human-centered computing → Social media;
Additional Key Words and Phrases: Happiness analysis, happiness, user demographics, pet and happiness,
social multimedia, social media.
ACM Reference format:
Xuefeng Peng, Li-Kai Chi, and Jiebo Luo. 2017. The Effect of Pets on Happiness: A Large-scale Multi-Factor
Analysis using Social Multimedia. ACM Trans. Intell. Syst. Technol. 9, 4, Article 39 (June 2017), 15 pages.
https://doi.org/0000001.0000001
1 INTRODUCTION
Happiness has always been a subjective and multidimensional matter; its definition varies individu-
ally, and the factors impacting our feeling of happiness are diverse. A study in [21] has constructed
We thank the support of New York State through the Goergen Institute for Data Science, our corporate research sponsors
Xerox and VisualDX, and NSF Award #1704309.
Authors’ addresses: X. Peng, L. Chi, and J. Luo, University of Rochester.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee
provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the
full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored.
('1901094', 'Xuefeng Peng', 'xuefeng peng')
('35678395', 'Li-Kai Chi', 'li-kai chi')
('33642939', 'Jiebo Luo', 'jiebo luo')
Dual Linear Regression Based Classification for Face Cluster Recognition
University of Northern British Columbia
Prince George, BC, Canada V2N 4Z9
('1692551', 'Liang Chen', 'liang chen')
chen.liang.97@gmail.com
International Journal of Optomechatronics, 6: 92–119, 2012
Copyright © Taylor & Francis Group, LLC
ISSN: 1559-9612 print/1559-9620 online
DOI: 10.1080/15599612.2012.663463
GENDER CLASSIFICATION FROM FACE IMAGES
USING MUTUAL INFORMATION AND FEATURE
FUSION
Department of Electrical Engineering and Advanced Mining Technology
Center, Universidad de Chile, Santiago, Chile
In this article we report a new method for gender classification from frontal face images
using feature selection based on mutual information and fusion of features extracted from
intensity, shape, texture, and from three different spatial scales. We compare the results of
three different mutual information measures: minimum redundancy and maximal relevance
(mRMR), normalized mutual information feature selection (NMIFS), and conditional
mutual information feature selection (CMIFS). We also show that by fusing features
extracted from six different methods we significantly improve the gender classification
results relative to those previously published, yielding a 99.13% gender classification
rate on the FERET database.
Keywords: Feature fusion, feature selection, gender classification, mutual information, real-time gender
classification
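The abstract above compares mutual-information criteria (mRMR, NMIFS, CMIFS) for ranking features. As a minimal illustration of the greedy mRMR idea, relevance I(f; y) minus mean redundancy with already-selected features, here is a sketch with a plug-in MI estimate on discrete features; the function names and toy data are illustrative, not the authors' implementation.

```python
import numpy as np

def mutual_info(x, y):
    """Plug-in mutual information (in nats) between two discrete arrays."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

def mrmr(features, labels, k):
    """Greedy mRMR: pick the feature maximizing relevance minus mean redundancy."""
    d = features.shape[1]
    selected, candidates = [], set(range(d))
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in candidates:
            rel = mutual_info(features[:, j], labels)
            red = (np.mean([mutual_info(features[:, j], features[:, s])
                            for s in selected]) if selected else 0.0)
            if rel - red > best_score:
                best, best_score = j, rel - red
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy check: feature 1 is an exact copy of the binary label, the rest are noise.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.integers(0, 2, size=(100, 3))
X[:, 1] = y
selected = mrmr(X, y, k=2)
```

In the paper the candidate pool is the fused intensity, shape, and texture features across three spatial scales; the greedy loop is the same.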
1. INTRODUCTION
During the 1990s, one of the main issues addressed in the area of computer
vision was face detection. Many methods and applications were developed, including
the face detection used in many digital cameras nowadays. Gender classification is
important in many possible applications, including electronic marketing: displays
at retail stores could show products and offers according to a person’s gender as
he or she passes in front of a camera at the store. This is not a simple task, since
faces are not rigid and depend on illumination, pose, gestures, facial expressions,
occlusions (glasses), and other facial features (makeup, beard). The high variability
in the appearance of the face directly affects its detection and classification.
Automatic classification of gender from face images has a wide range of possible
applications, ranging from human-computer interaction to real-time electronic
marketing in retail stores (Shan 2012; Bekios-Calfa et al. 2011; Chu
et al. 2010; Perez et al. 2010a).
Automatic gender classification has a wide range of possible applications for
improving human-machine interaction and face identification methods (Irick et al.
('32271973', 'Claudio Perez', 'claudio perez')
('40333310', 'Juan Tapia', 'juan tapia')
('32723983', 'Claudio Held', 'claudio held')
Engineering, Universidad de Chile, Casilla 412-3, Av. Tupper 2007, Santiago, Chile. E-mail: clperez@ing.uchile.cl
Research Article
Journal of the Optical Society of America A
Recognizing blurred, non-frontal, illumination and
expression variant partially occluded faces
Indian Institute of Technology Madras, Chennai 600036, India
Compiled June 26, 2016
The focus of this paper is on the problem of recognizing faces across space-varying motion blur, changes
in pose, illumination, and expression, as well as partial occlusion, when only a single image per subject
is available in the gallery. We show how the blur incurred due to relative motion between the camera and
the subject during exposure can be estimated from the alpha matte of pixels that straddle the boundary
between the face and the background. We also devise a strategy to automatically generate the trimap re-
quired for matte estimation. Having computed the motion via the matte of the probe, we account for pose
variations by synthesizing from the intensity image of the frontal gallery, a face image that matches the
pose of the probe. To handle illumination and expression variations, and partial occlusion, we model the
probe as a linear combination of nine blurred illumination basis images in the synthesized non-frontal
pose, plus a sparse occlusion. We also advocate a recognition metric that capitalizes on the sparsity of the
occluded pixels. The performance of our method is extensively validated on synthetic as well as real face
data. © 2016 Optical Society of America
OCIS codes: (100.0100) Image processing; (100.5010) Pattern recognition; (100.3008) Image recognition, algorithms and filters;
(150.0150) Machine vision.
http://dx.doi.org/10.1364/ao.XX.XXXXXX
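The abstract models the probe as a linear combination of nine blurred illumination basis images plus a sparse occlusion term. One simple way to illustrate that decomposition is to alternate least squares for the basis coefficients with soft-thresholding for the sparse term. This is a toy sketch of the model, not the authors' algorithm; the function name, regularization weight, and data are illustrative.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def fit_basis_plus_occlusion(probe, basis, lam=0.5, iters=50):
    """Decompose probe ~ basis @ c + e with e sparse.

    Alternates least-squares coefficients for the (e.g. nine) illumination
    basis images with a soft-thresholded sparse occlusion estimate.
    """
    e = np.zeros_like(probe)
    for _ in range(iters):
        c, *_ = np.linalg.lstsq(basis, probe - e, rcond=None)
        e = soft_threshold(probe - basis @ c, lam)
    return c, e

# Toy example: 200-pixel probe, 9 basis vectors, occlusion on the first 10 pixels.
rng = np.random.default_rng(0)
B = rng.normal(size=(200, 9))
c_true = rng.normal(size=9)
probe = B @ c_true
probe[:10] += 5.0   # simulated occlusion
c_est, occ = fit_basis_plus_occlusion(probe, B)
```

The sparse term absorbs the occluded pixels, so the coefficient estimate stays close to the unoccluded solution, which is the intuition behind the occlusion-aware recognition metric the paper advocates.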
1. INTRODUCTION
State-of-the-art face recognition (FR) systems can outperform
even humans when presented with images captured under con-
trolled environments. However, their performance drops quite
rapidly in unconstrained settings due to image degradations
arising from blur, variations in pose, illumination, and expres-
sion, partial occlusion etc. Motion blur is commonplace today
owing to the exponential rise in the use and popularity of light-
weight and cheap hand-held imaging devices, and the ubiquity
of mobile phones equipped with cameras. Photographs cap-
tured using a hand-held device usually contain blur when the
illumination is poor because larger exposure times are needed
to compensate for the lack of light, and this increases the possi-
bility of camera shake. On the other hand, reducing the shutter
speed results in noisy images while tripods inevitably restrict
mobility. Even for a well-lit scene, the face might be blurred if
the subject is in motion. The problem is further compounded
in the case of poorly-lit dynamic scenes since the blur observed
on the face is due to the combined effects of the blur induced
by the motion of the camera and the independent motion of
the subject. In addition to blur and illumination, practical face
recognition algorithms must also possess the ability to recognize
faces across reasonable variations in pose. Partial occlusion and
facial expression changes, common in real-world applications,
escalate the challenges further. Yet another factor that governs
the performance of face recognition algorithms is the number
of images per subject available for training. In many practical
application scenarios such as law enforcement, driver license or
passport identification, where there is usually only one training
sample per subject in the database, techniques that rely on the
size and representation of the training set suffer a serious perfor-
mance drop or even fail to work. Face recognition algorithms
can broadly be classified into either discriminative or genera-
tive approaches. While the availability of large labeled datasets
and greater computing power has boosted the performance of
discriminative methods [1, 2] recently, generative approaches
continue to remain very popular [3, 4], and there is concurrent
research in both directions. The model we present in this paper
falls into the latter category. In fact, generative models are even
useful for producing training samples for learning algorithms.
Literature on face recognition from blurred images can be
broadly classified into four categories. It is important to note
that all of them (except our own earlier work in [4]) are restricted
to the convolution model for uniform blur. In the first approach
[5, 6], the blurred probe image is first deblurred using standard
deconvolution algorithms before performing recognition. How-
*Corresponding author: jithuthatswho@gmail.com
STUDIA INFORMATICA
Volume 36
2015
Number 1 (119)
Gdansk University of Technology, Faculty of Electronics, Telecommunication
and Informatics
ACQUISITION AND INDEXING OF RGB-D RECORDINGS FOR
FACIAL EXPRESSIONS AND EMOTION RECOGNITION1
Summary. In this paper the comprehensive tool KinectRecorder is described, which
provides convenient and fast acquisition, indexing and storing of RGB-D video
streams from the Microsoft Kinect sensor. The application is especially useful as a sup-
porting tool for the creation of fully indexed databases of facial expressions and emotions
that can be further used for learning and testing of emotion recognition algorithms for
affect-aware applications. KinectRecorder was successfully exploited for the creation of
the Facial Expression and Emotion Database (FEEDB), significantly reducing the time of
the whole project consisting of data acquisition, indexing and validation. FEEDB has
already been used as a learning and testing dataset for a few emotion recognition al-
gorithms, which proved the utility of the database and the KinectRecorder tool.
Keywords: RGB-D data acquisition and indexing, facial expression recognition,
emotion recognition
ACQUISITION AND INDEXING OF RGB-D RECORDINGS FOR
FACIAL EXPRESSIONS AND EMOTION RECOGNITION
Summary. This paper presents a comprehensive tool that allows convenient and fast
acquisition, indexing and storing of RGB-D stream recordings from the Microsoft
Kinect sensor. The application is particularly useful as a supporting tool for the creation
of databases of facial expressions and emotions that can then be used for learning and
testing of user emotion recognition algorithms for affect-aware applications.
KinectRecorder was successfully exploited, significantly reducing the time of the whole
process covering acquisition, indexing and validation of the recordings. The FEEDB
database has already been successfully used as a learning and testing dataset.
1 The research leading to these results has received funding from the Polish-Norwegian Research Programme
operated by the National Centre for Research and Development under the Norwegian Financial Mechanism
2009-2014 in the frame of Project Contract No Pol-Nor/210629/51/2013.
('3271448', 'Mariusz SZWOCH', 'mariusz szwoch')
THERMAL & REFLECTANCE BASED IDENTIFICATION IN CHALLENGING VARIABLE ILLUMINATIONS
Thermal and Reflectance Based Personal
Identification Methodology in Challenging
Variable Illuminations
†Department of Engineering, University of Cambridge, Cambridge, CB2 1PZ, UK
‡Delphi Corporation, Delphi Electronics and Safety, Kokomo, IN 46901-9005, USA
February 15, 2007
DRAFT
('2214319', 'Riad Hammoud', 'riad hammoud')
{oa214,cipolla}@eng.cam.ac.uk
riad.hammoud@delphi.com
Under review as a conference paper at ICLR 2018
CLASSIFIER-TO-GENERATOR ATTACK: ESTIMATION
OF TRAINING DATA DISTRIBUTION FROM CLASSIFIER
Anonymous authors
Paper under double-blind review
Illumination Invariance for Face Verification
Submitted for the Degree of
Doctor of Philosophy
from the
University of Surrey
Centre for Vision, Speech and Signal Processing
School of Electronics and Physical Sciences
University of Surrey
Guildford, Surrey GU2 7XH, U.K.
August 2006
('28467739', 'J. Short', 'j. short')
Unsupervised Synchrony Discovery in Human Interaction
Robotics Institute, Carnegie Mellon University 3University of Pittsburgh, USA
Beihang University, Beijing, China
University of Miami, USA
('39336289', 'Wen-Sheng Chu', 'wen-sheng chu')
('1874236', 'Daniel S. Messinger', 'daniel s. messinger')
Robust Facial Expression Recognition Based on
Local Directional Pattern
Automatic facial expression recognition has many
potential applications in different areas of human-
computer interaction. However, they are not yet fully
realized due to the lack of an effective facial feature
descriptor. In this paper, we present a new appearance-
based feature descriptor, the local directional pattern
(LDP), to represent facial geometry and analyze its
performance in expression recognition. An LDP feature is
obtained by computing the edge response values in 8
directions at each pixel and encoding them into an 8 bit
binary number using the relative strength of these edge
responses. The LDP descriptor, a distribution of LDP
codes within an image or image patch, is used to describe
each expression image. The effectiveness of dimensionality
reduction techniques, such as principal component
analysis and AdaBoost, is also analyzed in terms of
computational cost saving and classification accuracy. Two
well-known machine learning methods, template matching
and support vector machine, are used for classification
using the Cohn-Kanade and Japanese female facial
expression databases. Better classification accuracy shows
the superiority of the LDP descriptor against other
appearance-based feature descriptors.
Keywords: Image representation, facial expression
recognition, local directional pattern, features extraction,
principal component analysis, support vector machine.
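The LDP computation described in the abstract, edge responses in 8 compass directions at each pixel, with the strongest responses encoded as set bits and a code histogram used as the descriptor, can be sketched as follows. The Kirsch-style masks, the choice of k = 3, and the bit assignment here are one common formulation and are illustrative rather than the paper's exact implementation.

```python
import numpy as np

# Kirsch-style edge masks for the 8 compass directions (M0..M7).
KIRSCH = [np.array(m, dtype=float) for m in (
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],   # east
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],   # north-east
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],   # north
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],   # north-west
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],   # west
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],   # south-west
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],   # south
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],   # south-east
)]

def ldp_code(patch, k=3):
    """LDP code of a 3x3 patch: set the bits of the k strongest
    absolute edge responses among the 8 directional masks."""
    responses = np.array([np.sum(patch * m) for m in KIRSCH])
    code = 0
    for i in np.argsort(np.abs(responses))[-k:]:
        code |= 1 << int(i)
    return code

def ldp_histogram(image, k=3):
    """Distribution of LDP codes over an image; used as the face descriptor."""
    h, w = image.shape
    hist = np.zeros(256, dtype=int)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            hist[ldp_code(image[r - 1:r + 2, c - 1:c + 2], k)] += 1
    return hist

# Toy usage on a small random grayscale patch.
img = np.random.default_rng(0).integers(0, 256, size=(10, 10)).astype(float)
hist = ldp_histogram(img)
```

In practice the histogram is computed per image patch and concatenated, and then optionally compressed with PCA or AdaBoost as the abstract discusses.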

Manuscript received Mar. 15, 2010; revised July 15, 2010; accepted Aug. 2, 2010.
This work was supported by the Korea Research Foundation Grant funded by the Korean
Government (KRF-2010-0015908).
Kyung Hee University, Yongin, Rep. of Korea
doi:10.4218/etrij.10.1510.0132
I. Introduction
Facial expression provides the most natural and immediate
indication about a person’s emotions and intentions [1], [2].
Therefore, automatic facial expression analysis is an important
and challenging task that has had great impact in such areas as
human-computer
interaction and data-driven animation.
Furthermore, video cameras have recently become an integral
part of many consumer devices [3] and can be used for
capturing facial images for recognition of people and their
emotions. This ability to recognize emotions can enable
customized applications [4], [5]. Even though much work has
already been done on automatic facial expression recognition
[6], [7], higher accuracy with reasonable speed still remains a
great challenge [8]. Consequently, a fast but robust facial
expression recognition system is very much needed to support
these applications.
The most critical aspect for any successful facial expression
recognition system is to find an efficient facial feature
representation [9]. An extracted facial feature can be considered
an efficient representation if it can fulfill three criteria: first, it
minimizes within-class variations of expressions while
maximizes between-class variations; second, it can be easily
extracted from the raw face image; and third, it can be
described in a low-dimensional feature space to ensure
computational speed during the classification step [10], [11].
The goal of the facial feature extraction is thus to find an
efficient and effective representation of the facial images which
would provide robustness during recognition process. Two
types of approaches have been proposed to extract facial
features for expression recognition: a geometric feature-based
system and an appearance-based system [12].
In the geometric feature extraction system, the shape and
© 2010
ETRI Journal, Volume 32, Number 5, October 2010
('3182680', 'Taskeed Jabid', 'taskeed jabid')
('9408912', 'Hasanul Kabir', 'hasanul kabir')
('1685505', 'Oksam Chae', 'oksam chae')
Taskeed Jabid (phone: +82 31 201 2948, email: taskeed@khu.ac.kr), Md. Hasanul Kabir
(email: hasanul@khu.ac.kr), and Oksam Chae (email: oschae@khu.ac.kr) are with the
Deep Convolutional Network Cascade for Facial Point Detection
The Chinese University of Hong Kong
The Chinese University of Hong Kong
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
('1681656', 'Yi Sun', 'yi sun')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
sy011@ie.cuhk.edu.hk
xgwang@ee.cuhk.edu.hk
xtang@ie.cuhk.edu.hk
Probabilistic Subpixel Temporal Registration
for Facial Expression Analysis
Queen Mary University of London
Centre for Intelligent Sensing
('1781916', 'Hatice Gunes', 'hatice gunes')
('1713138', 'Andrea Cavallaro', 'andrea cavallaro')
{e.sariyanidi, h.gunes, a.cavallaro}@qmul.ac.uk
Modeling the joint density of two images under a variety of transformations
Joshua Susskind
Institute for Neural Computation
University of California, San Diego
United States
Department of Computer Science
University of Frankfurt
Germany
Department of Computer Science
ETH Zurich
Switzerland
Geoffrey Hinton
University of Toronto
Canada
('1710604', 'Roland Memisevic', 'roland memisevic')
('1742208', 'Marc Pollefeys', 'marc pollefeys')
josh@mplab.ucsd.edu
ro@cs.uni-frankfurt.de
hinton@cs.toronto.edu
marc.pollefeys@inf.ethz.ch
Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)
Fast Structural Binary Coding
University of California, San Diego
University of California, San Diego
('2451800', 'Dongjin Song', 'dongjin song')
('1722649', 'Wei Liu', 'wei liu')
('3520515', 'David A. Meyer', 'david a. meyer')
La Jolla, USA, 92093-0409. Email: dosong@ucsd.edu
Didi Research, Didi Kuaidi, Beijing, China. Email: wliu@ee.columbia.edu
La Jolla, USA, 92093-0112. Email: dmeyer@math.ucsd.edu
Unsupervised Human Action Detection by Action Matching
The Australian National University Queensland University of Technology
('1688071', 'Basura Fernando', 'basura fernando')
firstname.lastname@anu.edu.au
s.shirazi@qut.edu.au
International Journal of Research Studies in Science, Engineering and Technology
Volume 3, Issue 2, February 2016, PP 18-41
ISSN 2349-4751 (Print) & ISSN 2349-476X (Online)
Review on Content Based Image Retrieval: From Its Origin to the
New Age
Assistant Professor, ECE, Mahatma Gandhi Institute of Technology, Hyderabad, India
Dr. B. L. Malleswari, Principal, Sridevi Women's Engineering College, Hyderabad, India
pasumarthinalini@gmil.com
blmalleswari@gmail.com
Region Selection for Robust Face Verification using UMACE Filters
Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering,
Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia.
In this paper, we investigate the verification performances of four subdivided face images with varying expressions. The
objective of this study is to evaluate which part of the face image is more tolerant to facial expression and still retains its personal
characteristics due to the variations of the image. The Unconstrained Minimum Average Correlation Energy (UMACE) filter is
implemented to perform the verification process because of its advantages such as shift-invariance and the ability to trade off between
discrimination and distortion tolerance, e.g. variations in pose, illumination and facial expression. The facial expression database of
the Advanced Multimedia Processing (AMP) Lab at CMU is used in this study. Four equal-sized face regions, i.e. the bottom, top, left
and right halves, are used for the purpose of this study. The results show that the bottom half of the face region gives the best
performance in terms of the PSR values, with zero false acceptance rate (FAR) and zero false rejection rate (FRR), compared to the
other three regions.
1. Introduction
Face recognition is a well established field of research,
and a large number of algorithms have been proposed in the
literature. Various classifiers have been explored to improve
the accuracy of face classification. The basic approach is to
use distance-base methods which measure Euclidean distance
between any two vectors and then compare it with the preset
threshold. Neural Networks are often used as classifiers due
to their powerful generation ability [1]. Support Vector
Machines (SVM) have been applied with encouraging results
[2].
In biometric applications, one of the important tasks is the
matching process between an individual biometrics against
the database that has been prepared during the enrolment
stage. For biometrics systems such as face authentication that
use images as personal characteristics, biometrics sensor
output and image pre-processing play an important role since
the quality of a biometric input can change significantly due
to illumination, noise and pose variations. Over the years,
researchers have studied the role of illumination variation,
pose variation, facial expression, and occlusions in affecting
the performance of face verification systems [3].
The Minimum Average Correlation Energy (MACE)
filters have been reported to be an alternative solution to these
problems because of the advantages such as shift-invariance,
close-form expressions and distortion-tolerance. MACE
filters have been successfully applied in the field of automatic
target recognition as well as in biometric verification [3][4].
Face and fingerprint verification using correlation filters have
been investigated in [5] and [6], respectively. Savvides et al.
performed face authentication and identification using
correlation filters based on illumination variation [7]. In the
process of implementing correlation filters, the number of
training images used depends on the level of distortions
applied to the images [5], [6].
In this study, we investigate which part of a face image is
more tolerant to facial expression and retains its personal
characteristics for the verification process. Four subdivided
face images, i.e. bottom, top, left and right halves, with
varying expressions are investigated. By identifying only the
region of the face that gives the highest verification
performance, that region can be used instead of the full-face
to reduce storage requirements.
2. Unconstrained Minimum Average Correlation
Energy (UMACE) Filter
Correlation filter theory and the descriptions of the design
of the correlation filter can be found in a tutorial survey paper
[8]. According to [4][6], correlation filter evolves from
matched filters which are optimal for detecting a known
reference image in the presence of additive white Gaussian
noise. However, the detection rate of matched filters
decreases significantly due to even the small changes of scale,
rotation and pose of the reference image.
In an effort to solve this problem, the Synthetic
Discriminant Function (SDF) filter and the Equal Correlation
Peak SDF (ECP SDF) filter ware introduced which allowed
several training images to be represented by a single
correlation filter. SDF filter produces pre-specified values
called peak constraints. These peak values correspond to the
authentic class or impostor class when an image is tested.
However, the pre-specified peak values lead to
misclassifications when the sidelobes are larger than the
controlled values at the origin.
Savvides et al. developed
the Minimum Average
Correlation Energy (MACE) filters [5]. This filter reduces the
large sidelobes and produces a sharp peak when the test
image is from the same class as the images that have been
used to design the filter. There are two kinds of variants that
can be used in order to obtain a sharp peak when the test
image belongs to the authentic class. The first MACE filter
variant minimizes the average correlation energy of the
training images while constraining the correlation output at
the origin to a specific value for each of the training images.
The second MACE filter variant is the Unconstrained
Minimum Average Correlation Energy (UMACE) filter
which also minimizes the average correlation output while
maximizing the correlation output at the origin [4].
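The UMACE construction described above has a compact frequency-domain form, h = D^-1 m, where D holds the average power spectrum and m the mean spectrum of the training images; verification then thresholds the peak-to-sidelobe ratio (PSR) of the correlation plane. The sketch below illustrates this formulation on toy data; the regularization constant and sidelobe exclusion window are illustrative choices, not values from the paper.

```python
import numpy as np

def umace_filter(train_images):
    """UMACE filter in the frequency domain: h = m / D, where D is the
    average power spectrum and m the mean spectrum of the training images."""
    F = np.fft.fft2(train_images, axes=(-2, -1))
    D = np.mean(np.abs(F) ** 2, axis=0)
    m = np.mean(F, axis=0)
    return m / (D + 1e-8)   # small constant avoids division by zero

def psr(test_image, h, exclude=2):
    """Peak-to-sidelobe ratio of the correlation plane for a test image."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(test_image) * np.conj(h)))
    peak = corr.max()
    pr, pc = np.unravel_index(corr.argmax(), corr.shape)
    mask = np.ones_like(corr, dtype=bool)   # sidelobe region: all but a window at the peak
    mask[max(0, pr - exclude):pr + exclude + 1,
         max(0, pc - exclude):pc + exclude + 1] = False
    side = corr[mask]
    return (peak - side.mean()) / side.std()

# Toy usage: train on shifted copies of one pattern, then compare PSR scores.
rng = np.random.default_rng(0)
base = rng.normal(size=(16, 16))
train = np.stack([np.roll(base, s, axis=1) for s in (0, 1, 2)])
h = umace_filter(train)
psr_authentic = psr(base, h)
psr_impostor = psr(rng.normal(size=(16, 16)), h)
```

An authentic test image produces a sharp correlation peak and a high PSR, while an impostor yields a flat plane, which is the decision rule used for face verification with these filters.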
Proceedings of the International Conference on Electrical Engineering and Informatics, Institut Teknologi Bandung, Indonesia, June 17-19, 2007. ISBN 978-979-16338-0-2
('5461819', 'Salina Abdul Samad', 'salina abdul samad')
('2864147', 'Dzati Athiar Ramli', 'dzati athiar ramli')
('2573778', 'Aini Hussain', 'aini hussain')
* E-mail: salina@vlsi.eng.ukm.my
A Taxonomy of Face-models for System Evaluation
Motivation and Data Types
Synthetic Data Types
Unverified – Have no underlying physical or
statistical basis
Physics -Based – Based on structure and
materials combined with the properties
formally modeled in physics.
Statistical – Use statistics from real
data/experiments to estimate/learn model
parameters. Generally have measurements
of accuracy
Guided Synthetic – Individual models based
on individual people. No attempt to capture
properties of large groups, a unique model
per person. For faces, guided models are
composed of 3D structure models and skin
textures, capturing many artifacts not
easily parameterized. Can be combined with
physics-based rendering to generate samples
under different conditions.
Semi-Synthetic – Use measured data such
as 2D images or 3D facial scans. These are
not truly synthetic, as they are re-renderings
of real measured data.
Semi and Guided Synthetic data provide
higher operational relevance while
maintaining a high degree of control.
Generating statistically significant size
datasets for face matching system
evaluation is both a laborious and
expensive process.
There is a gap in datasets that allow for
evaluation of system issues including:
 Long distance recognition
 Blur caused by atmospherics
 Various weather conditions
 End to end systems evaluation
Our contributions:
 Define a taxonomy of face-models
for controlled experimentations
 Show how Synthetic addresses gaps
in system evaluation
 Show a process for generating and
validating synthetic models
 Use these models in long distance
face recognition system evaluation
Experimental Setup
Results and Conclusions
Example Models
Original Pie
Semi-
Synthetic
FaceGen
Animetrics
http://www.facegen.com
http://www.animetrics.com/products/Forensica.php
Guided-
Synthetic
Models
 Models generated using the well
known CMU PIE [18] dataset. Each of
the 68 subjects of PIE were modeled
using a right profile and frontal
image from the lights subset.
 Two modeling programs were used,
Facegen and Animetrics. Both
programs create OBJ files and
textures
 Models are re-rendered using
custom display software built with
OpenGL, GLUT and DevIL libraries
 Custom Display Box housing a BENQ SP820 high
powered projector rated at 4000 ANSI Lumens
 Canon EOS 7D with a Sigma 800mm F5.6 EX APO
DG HSM lens and a 2x adapter imaging the display
from 214 meters
Normalized Example Captures
Real PIE 1 Animetrics
FaceGen
81M inside 214M outside
Real PIE 2
 Pre-cropped images were used for the
commercial core
 Ground truth eye points + geometric/lighting
normalization pre processing before running
through the implementation of the V1
recognition algorithm found in [1].
 Geo normalization highlights how the feature
region of the models looks very similar to
that of the real person.
Each test consisted of using 3 approximately frontal gallery images NOT used to
make the 3D model used as the probe; the best score over the 3 images determined
the final score. Even though the PIE-3D-20100224A-D sets were imaged on the same
day, the V1 core scored differently on each, highlighting the synthetic data's ability
to help evaluate data capture methods and the effects of varying atmospherics. The
ISO setting varied, which affects the shutter speed, with higher ISO generally
yielding less blur.
Dataset                  Range(m)  ISO   V1      Comm.
Original PIE Images      N/A       N/A   100     100
FaceGen ScreenShots      N/A       N/A   47.76   100
Animetrics Screenshots   N/A       N/A   100     100
PIE-3D-20100210B         81m       500   100     100
PIE-3D-20100224A         214m      125   58.82   100
PIE-3D-20100224B         214m      125   45.59   100
PIE-3D-20100224C         214m      250   81.82
PIE-3D-20100224D         214m      400   79.1
 The same (100 percent) recognition rate on screenshots as on original images
validates the Animetrics guided synthetic models and rejects the FaceGen models.
 100% recognition means the dataset is too small/easy; expanding pose and models
is underway.
 Expanded the photohead methodology into 3D
 Developed a robust modeling system allowing for multiple configurations of a
single real life data set.
 Gabor+SVM based V1[15] significantly more impacted by atmospheric blur than
the commercial algorithm
Key References:
[6 of 21] R. Beveridge, D. Bolme, M. Teixeira, and B. Draper. The CSU Face Identification Evaluation System User's Guide: Version 5.0. Technical report, CSU, 2003.
[8 of 21] T. Boult and W. Scheirer. Long range facial image acquisition and quality. In M. Tisarelli, S. Li, and R. Chellappa.
[15 of 21] N. Pinto, J. J. DiCarlo, and D. D. Cox. How far can you get with a modern face recognition test set using only simple features? In IEEE CVPR, 2009.
[18 of 21] T. Sim, S. Baker, and M. Bsat. The CMU Pose, Illumination and Expression (PIE) Database. In Proceedings of the IEEE F&G, May 2002.
('31552290', 'Brian C. Parks', 'brian c. parks')
('2613438', 'Walter J. Scheirer', 'walter j. scheirer')
{viyer,skirkbride,bparks,wscheirer,tboult}@vast.uccs.edu
Learning Warped Guidance for Blind Face
Restoration
School of Computer Science and Technology, Harbin Institute of Technology, China
School of Data and Computer Science, Sun Yat-sen University, China
University of Kentucky, USA
('21515518', 'Xiaoming Li', 'xiaoming li')
('40508248', 'Yuting Ye', 'yuting ye')
('1724520', 'Wangmeng Zuo', 'wangmeng zuo')
('1737218', 'Liang Lin', 'liang lin')
('38958903', 'Ruigang Yang', 'ruigang yang')
csxmli@hit.edu.cn, csmliu@outlook.com, yeyuting.jlu@gmail.com,
wmzuo@hit.edu.cn
linliang@ieee.org
ryang@cs.uky.edu
MITSUBISHI ELECTRIC RESEARCH LABORATORIES
http://www.merl.com
Verification of Very Low-Resolution Faces Using An
Identity-Preserving Deep Face Super-resolution Network
TR2018-116 August 24, 2018
Age Estimation based on Multi-Region
Convolutional Neural Network
1Center for Biometrics and Security Research & National Laboratory of Pattern
Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
University of Chinese Academy of Sciences
('40282288', 'Ting Liu', 'ting liu')
('1756538', 'Jun Wan', 'jun wan')
('39974958', 'Tingzhao Yu', 'tingzhao yu')
('1718623', 'Zhen Lei', 'zhen lei')
('34679741', 'Stan Z. Li', 'stan z. li')
{ting.liu,jun.wan,zlei,szli}@nlpr.ia.ac.cn,yutingzhao2013@ia.ac.cn
Efficient Global Illumination for Morphable Models
University of Basel, Switzerland
('1801001', 'Andreas Schneider', 'andreas schneider')
('34460642', 'Bernhard Egger', 'bernhard egger')
('32013053', 'Lavrenti Frobeen', 'lavrenti frobeen')
('1687079', 'Thomas Vetter', 'thomas vetter')
{andreas.schneider,sandro.schoenborn,bernhard.egger,l.frobeen,thomas.vetter}@unibas.ch
574705812f7c0e776ad5006ae5e61d9b071eebdbAvailable Online at www.ijcsmc.com
International Journal of Computer Science and Mobile Computing
A Monthly Journal of Computer Science and Information Technology
ISSN 2320–088X
IJCSMC, Vol. 3, Issue. 5, May 2014, pg.780 – 787
RESEARCH ARTICLE
A Novel Approach for Face Recognition
Using PCA and Artificial Neural Network
Dayananda Sagar College of Engg., India
Dayananda Sagar College of Engg., India
('9856026', 'Karthik G', 'karthik g')
('9856026', 'Karthik G', 'karthik g')
1 email : karthik.knocks@gmail.com; 2 email : hcsateesh@gmail.com
5753b2b5e442eaa3be066daa4a2ca8d8a0bb1725
571b83f7fc01163383e6ca6a9791aea79cafa7ddSeqFace: Make full use of sequence information for face recognition
College of Information Science and Technology
Beijing University of Chemical Technology, China
YUNSHITU Corp., China
('48594708', 'Wei Hu', 'wei hu')
('7524887', 'Yangyu Huang', 'yangyu huang')
('8451319', 'Guodong Yuan', 'guodong yuan')
('47191084', 'Fan Zhang', 'fan zhang')
('50391855', 'Ruirui Li', 'ruirui li')
('47113208', 'Wei Li', 'wei li')
574ad7ef015995efb7338829a021776bf9daaa08AdaScan: Adaptive Scan Pooling in Deep Convolutional Neural Networks
for Human Action Recognition in Videos
1IIT Kanpur‡
2SRI International
3UCSD
('24899770', 'Amlan Kar', 'amlan kar')
('12692625', 'Nishant Rai', 'nishant rai')
('39707211', 'Karan Sikka', 'karan sikka')
('39396475', 'Gaurav Sharma', 'gaurav sharma')
57a14a65e8ae15176c9afae874854e8b0f23dca7UvA-DARE (Digital Academic Repository)
Seeing mixed emotions: The specificity of emotion perception from static and dynamic
facial expressions across cultures
Fang, X.; Sauter, D.A.; van Kleef, G.A.
Published in:
Journal of Cross-Cultural Psychology
DOI:
10.1177/0022022117736270
Link to publication
Citation for published version (APA):
Fang, X., Sauter, D. A., & van Kleef, G. A. (2018). Seeing mixed emotions: The specificity of emotion perception
from static and dynamic facial expressions across cultures. Journal of Cross-Cultural Psychology, 49(1), 130-
148. DOI: 10.1177/0022022117736270
UvA-DARE is a service provided by the library of the University of Amsterdam (http://dare.uva.nl)
57b052cf826b24739cd7749b632f85f4b7bcf90bFast Fashion Guided Clothing Image Retrieval:
Delving Deeper into What Feature Makes
Fashion
School of Data and Computer Science, Sun Yat-sen University
Guangzhou, P.R China
('3079146', 'Yuhang He', 'yuhang he')
('40451106', 'Long Chen', 'long chen')
*Corresponding Author: chenl46@mail.sysu.edu.cn
57d37ad025b5796457eee7392d2038910988655a[title garbled in extraction]
by [author name garbled]
Under the Supervision of Prof. Daphna Weinshall
A Thesis Submitted in Partial Fulfillment of the
Requirements for the Degree of Master of Science
The School of Computer Science and Engineering
The Hebrew University of Jerusalem, 91904
December 2009
57f7d8c6ec690bd436e70d7761bc5f46e993be4cFacial Expression Recognition Using Histogram Variances Faces
University of Technology, Sydney, 15 Broadway, Ultimo, NSW 2007, Australia
University of Aizu, Japan
('32796151', 'Ruo Du', 'ruo du')
('37046680', 'Qiang Wu', 'qiang wu')
('1706670', 'Xiangjian He', 'xiangjian he')
('1714410', 'Wenjing Jia', 'wenjing jia')
('40394300', 'Daming Wei', 'daming wei')
{ruodu, wuq, sean, wejia}@it.uts.edu.au
dm-wei@u-aizu.ac.jp
3b1260d78885e872cf2223f2c6f3d6f6ea254204
3b1aaac41fc7847dd8a6a66d29d8881f75c91ad5Sparse Representation-based Open Set Recognition
('2310707', 'He Zhang', 'he zhang')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
3b092733f428b12f1f920638f868ed1e8663fe57On the Size of Convolutional Neural Networks and
Generalization Performance
Center for Automation Research, UMIACS*
Department of Electrical and Computer Engineering†
University of Maryland, College Park
('2747758', 'Maya Kabkab', 'maya kabkab')
('9215658', 'Rama Chellappa', 'rama chellappa')
Email: {mayak, emhand, rama}@umiacs.umd.edu
3b73f8a2b39751efb7d7b396bf825af2aaadee24Connecting Pixels to Privacy and Utility:
Automatic Redaction of Private Information in Images
Max Planck Institute for Informatics
Saarland Informatics Campus
Saarbrücken, Germany
('9517443', 'Tribhuvanesh Orekondy', 'tribhuvanesh orekondy')
('1739548', 'Mario Fritz', 'mario fritz')
('1697100', 'Bernt Schiele', 'bernt schiele')
{orekondy,mfritz,schiele}@mpi-inf.mpg.de
3b2d5585af59480531616fe970cb265bbdf63f5bRobust Face Recognition under Varying Light
Based on 3D Recovery
Center of Computer Vision, School of
Mathematics and Computing, Sun Yat-sen
University, Guangzhou, China
Ching Y Suen
Centre for Pattern Recognition and Machine
Intelligence, Concordia University, Montreal
Canada, H3G 1M8
('3246510', 'Guan Yang', 'guan yang')
mcsfgc@mail.sysu.edu.cn
parmidir@cenparmi.concordia.ca
3b64efa817fd609d525c7244a0e00f98feacc8b4A Comprehensive Survey on Pose-Invariant
Face Recognition
Centre for Quantum Computation and Intelligent Systems
Faculty of Engineering and Information Technology
University of Technology, Sydney
81-115 Broadway, Ultimo, NSW
Australia
15 March 2016
('37990555', 'Changxing Ding', 'changxing ding')
('1692693', 'Dacheng Tao', 'dacheng tao')
Emails: chx.ding@gmail.com, dacheng.tao@uts.edu.au
3bc776eb1f4e2776f98189e17f0d5a78bb755ef4
3b7f6035a113b560760c5e8000540fc46f91fed5COUPLING ALIGNMENTS WITH RECOGNITION FOR STILL-TO-VIDEO
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China
FACE RECOGNITION
MOTIVATION
Problem: Still-to-Video face recognition
1. Gallery: high-quality still face images (e.g., sharp, high-resolution faces)
2. Probe: low-quality video face frames (e.g., blurred, low-resolution faces)
Solution: Couple alignments with recognition
1. Quality Alignment (QA): select the frames of ‘best
quality’ from videos
2. Geometric Alignment (GA): jointly align the selected
frames to the still faces
3. Sparse Representation (SR): sparsely represent the
frames on the still faces
OVERVIEW
GA: Geometric Alignment
SR: Sparse Representation
QA: Quality Alignment
T : Alignment parameters
L: Identity labels
C: Selecting confidences
FORMULATION
$\{\hat{T}, \hat{L}\} = \arg\min_{T,L} \|Z\|_1 + \sum_{i=1}^{c} \|B_{S_i}\|_* + \|E\|_1$,
s.t. $Y \circ T = B + E$, $B = AZ$, $S_i = \{j \mid L_j = i\}$.
• Couple GA with SR: $Y \circ T = B + E$, $B = AZ$, $\|Z\|_1 \le t$
DATASETS
1. YouTube-S2V dataset: 100 subjects, privately
collected from YouTube Face DB [Wolf et al., CVPR’ 11]
2. COX-S2V dataset: 1,000 subjects, publicly released
in our prior work [Huang et al., ACCV ’12]
– Y : Video faces, A: dictionary (still faces)
– ◦ and T : Alignment operator and parameters
– B: Sparse representations, E: residual errors
Examples of still faces
• Couple SR with QA: $S_i = \{j \mid L_j = i\}$, $\sum_i \|B_{S_i}\|_* \le k$
– Identity label: $L_j = \arg\min_k \|y_j \circ \tau_j - A_k z_{jk}\|_2$
– Confidence: $C_i = \sum_{j \in S_i} \exp\!\left(-\|e_j\|_1 / \sigma^2\right)$
RESULTS
Examples of video faces
OPTIMIZATION
Linearization:
$\{\hat{T}, \hat{L}\} = \arg\min_{T,L} \|Z\|_1 + \sum_{i=1}^{c} \|B_{S_i}\|_* + \|E\|_1$,
s.t. $Y \circ T + J\Delta T = B + E$, $B = AZ$, $S_i = \{j \mid L_j = i\}$.
$J = \frac{\partial}{\partial T}(Y \circ T)$: Jacobian matrices w.r.t. transformations
Comparative methods:
1. Baseline: SRC [1], CRC [2]
2. Blind Geometric Alignment: RASL [3]
3. Joint Geometric Alignment and Recognition: MRR [4]
4. Our method: Coupling Alignments with Recognition (CAR)
Evaluation terms:
1. Face Alignment (QA and GA)
2. Sparse Representation (SR) for Face Recognition
Main algorithm:
INPUT: Gallery data matrix A, probe video sequence data matrix Y, and initial transformation T of Y.
1. WHILE not converged DO
2.   Compute Jacobian matrices w.r.t. transformations;
3.   Warp and normalize the images:
     $Y \circ T = \left[\frac{\mathrm{vec}(Y_1 \circ \tau_1)}{\|\mathrm{vec}(Y_1 \circ \tau_1)\|_2}, \ldots, \frac{\mathrm{vec}(Y_n \circ \tau_n)}{\|\mathrm{vec}(Y_n \circ \tau_n)\|_2}\right]$
4.   Set the segments at coarse search stage: $S_1 = \{1, \ldots, n\}$, $S_i = \phi$, $i = 2, \ldots, c$;
5.   Apply Augmented Lagrange Multiplier to solve:
     $\{\hat{T}, \hat{Z}\} = \arg\min_{T,Z} \|Z\|_1 + \sum_{i=1}^{c} \|B_{S_i}\|_* + \|E\|_1$,
     s.t. $Y \circ T + J\Delta T = B + E$, $B = AZ$;
6.   Update transformations: $T = T + \Delta T^*$;
7.   Update segments at fine search stage: $S_i = \{j \mid i = \arg\min_k \|y_j \circ \tau_j - A_k z_{jk}\|_2\}$;
8. END WHILE
9. Compute $C_i$ of $S_i$, $i = 1, \ldots, n$, for voting class label.
OUTPUT: Class label of the probe video sequence.
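The loop alternates linearization, a convex solve, and a transformation update. A toy 1-D analogue of that linearize-solve-update pattern, estimating a single shift parameter by Gauss-Newton (an illustration of the pattern only, not the poster's ALM solver; all names are illustrative):

```python
import math

def estimate_shift(f, xs, observed, T0=0.0, iters=10, eps=1e-4):
    """Gauss-Newton on one shift parameter T: linearize f(x + T) around
    the current T (Jacobian J), solve the 1-D least-squares update dT,
    then set T <- T + dT, mirroring the linearize/solve/update steps."""
    T = T0
    for _ in range(iters):
        # Residuals r_i = observed_i - f(x_i + T)
        r = [o - f(x + T) for x, o in zip(xs, observed)]
        # Numerical Jacobian J_i = d/dT f(x_i + T)
        J = [(f(x + T + eps) - f(x + T - eps)) / (2 * eps) for x in xs]
        # Closed-form 1-D least-squares solve for the increment
        dT = sum(j * ri for j, ri in zip(J, r)) / sum(j * j for j in J)
        T += dT  # update the transformation estimate
    return T

f = lambda x: math.sin(0.5 * x)
xs = [0.1 * k for k in range(90)]
observed = [f(x + 0.3) for x in xs]  # ground-truth shift 0.3
T = estimate_shift(f, xs, observed)
print(round(T, 4))  # converges to ~0.3
```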
QA, GA, SR results.
: correctly identified, □: finally selected
CONCLUSION
• The proposed method jointly performs GA, QA and SR
in a unified optimization.
• We employ an iterative EM-like algorithm to jointly optimize the three tasks.
• Experimental results demonstrate that GA, QA and SR
benefit from each other.
QA and GA results. Average faces of video frames finally
selected for face recognition
Methods        Y      C1     C2     C3     C4
SRC [1]        10.78  15.57  42.29   2.86  18.71
CRC [2]        10.34  14.43  43.57   4.14  19.71
RASL [3]-SRC   26.29  22.14  39.00   4.57  18.29
RASL [3]-CRC   29.74  19.43  41.29   4.00  19.43
MRR [4]        28.45  26.43  44.14   3.57  13.57
CAR            36.21  43.42  55.00  10.71  28.86
Face recognition results. Intensity feature. Y: YouTube-S2V; Ci: the i-th testing scenario of COX-S2V.
REFERENCES
[1] J. Wright, A. Yang, A. Ganesh, S. Sastry, Y. Ma. Robust face recognition via sparse representation. In TPAMI '09.
[2] L. Zhang, M. Yang, X. Feng. Sparse representation or collaborative representation: which helps face recognition? In ICCV '11.
[3] Y. Peng, A. Ganesh, J. Wright, W. Xu, Y. Ma. RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images. In CVPR '10.
[4] M. Yang, L. Zhang, D. Zhang. Efficient misalignment-robust representation for real-time face recognition. In ECCV '12.
('7945869', 'Zhiwu Huang', 'zhiwu huang')
('1874505', 'Xiaowei Zhao', 'xiaowei zhao')
('1685914', 'Shiguang Shan', 'shiguang shan')
('3373117', 'Ruiping Wang', 'ruiping wang')
('1710220', 'Xilin Chen', 'xilin chen')
3b2a2357b12cf0a5c99c8bc06ef7b46e40dd888eLearning Person Trajectory Representations for Team Activity Analysis
Simon Fraser University
('10386960', 'Nazanin Mehrasa', 'nazanin mehrasa')
('19198359', 'Yatao Zhong', 'yatao zhong')
('2123865', 'Frederick Tung', 'frederick tung')
('3004771', 'Luke Bornn', 'luke bornn')
('10771328', 'Greg Mori', 'greg mori')
{nmehrasa, yataoz, ftung, lbornn}@sfu.ca, mori@cs.sfu.ca
3bd1d41a656c8159305ba2aa395f68f41ab84f31Entity-based Opinion Mining from Text and
Multimedia
1 Introduction
Social web analysis is all about the users who are actively engaged and generate
content. This content is dynamic, reflecting the societal and sentimental fluctuations
of the authors as well as the ever-changing use of language. Social networks are
pools of a wide range of articulation methods, from simple ”Like” buttons to com-
plete articles, their content representing the diversity of opinions of the public. User
activities on social networking sites are often triggered by specific events and re-
lated entities (e.g. sports events, celebrations, crises, news articles) and topics (e.g.
global warming, financial crisis, swine flu).
With the rapidly growing volume of resources on the Web, archiving this material
becomes an important challenge. The notion of community memories extends tradi-
tional Web archives with related data from a variety of sources. In order to include
this information, a semantically-aware and socially-driven preservation model is a
natural way to go: the exploitation of Web 2.0 and the wisdom of crowds can make
web archiving a more selective and meaning-based process. The analysis of social
media can help archivists select material for inclusion, while social media mining
can enrich archives, moving towards structured preservation around semantic cat-
egories. In this paper, we focus on the challenges in the development of opinion
mining tools from both textual and multimedia content.
We focus on two very different domains: socially aware federated political
archiving (realised by the national parliaments of Greece and Austria), and socially
contextualized broadcaster web archiving (realised by two large multimedia broad-
University of Sheffield, Regent Court, 211 Portobello, Sheffield
Jonathon Hare
Electronics and Computer Science, University of Southampton, Southampton, Hampshire
('2144272', 'Diana Maynard', 'diana maynard')
('2144272', 'Diana Maynard', 'diana maynard')
S1 4DP, UK e-mail: diana@dcs.shef.ac.uk
SO17 1BJ, UK e-mail: jsh2@ecs.soton.ac.uk
3bcd72be6fbc1a11492df3d36f6d51696fd6bdadMulti-Task Zero-Shot Action Recognition with
Prioritised Data Augmentation
School of Electronic Engineering and Computer Science,
Queen Mary University of London
('1735328', 'Xun Xu', 'xun xu')
('1697755', 'Timothy M. Hospedales', 'timothy m. hospedales')
('2073354', 'Shaogang Gong', 'shaogang gong')
{xun.xu,t.hospedales,s.gong}@qmul.ac.uk
3b9c08381282e65649cd87dfae6a01fe6abea79bCUHK & ETHZ & SIAT Submission to ActivityNet Challenge 2016
Multimedia Laboratory, The Chinese University of Hong Kong, Hong Kong
2Computer Vision Lab, ETH Zurich, Switzerland
Shenzhen Institutes of Advanced Technology, CAS, China
('3331521', 'Yuanjun Xiong', 'yuanjun xiong')
('33345248', 'Limin Wang', 'limin wang')
('1915826', 'Zhe Wang', 'zhe wang')
('3047890', 'Bowen Zhang', 'bowen zhang')
('2313919', 'Hang Song', 'hang song')
('1688012', 'Wei Li', 'wei li')
('1807606', 'Dahua Lin', 'dahua lin')
('1681236', 'Luc Van Gool', 'luc van gool')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
3b84d074b8622fac125f85ab55b63e876fed4628End-to-End Localization and Ranking for
Relative Attributes
University of California, Davis
('19553871', 'Krishna Kumar Singh', 'krishna kumar singh')
('1883898', 'Yong Jae Lee', 'yong jae lee')
3b4fd2aec3e721742f11d1ed4fa3f0a86d988a10Glimpse: Continuous, Real-Time Object Recognition on
Mobile Devices
MIT CSAIL
Microsoft Research
MIT CSAIL
Microsoft Research
MIT CSAIL
('32214366', 'Tiffany Yu-Han Chen', 'tiffany yu-han chen')
('40125198', 'Lenin Ravindranath', 'lenin ravindranath')
('1904357', 'Shuo Deng', 'shuo deng')
('2292948', 'Paramvir Bahl', 'paramvir bahl')
('1712771', 'Hari Balakrishnan', 'hari balakrishnan')
yuhan@csail.mit.edu
lenin@microsoft.com
shuodeng@csail.mit.edu
bahl@microsoft.com
hari@csail.mit.edu
3be8f1f7501978287af8d7ebfac5963216698249Deep Cascaded Regression for Face Alignment
School of Data and Computer Science, Sun Yat-Sen University, China
National University of Singapore, Singapore
algorithm refines the shape by estimating a shape increment
∆S. In particular, a shape increment at stage k is calculated
as:
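The generic cascaded-regression update takes the following form (a standard sketch; $R^k$ and the feature map $\Phi^k$ stand in for whatever regressor and shape-indexed features the paper actually uses):

```latex
\Delta S^{k} = R^{k}\,\Phi^{k}\!\left(I,\, S^{k-1}\right), \qquad
S^{k} = S^{k-1} + \Delta S^{k}
```

Each stage thus regresses an increment from features extracted around the current shape estimate and adds it to that estimate.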
('3124720', 'Shengtao Xiao', 'shengtao xiao')
('10338111', 'Zhen Cui', 'zhen cui')
('48815683', 'Yan Pan', 'yan pan')
('48258938', 'Chunyan Xu', 'chunyan xu')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
3bc376f29bc169279105d33f59642568de36f17fActive Shape Models with SIFT Descriptors and MARS
University of Cape Town, South Africa
Keywords:
Facial Landmark, Active Shape Model, Multivariate Adaptive Regression Splines
('2822258', 'Stephen Milborrow', 'stephen milborrow')
('2537623', 'Fred Nicolls', 'fred nicolls')
milbo@sonic.net
3b38c06caf54f301847db0dd622a6622c3843957RESEARCH ARTICLE
Gender differences in emotion perception
and self-reported emotional intelligence: A
test of the emotion sensitivity hypothesis
University of Amsterdam, Amsterdam, the Netherlands, 2 Leiden University
Leiden, the Netherlands, 3 Delft University of Technology
Intelligent Systems, Delft, the Netherlands
('1735303', 'Joost Broekens', 'joost broekens')
* a.h.fischer@uva.nl
3b15a48ffe3c6b3f2518a7c395280a11a5f58ab0On Knowledge Transfer in
Object Class Recognition
A dissertation approved by
TECHNISCHE UNIVERSITÄT DARMSTADT
Fachbereich Informatik
for the degree of
Doktor-Ingenieur (Dr.-Ing.)
presented by
Dipl.-Inform.
born in Mainz, Germany
Prof. Dr.-Ing. Michael Goesele, examiner
Prof. Martial Hebert, Ph.D., co-examiner
Prof. Dr. Bernt Schiele, co-examiner
Date of Submission: 12th of August, 2010
Date of Defense: 23rd of September, 2010
Darmstadt, 2010
D17
('37718254', 'Michael Stark', 'michael stark')
3baa3d5325f00c7edc1f1427fcd5bdc6a420a63fEnhancing Convolutional Neural Networks for Face Recognition with
Occlusion Maps and Batch Triplet Loss
aSchool of Engineering and Technology, University of Hertfordshire, Hatfield AL10 9AB, UK
bIDscan Biometrics (a GBG company), London E14 9QD, UK
('2133352', 'Li Meng', 'li meng')
('46301106', 'Margaret Hartnett', 'margaret hartnett')
3b9b200e76a35178da940279d566bbb7dfebb787Learning Channel Inter-dependencies at Multiple Scales on Dense
Networks for Face Recognition
109 Research Way — PO Box 6109 Morgantown, West Virginia
West Virginia University
November 29, 2017
('16145333', 'Qiangchang Wang', 'qiangchang wang')
('1822413', 'Guodong Guo', 'guodong guo')
('23981570', 'Mohammad Iqbal Nouyed', 'mohammad iqbal nouyed')
qw0007@mix.wvu.edu, guodong.guo@mail.wvu.edu, monouyed@mix.wvu.edu
3b408a3ca6fb39b0fda4d77e6a9679003b2dc9abImproving Classification by Improving Labelling:
Introducing Probabilistic Multi-Label Object Interaction Recognition
Walterio Mayol-Cuevas
University of Bristol
('2052236', 'Michael Wray', 'michael wray')
('3420479', 'Davide Moltisanti', 'davide moltisanti')
('1728459', 'Dima Damen', 'dima damen')
.@bristol.ac.uk
3b02aaccc9f063ae696c9d28bb06a8cd84b2abb8Who Leads the Clothing Fashion: Style, Color, or Texture?
A Computational Study
School of Computer Science, Wuhan University, P.R. China
Shenzhen Key Laboratory of Spatial Smart Sensing and Service, Shenzhen University, P.R. China
School of Data of Computer Science, Sun Yat-sen University, P.R. China
University of South Carolina, USA
('4793870', 'Qin Zou', 'qin zou')
('37361540', 'Zheng Zhang', 'zheng zhang')
('40102806', 'Qian Wang', 'qian wang')
('1720431', 'Qingquan Li', 'qingquan li')
('40451106', 'Long Chen', 'long chen')
('10829233', 'Song Wang', 'song wang')
3ba8f8b6bfb36465018430ffaef10d2caf3cfa7eLocal Directional Number Pattern for Face
Analysis: Face and Expression Recognition
('2525887', 'Adin Ramirez Rivera', 'adin ramirez rivera')
('1685505', 'Oksam Chae', 'oksam chae')
3b80bf5a69a1b0089192d73fa3ace2fbb52a4ad5
3b9d94752f8488106b2c007e11c193f35d941e92CVPR
#2052
CVPR 2013 Submission #2052. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.
Appearance, Visual and Social Ensembles for
Face Recognition in Personal Photo Collections
Anonymous CVPR submission
Paper ID 2052
3bb6570d81685b769dc9e74b6e4958894087f3f1Hu-Fu: Hardware and Software Collaborative
Attack Framework against Neural Networks
Beijing National Research Center for Information Science and Technology
Tsinghua University
('3493074', 'Wenshuo Li', 'wenshuo li')
('1909938', 'Jincheng Yu', 'jincheng yu')
('6636914', 'Xuefei Ning', 'xuefei ning')
('2892980', 'Pengjun Wang', 'pengjun wang')
('49988678', 'Qi Wei', 'qi wei')
('47904166', 'Yu Wang', 'yu wang')
('39150998', 'Huazhong Yang', 'huazhong yang')
{lws17@mails.tsinghua.edu.cn, yu-wang@tsinghua.edu.cn}
3b557c4fd6775afc80c2cf7c8b16edde125b270eFace Recognition: Perspectives from the
Real-World
Institute for Infocomm Research, A*STAR
1 Fusionopolis Way, #21-01 Connexis (South Tower), Singapore 138632.
Phone: +65 6408 2071; Fax: +65 6776 1378;
('1709001', 'Bappaditya Mandal', 'bappaditya mandal')
E-mail: bmandal@i2r.a-star.edu.sg
3b3482e735698819a6a28dcac84912ec01a9eb8aIndividual Recognition Using Gait Energy Image
Center for Research in Intelligent Systems
University of California, Riverside, California 92521, USA
jhan,bhanu
('1699904', 'Ju Han', 'ju han')
('1707159', 'Bir Bhanu', 'bir bhanu')
@cris.ucr.edu
3b37d95d2855c8db64bd6b1ee5659f87fce36881ADA: A Game-Theoretic Perspective on Data Augmentation for Object Detection
University of Illinois at Chicago
Carnegie Mellon University
University of Illinois at Chicago
('2761655', 'Sima Behpour', 'sima behpour')
('37991449', 'Kris M. Kitani', 'kris m. kitani')
('1753269', 'Brian D. Ziebart', 'brian d. ziebart')
sbehpo2@uic.edu
kkitani@cs.cmu.edu
bziebart@uic.edu
3be7b7eb11714e6191dd301a696c734e8d07435f
3be027448ad49a79816cd21dcfcce5f4e1cec8a8Actively Selecting Annotations Among Objects and Attributes
University of Texas at Austin
('1770205', 'Adriana Kovashka', 'adriana kovashka')
('2259154', 'Sudheendra Vijayanarasimhan', 'sudheendra vijayanarasimhan')
('1794409', 'Kristen Grauman', 'kristen grauman')
{adriana, svnaras, grauman}@cs.utexas.edu
3bd56f4cf8a36dd2d754704bcb71415dcbc0a165Robust Regression
Robotics Institute, Carnegie Mellon University
('39792229', 'Dong Huang', 'dong huang')
('1707876', 'Fernando De la Torre', 'fernando de la torre')
3b410ae97e4564bc19d6c37bc44ada2dcd608552Scalability Analysis of Audio-Visual Person
Identity Verification
1 Communications Laboratory,
Universit´e catholique de Louvain, B-1348 Belgium,
2 IDIAP, CH-1920 Martigny,
Switzerland
('34964585', 'Jacek Czyz', 'jacek czyz')
('1751569', 'Samy Bengio', 'samy bengio')
('2510802', 'Christine Marcel', 'christine marcel')
('1698047', 'Luc Vandendorpe', 'luc vandendorpe')
czyz@tele.ucl.ac.be,
{Samy.Bengio,Christine.Marcel}@idiap.ch
3b470b76045745c0ef5321e0f1e0e6a4b1821339Consensus of Regression for Occlusion-Robust
Facial Feature Localization
Rutgers University, Piscataway, NJ 08854, USA
2 Adobe Research, San Jose, CA 95110, USA
('39960064', 'Xiang Yu', 'xiang yu')
('1721019', 'Jonathan Brandt', 'jonathan brandt')
('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')
6f5ce5570dc2960b8b0e4a0a50eab84b7f6af5cbLow Resolution Face Recognition Using a
Two-Branch Deep Convolutional Neural Network
Architecture
('19189138', 'Erfan Zangeneh', 'erfan zangeneh')
('1772623', 'Mohammad Rahmati', 'mohammad rahmati')
('3071758', 'Yalda Mohsenzadeh', 'yalda mohsenzadeh')
6f288a12033fa895fb0e9ec3219f3115904f24deLearning Expressionlets via Universal Manifold
Model for Dynamic Facial Expression Recognition
('1730228', 'Mengyi Liu', 'mengyi liu')
('1685914', 'Shiguang Shan', 'shiguang shan')
('3373117', 'Ruiping Wang', 'ruiping wang')
('1710220', 'Xilin Chen', 'xilin chen')
6fa0c206873dcc5812f7ea74a48bb4bf4b273494Real-time Mobile Facial Expression Recognition System – A Case Study
Department of Computer Engineering
The University of Texas at Dallas, Richardson, TX
('2774175', 'Myunghoon Suk', 'myunghoon suk')
{mhsuk, praba}@utdallas.edu
6f9824c5cb5ac08760b08e374031cbdabc953baeUnconstrained Human Identification Using Comparative Facial Soft Biometrics
Nawaf Y. Almudhahka
University of Southampton
Southampton, United Kingdom
('1727698', 'Mark S. Nixon', 'mark s. nixon')
('31534955', 'Jonathon S. Hare', 'jonathon s. hare')
{nya1g14,msn,jsh2}@ecs.soton.ac.uk
6f2dc51d607f491dbe6338711c073620c85351ac
6fed504da4e192fe4c2d452754d23d3db4a4e5e3Learning Deep Features via Congenerous Cosine Loss for Person Recognition
1 SenseTime Group Ltd., Beijing, China
The Chinese University of Hong Kong, New Territories, Hong Kong
('1715752', 'Yu Liu', 'yu liu')
('1929886', 'Hongyang Li', 'hongyang li')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
liuyu@sensetime.com, {yangli, xgwang}@ee.cuhk.edu.hk
6f957df9a7d3fc4eeba53086d3d154fc61ae88dfModeling and tracking facial deformations: applications to describing facial expressions in the context of sign language
To cite this version:
Hugo Mercier. Modeling and tracking facial deformations: applications to describing facial expressions in the context of sign language. Human-machine interaction [cs.HC]. Université Paul Sabatier - Toulouse III, 2007. French.
HAL Id: tel-00185084
https://tel.archives-ouvertes.fr/tel-00185084
Submitted on 5 Nov 2007
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
('3029015', 'Hugo Mercier', 'hugo mercier')
('3029015', 'Hugo Mercier', 'hugo mercier')
6f26ab7edd971148723d9b4dc8ddf71b36be9bf7Differences in Abundances of Cell-Signalling Proteins in
Blood Reveal Novel Biomarkers for Early Detection Of
Clinical Alzheimer’s Disease
Centre for Bioinformatics, Biomarker Discovery and Information-Based Medicine, The University of Newcastle, Callaghan, Australia, 2 Departamento de Engenharia de
Produção, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, Brazil
('8423987', 'Mateus Rocha de Paula', 'mateus rocha de paula')
('34861417', 'Regina Berretta', 'regina berretta')
('1738680', 'Pablo Moscato', 'pablo moscato')
6f75697a86d23d12a14be5466a41e5a7ffb79fad
6f7d06ced04ead3b9a5da86b37e7c27bfcedbbddPages 51.1-51.12
DOI: https://dx.doi.org/10.5244/C.30.51
6f7a8b3e8f212d80f0fb18860b2495be4c363eacCreating Capsule Wardrobes from Fashion Images
UT-Austin
UT-Austin
('22211024', 'Wei-Lin Hsiao', 'wei-lin hsiao')
('1794409', 'Kristen Grauman', 'kristen grauman')
kimhsiao@cs.utexas.edu
grauman@cs.utexas.edu
6f6b4e2885ea1d9bea1bb2ed388b099a5a6d9b81Structured Output SVM Prediction of Apparent Age,
Gender and Smile From Deep Features
Michal Uřičář
CMP, Dept. of Cybernetics
FEE, CTU in Prague
Computer Vision Lab
D-ITET, ETH Zurich
Computer Vision Lab
D-ITET, ETH Zurich
PSI, ESAT, KU Leuven
CVL, D-ITET, ETH Zurich
Jiří Matas
CMP, Dept. of Cybernetics
FEE, CTU in Prague
('1732855', 'Radu Timofte', 'radu timofte')
('2173683', 'Rasmus Rothe', 'rasmus rothe')
('1681236', 'Luc Van Gool', 'luc van gool')
uricamic@cmp.felk.cvut.cz
radu.timofte@vision.ee.ethz.ch
rrothe@vision.ee.ethz.ch
vangool@vision.ee.ethz.ch
matas@cmp.felk.cvut.cz
6f08885b980049be95a991f6213ee49bbf05c48dThis article appeared in a journal published by Elsevier. The attached
copy is furnished to the author for internal non-commercial research
and education use, including for instruction at the authors institution
and sharing with colleagues.
Other uses, including reproduction and distribution, or selling or
licensing copies, or posting to personal, institutional or third party
websites are prohibited.
In most cases authors are permitted to post their version of the
article (e.g. in Word or Tex form) to their personal website or
institutional repository. Authors requiring further information
regarding Elsevier’s archiving and manuscript policies are
encouraged to visit:
http://www.elsevier.com/authorsrights
6f0900a7fe8a774a1977c5f0a500b2898bcbe1491
Quotient Based Multiresolution Image Fusion of Thermal
and Visual Images Using Daubechies Wavelet Transform
for Human Face Recognition
Tripura University (A Central University
Suryamaninagar, Tripura 799130, India
Jadavpur University
Kolkata, West Bengal 700032, India
*AICTE Emeritus Fellow
('1694317', 'Mrinal Kanti Bhowmik', 'mrinal kanti bhowmik')
('1721942', 'Debotosh Bhattacharjee', 'debotosh bhattacharjee')
('1729425', 'Mita Nasipuri', 'mita nasipuri')
('1679476', 'Dipak Kumar Basu', 'dipak kumar basu')
('1727663', 'Mahantapas Kundu', 'mahantapas kundu')
mkb_cse@yahoo.co.in
debotosh@indiatimes.com, mitanasipuri@gmail.com, dipakkbasu@gmail.com, mkundu@cse.jdvu.ac.in
6fea198a41d2f6f73e47f056692f365c8e6b04ceVideo Captioning with Boundary-aware Hierarchical Language
Decoding and Joint Video Prediction
Nanyang Technological University
Nanyang Technological University
Singapore, Singapore
Singapore, Singapore
Nanyang Technological University
Singapore, Singapore
Shafiq Joty
Nanyang Technological University
Singapore, Singapore
('8668622', 'Xiangxi Shi', 'xiangxi shi')
('1688642', 'Jianfei Cai', 'jianfei cai')
('2174964', 'Jiuxiang Gu', 'jiuxiang gu')
xxshi@ntu.edu.sg
JGU004@e.ntu.edu.sg
asjfcai@ntu.edu.sg
srjoty@ntu.edu.sg
6fbb179a4ad39790f4558dd32316b9f2818cd106Input Aggregated Network for Face Video Representation
Beijing Laboratory of IIT, School of Computer Science, Beijing Institute of Technology, Beijing, China
Stony Brook University, Stony Brook, USA
('40061483', 'Zhen Dong', 'zhen dong')
('3306427', 'Su Jia', 'su jia')
('1690083', 'Chi Zhang', 'chi zhang')
('35371203', 'Mingtao Pei', 'mingtao pei')
6f84e61f33564e5188136474f9570b1652a0606fDual Motion GAN for Future-Flow Embedded Video Prediction
Carnegie Mellon University
('40250403', 'Xiaodan Liang', 'xiaodan liang')
('3682478', 'Lisa Lee', 'lisa lee')
{xiaodan1,lslee}@cs.cmu.edu
6f35b6e2fa54a3e7aaff8eaf37019244a2d39ed3DOI 10.1007/s00530-005-0177-4
R E G U L A R PA P E R
Learning probabilistic classifiers for human–computer
interaction applications
Published online: 10 May 2005
© Springer-Verlag 2005
('1703601', 'Nicu Sebe', 'nicu sebe')
('1695527', 'Theo Gevers', 'theo gevers')
6f3054f182c34ace890a32fdf1656b583fbc7445Article
Age Estimation Robust to Optical and Motion
Blurring by Deep Residual CNN
Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro, 1-gil, Jung-gu
Received: 9 March 2018; Accepted: 10 April 2018; Published: 13 April 2018
('31515471', 'Jeon Seong Kang', 'jeon seong kang')
('31864414', 'Chan Sik Kim', 'chan sik kim')
('29944844', 'Se Woon Cho', 'se woon cho')
('4634733', 'Kang Ryoung Park', 'kang ryoung park')
Seoul 100-715, Korea; kjs2605@dgu.edu (J.S.K.); kimchsi9004@naver.com (C.S.K.);
lyw941021@dongguk.edu (Y.W.L.); jsu319@naver.com (S.W.C.)
* Correspondence: parkgr@dongguk.edu; Tel.: +82-10-3111-7022; Fax: +82-2-2277-8735
6fa3857faba887ed048a9e355b3b8642c6aab1d8Face Recognition in Challenging Environments:
An Experimental and Reproducible Research
Survey
('2121764', 'Laurent El Shafey', 'laurent el shafey')
6fda12c43b53c679629473806c2510d84358478fJournal of Academic and Applied Studies
Vol. 1(1), June 2011, pp. 29-38
A Training Model for Fuzzy Classification
System
Islamic Azad University
Iran
Available online @ www.academians.org
Email:a.jamshidnejad@yahoo.com
6fef65bd7287b57f0c3b36bf8e6bc987fd161b7dDeep Discriminative Model for Video
Classification
Center for Machine Vision and Signal Analysis (CMVS)
University of Oulu, Finland
('2014145', 'Mohammad Tavakolian', 'mohammad tavakolian')
('1751372', 'Abdenour Hadid', 'abdenour hadid')
firstname.lastname@oulu.fi
6f7ce89aa3e01045fcd7f1c1635af7a09811a1fe978-1-4673-0046-9/12/$26.00 ©2012 IEEE
937
ICASSP 2012
6fe2efbcb860767f6bb271edbb48640adbd806c3SOFT BIOMETRICS: HUMAN IDENTIFICATION USING COMPARATIVE DESCRIPTIONS
Soft Biometrics; Human Identification using
Comparative Descriptions
('34386180', 'Daniel A. Reid', 'daniel a. reid')
('1727698', 'Mark S. Nixon', 'mark s. nixon')
('2093843', 'Sarah V. Stevenage', 'sarah v. stevenage')
6fdc0bc13f2517061eaa1364dcf853f36e1ea5aeDAISEE: Dataset for Affective States in
E-Learning Environments
1 Microsoft India R&D Pvt. Ltd.
2 Department of Computer Science, IIT Hyderabad
('50178849', 'Abhay Gupta', 'abhay gupta')
('3468123', 'Richik Jaiswal', 'richik jaiswal')
('3468212', 'Sagar Adhikari', 'sagar adhikari')
('1973980', 'Vineeth Balasubramanian', 'vineeth balasubramanian')
abhgup@microsoft.com
{cs12b1032, cs12b1034, vineethnb}@iith.ac.in
6f5151c7446552fd6a611bf6263f14e729805ec7Facial Action Unit Recognition using Filtered
Local Binary Pattern Features with Bootstrapped
and Weighted ECOC Classifiers
R. S. Smith and T. Windeatt
Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford,
Surrey GU2 7XH, UK
{Raymond.Smith, T.Windeatt}@surrey.ac.uk
Abstract. Within the context of face expression classification using the facial action
coding system (FACS), we address the problem of detecting facial action units (AUs).
The method adopted is to train a single error-correcting output code (ECOC) multiclass
classifier to estimate the probabilities that each one of several commonly occurring AU
groups is present in the probe image. Platt scaling is used to calibrate the ECOC outputs
to probabilities, and appropriate sums of these probabilities are taken to obtain a separate
probability for each AU individually. Feature extraction is performed by generating a
large number of local binary pattern (LBP) features and then selecting from these using
fast correlation-based filtering (FCBF). The bias and variance properties of the classifier
are measured, and we show that both these sources of error can be reduced by enhancing
ECOC through the application of bootstrapping and class-separability weighting.
Introduction
The facial action coding system (FACS) of Ekman and Friesen is commonly employed
in applications which perform automatic facial expression recognition. In this method,
individual facial movements are characterised as one of 44 types known as action units
(AUs). Groups of AUs may then be mapped to emotions using a standard code. Note,
however, that AUs are not necessarily independent, as the presence of one AU may affect
the appearance of another. They may also occur at different intensities and may occur on
only one side of the face. In this paper we focus on recognising six AUs from the region
around the eyes, as illustrated in Fig. 1.
Initial representation methods for AU classification were based on measuring the relative
position of a large number of landmark points on the face. It has been found, however,
that comparable or better results can be obtained by taking a more holistic approach to
feature extraction, using methods such as Gabor wavelets or principal components
analysis (PCA). In this paper we compare two such methods, namely PCA and local
binary pattern (LBP)
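As a reference point for the feature extraction step, a plain (unfiltered) LBP operator — the basic building block, not the FCBF-selected variant used in the paper — can be sketched as:

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour LBP: each interior pixel receives an 8-bit code whose
    bits record whether the corresponding neighbour is >= the centre value."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                  img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        code |= (n >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img):
    """256-bin normalised histogram of LBP codes, a typical texture feature."""
    h = np.bincount(lbp_codes(img).ravel(), minlength=256)
    return h / h.sum()
```

On a flat image every neighbour comparison succeeds, so every code is 255; real images produce the varied code histograms that selection methods such as FCBF then filter.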
03c56c176ec6377dddb6a96c7b2e95408db65a7aA Novel Geometric Framework on Gram Matrix
Trajectories for Human Behavior Understanding
('46243486', 'Anis Kacem', 'anis kacem')
('2909056', 'Mohamed Daoudi', 'mohamed daoudi')
('2125606', 'Boulbaba Ben Amor', 'boulbaba ben amor')
('2507859', 'Stefano Berretti', 'stefano berretti')
03d9ccce3e1b4d42d234dba1856a9e1b28977640
0322e69172f54b95ae6a90eb3af91d3daa5e36eaFace Classification using Adjusted Histogram in
Grayscale
036c41d67b49e5b0a578a401eb31e5f46b3624e0The Tower Game Dataset: A Multimodal Dataset
for Analyzing Social Interaction Predicates
∗ SRI International
University of California, Santa Cruz
University of California, Berkeley
('1955011', 'David A. Salter', 'david a. salter')
('1860011', 'Amir Tamrakar', 'amir tamrakar')
('1832513', 'Behjat Siddiquie', 'behjat siddiquie')
('4599641', 'Mohamed R. Amer', 'mohamed r. amer')
('1696401', 'Ajay Divakaran', 'ajay divakaran')
('40530418', 'Brian Lande', 'brian lande')
('2108704', 'Darius Mehri', 'darius mehri')
Email: {david.salter, amir.tamrakar, behjat.siddiquie, mohamed.amer, ajay.divakaran}@sri.com
Email: brianlande@soe.ucsc.edu
Email: darius mehri@berkeley.edu
03b03f5a301b2ff88ab3bb4969f54fd9a35c7271Multi-kernel learning of deep convolutional features for action recognition
Imperial College London
Noah’s Ark Lab (Huawei Technologies UK)
Cortexica Vision Systems Limited
('39599054', 'Biswa Sengupta', 'biswa sengupta')
('29742002', 'Yu Qian', 'yu qian')
b.sengupta@imperial.ac.uk
03f7041515d8a6dcb9170763d4f6debd50202c2bClustering Millions of Faces by Identity
('40653304', 'Charles Otto', 'charles otto')
('7496032', 'Dayong Wang', 'dayong wang')
('40217643', 'Anil K. Jain', 'anil k. jain')
03ce2ff688f9b588b6f264ca79c6857f0d80ceaeAttention Clusters: Purely Attention Based
Local Feature Integration for Video Classification
Tsinghua University, 2Rutgers University, 3Massachusetts Institute of Technology, 4Baidu IDL
('1716690', 'Xiang Long', 'xiang long')
('2551285', 'Chuang Gan', 'chuang gan')
('1732213', 'Gerard de Melo', 'gerard de melo')
('3045089', 'Jiajun Wu', 'jiajun wu')
('48033101', 'Xiao Liu', 'xiao liu')
('35247507', 'Shilei Wen', 'shilei wen')
03b99f5abe0e977ff4c902412c5cb832977cf18eOf Gods and Goats: Weakly Supervised
Learning of Figurative Art
Elliot J. Crowley
Department of Engineering Science
University of Oxford
('1688869', 'Andrew Zisserman', 'andrew zisserman')
elliot@robots.ox.ac.uk
az@robots.ox.ac.uk
038ce930a02d38fb30d15aac654ec95640fe5cb0Approximate Structured Output Learning for Constrained Local
Models with Application to Real-time Facial Feature Detection and
Tracking on Low-power Devices
('40474289', 'Shuai Zheng', 'shuai zheng')
('3274976', 'Paul Sturgess', 'paul sturgess')
('1730268', 'Philip H. S. Torr', 'philip h. s. torr')
03167776e17bde31b50f294403f97ee068515578Chapter 11. Facial Expression Analysis
University of Pittsburgh, Pittsburgh, PA 15260, USA
1 Principles of Facial Expression Analysis
1.1 What Is Facial Expression Analysis?
Facial expressions are the facial changes in response to a person’s internal emotional states,
intentions, or social communications. Facial expression analysis has been an active research
topic for behavioral scientists since the work of Darwin in 1872 [18, 22, 25, 71]. Suwa et
al. [76] presented an early attempt to automatically analyze facial expressions by tracking the
motion of 20 identified spots on an image sequence in 1978. After that, much progress has
been made to build computer systems to help us understand and use this natural form of human
communication [6, 7, 17, 20, 28, 39, 51, 55, 65, 78, 81, 92, 93, 94, 96].
In this chapter, facial expression analysis refers to computer systems that attempt to auto-
matically analyze and recognize facial motions and facial feature changes from visual informa-
tion. Facial expression analysis is sometimes confused with emotion analysis in the
computer vision domain. For emotion analysis, higher level knowledge is required. For exam-
ple, although facial expressions can convey emotion, they can also express intention, cognitive
processes, physical effort, or other intra- or interpersonal meanings. Interpretation is aided by
context, body gesture, voice, individual differences, and cultural factors as well as by facial
configuration and timing [10, 67, 68]. Computer facial expression analysis systems need to
analyze the facial actions regardless of context, culture, gender, and so on.
Accomplishments in related areas such as psychological studies, human movement analysis,
face detection, face tracking, and recognition make automatic facial expression analysis
possible. Automatic facial expression analysis can be applied in many areas such as
emotion and paralinguistic communication, clinical psychology, psychiatry, neurology, pain
assessment, lie detection, intelligent environments, and multimodal human computer interface
(HCI).
1.2 Basic Structure of Facial Expression Analysis Systems
Facial expression analysis includes both measurement of facial motion and recognition of ex-
pression. The general approach to automatic facial expression analysis (AFEA) consists of
('40383812', 'Ying-Li Tian', 'ying-li tian')
('1733113', 'Takeo Kanade', 'takeo kanade')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
1 IBM T. J. Watson Research Center, Hawthorne, NY 10532, USA. yltian@us.ibm.com
2 Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA. tk@cs.cmu.edu
jeffcohn@pitt.edu
0334a8862634988cc684dacd4279c5c0d03704daFaceNet2ExpNet: Regularizing a Deep Face Recognition Net for
Expression Recognition
University of Maryland, College Park
2 Siemens Healthcare Technology Center, Princeton, New Jersey
('1700765', 'Hui Ding', 'hui ding')
('1682187', 'Shaohua Kevin Zhou', 'shaohua kevin zhou')
('9215658', 'Rama Chellappa', 'rama chellappa')
03c1fc9c3339813ed81ad0de540132f9f695a0f8Proceedings of Machine Learning Research 81:1–15, 2018
Conference on Fairness, Accountability, and Transparency
Gender Shades: Intersectional Accuracy Disparities in
Commercial Gender Classification∗
MIT Media Lab 75 Amherst St. Cambridge, MA 02139
Microsoft Research 641 Avenue of the Americas, New York, NY 10011
Editors: Sorelle A. Friedler and Christo Wilson
('38222513', 'Joy Buolamwini', 'joy buolamwini')
('2076288', 'Timnit Gebru', 'timnit gebru')
joyab@mit.edu
timnit.gebru@microsoft.com
0339459a5b5439d38acd9c40a0c5fea178ba52fbD|C|I&I 2009 Prague
Multimodal recognition of emotions in car
environments
030ef31b51bd4c8d0d8f4a9a32b80b9192fe4c3f11936 • The Journal of Neuroscience, August 26, 2015 • 35(34):11936 –11945
Behavioral/Cognitive
Inhibition-Induced Forgetting Results from Resource
Competition between Response Inhibition and Memory
Encoding Processes
Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
Response inhibition is a key component of executive control, but its relation to other cognitive processes is not well understood. We
recently documented the “inhibition-induced forgetting effect”: no-go cues are remembered more poorly than go cues. We attributed this
effect to central-resource competition, whereby response inhibition saps attention away from memory encoding. However, this proposal
is difficult to test with behavioral means alone. We therefore used fMRI in humans to test two neural predictions of the “common resource
hypothesis”: (1) brain regions associated with response inhibition should exhibit greater resource demands during encoding of subse-
quently forgotten than remembered no-go cues; and (2) this higher inhibitory resource demand should lead to memory encoding regions
having less resources available during encoding of subsequently forgotten no-go cues. Participants categorized face stimuli by gender in
a go/no-go task and, following a delay, performed a surprise recognition memory test for those faces. Replicating previous findings,
memory was worse for no-go than for go stimuli. Crucially, forgetting of no-go cues was predicted by high inhibitory resource demand, as
quantified by the trial-by-trial ratio of activity in neural “no-go” versus “go” networks. Moreover, this index of inhibitory demand
exhibited an inverse trial-by-trial relationship with activity in brain regions responsible for the encoding of no-go cues into memory,
notably the ventrolateral prefrontal cortex. This seesaw pattern between the neural resource demand of response inhibition and activity
related to memory encoding directly supports the hypothesis that response inhibition temporarily saps attentional resources away from
stimulus processing.
Key words: attention; cognitive control; memory; response inhibition
Significance Statement
Recent behavioral experiments showed that inhibiting a motor response to a stimulus (a “no-go cue”) impairs subsequent
memory for that cue. Here, we used fMRI to test whether this “inhibition-induced forgetting effect” is caused by competition for
neural resources between the processes of response inhibition and memory encoding. We found that trial-by-trial variations in
neural inhibitory resource demand predicted subsequent forgetting of no-go cues and that higher inhibitory demand was further-
more associated with lower concurrent activation in brain regions responsible for successful memory encoding of no-go cues.
Thus, motor inhibition and stimulus encoding appear to compete with each other: when more resources have to be devoted to
inhibiting action, less are available for encoding sensory stimuli.
Introduction
Response inhibition, the ability to preempt or cancel goal-
inappropriate actions, is considered a core cognitive control
Received Feb. 6, 2015; revised July 22, 2015; accepted July 24, 2015.
Author contributions: Y.-C.C. and T.E. designed research; Y.-C.C. performed research; Y.-C.C. analyzed data;
Y.-C.C. and T.E. wrote the paper.
This work was supported in part by National Institute of Mental Health Award R01 MH 087610 to T.E.
The authors declare no competing financial interests.
DOI:10.1523/JNEUROSCI.0519-15.2015
Copyright © 2015 the authors
0270-6474/15/3511936-10$15.00/0
function (Logan and Cowan, 1984; Aron, 2007), an impairment
that contributes to impulsive symptoms of multiple psychiatric
diseases,
including obsessive-compulsive disorder, substance
abuse, and attention-deficit/hyperactivity disorder (Horn et al.,
2003; de Wit, 2009). However, the relation of response inhibition
to other cognitive control functions, and to traditional cognitive
domains, such as perception, memory, and attention, remains
poorly understood (Jurado and Rosselli, 2007; Miyake and Fried-
man, 2012).
A recent behavioral study has shed new light on this issue by
documenting an “inhibition-induced forgetting” effect, whereby
inhibiting responses to no-go or stop cues impaired subsequent
('2846298', 'Yu-Chin Chiu', 'yu-chin chiu')
('1900710', 'Tobias Egner', 'tobias egner')
('2846298', 'Yu-Chin Chiu', 'yu-chin chiu')
LSRC, Box 90999, Durham, NC 27708. E-mail: chiu.yuchin@duke.edu.
03f98c175b4230960ac347b1100fbfc10c100d0cSupervised Descent Method and its Applications to Face Alignment
The Robotics Institute, Carnegie Mellon University, Pittsburgh PA
('3182065', 'Xuehan Xiong', 'xuehan xiong')
('1707876', 'Fernando De la Torre', 'fernando de la torre')
xxiong@andrew.cmu.edu
ftorre@cs.cmu.edu
032825000c03b8ab4c207e1af4daeb1f225eb025J. Appl. Environ. Biol. Sci., 7(10)159-164, 2017
ISSN: 2090-4274
© 2017, TextRoad Publication
Journal of Applied Environmental
and Biological Sciences
www.textroad.com
A Novel Approach for Human Face Detection in Color Images Using Skin
Color and Golden Ratio
Bacha Khan University, Charsadda, KPK, Pakistan
Abdul Wali Khan University, Mardan, KPK, Pakistan
Received: May 9, 2017
Accepted: August 2, 2017
('12144785', 'Faizan Ullah', 'faizan ullah')
('49669073', 'Dilawar Shah', 'dilawar shah')
('46463663', 'Sabir Shah', 'sabir shah')
('47160013', 'Abdus Salam', 'abdus salam')
('12579194', 'Shujaat Ali', 'shujaat ali')
03264e2e2709d06059dd79582a5cc791cbef94b1Convolutional Neural Networks for Facial Attribute-based Active Authentication
On Mobile Devices
University of Maryland, College Park
University of Maryland, College Park
MD, USA
MD, USA
('9215658', 'Rama Chellappa', 'rama chellappa')
('3383048', 'Pouya Samangouei', 'pouya samangouei')
pouya@umiacs.umd.org
rama@umiacs.umd.edu
03a8f53058127798bc2bc0245d21e78354f6c93bMax-Margin Additive Classifiers for Detection
Sam Hare
VGG Reading Group
October 30, 2009
('35208858', 'Subhransu Maji', 'subhransu maji')
('39668247', 'Alexander C. Berg', 'alexander c. berg')
03fc466fdbc8a2efb6e3046fcc80e7cb7e86dc20A Real Time System for Model-based Interpretation of
the Dynamics of Facial Expressions
Technische Universität München
Boltzmannstr. 3, 85748 Garching
1. Motivation
Recent progress in the field of Computer Vision allows
intuitive interaction between humans and technical systems
via speech, gestures, or facial expressions. Model-based
techniques facilitate accurately interpreting images with faces
by exploiting a priori knowledge, such as shape and texture
information. This renders them an indispensable component
for realizing the paradigm of intuitive human-machine interaction.
Our demonstration shows model-based recognition of
facial expressions in real-time via the state-of-the-art
Candide-3 face model [1] as visible in Figure 1. This three-
dimensional and deformable model is highly appropriate
for real-world face interpretation applications. However,
its complexity challenges the task of model fitting and we
tackle this challenge with an algorithm that has been auto-
matically learned from a large set of images. This solution
provides both, high accuracy and runtime. Note, that our
system is not limited to facial expression estimation. Gaze
direction, gender and age are also estimated.
2. Face Model Fitting
Models reduce the large amount of image data to a
small number of model parameters to describe the im-
age content, which facilitates and accelerates the subse-
quent interpretation task. Cootes et al. [3] introduced
modelling shapes with Active Contours; later enhancements
expanded shape models with texture information [2].
Recent research considers modelling faces in 3D space [1, 10].
Fitting the face model is the computational challenge of
finding the parameters that best describe the face within a
given image. This task is often addressed by minimizing
an objective function, such as the pixel error between the
model’s rendered surface and the underlying image content.
This section describes the four main components of model-
based techniques, see [9].
The face model contains a parameter vector p that repre-
sents its configurations. We integrate the complex and de-
formable 3D wire frame Candide-3 face model [1]. The
model consists of 116 anatomical landmarks and its param-
eter vector p = (rx, ry, rz, s, tx, ty, σ, α)T describes the
affine transformation (rx, ry, rz, s, tx, ty) and the deforma-
tion (σ, α). The 79 deformation parameters indicate the
shape of facial components such as the mouth, the eyes,
and the eyebrows; see Figure 2.
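To make the parameterization concrete, here is a hedged sketch (ours, not the authors' code) of evaluating such a parameter vector on a landmark model; `mean_shape`, `shape_units`, and `action_units` stand in for the Candide-3 mean landmarks and deformation bases:

```python
import numpy as np

def euler_rotation(rx, ry, rz):
    """Rotation matrix composed from rotations about the x, y and z axes."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project_model(mean_shape, shape_units, action_units, p):
    """Apply p = (rx, ry, rz, s, tx, ty, sigma, alpha) to a 3D landmark model:
    deform the mean shape, rotate and scale it, then drop depth (weak perspective)."""
    rx, ry, rz, s, tx, ty = p[:6]
    n_sigma = shape_units.shape[0]
    sigma = p[6:6 + n_sigma]              # static shape deformation coefficients
    alpha = p[6 + n_sigma:]               # dynamic action-unit coefficients
    g = (mean_shape
         + np.tensordot(sigma, shape_units, axes=1)
         + np.tensordot(alpha, action_units, axes=1))
    g = s * (g @ euler_rotation(rx, ry, rz).T)
    return g[:, :2] + np.array([tx, ty])  # 2D image coordinates
```

With all deformation coefficients at zero and unit scale, the projection simply reproduces the mean shape, which is a convenient sanity check when fitting.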
The localization algorithm computes an initial estimate of
the model parameters that is further refined by the subse-
quent fitting algorithm. Our system integrates the approach
of [8], which detects the model’s affine transformation in
case the image shows a frontal view face.
The objective function yields a comparable value that
specifies how accurately a parameterized model matches an
image. Traditional approaches specify the objective function
manually, a laborious and error-prone task. In contrast,
we automatically learn the objective function from a large
set of training data based on objective information theoretic
measures [9]. This approach does not require expert knowl-
edge and is applicable independently of the domain. As a re-
sult, this approach yields more robust and accurate objective
functions, which greatly facilitate the task of the associated
fitting algorithms. Accurately estimated model parameters
in turn are required to infer correct high-level information,
such as facial expression or gaze direction.
Figure 1. Interpreting expressions with the Candide-3 face model.
('1685773', 'Christoph Mayer', 'christoph mayer')
('32131501', 'Matthias Wimmer', 'matthias wimmer')
('1704997', 'Freek Stulp', 'freek stulp')
('1725709', 'Zahid Riaz', 'zahid riaz')
('36401753', 'Anton Roth', 'anton roth')
('34667371', 'Martin Eggers', 'martin eggers')
('1699132', 'Bernd Radig', 'bernd radig')
{mayerc,wimmerm,stulp,riaz,roth,eggers,radig}@in.tum.de
03b98b4a2c0b7cc7dae7724b5fe623a43eaf877bAcume: A Novel Visualization Tool for Understanding Facial
Expression and Gesture Data
03adcf58d947a412f3904a79f2ab51cfdf0e838aWorld Journal of Science and Technology 2012, 2(4):136-139
ISSN: 2231 – 2587
Available Online: www.worldjournalofscience.com
_________________________________________________________________
Proceedings of "Conference on Advances in Communication and Computing (NCACC'12)”
Held at R.C.Patel Institute of Technology, Shirpur, Dist. Dhule, Maharashtra, India
April 21, 2012
Video-based face recognition: a survey
R.C.Patel Institute of Technology, Shirpur, Dist. Dhule, Maharashtra, India
('40628915', 'Shailaja A Patil', 'shailaja a patil')
('30751046', 'Pramod J Deore', 'pramod j deore')
03104f9e0586e43611f648af1132064cadc5cc07
03f14159718cb495ca50786f278f8518c0d8c8c92015 IEEE International Conference on Control System, Computing and Engineering, Nov 27 – Nov 29, 2015 Penang, Malaysia
2015 IEEE International Conference on Control System,
Computing and Engineering (ICCSCE2015)
Technical Session 1A – DAY 1 – 27th Nov 2015
Time: 3.00 pm – 4.30 pm
Venue: Jintan
Topic: Signal and Image Processing
3.00 pm – 3.15pm
3.15 pm – 3.30pm
3.30 pm – 3.45pm
3.45 pm – 4.00pm
4.00 pm – 4.15pm
4.15 pm – 4.30pm
4.30 pm – 4.45pm
1A 01 ID3
Can Subspace Based Learning Approach Perform on Makeup Face
Recognition?
Khor Ean Yee, Pang Ying Han, Ooi Shih Yin and Wee Kuok Kwee
1A 02 ID35
Performance Evaluation of HOG and Gabor Features for Vision-based
Vehicle Detection
1A 03 ID23
Experimental Method to Pre-Process Fuzzy Bit Planes before Low-Level
Feature Extraction in Thermal Images
Chan Wai Ti and Sim Kok Swee
1A 04 ID84
Fractal-based Texture and HSV Color Features for Fabric Image Retrieval
Nanik Suciati, Darlis Herumurti and Arya Yudhi Wijaya
1A 05 ID168
Study of Automatic Melody Extraction Methods for Philippine Indigenous
Music
Jason Disuanco, Vanessa Tan, Franz de Leon
1A 06 ID211
Acoustical Comparison between Voiced and Voiceless Arabic Phonemes of
Malay
Speakers
Ali Abd Almisreb, Ahmad Farid Abidin, Nooritawati Md Tahir
*shaded cell is the proposed session chair
viii
©Faculty of Electrical Engineering, Universiti Teknologi MARA
('2715116', 'Soo Siang Teoh', 'soo siang teoh')
Tea Break @ Foyer
0394040749195937e535af4dda134206aa830258Geodesic Entropic Graphs for Dimension and
Entropy Estimation in Manifold Learning
December 16, 2003
('1759109', 'Jose A. Costa', 'jose a. costa')
('1699402', 'Alfred O. Hero', 'alfred o. hero')
0334cc0374d9ead3dc69db4816d08c917316c6c4
03c48d8376990cff9f541d542ef834728a2fcda2Temporal Action Localization in Untrimmed Videos via Multi-stage CNNs
Columbia University
New York, NY, USA
('2195345', 'Zheng Shou', 'zheng shou')
('2704179', 'Dongang Wang', 'dongang wang')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
{zs2262,dw2648,sc250}@columbia.edu
0319332ded894bf1afe43f174f5aa405b49305f0Shearlet Network-based Sparse Coding Augmented by
Facial Texture Features for Face Recognition
Ben Amar1
Research Groups on Intelligent Machines, University of Sfax, Sfax 3038, Tunisia
University of Houston, Houston, TX 77204, USA
('2791150', 'Mohamed Anouar Borgi', 'mohamed anouar borgi')
('8847309', 'Demetrio Labate', 'demetrio labate')
{anoir.borgi@ieee.org ; dlabate@math.uh.edu ;
maher.elarbi@gmail.com; chokri.benamar@ieee.org}
03ac1c694bc84a27621da6bfe73ea9f7210c6d45Chapter 1
Introduction to information security
foundations and applications
1.1 Background
Information security has extended to include several research directions like user
authentication and authorization, network security, hardware security, software secu-
rity, and data cryptography. Information security has become a crucial need for
protecting almost all information transaction applications. Security is considered as
an important science discipline whose many multifaceted complexities deserve the
synergy of the computer science and engineering communities.
Recently, due to the proliferation of Information and Communication Tech-
nologies, information security has started to cover emerging topics such as cloud
computing security, smart cities’ security and privacy, healthcare and telemedicine,
the Internet-of-Things (IoT) security [1], the Internet-of-Vehicles security, and sev-
eral types of wireless sensor networks security [2,3]. In addition, information security
has extended further to cover not only technical security problems but also social and
organizational security challenges [4,5].
Traditional systems-development approaches focused on usability, with security left to
the last stage at lower priority. Newer design approaches follow a security-by-design
process, in which security is considered from the earliest phases of design. Newly
designed systems should be well protected against known security attacks: deploying
systems such as IoT or healthcare platforms without adequate security may lead to
leakage of sensitive data and, in some cases, to life-threatening situations.
Taking the social aspect into account, security education is a vital need for both
practitioners and system users [6]. Users’ misbehaviour due to a lack of security
knowledge is the weakest link in the system security chain; it constitutes a vulnerability
that may be exploited to launch attacks. A successful attack, such as a distributed
denial-of-service attack, imposes incident-recovery costs on top of downtime costs.
Electrical and Space Engineering, Luleå University of Technology
Sweden
Faculty of Engineering, Al Azhar University, Qena, Egypt
('4073409', 'Ali Ismail Awad', 'ali ismail awad')
03baf00a3d00887dd7c828c333d4a29f3aacd5f5Entropy Based Feature Selection for 3D Facial
Expression Recognition
Submitted to the
Institute of Graduate Studies and Research
in partial fulfillment of the requirements for the Degree of
Doctor of Philosophy
in
Electrical and Electronic Engineering
Eastern Mediterranean University
September 2014
Gazimağusa, North Cyprus
('1974278', 'Kamil Yurtkan', 'kamil yurtkan')
0359f7357ea8191206b9da45298902de9f054c92Going Deeper in Facial Expression Recognition using Deep Neural Networks
1 Department of Electrical and Computer Engineering
2 Department of Computer Science
University of Denver, Denver, CO
('2314025', 'Ali Mollahosseini', 'ali mollahosseini')
('38461715', 'David Chan', 'david chan')
('3093835', 'Mohammad H. Mahoor', 'mohammad h. mahoor')
ali.mollahosseini@du.edu, davidchan@cs.du.edu, and mmahoor@du.edu
0394e684bd0a94fc2ff09d2baef8059c2652ffb0Median Robust Extended Local Binary Pattern
for Texture Classification
Index Terms— Texture descriptors, rotation invariance, local
binary pattern (LBP), feature extraction, texture analysis.
how the texture recognition process works in humans as
well as in the important role it plays in the wide variety of
applications of computer vision and image analysis [1], [2].
The many applications of texture classification include medical
image analysis and understanding, object recognition, biomet-
rics, content-based image retrieval, remote sensing, industrial
inspection, and document classification.
As a classical pattern recognition problem, texture classifi-
cation primarily consists of two critical subproblems: feature
extraction and classifier design [1], [2]. It is generally
agreed that the extraction of powerful texture features plays a
relatively more important role, since if poor features are used
even the best classifier will fail to achieve good recognition
results. Consequently, most research in texture classification
focuses on the feature extraction part and numerous texture
feature extraction methods have been developed, with excellent
surveys given in [1]–[5]. Most existing methods have not,
however, been capable of performing sufficiently well for
real-world applications, which have demanding requirements
including database size, nonideal environmental conditions,
and real-time operation.
('39695518', 'Li Liu', 'li liu')
('1716428', 'Songyang Lao', 'songyang lao')
('1731709', 'Paul W. Fieguth', 'paul w. fieguth')
('1714724', 'Matti Pietikäinen', 'matti pietikäinen')
03e88bf3c5ddd44ebf0e580d4bd63072566613ad
03f4c0fe190e5e451d51310bca61c704b39dcac8J Ambient Intell Human Comput
DOI 10.1007/s12652-016-0406-z
O R I G I N A L R E S E A R C H
CHEAVD: a Chinese natural emotional audio–visual database
Received: 30 March 2016 / Accepted: 22 August 2016
© Springer-Verlag Berlin Heidelberg 2016
('1704841', 'Ya Li', 'ya li')
('37670752', 'Jianhua Tao', 'jianhua tao')
('1850313', 'Linlin Chao', 'linlin chao')
('1694779', 'Wei Bao', 'wei bao')
('3095820', 'Yazhu Liu', 'yazhu liu')
03bd58a96f635059d4bf1a3c0755213a51478f12Smoothed Low Rank and Sparse Matrix Recovery by
Iteratively Reweighted Least Squares Minimization
This work presents a general framework for solving the low
rank and/or sparse matrix minimization problems, which may
involve multiple non-smooth terms. The Iteratively Reweighted
Least Squares (IRLS) method is a fast solver, which smooths the
objective function and minimizes it by alternately updating the
variables and their weights. However, the traditional IRLS can
only solve a sparse only or low rank only minimization problem
with squared loss or an affine constraint. This work generalizes
IRLS to solve joint/mixed low rank and sparse minimization
problems, which are essential formulations for many tasks. As a
concrete example, we solve the Schatten-p norm and ℓ2,q-norm
regularized Low-Rank Representation (LRR) problem by IRLS,
and theoretically prove that the derived solution is a stationary
point (globally optimal if p, q ≥ 1). Our convergence proof of
IRLS is more general than previous ones, which depend on
the special properties of the Schatten-p norm and ℓ2,q-norm.
Extensive experiments on both synthetic and real data sets
demonstrate that our IRLS is much more efficient.
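As an aside, the classical alternate-update scheme that this work generalizes — IRLS for the sparse-only problem with an affine constraint, min ||x||1 s.t. Ax = b — can be sketched in a few lines; the names and toy data below are illustrative, not from the paper:

```python
import numpy as np

def irls_l1_affine(A, b, eps=1e-8, iters=50):
    """Classical IRLS sketch for min ||x||_1 s.t. Ax = b.
    Each iteration solves the weighted least-squares surrogate
    min sum_i x_i^2 / w_i s.t. Ax = b, whose closed form is
    x = W A^T (A W A^T)^{-1} b with W = diag(w), and then updates
    the weights w_i = sqrt(x_i^2 + eps), a smoothed |x_i|."""
    w = np.ones(A.shape[1])        # start from the plain minimum-norm solution
    for _ in range(iters):
        AW = A * w                 # A @ diag(w), via broadcasting
        x = w * (A.T @ np.linalg.solve(AW @ A.T, b))
        w = np.sqrt(x ** 2 + eps)  # smoothing keeps every weight positive
    return x

# toy sparse recovery: a 3-sparse, 100-dim signal from 40 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]
x_hat = irls_l1_affine(A, A @ x_true)
```

On this toy instance the iterates concentrate on the true support, illustrating why the weighted quadratic surrogate is a useful smoothing of the non-smooth ℓ1 term.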
Index Terms—Low-rank and sparse minimization, Iteratively
Reweighted Least Squares.
I. INTRODUCTION
In recent years, the low rank and sparse matrix learning
problems have been hot research topics that lead to broad
applications in computer vision and machine learning, such
as face recognition [1], collaborative filtering [2], background
modeling [3], and subspace segmentation [4], [5]. The ℓ1-
norm and nuclear norm are popular choices for sparse and
low rank matrix minimizations with theoretical guarantees
and competitive performance in practice. The models can be
formulated as a joint low rank and sparse matrix minimization
problem as follows:

min_x ∑_{i=1}^T Fi(Ai(x) + bi),        (1)

where x and bi can be either vectors or matrices, Fi is a
convex function (the Frobenius norm ||M||F^2 = ∑ij Mij^2;
the nuclear norm ||M||∗ = ∑i σi(M), the sum of all singular
values of a matrix; the ℓ1-norm ||M||1 = ∑ij |Mij|; and the ℓ2,1-
norm ||M||2,1 = ∑j ||Mj||2, the sum of the ℓ2-norms of the
columns of a matrix), and Ai : Rd → Rm is a linear mapping.
In this work, we further consider the nonconvex Schatten-p
norm ||M||Sp^p = ∑i σi^p(M), the ℓp-norm ||M||p^p = ∑ij |Mij|^p,
and the ℓ2,p-norm ||M||2,p^p = ∑j ||Mj||2^p with 0 < p < 1 for
pursuing lower rank or sparser solutions.
Copyright (c) 2014 IEEE. Personal use of this material is permitted.
However, permission to use this material for any other purposes must be
This research is supported by the Singapore National Research Foundation
administered by the IDM Programme Office. Z. Lin is supported by NSF
China (grant nos. 61272341 and 61231002), 973 Program of China (grant no.
2015CB3525) and MSRA Collaborative Research Program.
C. Lu and S. Yan are with the Department of Electrical and Computer
Engineering, National University of Singapore, Singapore (e-mails
Z. Lin is with the Key Laboratory of Machine Perception (MOE), School
Problem (1) is general and involves a wide range of
problems, such as Lasso [6], group Lasso [7], trace Lasso [4],
matrix completion [8], Robust Principle Component Analysis
(RPCA) [3] and Low-Rank Representation (LRR) [5]. In this
work, we aim to propose a general solver for (1). For the ease
of discussion, we focus on the following two representative
problems,

RPCA:  min_{Z,E} ||Z||∗ + λ||E||1,    s.t. X = Z + E,      (2)

LRR:   min_{Z,E} ||Z||∗ + λ||E||2,1,  s.t. X = XZ + E,     (3)

where X ∈ Rd×n is a given data matrix, Z and E have
compatible dimensions, and λ > 0 is the model parameter.
Notice that these problems can be reformulated as unconstrained
problems (by expressing E in terms of Z) as in problem (1).
A. Related Works
The sparse and low rank minimization problems can be
solved by various methods, such as Semi-Definite Program-
ming (SDP) [9], Accelerated Proximal Gradient (APG) [10],
and Alternating Direction Method (ADM) [11]. However, SDP
has a complexity of O(n6) for an n × n sized matrix, which
is unbearable for large scale applications. APG requires that
at
least one term of the objective function has Lipschitz
continuous gradient. Such an assumption is violated in many
problems, e.g., problem (2) and (3). Compared with SDP
and APG, ADM is the most widely used one. But it usually
requires introducing several auxiliary variables corresponding
to non-smooth terms. The auxiliary variables may slow down
the convergence, or even lead to divergence when there are
too many variables. Linearized ADM (LADM) [12] may
reduce the number of auxiliary variables, but suffer the same
convergence issue. The work [12] proposes an accelerated
LADM with Adaptive Penalty (LADMAP) with lower per-
iteration cost. However, the accelerating trick is special for the
LRR problem. And thus are not general for other problems.
Another drawback for many low rank minimization solvers is
that they have to perform the soft singular value thresholding:
min_Z λ||Z||_* + (1/2)||Z − Y||_F^2.   (4)
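Problem (4) is known to admit a closed-form minimizer: soft-threshold the singular values of Y by λ. The sketch below (illustrative, not the paper's code) implements this operator together with the objective of (4), so that optimality can be checked numerically against perturbations:

```python
import numpy as np

def svt(Y, lam):
    # Closed form for (4): shrink the singular values of Y by lam
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def objective(Z, Y, lam):
    # lam*||Z||_* + (1/2)||Z - Y||_F^2, the objective of problem (4)
    nuclear = np.linalg.svd(Z, compute_uv=False).sum()
    return lam * nuclear + 0.5 * np.linalg.norm(Z - Y) ** 2
```

Since the objective is strongly convex in Z, any perturbation of the thresholded solution strictly increases it, which is easy to verify empirically.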
('33224509', 'Canyi Lu', 'canyi lu')
('33383055', 'Zhouchen Lin', 'zhouchen lin')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
obtained from the IEEE by sending a request to pubs-permissions@ieee.org.
under its International Research Centre @Singapore Funding Initiative and
canyilu@gmail.com; eleyans@nus.edu.sg).
of EECS, Peking University, China (e-mail: zlin@pku.edu.cn).
031055c241b92d66b6984643eb9e05fd605f24e2Multi-fold MIL Training for Weakly Supervised Object Localization
Inria∗
('1939006', 'Ramazan Gokberk Cinbis', 'ramazan gokberk cinbis')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
0332ae32aeaf8fdd8cae59a608dc8ea14c6e3136Int J Comput Vis
DOI 10.1007/s11263-017-1009-7
Large Scale 3D Morphable Models
Received: 15 March 2016 / Accepted: 24 March 2017
© The Author(s) 2017. This article is an open access publication
('1848903', 'James Booth', 'james booth')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('2931390', 'Anastasios Roussos', 'anastasios roussos')
('5137183', 'Allan Ponniah', 'allan ponniah')
034addac4637121e953511301ef3a3226a9e75fdImplied Feedback: Learning Nuances of User Behavior in Image Search
Virginia Tech
('1713589', 'Devi Parikh', 'devi parikh')parikh@vt.edu
03701e66eda54d5ab1dc36a3a6d165389be0ce79
Improved Principal Component Regression for Face
Recognition Under Illumination Variations
('1776127', 'Shih-Ming Huang', 'shih-ming huang')
('1749263', 'Jar-Ferr Yang', 'jar-ferr yang')
03fe3d031afdcddf38e5cc0d908b734884542eebDOI: http://dx.doi.org/10.14236/ewic/EVA2017.60
Engagement with Artificial Intelligence
through Natural Interaction Models
Sara (Salevati) Feldman
Simon Fraser University
Vancouver, Canada
Simon Fraser University
Vancouver, Canada
Simon Fraser University
Vancouver, Canada
As Artificial Intelligence (AI) systems become more ubiquitous, what user experience design
paradigms will be used by humans to impart their needs and intents to an AI system, in order to
engage in a more social interaction? In our work, we look mainly at expression and creativity
based systems, where the AI both attempts to model or understand/assist in processes of human
expression and creativity. We therefore have designed and implemented a prototype system with
more natural interaction modes for engagement with AI as well as other human computer
interaction (HCI) where a more open natural communication stream is beneficial. Our proposed
conversational agent system makes use of the affective signals from the gestural behaviour of the
user and the semantic information from the speech input in order to generate a personalised,
human-like conversation that is expressed in the visual and conversational output of the 3D virtual
avatar system. We describe our system and two application spaces we are using it in – a care advisor / assistant for the elderly and an interactive creative assistant for users to produce art forms.
Artificial Intelligence. Natural user interfaces. Voice systems. Expression systems. ChatBots.
1. INTRODUCTION
Due to the increase of natural user interfaces and untethered sensor devices, there is a corresponding requirement for computational models that can utilise interactive and affective user data in order to understand and emulate a more natural way for the human conversational communication. From an emulation standpoint, it is important to understand the mechanisms underlying human multilayered semantic communication to achieve a more natural user experience. Humans tend to make use of gestures and expressions in a conversational setting, in addition to the linguistic components, that allow them to express more than the semantics of the utterances. This phenomenon is usually disregarded due to current automated conversational systems being computationally demanding and requiring a cognitive component to be able to model the complexity of the additional signals. With the advances in the current technology we are now closer to achieving more natural-like conversational systems. Gesture capture and recognition systems for video and sound input can be combined with output systems such as Artificial Intelligence (AI) based conversational tools and 3D modelling systems
© Feldman et al. Published by BCS Learning and Development Ltd.
Proceedings of EVA London 2017, UK
in order to achieve human-level meaningful communication. This may allow the interaction to be more intuitive, open and fluent, which can be more helpful in certain situations. In this work, we attempt to include the affective components from these input signals in order to generate a compatible and personalised character that can reflect some human-like qualities.
Given these goals, we overview our 3D conversational avatar system and describe its use in our two application spaces, stressing its use where AI systems are involved. Our first application space is CareAdvisor, for maintaining active and healthy aging in older adults through a multi-modular Personalised Virtual Coaching system. Here the natural communication system is better suited for the elderly, who are technologically less experienced, non-confrontationally and as an assistant conduit to health data from other less conversational devices. Our second application space is in the interactive art exhibition area, where our avatar system is able to converse with users in a more open way, compared to say forms and input systems, on issues of art and creativity. This allows for more open, intuitive conversation, especially when used, leading to an
('22588208', 'Ozge Nilay Yalcin', 'ozge nilay yalcin')
('1700040', 'Steve DiPaola', 'steve dipaola')
sara_salevati@sfu.ca
oyalcin@sfu.ca
sdipaola@sfu.ca
9b318098f3660b453fbdb7a579778ab5e9118c4c
Joint Patch and Multi-label Learning for Facial
Action Unit and Holistic Expression Recognition
('2393320', 'Kaili Zhao', 'kaili zhao')
('39336289', 'Wen-Sheng Chu', 'wen-sheng chu')
('1707876', 'Fernando De la Torre', 'fernando de la torre')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
('1720776', 'Honggang Zhang', 'honggang zhang')
9be94fa0330dd493f127d51e4ef7f9fd64613cfcResearch Article
Effects of pose and image resolution on
automatic face recognition
ISSN 2047-4938
Received on 5th February 2015
Revised on 16th May 2015
Accepted on 14th September 2015
doi: 10.1049/iet-bmt.2015.0008
www.ietdl.org
North Dakota State University, Fargo, ND 58108-6050, USA
Faculty of Computer Science, Mathematics, and Engineering, University of Twente, Enschede, Netherlands
('3001880', 'Zahid Mahmood', 'zahid mahmood')
('1798087', 'Tauseef Ali', 'tauseef ali')
✉ E-mail: zahid.mahmood@ndsu.edu
9bd35145c48ce172b80da80130ba310811a44051Face Detection with End-to-End Integration of a
ConvNet and a 3D Model
1Nat’l Engineering Laboratory for Video Technology,
Key Laboratory of Machine Perception (MoE),
Cooperative Medianet Innovation Center, Shanghai
School of EECS, Peking University, Beijing, 100871, China
2Department of ECE and the Visual Narrative Cluster,
North Carolina State University, Raleigh, USA
('3422021', 'Yunzhu Li', 'yunzhu li')
('3423002', 'Benyuan Sun', 'benyuan sun')
('47353858', 'Tianfu Wu', 'tianfu wu')
('1717863', 'Yizhou Wang', 'yizhou wang')
{leo.liyunzhu, sunbenyuan, Yizhou.Wang}@pku.edu.cn, tianfu wu@ncsu.edu
9b000ccc04a2605f6aab867097ebf7001a52b459
9b0489f2d5739213ef8c3e2e18739c4353c3a3b7Visual Data Augmentation through Learning
Imperial College London, UK
Middlesex University London, UK
('34586458', 'Grigorios G. Chrysos', 'grigorios g. chrysos')
('1780393', 'Yannis Panagakis', 'yannis panagakis')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
{g.chrysos, i.panagakis, s.zafeiriou}@imperial.ac.uk
9b474d6e81e3b94e0c7881210e249689139b3e04VG-RAM Weightless Neural Networks for
Face Recognition
Departamento de Inform´atica
Universidade Federal do Esp´ırito Santo
Av. Fernando Ferrari, 514, 29075-910 - Vit´oria-ES
Brazil
1. Introduction
Computerized human face recognition has many practical applications, such as access control,
security monitoring, and surveillance systems, and has been one of the most challenging and
active research areas in computer vision for many decades (Zhao et al.; 2003). Even though
current machine recognition systems have reached a certain level of maturity, the recognition
of faces with different facial expressions, occlusions, and changes in illumination and/or pose
is still a hard problem.
A general statement of the problem of machine recognition of faces can be formulated as fol-
lows: given an image of a scene, (i) identify or (ii) verify one or more persons in the scene
using a database of faces. In identification problems, given a face as input, the system reports
back the identity of an individual based on a database of known individuals; whereas in veri-
fication problems, the system confirms or rejects the claimed identity of the input face. In both
cases, the solution typically involves segmentation of faces from scenes (face detection), fea-
ture extraction from the face regions, recognition, or verification. In this chapter, we examine
the recognition of frontal face images required in the context of identification problems.
Many approaches have been proposed to tackle the problem of face recognition. One can
roughly divide these into (i) holistic approaches, (ii) feature-based approaches, and (iii) hybrid
approaches (Zhao et al.; 2003). Holistic approaches use the whole face region as the raw input
to a recognition system (a classifier). In feature-based approaches, local features, such as the
eyes, nose, and mouth, are first extracted and their locations and local statistics (geometric
and/or appearance based) are fed into a classifier. Hybrid approaches use both local features
and the whole face region to recognize a face.
Among holistic approaches, eigenfaces (Turk and Pentland; 1991) and fisherfaces (Belhumeur et al.; 1997; Etemad and Chellappa; 1997) have proved to be effective.
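As a rough illustration of the holistic (eigenfaces) idea mentioned above, which amounts to PCA on vectorized face images followed by nearest-neighbour matching in the projected space, here is a minimal sketch on synthetic data; the function names and setup are ours for illustration, not the chapter's VG-RAM method:

```python
import numpy as np

def eigenfaces_fit(X, k):
    """X: (n_images, n_pixels). Returns the mean face and the top-k principal directions."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]          # rows of Vt are the "eigenfaces"

def project(x, mean, W):
    # Coordinates of a face in the k-dimensional eigenface space
    return W @ (x - mean)

def identify(x, gallery, mean, W):
    # Nearest-neighbour matching in the projected space
    coords = (gallery - mean) @ W.T          # (n, k) gallery projections
    d = np.linalg.norm(coords - project(x, mean, W), axis=1)
    return int(np.argmin(d))
```

The same skeleton underlies most holistic recognizers: only the subspace (PCA here, LDA for fisherfaces) and the matching rule change.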
('1699216', 'Alberto F. De Souza', 'alberto f. de souza')
('3015563', 'Claudine Badue', 'claudine badue')
('3158075', 'Felipe Pedroni', 'felipe pedroni')
('3169286', 'Hallysson Oliveira', 'hallysson oliveira')
9b928c0c7f5e47b4480cb9bfdf3d5b7a29dfd493Close the Loop: Joint Blind Image Restoration and Recognition
with Sparse Representation Prior
School of Computer Science, Northwestern Polytechnical University, Xi'an, China
Beckman Institute, University of Illinois at Urbana-Champaign, IL USA
U.S. Army Research Laboratory, 2800 Powder Mill Road, Adelphi, MD USA
('40479011', 'Haichao Zhang', 'haichao zhang')
('1706007', 'Jianchao Yang', 'jianchao yang')
('1801395', 'Yanning Zhang', 'yanning zhang')
('8147588', 'Nasser M. Nasrabadi', 'nasser m. nasrabadi')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
‡{hczhang,jyang29,huang}@ifp.uiuc.edu †ynzhang@nwpu.edu.cn §nasser.m.nasrabadi.civ@mail.mil
9bc01fa9400c231e41e6a72ec509d76ca797207c
9b2c359c36c38c289c5bacaeb5b1dd06b464f301Dense Face Alignment
Michigan State University, MI
2Monta Vista High School, Cupertino, CA
('6797891', 'Yaojie Liu', 'yaojie liu')
('2357264', 'Amin Jourabloo', 'amin jourabloo')
('26365310', 'William Ren', 'william ren')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
1{liuyaoj1,jourablo,liuxm}@msu.edu, 2williamyren@gmail.com
9bcfadd22b2c84a717c56a2725971b6d49d3a804How to Detect a Loss of Attention in a Tutoring System
using Facial Expressions and Gaze Direction
('2975858', 'Mark ter Maat', 'mark ter maat')
9b1bcef8bfef0fb5eb5ea9af0b699aa0534fcecaPosition-Squeeze and Excitation Module
for Facial Attribute Analysis
Shanghai Key Laboratory of
Multidimensional Information
Processing,
East China Normal University
200241 Shanghai, China
('36124320', 'Yan Zhang', 'yan zhang')
('7962836', 'Wanxia Shen', 'wanxia shen')
('49755228', 'Li Sun', 'li sun')
('12493943', 'Qingli Li', 'qingli li')
('36124320', 'Yan Zhang', 'yan zhang')
('7962836', 'Wanxia Shen', 'wanxia shen')
('49755228', 'Li Sun', 'li sun')
('12493943', 'Qingli Li', 'qingli li')
452642781@qq.com
51151214005@ecnu.cn
sunli@ee.ecnu.edu.cn
qlli@cs.ecnu.edu.cn
9b07084c074ba3710fee59ed749c001ae70aa408
Computational Models of Face Perception
Aleix M. Martinez
Department of Electrical and Computer Engineering, Center for Cognitive and Brain Sciences,
and Mathematical Biosciences Institute, The Ohio State University
Current Directions in Psychological
Science
1 –7
© The Author(s) 2017
Reprints and permissions:
sagepub.com/journalsPermissions.nav
DOI: 10.1177/0963721417698535
https://doi.org/10.1177/0963721417698535
www.psychologicalscience.org/CDPS
9be653e1bc15ef487d7f93aad02f3c9552f3ee4aComputer Vision for Head Pose Estimation:
Review of a Competition
Tampere University of Technology, Finland
University of Paderborn, Germany
3 Zorgon, The Netherlands
('1847889', 'Heikki Huttunen', 'heikki huttunen')
('40394658', 'Ke Chen', 'ke chen')
('2364638', 'Abhishek Thakur', 'abhishek thakur')
('2558923', 'Artus Krohn-Grimberghe', 'artus krohn-grimberghe')
('2300445', 'Oguzhan Gencoglu', 'oguzhan gencoglu')
('3328835', 'Xingyang Ni', 'xingyang ni')
('2067035', 'Mohammed Al-Musawi', 'mohammed al-musawi')
('40448210', 'Lei Xu', 'lei xu')
('3152947', 'Hendrik Jacob van Veen', 'hendrik jacob van veen')
9b246c88a0435fd9f6d10dc88f47a1944dd8f89ePICODES: Learning a Compact Code for
Novel-Category Recognition
Dartmouth College
Hanover, NH, U.S.A.
Andrew Fitzgibbon
Microsoft Research
Cambridge, United Kingdom
('34338883', 'Alessandro Bergamo', 'alessandro bergamo')
('1732879', 'Lorenzo Torresani', 'lorenzo torresani')
{aleb, lorenzo}@cs.dartmouth.edu
awf@microsoft.com
9b164cef4b4ad93e89f7c1aada81ae7af802f3a4 Research Journal of Recent Sciences _________________________________________________ ISSN 2277-2502
Vol. 2(1), 17-20, January (2013)
Res.J.Recent Sci.
A Fully Automatic and Haar like Feature Extraction-Based Method for Lip
Contour Detection
School of Computer Engineering, Shahrood University of Technology, Shahrood, IRAN
Received 26th September 2012, revised 27th October 2012, accepted 6th November 2012
Available online at: www.isca.in
9bac481dc4171aa2d847feac546c9f7299cc5aa0Matrix Product State for Higher-Order Tensor
Compression and Classification
('2852180', 'Johann A. Bengua', 'johann a. bengua')
('2839912', 'Ho N. Phien', 'ho n. phien')
('1834451', 'Minh N. Do', 'minh n. do')
9b93406f3678cf0f16451140ea18be04784faeeeA Bayesian Approach to Alignment-Based
Image Hallucination
University of Central Florida
2 Microsoft Research New England
('1802944', 'Marshall F. Tappen', 'marshall f. tappen')
('1681442', 'Ce Liu', 'ce liu')
mtappen@eecs.ucf.edu
celiu@microsoft.com
9b7974d9ad19bb4ba1ea147c55e629ad7927c5d7Facial Expression Recognition by Combining
Texture and Geometrical Features
('3057167', 'Renjie Liu', 'renjie liu')
('36485086', 'Ruofei Du', 'ruofei du')
('40371477', 'Bao-Liang Lu', 'bao-liang lu')
9b6d0b3fbf7d07a7bb0d86290f97058aa6153179NII, Japan at the first THUMOS Workshop 2013
National Institute of Informatics
2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, Japan 101-8430
('39814149', 'Sang Phan', 'sang phan')
('1802416', 'Duy-Dinh Le', 'duy-dinh le')
('40693818', 'Shin’ichi Satoh', 'shin’ichi satoh')
{plsang,ledduy,satoh}@nii.ac.jp
9b684e2e2bb43862f69b12c6be94db0e7a756187Differentiating Objects by Motion:
Joint Detection and Tracking of Small Flying Objects
The University of Tokyo
CSIRO-Data61
Australian National University
The University of Tokyo
Figure 1: Importance of multi-frame information for recognizing apparently small flying objects (birds in these examples).
While visual features in single frames are vague and limited, multi-frame information, including deformation and pose
changes, provides better clues with which to recognize birds. To extract such useful motion patterns, tracking is necessary for
compensating translation of objects, but the tracking itself is a challenge due to the limited visual information. The blue boxes
are birds tracked by our method that utilizes multi-frame representation for detection, while the red boxes are the results of a
single-frame handcrafted-feature-based tracker [11], which tends to fail when tracking small objects.
('1890560', 'Ryota Yoshihashi', 'ryota yoshihashi')
('38621343', 'Tu Tuan Trinh', 'tu tuan trinh')
('48727803', 'Rei Kawakami', 'rei kawakami')
('2941564', 'Shaodi You', 'shaodi you')
('33313329', 'Makoto Iida', 'makoto iida')
('48795689', 'Takeshi Naemura', 'takeshi naemura')
{yoshi, tu, rei, naemura}@hc.ic.i.u-tokyo.ac.jp
iida@ilab.eco.rcast.u-tokyo.ac.jp
shaodi.you@data61.csiro.au
9e8637a5419fec97f162153569ec4fc53579c21eSegmentation and Normalization of Human Ears
using Cascaded Pose Regression
University of Applied Sciences Darmstadt - CASED
Haardtring 100,
64295 Darmstadt, Germany
http://www.h-da.de
('1742085', 'Christoph Busch', 'christoph busch')anika.pflug@cased.de
christoph.busch@hig.no
9ea223c070ec9a00f4cb5ca0de35d098eb9a8e32Exploring Temporal Preservation Networks for Precise Temporal Action
Localization
National Laboratory for Parallel and Distributed Processing,
National University of Defense Technology
Changsha, China
('2352864', 'Ke Yang', 'ke yang')
('2292038', 'Peng Qiao', 'peng qiao')
('1718853', 'Dongsheng Li', 'dongsheng li')
('1893776', 'Shaohe Lv', 'shaohe lv')
('1791001', 'Yong Dou', 'yong dou')
{yangke13,pengqiao,dongshengli,yongdou,shaohelv}@nudt.edu.cn
9e4b052844d154c3431120ec27e78813b637b4fcJournal of AI and Data Mining
Vol. 2, No. 1, 2014, 33-38.
Local gradient pattern - A novel feature representation for facial
expression recognition
School of Applied Statistics, National Institute of Development Administration, Bangkok, Thailand
Received 23 April 2013; accepted 16 June 2013
('31914125', 'M. Shahidul Islam', 'm. shahidul islam')*Corresponding author: suva.93@grads.nida.ac.th (M.Shahidul Islam)
9e42d44c07fbd800f830b4e83d81bdb9d106ed6bLearning Discriminative Aggregation Network for Video-based Face Recognition
Tsinghua University, Beijing, China
2State Key Lab of Intelligent Technologies and Systems, Beijing, China
3Tsinghua National Laboratory for Information Science and Technology (TNList), Beijing, China
('39358728', 'Yongming Rao', 'yongming rao')
('2772283', 'Ji Lin', 'ji lin')
('1697700', 'Jiwen Lu', 'jiwen lu')
('39491387', 'Jie Zhou', 'jie zhou')
raoyongming95@gmail.com; lin-j14@mails.tsinghua.edu.cn; {lujiwen,jzhou}@tsinghua.edu.cn
9eb86327c82b76d77fee3fd72e2d9eff03bbe5e0Max-Margin Invariant Features from Transformed
Unlabeled Data
Department of Electrical and Computer Engineering
Carnegie Mellon University
Pittsburgh, PA 15213
('2628116', 'Dipan K. Pal', 'dipan k. pal')
('27756148', 'Ashwin A. Kannan', 'ashwin a. kannan')
('27693929', 'Gautam Arakalgud', 'gautam arakalgud')
('1794486', 'Marios Savvides', 'marios savvides')
{dipanp,aalapakk,garakalgud,marioss}@cmu.edu
9ea73660fccc4da51c7bc6eb6eedabcce7b5ceadTalking Head Detection by Likelihood-Ratio Test†
MIT Lincoln Laboratory,
Lexington MA 02420, USA
('2877010', 'Carl Quillen', 'carl quillen')wcampbell@ll.mit.edu
9e9052256442f4e254663ea55c87303c85310df9International Journal of Advanced Research in Computer Engineering & Technology (IJARCET)
Volume 4 Issue 10, October 2015
Review On Attribute-assisted Reranking for
Image Search
9eeada49fc2cba846b4dad1012ba8a7ee78a8bb7A New Facial Expression Recognition Method Based on Local Gabor Filter Bank and PCA plus LDA
A New Facial Expression Recognition Method Based on
Local Gabor Filter Bank and PCA plus LDA
1 School of Electronic and Information Engineering, South China
University of Technology, Guangzhou, 510640, P.R.China
Motorola China Research Center, Shanghai, 210000, P.R.China
('15414934', 'Hong-Bo Deng', 'hong-bo deng')
('2949795', 'Lian-Wen Jin', 'lian-wen jin')
('1751744', 'Li-Xin Zhen', 'li-xin zhen')
('34824270', 'Jian-Cheng Huang', 'jian-cheng huang')
('15414934', 'Hong-Bo Deng', 'hong-bo deng')
('2949795', 'Lian-Wen Jin', 'lian-wen jin')
('1751744', 'Li-Xin Zhen', 'li-xin zhen')
('34824270', 'Jian-Cheng Huang', 'jian-cheng huang')
{hbdeng, eelwjin}@scut.edu.cn
{Li-Xin.Zhen, Jian-Cheng.Huang}@motorola.com
9ef2b2db11ed117521424c275c3ce1b5c696b9b3Robust Face Alignment Using a Mixture of Invariant Experts
‡Intel Corporation
Mitsubishi Electric Research Labs (MERL)
('2577513', 'Oncel Tuzel', 'oncel tuzel')
('14939251', 'Salil Tambe', 'salil tambe')
('34749896', 'Tim K. Marks', 'tim k. marks')
{oncel, tmarks}@merl.com,
salil.tambe@intel.com
9e5acdda54481104aaf19974dca6382ed5ff21edYulia Gizatdinova and Veikko Surakka 
Automatic localization of facial
landmarks from expressive images
of high complexity
DEPARTMENT OF COMPUTER SCIENCES 
UNIVERSITY OF TAMPERE
D‐2008‐9 
TAMPERE 2008 
9ed943f143d2deaac2efc9cf414b3092ed482610Independent subspace of dynamic Gabor features for facial expression classification
School of Information Science
Japan Advanced Institute of Science and Technology
Asahidai 1-1, Nomi-city, Ishikawa, Japan
('2847306', 'Prarinya Siritanawan', 'prarinya siritanawan')
('1791753', 'Kazunori Kotani', 'kazunori kotani')
('1753878', 'Fan Chen', 'fan chen')
Email: {p.siritanawan, ikko, chen-fan}@jaist.ac.jp
9e1c3b8b1653337094c1b9dba389e8533bc885b0Demographic Classification with Local Binary
Patterns
Department of Computer Science and Technology,
Tsinghua University, Beijing 100084, China
('4381671', 'Zhiguang Yang', 'zhiguang yang')
('1679380', 'Haizhou Ai', 'haizhou ai')
ahz@mail.tsinghua.edu.cn
9e0285debd4b0ba7769b389181bd3e0fd7a02af6From face images and attributes to attributes
Computer Vision Laboratory, ETH Zurich, Switzerland
('9664434', 'Robert Torfason', 'robert torfason')
('2794259', 'Eirikur Agustsson', 'eirikur agustsson')
('2173683', 'Rasmus Rothe', 'rasmus rothe')
('1732855', 'Radu Timofte', 'radu timofte')
9ed4ad41cbad645e7109e146ef6df73f774cd75dSARFRAZ, SIDDIQUE, STIEFELHAGEN: RPM FOR PAIR-WISE FACE-SIMILARITY
RPM: Random Points Matching for Pair-wise
Face-Similarity
Institute for Anthropomatics
Karlsruhe Institute of Technology
Karlsruhe, Germany
Swiss Federal Institute of Technology
(ETH) Zurich
Zurich, Switzerland
('4241648', 'M. Saquib Sarfraz', 'm. saquib sarfraz')
('6262445', 'Muhammad Adnan Siddique', 'muhammad adnan siddique')
('1742325', 'Rainer Stiefelhagen', 'rainer stiefelhagen')
saquib.sarfraz@kit.edu
siddique@ifu.baug.ethz.ch
rainer.stiefelhagen@kit.edu
9e182e0cd9d70f876f1be7652c69373bcdf37fb4Talking Face Generation by Adversarially
Disentangled Audio-Visual Representation
The Chinese University of Hong Kong
('40576774', 'Hang Zhou', 'hang zhou')
('1715752', 'Yu Liu', 'yu liu')
('3243969', 'Ziwei Liu', 'ziwei liu')
('47571885', 'Ping Luo', 'ping luo')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
9e8d87dc5d8a6dd832716a3f358c1cdbfa97074cWhat Makes an Image Popular?
Massachusetts Institute
of Technology
eBay Research Labs
DigitalGlobe
('2556428', 'Aditya Khosla', 'aditya khosla')
('2541992', 'Atish Das Sarma', 'atish das sarma')
('37164887', 'Raffay Hamid', 'raffay hamid')
khosla@csail.mit.edu
atish.dassarma@gmail.com
raffay@gmail.com
9e5c2d85a1caed701b68ddf6f239f3ff941bb707
044d9a8c61383312cdafbcc44b9d00d650b21c70300 Faces in-the-Wild Challenge: The first facial landmark localization
Challenge
Imperial College London, UK
School of Computer Science, University of Lincoln, U.K
EEMCS, University of Twente, The Netherlands
('3320415', 'Christos Sagonas', 'christos sagonas')
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1694605', 'Maja Pantic', 'maja pantic')
{c.sagonas, gt204, s.zafeiriou, m.pantic}@imperial.ac.uk
04bb3fa0824d255b01e9db4946ead9f856cc0b59
040dc119d5ca9ea3d5fc39953a91ec507ed8cc5d
Large-scale Bisample Learning on ID vs. Spot Face Recognition
('8362374', 'Xiangyu Zhu', 'xiangyu zhu')
('34679741', 'Stan Z. Li', 'stan z. li')
04f0292d9a062634623516edd01d92595f03bd3fDistribution-based Iterative Pairwise Classification of
Emotions in the Wild Using LGBP-TOP
The University of Nottingham
Mixed Reality Lab
Anıl Yüce
Signal Processing
Laboratory(LTS5)
École Polytechnique Fédérale
de Lausanne, Switzerland
The University of Nottingham
Mixed Reality Lab
The University of Nottingham
Mixed Reality Lab
('2449665', 'Timur R. Almaev', 'timur r. almaev')
('1795528', 'Michel F. Valstar', 'michel f. valstar')
('2321668', 'Alexandru Ghitulescu', 'alexandru ghitulescu')
psxta4@nottingham.ac.uk
anil.yuce@epfl.ch
psyadg@nottingham.ac.uk
michel.valstar@nottingham.ac.uk
047f6afa87f48de7e32e14229844d1587185ce45An Improvement of Energy-Transfer Features
Using DCT for Face Detection
Technical University of Ostrava, FEECS
17. listopadu 15, 708 33 Ostrava-Poruba, Czech Republic
('2467747', 'Radovan Fusek', 'radovan fusek')
('2557877', 'Eduard Sojka', 'eduard sojka')
{radovan.fusek,eduard.sojka,karel.mozdren,milan.surkala}@vsb.cz
04b851f25d6d49e61a528606953e11cfac7df2b2Optical Flow Guided Feature: A Fast and Robust Motion Representation for
Video Action Recognition
1The University of Sydney, 2SenseTime Research, 3The Chinese University of Hong Kong
('1837024', 'Shuyang Sun', 'shuyang sun')
('1874900', 'Zhanghui Kuang', 'zhanghui kuang')
('37145669', 'Lu Sheng', 'lu sheng')
('3001348', 'Wanli Ouyang', 'wanli ouyang')
('1726357', 'Wei Zhang', 'wei zhang')
{shuyang.sun wanli.ouyang}@sydney.edu.au
{wayne.zhang kuangzhanghui}@sensetime.com
lsheng@ee.cuhk.edu.hk
04522dc16114c88dfb0ebd3b95050fdbd4193b90Appears in 2nd Canadian Conference on Computer and Robot Vision, Victoria, Canada, 2005.
Minimum Bayes Error Features for Visual Recognition by Sequential Feature
Selection and Extraction
Department of Computer Science
University of British Columbia
Department of Electrical and Computer engineering
University of California San Diego
('3265767', 'Gustavo Carneiro', 'gustavo carneiro')
('1699559', 'Nuno Vasconcelos', 'nuno vasconcelos')
carneiro@cs.ubc.ca
nuno@ece.ucsd.edu
04470861408d14cc860f24e73d93b3bb476492d0
0486214fb58ee9a04edfe7d6a74c6d0f661a7668Patch-based Probabilistic Image Quality Assessment for
Face Selection and Improved Video-based Face Recognition
NICTA, PO Box 6020, St Lucia, QLD 4067, Australia ∗
The University of Queensland, School of ITEE, QLD 4072, Australia
('3026404', 'Yongkang Wong', 'yongkang wong')
('3104113', 'Shaokang Chen', 'shaokang chen')
('40080354', 'Sandra Mau', 'sandra mau')
('1781182', 'Conrad Sanderson', 'conrad sanderson')
('2270092', 'Brian C. Lovell', 'brian c. lovell')
0447bdb71490c24dd9c865e187824dee5813a676Manifold Estimation in View-based Feature
Space for Face Synthesis Across Pose
0435a34e93b8dda459de49b499dd71dbb478dc18VEGAC: Visual Saliency-based Age, Gender, and Facial Expression Classification
Using Convolutional Neural Networks
Department of Electronics and Communication Engineering and
Computer Vision Group, L. D. College of Engineering, Ahmedabad, India
the need for handcrafted facial descriptors and data
preprocessing. D-CNN models have been not only
successfully applied to human face analysis, but also for
the visual saliency detection [21, 22, 23]. Visual Saliency
is fundamentally an intensity map where higher intensity
signifies regions, where a general human being would
look, and lower intensities mean decreasing level of visual
attention. It’s a measure of visual attention of humans
based on the content of the image. It has numerous
applications in computer vision and image processing
tasks. It is still an open problem when considering the MIT
Saliency Benchmark [24].
In the previous five years, age estimation, gender classification and facial expression classification accuracies have increased rapidly on several benchmarks. However, in unconstrained environments, i.e. low- to highly-occluded and low-resolution facial images, these classification tasks still face challenges in achieving competitive results. Some of the sample images are shown in Fig. 1.
Figure 1: Sample images with unconstrained environments, i.e. occlusion and low resolution.
In this paper, we tackle the age, gender, and facial expression classification problem from a different angle. We are inspired by the recent progress in the domain of image classification and visual saliency prediction using deep learning to achieve competitive results. Based on the above motivation, our work for this multi-task classification of the facial image is as follows:
Our VEGAC method uses the off-the-shelf face detector proposed by Mathias et al. [2] to obtain the location of the face in the test image. Then, we increase the margin of the detected face by 30% and crop the face. After getting the cropped face, we pass it to the Deep Multi-
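The pre-processing step just described (enlarging the detected box by 30% before cropping) can be sketched as below; the (x, y, w, h) box format, the per-side split of the margin, and the clamping to image bounds are our assumptions for illustration:

```python
import numpy as np

def crop_with_margin(img, box, margin=0.30):
    """img: HxWxC array; box: (x, y, w, h) face detection. Enlarge by `margin`, then crop."""
    x, y, w, h = box
    dx = w * margin / 2.0   # half the extra width on each side
    dy = h * margin / 2.0   # half the extra height on each side
    x0 = max(int(round(x - dx)), 0)
    y0 = max(int(round(y - dy)), 0)
    x1 = min(int(round(x + w + dx)), img.shape[1])  # clamp to image width
    y1 = min(int(round(y + h + dy)), img.shape[0])  # clamp to image height
    return img[y0:y1, x0:x1]
```

A 20×20 detection centered in a 100×100 image thus yields a 26×26 crop; boxes at the image border are clipped rather than padded.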
('27343041', 'Ayesha Gurnani', 'ayesha gurnani')
('23922616', 'Vandit Gajjar', 'vandit gajjar')
('22239413', 'Viraj Mavani', 'viraj mavani')
('26425477', 'Yash Khandhediya', 'yash khandhediya')
{gurnani.ayesha.52, gajjar.vandit.381, mavani.viraj.604, khandhediya.yash.364}@ldce.ac.in
043efe5f465704ced8d71a067d2b9d5aa5b59c29EGGER ET AL.: OCCLUSION-AWARE 3D MORPHABLE FACE MODELS
Occlusion-aware 3D Morphable Face Models
Department of Mathematics and
Computer Science
University of Basel
Basel Switzerland
http://gravis.cs.unibas.ch
Andreas Morel-Forster
('34460642', 'Bernhard Egger', 'bernhard egger')
('49462138', 'Andreas Schneider', 'andreas schneider')
('39550224', 'Clemens Blumer', 'clemens blumer')
('1987368', 'Sandro Schönborn', 'sandro schönborn')
('1687079', 'Thomas Vetter', 'thomas vetter')
bernhard.egger@unibas.ch
andreas.schneider@unibas.ch
clemens.blumer@unibas.ch
andreas.forster@unibas.ch
sandro.schoenborn@unibas.ch
thomas.vetter@unibas.ch
044ba70e6744e80c6a09fa63ed6822ae241386f2TO APPEAR IN AUTONOMOUS ROBOTS, SPECIAL ISSUE IN LEARNING FOR HUMAN-ROBOT COLLABORATION
Early Prediction for Physical Human Robot
Collaboration in the Operating Room
('2641330', 'Tian Zhou', 'tian zhou')
04661729f0ff6afe4b4d6223f18d0da1d479accfFrom Facial Parts Responses to Face Detection: A Deep Learning Approach
The Chinese University of Hong Kong
Shenzhen Key Lab of Comp. Vis. and Pat. Rec., Shenzhen Institutes of Advanced Technology, CAS, China
('1692609', 'Shuo Yang', 'shuo yang')
('1693209', 'Ping Luo', 'ping luo')
('1717179', 'Chen Change Loy', 'chen change loy')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
{ys014, pluo, ccloy, xtang}@ie.cuhk.edu.hk
04dcdb7cb0d3c462bdefdd05508edfcff5a6d315Assisting the training of deep neural networks
with applications to computer vision
This doctoral thesis is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 Spain License.
('3995639', 'Adriana Romero', 'adriana romero')
044fdb693a8d96a61a9b2622dd1737ce8e5ff4faDynamic Texture Recognition Using Local Binary
Patterns with an Application to Facial Expressions
('1757287', 'Guoying Zhao', 'guoying zhao')
04f55f81bbd879773e2b8df9c6b7c1d324bc72d8Multi-view Face Analysis Based on Gabor Features
College of Information and Control Engineering in China University of Petroleum
Qingdao 266580, China
('1707922', 'Hongli Liu', 'hongli liu')
04250e037dce3a438d8f49a4400566457190f4e2
0431e8a01bae556c0d8b2b431e334f7395dd803aLearning Localized Perceptual Similarity Metrics for Interactive Categorization
Google Inc.
google.com
('2367820', 'Catherine Wah', 'catherine wah')
04b4c779b43b830220bf938223f685d1057368e9Video retrieval based on deep convolutional
neural network
Yajiao Dong
School of Information and Electronics,
Beijing Institution of Technology, Beijing, China
Jianguo Li
School of Information and Electronics,
Beijing Institution of Technology, Beijing, China
yajiaodong@bit.edu.cn
jianguoli@bit.edu.cn
04616814f1aabe3799f8ab67101fbaf9fd115ae4UNIVERSITÉ DE CAEN BASSE NORMANDIE, U.F.R. de Sciences, ÉCOLE DOCTORALE SIMEM
Thesis presented by M. Gaurav Sharma, defended on 17 December 2012 for the degree of Doctorat de l'Université de Caen, speciality: computer science and applications (arrêté du 07 août 2006).
Title: Semantic Description of Humans in Images (Description Sémantique des Humains Présents dans des Images Vidéo).
The work presented in this thesis was carried out at GREYC, University of Caen, and LEAR, INRIA Grenoble.
Jury: M. Patrick Pérez, Directeur de Recherche, INRIA/Technicolor, Rennes (rapporteur); M. Florent Perronnin, Principal Scientist, Xerox RCE, Grenoble (rapporteur); M. Jean Ponce, Professeur des Universités, ENS, Paris (examinateur); Mme. Cordelia Schmid, Directrice de Recherche, INRIA, Grenoble (directrice de thèse); M. Frédéric Jurie, Professeur des Universités, Université de Caen (directeur de thèse).
04c2cda00e5536f4b1508cbd80041e9552880e67Hipster Wars: Discovering Elements
of Fashion Styles
University of North Carolina at Chapel Hill, NC, USA
Tohoku University, Japan
('1772294', 'M. Hadi Kiapour', 'm. hadi kiapour')
('1721910', 'Kota Yamaguchi', 'kota yamaguchi')
('39668247', 'Alexander C. Berg', 'alexander c. berg')
('1685538', 'Tamara L. Berg', 'tamara l. berg')
{hadi,aberg,tlberg}@cs.unc.edu
kyamagu@vision.is.tohoku.ac.jp
04ff69aa20da4eeccdabbe127e3641b8e6502ec0Sequential Face Alignment via Person-Specific Modeling in the Wild
Rutgers University
University of Texas at Arlington
Piscataway, NJ 08854
Arlington, TX 76019
Rutgers University
Piscataway, NJ 08854
('4340744', 'Xi Peng', 'xi peng')
('1768190', 'Junzhou Huang', 'junzhou huang')
('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')
xpeng.nb@cs.rutgers.edu
jzhuang@uta.edu
dnm@cs.rutgers.edu
046a694bbb3669f2ff705c6c706ca3af95db798cConditional Convolutional Neural Network for Modality-aware Face Recognition
Imperial College London
National University of Singapore
3Panasonic R&D Center Singapore
('34336393', 'Chao Xiong', 'chao xiong')
('1874505', 'Xiaowei Zhao', 'xiaowei zhao')
('40245930', 'Danhang Tang', 'danhang tang')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('1700968', 'Tae-Kyun Kim', 'tae-kyun kim')
{chao.xiong10, x.zhao, d.tang11}@imperial.ac.uk, Karlekar.Jayashree@sg.panasonic.com, eleyans@nus.edu.sg, tk.kim@imperial.ac.uk
047d7cf4301cae3d318468fe03a1c4ce43b086edCo-Localization of Audio Sources in Images Using
Binaural Features and Locally-Linear Regression
To cite this version:
Antoine Deleforge, Radu Horaud, Yoav Y. Schechner, Laurent Girin. Co-Localization of Audio Sources in Images Using Binaural Features and Locally-Linear Regression. IEEE Transactions on Audio, Speech and Language Processing, 2015, 15 p.
HAL Id: hal-01112834
https://hal.inria.fr/hal-01112834
Submitted on 3 Feb 2015
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
('3307172', 'Antoine Deleforge', 'antoine deleforge')
('1794229', 'Radu Horaud', 'radu horaud')
('2159538', 'Yoav Y. Schechner', 'yoav y. schechner')
('1780746', 'Laurent Girin', 'laurent girin')
04317e63c08e7888cef480fe79f12d3c255c5b00Face Recognition Using a Unified 3D Morphable Model
Hu, G., Yan, F., Chan, C-H., Deng, W., Christmas, W., Kittler, J., & Robertson, N. M. (2016). Face Recognition
Using a Unified 3D Morphable Model. In Computer Vision – ECCV 2016: 14th European Conference,
Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part VIII (pp. 73-89). (Lecture Notes in
Computer Science; Vol. 9912). Springer Verlag. DOI: 10.1007/978-3-319-46484-8_5
Published in:
Computer Vision – ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14,
2016, Proceedings, Part VIII
Document Version:
Peer reviewed version
Queen's University Belfast - Research Portal
Link to publication record in Queen's University Belfast Research Portal
Publisher rights
The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-46484-8_5
General rights
Copyright for the publications made accessible via the Queen's University Belfast Research Portal is retained by the author(s) and / or other
copyright owners and it is a condition of accessing these publications that users recognise and abide by the legal requirements associated
with these rights.
Take down policy
The Research Portal is Queen's institutional repository that provides access to Queen's research output. Every effort has been made to
ensure that content in the Research Portal does not infringe any person's rights, or applicable UK laws. If you discover content in the Research Portal that you believe breaches copyright or violates any law, please contact openaccess@qub.ac.uk.
Download date: 12 Sep. 2018
046865a5f822346c77e2865668ec014ec3282033Discovering Informative Social Subgraphs and Predicting
Pairwise Relationships from Group Photos
National Taiwan University, Taipei, Taiwan
†Academia Sinica, Taipei, Taiwan
('35081710', 'Yan-Ying Chen', 'yan-ying chen')
('1716836', 'Winston H. Hsu', 'winston h. hsu')
('1704678', 'Hong-Yuan Mark Liao', 'hong-yuan mark liao')
yanying@cmlab.csie.ntu.edu.tw, winston@csie.ntu.edu.tw, liao@iis.sinica.edu.tw
047bb1b1bd1f19b6c8d7ee7d0324d5ecd1a3efffUnsupervised Training for 3D Morphable Model Regression
Princeton University
2Google Research
3MIT CSAIL
('32627314', 'Kyle Genova', 'kyle genova')
('39578349', 'Forrester Cole', 'forrester cole')
0470b0ab569fac5bbe385fa5565036739d4c37f8Automatic Face Naming with Caption-based Supervision
To cite this version:
Matthieu Guillaumin, Thomas Mensink, Cordelia Schmid. Automatic Face Naming with Caption-based Supervision. CVPR 2008 - IEEE Conference on Computer Vision and Pattern Recognition, Jun 2008, Anchorage, United States. IEEE Computer Society, pp. 1-8, 2008. <10.1109/CVPR.2008.4587603>.
HAL Id: inria-00321048
https://hal.inria.fr/inria-00321048v2
Submitted on 11 Apr 2011
('2737253', 'Matthieu Guillaumin', 'matthieu guillaumin')
('1722052', 'Thomas Mensink', 'thomas mensink')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
6a3a07deadcaaab42a0689fbe5879b5dfc3ede52Learning to Estimate Pose by Watching Videos
Department of Computer Science and Engineering
IIT Kanpur
('36668573', 'Prabuddha Chakraborty', 'prabuddha chakraborty')
('1744135', 'Vinay P. Namboodiri', 'vinay p. namboodiri')
{prabudc, vinaypn} @iitk.ac.in
6a67e6fbbd9bcd3f724fe9e6cecc9d48d1b6ad4dCooperative Learning with Visual Attributes
Carnegie Mellon University
Georgia Tech
('32519394', 'Tanmay Batra', 'tanmay batra')
('1713589', 'Devi Parikh', 'devi parikh')
tbatra@cmu.edu
parikh@gatech.edu
6afed8dc29bc568b58778f066dc44146cad5366cKernel Hebbian Algorithm for Single-Frame
Super-Resolution
Max Planck Institute für biologische Kybernetik
Spemannstr. 38, D-72076 Tübingen, Germany
http://www.kyb.tuebingen.mpg.de/
('1808255', 'Kwang In Kim', 'kwang in kim')
('30541601', 'Matthias O. Franz', 'matthias o. franz')
{kimki, mof, bs}@tuebingen.mpg.de
6ad107c08ac018bfc6ab31ec92c8a4b234f67d49
6a184f111d26787703f05ce1507eef5705fdda83
6a16b91b2db0a3164f62bfd956530a4206b23feaA Method for Real-Time Eye Blink Detection and Its Application
Mahidol Wittayanusorn School
Puttamonton, Nakornpatom 73170, Thailand
Chinnawat.Deva@gmail.com
6a806978ca5cd593d0ccd8b3711b6ef2a163d810Facial feature tracking for Emotional Dynamic
Analysis
1ISIR, CNRS UMR 7222
Univ. Pierre et Marie Curie, Paris
2LAMIA, EA 4540
Univ. of Fr. West Indies & Guyana
('3093849', 'Thibaud Senechal', 'thibaud senechal')
('3074790', 'Vincent Rapp', 'vincent rapp')
('2554802', 'Lionel Prevost', 'lionel prevost')
{rapp, senechal}@isir.upmc.fr
lionel.prevost@univ-ag.fr
6a8a3c604591e7dd4346611c14dbef0c8ce9ba54ENTERFACE’10, JULY 12TH - AUGUST 6TH, AMSTERDAM, THE NETHERLANDS.
An Affect-Responsive Interactive Photo Frame
('1713360', 'Ilkka Kosunen', 'ilkka kosunen')
('32062164', 'Marcos Ortega Hortas', 'marcos ortega hortas')
('1764521', 'Albert Ali Salah', 'albert ali salah')
6aa43f673cc42ed2fa351cbc188408b724cb8d50
6a2b83c4ae18651f1a3496e48a35b0cd7a2196dfTop Rank Supervised Binary Coding for Visual Search
Department of ECE, UC San Diego
School of Electronic Engineering, Xidian University
School of Information Science and Engineering, Xiamen University
Department of Mathematics, UC San Diego
IBM T. J. Watson Research Center
('2451800', 'Dongjin Song', 'dongjin song')
('39059457', 'Wei Liu', 'wei liu')
('1725599', 'Rongrong Ji', 'rongrong ji')
('3520515', 'David A. Meyer', 'david a. meyer')
('1732563', 'John R. Smith', 'john r. smith')
dosong@ucsd.edu
wliu@ee.columbia.edu
rrji@xmu.edu.cn
dmeyer@math.ucsd.edu
jsmith@us.ibm.com
6a52e6fce541126ff429f3c6d573bc774f5b8d89Role of Facial Emotion in Social Correlation
Department of Computer Science and Engineering
Nagoya Institute of Technology, Gokiso, Showa-ku, Nagoya, 466-8555 Japan
('2159044', 'Pankaj Mishra', 'pankaj mishra')
('47865262', 'Takayuki Ito', 'takayuki ito')
{pankaj.mishra, rafik}@itolab.nitech.ac.jp,
ito.takayuki@nitech.ac.jp
6a5fe819d2b72b6ca6565a0de117c2b3be448b02Supervised and Projected Sparse Coding for Image Classification
Computer Science and Engineering Department
University of Texas at Arlington
Arlington,TX,76019
('39122448', 'Jin Huang', 'jin huang')
('1688370', 'Feiping Nie', 'feiping nie')
('1748032', 'Heng Huang', 'heng huang')
huangjinsuzhou@gmail.com, feipingnie@gmail.com, heng@uta.edu, chqding@uta.edu
6afeb764ee97fbdedfa8f66810dfc22feae3fa1fRobust Principal Component Analysis with Complex Noise
School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China
School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
The Hong Kong Polytechnic University, Hong Kong, China
('40209122', 'Qian Zhao', 'qian zhao')
('1803714', 'Deyu Meng', 'deyu meng')
('7814629', 'Zongben Xu', 'zongben xu')
('1724520', 'Wangmeng Zuo', 'wangmeng zuo')
('36685537', 'Lei Zhang', 'lei zhang')
TIMMY.ZHAOQIAN@GMAIL.COM
DYMENG@MAIL.XJTU.EDU.CN
ZBXU@MAIL.XJTU.EDU.CN
CSWMZUO@GMAIL.COM
CSLZHANG@COMP.POLYU.EDU.HK
6aa61d28750629febe257d1cb69379e14c66c67fMax-Planck-Institut für biologische Kybernetik
Max Planck Institute for Biological Cybernetics
Technical Report No. 109
Kernel Hebbian Algorithm for
Iterative Kernel Principal
Component Analysis
Schölkopf1
June 2003
This report is available in PDF–format via anonymous ftp at ftp://ftp.kyb.tuebingen.mpg.de/pub/mpi-memos/pdf/kha.pdf. The com-
plete series of Technical Reports is documented at: http://www.kyb.tuebingen.mpg.de/techreports.html
('1808255', 'Kwang In Kim', 'kwang in kim')
('30541601', 'Matthias O. Franz', 'matthias o. franz')
1 Department Schölkopf, email: kimki;mof;bs@tuebingen.mpg.de
6ae96f68187f1cdb9472104b5431ec66f4b2470fCarnegie Mellon University
Dietrich College Honors Theses
Dietrich College of Humanities and Social Sciences
4-30-2012
Improving Task Performance in an Affect-mediated
Computing System
Follow this and additional works at: http://repository.cmu.edu/hsshonors
Part of the Databases and Information Systems Commons
('29120285', 'Vivek Pai', 'vivek pai')
Research Showcase @ CMU
Carnegie Mellon University, vpai@cmu.edu
This Thesis is brought to you for free and open access by the Dietrich College of Humanities and Social Sciences at Research Showcase @ CMU. It has
been accepted for inclusion in Dietrich College Honors Theses by an authorized administrator of Research Showcase @ CMU. For more information,
please contact research-showcase@andrew.cmu.edu.
6a4419ce2338ea30a570cf45624741b754fa52cbStatistical transformer networks: learning shape
and appearance models via self supervision
University of York
('39180407', 'Anil Bas', 'anil bas')
('1687021', 'William A. P. Smith', 'william a. p. smith')
{ab1792,william.smith}@york.ac.uk
6af65e2a1eba6bd62843e7bf717b4ccc91bce2b8A New Weighted Sparse Representation Based
on MSLBP and Its Application to Face Recognition
School of IoT Engineering, Jiangnan University, Wuxi 214122, China
('1823451', 'He-Feng Yin', 'he-feng yin')
('37020604', 'Xiao-Jun Wu', 'xiao-jun wu')
yinhefeng@126.com, wu_xiaojun@yahoo.com.cn
6a657995b02bc9dee130701138ea45183c18f4aeTHE TIMING OF FACIAL MOTION IN POSED AND SPONTANEOUS SMILES
J.F. COHN* and K.L. SCHMIDT
University of Pittsburgh
Department of Psychology
4327 Sennott Square, 210 South Bouquet Street
Pittsburgh, PA 15260, USA
Revised 19 March 2004
Almost all work in automatic facial expression analysis has focused on recognition of prototypic
expressions rather than dynamic changes in appearance over time. To investigate the relative
contribution of dynamic features to expression recognition, we used automatic feature tracking to
measure the relation between amplitude and duration of smile onsets in spontaneous and deliberate
smiles of 81 young adults of Euro- and African-American background. Spontaneous smiles were of
smaller amplitude and had a larger and more consistent relation between amplitude and duration than
deliberate smiles. A linear discriminant classifier using timing and amplitude measures of smile
onsets achieved a 93% recognition rate. Using timing measures alone, recognition rate declined only
marginally to 89%. These findings suggest that by extracting and representing dynamic as well as
morphological features, automatic facial expression analysis can begin to discriminate among the
message values of morphologically similar expressions.
Keywords: automatic facial expression analysis, timing, spontaneous facial behavior
AMS Subject Classification:
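The abstract above reports a linear discriminant classifier over two smile-onset features (amplitude and duration). A minimal sketch of that setup follows; the synthetic data, means, and spreads are illustrative assumptions, not the authors' dataset:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Illustrative synthetic features: [onset amplitude, onset duration].
# Spontaneous smiles are assumed smaller in amplitude with a tighter
# amplitude-duration relation; deliberate smiles larger and more variable.
spontaneous = np.column_stack([
    rng.normal(1.0, 0.2, 100),   # amplitude (arbitrary units)
    rng.normal(0.5, 0.1, 100),   # duration (seconds)
])
deliberate = np.column_stack([
    rng.normal(1.8, 0.5, 100),
    rng.normal(0.4, 0.2, 100),
])

X = np.vstack([spontaneous, deliberate])
y = np.array([0] * 100 + [1] * 100)  # 0 = spontaneous, 1 = deliberate

clf = LinearDiscriminantAnalysis().fit(X, y)
accuracy = clf.score(X, y)
print(f"training accuracy: {accuracy:.2f}")
```

With well-separated class means, a two-feature LDA of this kind already separates the classes well, which mirrors why timing and amplitude measures alone sufficed in the study.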
1. Introduction
Almost all work in automatic facial expression analysis has sought to recognize either prototypic expressions of emotion (e.g., joy or anger) or more molecular appearance prototypes such as FACS action units. This emphasis on prototypic expressions follows from the work of Darwin [10] and more recently Ekman [12], who proposed that basic emotions have corresponding prototypic expressions and described their components, such as crow's-feet wrinkles lateral to the outer eye corners, in emotion-specified joy expressions. Considerable evidence suggests that six prototypic expressions (joy, surprise, anger, sadness, disgust, and fear) are universal in their performance and in their perception [12] and can communicate subjective emotion, communicative intent, and action tendencies [18, 19, 26].
*jeffcohn@pitt.edu
kschmidt@pitt.edu
6a0368b4e132f4aa3bbdeada8d894396f201358aOne-Class Multiple Instance Learning via
Robust PCA for Common Object Discovery
Huazhong University of Science and Technology
2Visual Computing Group, Microsoft Research Asia
3Lab of Neuro Imaging and Department of Computer Science, UCLA
('2443233', 'Xinggang Wang', 'xinggang wang')
('2554701', 'Zhengdong Zhang', 'zhengdong zhang')
('1700297', 'Yi Ma', 'yi ma')
('1686737', 'Xiang Bai', 'xiang bai')
('1743698', 'Wenyu Liu', 'wenyu liu')
('1736745', 'Zhuowen Tu', 'zhuowen tu')
{wxghust,zhangzdfaint}@gmail.com, mayi@microsoft.com,
{xbai,liuwy}@hust.edu.cn, ztu@loni.ucla.edu
6ab33fa51467595f18a7a22f1d356323876f8262Ordinal Hyperplanes Ranker with Cost Sensitivities for Age Estimation
Institute of Information Science, Academia Sinica, Taipei, Taiwan
Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
National Taiwan University, Taipei, Taiwan
Graduate Institute of Networking and Multimedia, National Taiwan University, Taipei, Taiwan
('34692779', 'Kuang-Yu Chang', 'kuang-yu chang')
('1720473', 'Chu-Song Chen', 'chu-song chen')
('1732064', 'Yi-Ping Hung', 'yi-ping hung')
{kuangyu, song}@iis.sinica.edu.tw, hung@csie.ntu.edu.tw
6aefe7460e1540438ffa63f7757c4750c844764dNon-rigid Segmentation using Sparse Low Dimensional Manifolds and
Deep Belief Networks ∗
Instituto de Sistemas e Robótica
Instituto Superior Técnico, Portugal
('3259175', 'Jacinto C. Nascimento', 'jacinto c. nascimento')
6a2ac4f831bd0f67db45e7d3cdaeaaa075e7180aExcitation Dropout:
Encouraging Plasticity in Deep Neural Networks
1Pattern Analysis & Computer Vision (PAVIS), Istituto Italiano di Tecnologia
Boston University
3Adobe Research
University of Verona
('40063519', 'Andrea Zunino', 'andrea zunino')
('3298267', 'Sarah Adel Bargal', 'sarah adel bargal')
('2322579', 'Pietro Morerio', 'pietro morerio')
('1701293', 'Jianming Zhang', 'jianming zhang')
('1749590', 'Stan Sclaroff', 'stan sclaroff')
('1727204', 'Vittorio Murino', 'vittorio murino')
{andrea.zunino,vittorio.murino}@iit.it,
{sbargal,sclaroff}@bu.edu, jianmzha@adobe.com
6a4ebd91c4d380e21da0efb2dee276897f56467aHOG ACTIVE APPEARANCE MODELS
Imperial College London, U.K
University of Lincoln, School of Computer Science, U.K
('2788012', 'Epameinondas Antonakos', 'epameinondas antonakos')
('2575567', 'Joan Alabort-i-Medina', 'joan alabort-i-medina')
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
6a1beb34a2dfcdf36ae3c16811f1aef6e64abff2
6a7e464464f70afea78552c8386f4d2763ea1d9cReview Article
International Journal of Current Engineering and Technology
E-ISSN 2277 – 4106, P-ISSN 2347 - 5161
©2014 INPRESSCO
, All Rights Reserved
Available at http://inpressco.com/category/ijcet
Facial Landmark Localization – A Literature Survey
PES Institute of Technology, Bangalore, Karnataka, India
Accepted 25 May 2014, Available online 01 June2014, Vol.4, No.3 (June 2014)
32925200665a1bbb4fc8131cd192cb34c2d7d9e33-9
MVA2009 IAPR Conference on Machine Vision Applications, May 20-22, 2009, Yokohama, JAPAN
An Active Appearance Model with a Derivative-Free
Optimization
CNRS, Institute of Automation of the Chinese Academy of Sciences
95, Zhongguancun Dong Lu, PO Box 2728 − Beijing 100190 − PR China
LIAMA Sino-French IT Lab.
('8214735', 'Jixia Zhang', 'jixia zhang')
('1742818', 'Franck Davoine', 'franck davoine')
('3364363', 'Chunhong Pan', 'chunhong pan')
Franck.Davoine@gmail.com
322c063e97cd26f75191ae908f09a41c534eba90Noname manuscript No.
(will be inserted by the editor)
Improving Image Classification using Semantic Attributes
('1758652', 'Yu Su', 'yu su')
325b048ecd5b4d14dce32f92bff093cd744aa7f8Multi-Image Graph Cut Clothing Segmentation for Recognizing People
CVPR 2008 Submission #2670. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.
Anonymous CVPR submission
Paper ID 2670
32f7e1d7fa62b48bedc3fcfc9d18fccc4074d347HIERARCHICAL SPARSE AND COLLABORATIVE LOW-RANK REPRESENTATION FOR
EMOTION RECOGNITION
Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
('40031188', 'Xiang Xiang', 'xiang xiang')
('31507586', 'Minh Dao', 'minh dao')
('1678633', 'Gregory D. Hager', 'gregory d. hager')
('1709073', 'Trac D. Tran', 'trac d. tran')
{xxiang, minh.dao, ghager1, trac}@jhu.edu
32d8e555441c47fc27249940991f80502cb70bd5Machine Learning Models that Remember Too Much
Cornell University
Cornell Tech
Cornell Tech
('3469125', 'Congzheng Song', 'congzheng song')
('1723945', 'Vitaly Shmatikov', 'vitaly shmatikov')
('1707461', 'Thomas Ristenpart', 'thomas ristenpart')
cs2296@cornell.edu
ristenpart@cornell.edu
shmat@cs.cornell.edu
3294e27356c3b1063595885a6d731d625b15505aIllumination Face Spaces are Idiosyncratic
H. Kley1, C. Peterson1
Colorado State University, Fort Collins, CO 80523, USA
('2640182', 'Jen-Mei Chang', 'jen-mei chang')
324f39fb5673ec2296d90142cf9a909e595d82cfHindawi Publishing Corporation
Mathematical Problems in Engineering
Volume 2011, Article ID 864540, 15 pages
doi:10.1155/2011/864540
Research Article
Relationship Matrix Nonnegative
Decomposition for Clustering
Faculty of Science and State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong
University, Xi'an, Shaanxi Province, 710049, China
Received 18 January 2011; Revised 28 February 2011; Accepted 9 March 2011
Copyright © 2011 J.-Y. Pan and J.-S. Zhang. This is an open access article distributed under
the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Nonnegative matrix factorization (NMF) is a popular tool for analyzing the latent structure of nonnegative data. For a positive pairwise similarity matrix, symmetric NMF (SNMF) and weighted NMF (WNMF) can be used to cluster the data. However, neither is very efficient for ill-structured pairwise similarity matrices. In this paper, a novel model, called relationship matrix nonnegative decomposition (RMND), is proposed to discover the latent clustering structure from the pairwise similarity matrix. The RMND model is derived from the nonlinear NMF algorithm. RMND decomposes a pairwise similarity matrix into a product of three low-rank nonnegative matrices. The pairwise similarity matrix is represented as a transformation of a positive semidefinite matrix which brings out the latent clustering structure. We develop a learning procedure based on multiplicative update rules and the steepest descent method to calculate the nonnegative solution of RMND. Experimental results on four different databases show that the proposed RMND approach achieves higher clustering accuracy.
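The multiplicative-update idea behind the SNMF family mentioned in this abstract can be illustrated with a generic symmetric NMF sketch (this is not the paper's RMND algorithm; the damped update rule, toy similarity matrix, and rank choice are assumptions for the example):

```python
import numpy as np

def symmetric_nmf(S, k, iters=200, eps=1e-9, seed=0):
    """Approximate a nonnegative similarity matrix S ~= H @ H.T,
    H >= 0, via damped multiplicative updates (Ding et al. style)."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    H = rng.random((n, k))
    for _ in range(iters):
        numer = S @ H
        denom = H @ (H.T @ H) + eps
        H *= 0.5 * (1.0 + numer / denom)  # stays nonnegative by construction
    return H

# Toy block-diagonal similarity matrix with two clusters of three items.
S = np.kron(np.eye(2), np.ones((3, 3)))
H = symmetric_nmf(S, k=2)
labels = H.argmax(axis=1)  # rows in the same block should share a label
print(labels)
```

Reading cluster assignments off `argmax` of the factor rows is the standard way these factorizations are used for clustering, which is the aspect the abstract's RMND model extends with a three-factor decomposition.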
1. Introduction
Nonnegative matrix factorization (NMF) [1] has been introduced as an effective technique for
analyzing the latent structure of nonnegative data such as images and documents. A variety
of real-world applications of NMF has been found in many areas such as machine learning,
signal processing [2-4], data clustering [5, 6], and computer vision [7].
Most applications focus on the clustering aspect of NMF [8, 9]. Each sample can be
represented as a linear combination of clustering centroids. Recently, a theoretic analysis
has shown the equivalence between NMF and K-means/spectral clustering [10]. Symmetric
NMF (SNMF) [10] is an extension of NMF. It aims at learning clustering structure from
the kernel matrix or pairwise similarity matrix, which is positive semidefinite. When the simi-
larity matrix is not positive semidefinite, SNMF is not able to capture the clustering structure
('9416881', 'Ji-Yuan Pan', 'ji-yuan pan')
('2265568', 'Jiang-She Zhang', 'jiang-she zhang')
('14464924', 'Angelo Luongo', 'angelo luongo')
Correspondence should be addressed to Ji-Yuan Pan, panjiyuan@gmail.com
321bd4d5d80abb1bae675a48583f872af3919172Wang et al. EURASIP Journal on Image and Video Processing (2016) 2016:44
DOI 10.1186/s13640-016-0152-3
EURASIP Journal on Image
and Video Processing
REVIEW
Entropy-weighted feature-fusion method
for head-pose estimation
Open Access
('40579241', 'Kang Liu', 'kang liu')
('2076553', 'Xu Qian', 'xu qian')
3240c9359061edf7a06bfeb7cc20c103a65904c2PPR-FCN: Weakly Supervised Visual Relation Detection via Parallel Pairwise
R-FCN
Columbia University, National University of Singapore
('5462268', 'Hanwang Zhang', 'hanwang zhang')
('26538630', 'Zawlin Kyaw', 'zawlin kyaw')
('46380822', 'Jinyang Yu', 'jinyang yu')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
{hanwangzhang, kzl.zawlin, yjy941124}@gmail.com; shih.fu.chang@columbia.edu
32b8c9fd4e3f44c371960eb0074b42515f318ee7
32575ffa69d85bbc6aef5b21d73e809b37bf376dMeasuring Biometric Sample Quality in Terms of Biometric Information
Richard Youmaran and Andy Adler
School of Information Technology and Engineering
University of Ottawa, Ontario, Canada
ABSTRACT
This paper develops a new approach to understand and measure variations in biometric sample quality. We begin with the intuition that degradations to a biometric sample will reduce the amount of identifiable information available. In order to measure the amount of identifiable information, we define biometric information as the decrease in uncertainty about the identity of a person due to a set of biometric measurements. We then show that the biometric information for a person may be calculated by the relative entropy D(p‖q) between the population feature distribution q and the person's feature distribution p. The biometric information for a system is the mean D(p‖q) for all persons in the population. In order to practically measure D(p‖q) with limited data samples, we introduce an algorithm which regularizes a Gaussian model of the feature covariances. An example of this method is shown for PCA, Fisher linear discriminant (FLD) and ICA based face recognition, with biometric information calculated to be 45.0 bits (PCA), 37.0 bits (FLD), 39.0 bits (ICA) and 55.6 bits (fusion of PCA and FLD features). Based on this definition of biometric information, we simulate degradations of biometric images and calculate the resulting decrease in biometric information. Results show a quasi-linear decrease for small levels of blur, with an asymptotic behavior at larger blur.
I. INTRODUCTION
Biometric sample quality is a measure of the usefulness of a biometric image. [...] One recent development is the significant level of interest in standards for measurement of biometric quality. For example, ISO has recently established a biometric sample quality draft standard. Accordingly, biometric sample quality may be considered from the point of view of character (inherent features), fidelity (accuracy of features), or utility (predicted biometrics performance). A general consensus has developed that the most important measure of a quality metric is its utility: images evaluated as higher quality must be those that result in better identification of individuals, as measured by an increased separation of genuine and impostor match score distributions. The nature of biometric sample quality has seen little investigation, although for specific biometric modalities, algorithms to measure biometric quality have been proposed; for example, the NFIQ algorithm is a widely used measure for fingerprint image quality. One current difficulty is that there is no consensus as to what a measure of biometric sample quality should give.
Measuring biometric information content is related to many issues in biometric technology. For example, one of the most basic biometric questions is that of uniqueness, e.g., to what extent are fingerprints unique. One may be interested in how much identifiable information is available for a given technology, such as video surveillance. In the context of biometric fusion, one would like to be able to quantify the biometric information in each system individually and the potential gain from fusing the systems. Additionally, such a measure is relevant to biometric cryptosystems and privacy measures. Several authors have presented approaches relevant to this question [...].
In this paper, we elaborate an approach to address this question based on definitions from information theory. We define the term "biometric information" (BI) as the decrease in uncertainty about the identity of a person due to a set of biometric measurements. In order to interpret this definition, we refer to two instants: before a biometric measurement, t0, at which time we only know a person p is part of a population q; and after receiving a set of measurements, t1, when we have more information and less uncertainty about the person's identity. Based on these measures, we then define the information loss due to a degradation in image quality as the relative change in biometric information.
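The relative entropy D(p‖q) discussed above has a closed form when p and q are modeled as multivariate Gaussians, as in the paper's regularized covariance approach. A minimal sketch of that computation, reported in bits as the paper does (the toy means and covariances here are illustrative assumptions):

```python
import numpy as np

def gaussian_kl_bits(mu_p, cov_p, mu_q, cov_q):
    """D(p || q) in bits for multivariate Gaussians p and q."""
    d = mu_p.shape[0]
    inv_q = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    kl_nats = 0.5 * (
        np.trace(inv_q @ cov_p)            # covariance mismatch
        + diff @ inv_q @ diff              # mean shift term
        - d
        + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p))
    )
    return kl_nats / np.log(2)             # convert nats to bits

# Toy example: a person's distribution p is a tight Gaussian sitting
# inside a broader population distribution q.
mu_p, cov_p = np.array([1.0, 0.5]), 0.1 * np.eye(2)
mu_q, cov_q = np.zeros(2), np.eye(2)
print(f"{gaussian_kl_bits(mu_p, cov_p, mu_q, cov_q):.2f} bits")  # about 2.93 bits
```

The tighter and more displaced a person's feature distribution is relative to the population, the more bits of identifiable information the measurement carries, which is exactly the intuition the definition above formalizes.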
32ecbbd76fdce249f9109594eee2d52a1cafdfc7Object Specific Deep Learning Feature and Its Application to Face Detection
University of Nottingham, Ningbo, China
University of Nottingham, Ningbo, China
Shenzhen University, Shenzhen, China
University of Nottingham, Ningbo, China
('3468964', 'Xianxu Hou', 'xianxu hou')
('39508183', 'Ke Sun', 'ke sun')
('1687690', 'LinLin Shen', 'linlin shen')
('1698461', 'Guoping Qiu', 'guoping qiu')
xianxu.hou@nottingham.edu.cn
ke.sun@nottingham.edu.cn
llshen@szu.edu.cn
guoping.qiu@nottingham.edu.cn
32c20afb5c91ed7cdbafb76408c3a62b38dd9160Viewing Real-World Faces in 3D
The Open University of Israel, Israel
('1756099', 'Tal Hassner', 'tal hassner')
hassner@openu.ac.il
32a40c43a9bc1f1c1ed10be3b9f10609d7e0cb6bLighting Aware Preprocessing for Face
Recognition across Varying Illumination
1 Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing 100190, China
Graduate University of Chinese Academy of Sciences, Beijing 100049, China
Institute of Digital Media, Peking University, Beijing 100871, China
('34393045', 'Hu Han', 'hu han')
('1685914', 'Shiguang Shan', 'shiguang shan')
('2343895', 'Laiyun Qing', 'laiyun qing')
('1710220', 'Xilin Chen', 'xilin chen')
('1698902', 'Wen Gao', 'wen gao')
{hhan,sgshan,lyqing,xlchen,wgao}@jdl.ac.cn
329394480fc5e9e96de4250cc1a2b060c3677c94Improved Dense Trajectory with Cross Streams
Graduate School of Information Science and Technology
University of Tokyo
('8197937', 'Katsunori Ohnishi', 'katsunori ohnishi')
('2859204', 'Masatoshi Hidaka', 'masatoshi hidaka')
('1790553', 'Tatsuya Harada', 'tatsuya harada')
ohnishi@mi.t.u-tokyo.ac.jp
hidaka@mi.t.u-tokyo.ac.jp
harada@mi.t.u-tokyo.ac.jp
32728e1eb1da13686b69cc0bd7cce55a5c963cddAutomatic Facial Emotion Recognition Method Based on Eye
Region Changes
Faculty of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
Faculty of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
Faculty of Electrical and Computer Engineering, Bu-Ali Sina University, Hamadan, Iran
Received: 19/Apr/2015 Revised: 19/Mar/2016 Accepted: 19/Apr/2016
('35191740', 'Nasrollah Moghadam Charkari', 'nasrollah moghadam charkari')
('2239524', 'Muharram Mansoorizadeh', 'muharram mansoorizadeh')
m.navran@modares.ac.ir
charkari@modares.ac.ir
mansoorm@basu.ac.ir
32c9ebd2685f522821eddfc19c7c91fd6b3caf22Finding Correspondence from Multiple Images
via Sparse and Low-Rank Decomposition
School of Computer Engineering, Nanyang Technological University, Singapore
2 Advanced Digital Sciences Center, Singapore
('1920683', 'Zinan Zeng', 'zinan zeng')
('1926757', 'Tsung-Han Chan', 'tsung-han chan')
('2370507', 'Kui Jia', 'kui jia')
('1714390', 'Dong Xu', 'dong xu')
{znzeng,dongxu}@ntu.edu.sg, {Th.chan,Chris.jia}@adsc.com.sg
3270b2672077cc345f188500902eaf7809799466Multibiometric Systems: Fusion Strategies and
Template Security
By
A Dissertation
Submitted to
Michigan State University
in partial fulfillment of the requirements
for the degree of
Doctor of Philosophy
Department of Computer Science and Engineering
2008
('34633765', 'Karthik Nandakumar', 'karthik nandakumar')
321c8ba38db118d8b02c0ba209be709e6792a2c7Learn to Combine Multiple Hypotheses for Accurate Face Alignment
Center for Biometrics and Security Research & National Laboratory of Pattern Recognition
Institute of Automation, Chinese Academy of Sciences, China
('1721677', 'Junjie Yan', 'junjie yan')
('1718623', 'Zhen Lei', 'zhen lei')
('1716143', 'Dong Yi', 'dong yi')
('34679741', 'Stan Z. Li', 'stan z. li')
{jjyan,zlei,dyi,szli}@nlpr.ia.ac.cn
324b9369a1457213ec7a5a12fe77c0ee9aef1ad4Dynamic Facial Analysis: From Bayesian Filtering to Recurrent Neural Network
NVIDIA
('2931118', 'Jinwei Gu', 'jinwei gu')
{jinweig,xiaodongy,shalinig,jkautz}@nvidia.com
329d58e8fb30f1bf09acb2f556c9c2f3e768b15cLeveraging Intra and Inter-Dataset Variations for
Robust Face Alignment
Department of Computer Science and Technology
Tsinghua University
Department of Information Engineering
The Chinese University of Hong Kong
('38766009', 'Wenyan Wu', 'wenyan wu')
('1692609', 'Shuo Yang', 'shuo yang')
wwy15@mails.tsinghua.edu.cn
ys014@ie.cuhk.edu.hk
32df63d395b5462a8a4a3c3574ae7916b0cd4d1d©2011 IEEE
ICASSP 2011
35308a3fd49d4f33bdbd35fefee39e39fe6b30b7('1799216', 'Jeong-Jik Seo', 'jeong-jik seo')
('1780155', 'Jisoo Son', 'jisoo son')
('7627712', 'Wesley De Neve', 'wesley de neve')
('1692847', 'Yong Man Ro', 'yong man ro')
353b6c1f431feac6edde12b2dde7e6e702455abdMulti-scale Patch based Collaborative
Representation for Face Recognition with
Margin Distribution Optimization
Biometric Research Center
The Hong Kong Polytechnic University
School of Computer Science and Technology, Tianjin University
('2873638', 'Pengfei Zhu', 'pengfei zhu')
('36685537', 'Lei Zhang', 'lei zhang')
('1688792', 'Qinghua Hu', 'qinghua hu')
{cspzhu,cslzhang}@comp.polyu.edu.hk
352d61eb66b053ae5689bd194840fd5d33f0e9c0Analysis Dictionary Learning based
Classification: Structure for Robustness
('49501811', 'Wen Tang', 'wen tang')
('1733181', 'Ashkan Panahi', 'ashkan panahi')
('1769928', 'Hamid Krim', 'hamid krim')
('2622498', 'Liyi Dai', 'liyi dai')
350da18d8f7455b0e2920bc4ac228764f8fac292From: AAAI Technical Report SS-03-08. Compilation copyright © 2003, AAAI (www.aaai.org). All rights reserved.
Automatic Detecting Neutral Face for Face Authentication and
Facial Expression Analysis
Exploratory Computer Vision Group
IBM Thomas J. Watson Research Center
PO Box 704, Yorktown Heights, NY 10598
('40383812', 'Ying-li Tian', 'ying-li tian')
('1773140', 'Ruud M. Bolle', 'ruud m. bolle')
{yltian, bolle}@us.ibm.com
3538d2b5f7ab393387ce138611ffa325b6400774A DSP-BASED APPROACH FOR THE IMPLEMENTATION OF FACE RECOGNITION
ALGORITHMS
A. U. Batur
B. E. Flinchbaugh
M. H. Hayes III
Center for Signal and Image Proc.
Georgia Inst. Of Technology
Atlanta, GA
Imaging and Audio Lab.
Texas Instruments
Dallas, TX
Center for Signal and Image Proc.
Georgia Inst. Of Technology
Atlanta, GA
3504907a2e3c81d78e9dfe71c93ac145b1318f9c
Unconstrained Still/Video-Based Face Verification with Deep
Convolutional Neural Networks
('36407236', 'Jun-Cheng Chen', 'jun-cheng chen')
('2682056', 'Ching-Hui Chen', 'ching-hui chen')
('9215658', 'Rama Chellappa', 'rama chellappa')
('26988560', 'Rajeev Ranjan', 'rajeev ranjan')
35b1c1f2851e9ac4381ef41b4d980f398f1aad68Geometry Guided Convolutional Neural Networks for
Self-Supervised Video Representation Learning
('2551285', 'Chuang Gan', 'chuang gan')
('40206014', 'Boqing Gong', 'boqing gong')
('2473509', 'Kun Liu', 'kun liu')
('49466491', 'Hao Su', 'hao su')
('1744254', 'Leonidas J. Guibas', 'leonidas j. guibas')
351c02d4775ae95e04ab1e5dd0c758d2d80c3dddActionSnapping: Motion-based Video
Synchronization
Disney Research
('2893744', 'Alexander Sorkine-Hornung', 'alexander sorkine-hornung')
35f03f5cbcc21a9c36c84e858eeb15c5d6722309Placing Broadcast News Videos in their Social Media
Context using Hashtags
Columbia University
('2136860', 'Joseph G. Ellis', 'joseph g. ellis')
('2602265', 'Svebor Karaman', 'svebor karaman')
('1786871', 'Hongzhi Li', 'hongzhi li')
('36009509', 'Hong Bin Shim', 'hong bin shim')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
{jge2105, svebor.karaman, hongzhi.li, h.shim, sc250}@columbia.edu
35e4b6c20756cd6388a3c0012b58acee14ffa604Gender Classification in Large Databases
E. Ramón-Balmaseda, J. Lorenzo-Navarro, and M. Castrillón-Santana
Universidad de Las Palmas de Gran Canaria
SIANI
Spain
enrique.de101@alu.ulpgc.es, {jlorenzo,mcastrillon}@siani.es
356b431d4f7a2a0a38cf971c84568207dcdbf189Recognize Complex Events from Static Images by Fusing Deep Channels
The Chinese University of Hong Kong
Shenzhen key lab of Comp. Vis. and Pat. Rec., Shenzhen Institutes of Advanced Technology
CAS, China
('3331521', 'Yuanjun Xiong', 'yuanjun xiong')
xy012@ie.cuhk.edu.hk
zk013@ie.cuhk.edu.hk
dhlin@ie.cuhk.edu.hk
xtang@ie.cuhk.edu.hk
35f921def890210dda4b72247849ad7ba7d35250Exemplar-based Graph Matching
for Robust Facial Landmark Localization
Carnegie Mellon University
Pittsburgh, PA 15213
http://www.f-zhou.com
Adobe Research
San Jose, CA 95110
('1757386', 'Feng Zhou', 'feng zhou')
('1721019', 'Jonathan Brandt', 'jonathan brandt')
{jbrandt, zlin}@adobe.com
357963a46dfc150670061dbc23da6ba7d6da786e
35ec9b8811f2d755c7ad377bdc29741b55b09356Efficient, Robust and Accurate Fitting of a 3D Morphable Model
University of Basel
Bernoullistrasse 16, CH - 4056 Basel, Switzerland
('3293655', 'Sami Romdhani', 'sami romdhani')
('1687079', 'Thomas Vetter', 'thomas vetter')
{sami.romdhani, thomas.vetter}@unibas.ch
35f1bcff4552632419742bbb6e1927ef5e998eb4
35c973dba6e1225196566200cfafa150dd231fa8
35f084ddee49072fdb6e0e2e6344ce50c02457efA Bilinear Illumination Model
for Robust Face Recognition
The Harvard community has made this
article openly available. Please share how
this access benefits you. Your story matters
Citation
Machiraju. 2005. A bilinear illumination model for robust face
recognition. Proceedings of the Tenth IEEE International Conference
on Computer Vision: October 17-21, 2005, Beijing, China. 1177-1184.
Los Alamitos, CA: IEEE Computer Society.
Published Version
doi:10.1109/ICCV.2005.5
Citable link
http://nrs.harvard.edu/urn-3:HUL.InstRepos:4238979
Terms of Use
This article was downloaded from Harvard University's DASH repository, and is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA
('1780935', 'Baback Moghaddam', 'baback moghaddam')
('1701371', 'Hanspeter Pfister', 'hanspeter pfister')
3505c9b0a9631539e34663310aefe9b05ac02727A Joint Discriminative Generative Model for Deformable Model
Construction and Classification
Imperial College London, UK
Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, The
Netherlands
('2000297', 'Ioannis Marras', 'ioannis marras')
('1793625', 'Symeon Nikitidis', 'symeon nikitidis')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1694605', 'Maja Pantic', 'maja pantic')
2 Yoti Ltd, London, UK, e-mail: symeon.nikitidis@yoti.com
3506518d616343d3083f4fe257a5ee36b376b9e1Unsupervised Domain Adaptation for
Personalized Facial Emotion Recognition
University of Trento
Trento, Italy
FBK
University of Perugia
Trento, Italy
Perugia, Italy
University of Trento
Trento, Italy
('2933565', 'Gloria Zen', 'gloria zen')
('1716310', 'Enver Sangineto', 'enver sangineto')
('40811261', 'Elisa Ricci', 'elisa ricci')
('1703601', 'Nicu Sebe', 'nicu sebe')
353a89c277cca3e3e4e8c6a199ae3442cdad59b5
35e0256b33212ddad2db548484c595334f15b4daAttentive Fashion Grammar Network for
Fashion Landmark Detection and Clothing Category Classification
Beijing Lab of Intelligent Information Technology, School of Computer Science, Beijing Institute of Technology, China
University of California, Los Angeles, USA
('2693875', 'Wenguan Wang', 'wenguan wang')
('2762640', 'Yuanlu Xu', 'yuanlu xu')
('34926055', 'Jianbing Shen', 'jianbing shen')
('3133970', 'Song-Chun Zhu', 'song-chun zhu')
35e6f6e5f4f780508e5f58e87f9efe2b07d8a864This paper is a preprint (IEEE accepted status). IEEE copyright notice. 2018 IEEE.
Personal use of this material is permitted. Permission from IEEE must be obtained for all
other uses, in any current or future media, including reprinting/republishing this material for
advertising or promotional purposes, creating new collective works, for resale or redistribu-
tion to servers or lists, or reuse of any copyrighted.
A. Tejero-de-Pablos, Y. Nakashima, T. Sato, N. Yokoya, M. Linna and E. Rahtu, "Summarization of User-Generated Sports Video by Using Deep Action Recognition Features," in IEEE Transactions on Multimedia.
doi: 10.1109/TMM.2018.2794265
Keywords: cameras; feature extraction; games; hidden Markov models; semantics; three-dimensional displays; 3D convolutional neural networks; sports video summarization; action recognition; deep learning; long short-term memory; user-generated video.
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8259321&isnumber=4456689
35e87e06cf19908855a16ede8c79a0d3d7687b5cStrategies for Multi-View Face Recognition for
Identification of Human Faces: A Review
Department of Computer Science
Mahatma Gandhi Shikshan Mandal’s,
Arts, Science and Commerce College, Chopda
Dist: Jalgaon (M.S)
Dr. R.R.Manza
Department of Computer Science and IT
Dr. Babasaheb Ambedkar Marathwada University
Aurangabad.
('21182750', 'Pritesh G. Shah', 'pritesh g. shah')
pritshah143@gmail.com
manzaramesh@gmail.com
352110778d2cc2e7110f0bf773398812fd905eb1TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. X, NO. X, JUNE 2014
Matrix Completion for Weakly-supervised
Multi-label Image Classification
('31671904', 'Ricardo Cabral', 'ricardo cabral')
('1683568', 'Fernando De la Torre', 'fernando de la torre')
('2884203', 'Alexandre Bernardino', 'alexandre bernardino')
6964af90cf8ac336a2a55800d9c510eccc7ba8e1Temporal Relational Reasoning in Videos
MIT CSAIL
('1804424', 'Bolei Zhou', 'bolei zhou')
('50112310', 'Alex Andonian', 'alex andonian')
('1690178', 'Antonio Torralba', 'antonio torralba')
{bzhou,aandonia,oliva,torralba}@csail.mit.edu
697b0b9630213ca08a1ae1d459fabc13325bdcbb
69ff40fd5ce7c3e6db95a2b63d763edd8db3a102HUMAN AGE ESTIMATION VIA GEOMETRIC AND TEXTURAL
FEATURES
Merve KILINC1 and Yusuf Sinan AKGUL2
1TUBITAK BILGEM UEKAE, Anibal Street, 41470, Gebze, Kocaeli, Turkey
GIT Vision Lab, http://vision.gyte.edu.tr/, Gebze Institute of Technology
Kocaeli, Turkey
Keywords: age estimation, age classification, geometric features, LBP, Gabor, LGBP, cross ratio, FGNET, MORPH
mkilinc@uekae.tubitak.gov.tr, mkilinc@gyte.edu.tr, akgul@bilmuh.gyte.edu.tr
69adbfa7b0b886caac15ebe53b89adce390598a3Face hallucination using cascaded
super-resolution and identity priors
University of Ljubljana, Faculty of Electrical Engineering
University of Notre Dame
Fig. 1. Sample face hallucination results generated with the proposed method.
('3387470', 'Klemen Grm', 'klemen grm')
('2613438', 'Walter J. Scheirer', 'walter j. scheirer')
69d29012d17cdf0a2e59546ccbbe46fa49afcd68Subspace clustering of dimensionality-reduced data
ETH Zurich, Switzerland
('1730683', 'Reinhard Heckel', 'reinhard heckel')
('2208878', 'Michael Tschannen', 'michael tschannen')
Email: {heckel,boelcskei}@nari.ee.ethz.ch, michaelt@student.ethz.ch
69a68f9cf874c69e2232f47808016c2736b90c35Learning Deep Representation for Imbalanced Classification
The Chinese University of Hong Kong
2SenseTime Group Limited
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
('2000034', 'Chen Huang', 'chen huang')
('9263285', 'Yining Li', 'yining li')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
{chuang,ly015,ccloy,xtang}@ie.cuhk.edu.hk
69de532d93ad8099f4d4902c4cad28db958adfea
69a55c30c085ad1b72dd2789b3f699b2f4d3169fInternational Journal of Computer Trends and Technology (IJCTT) – Volume 34 Number 3 - April 2016
Automatic Happiness Strength Analysis of a
Group of People using Facial Expressions
Sagiri Prasanthi#1, Maddali M.V.M. Kumar*2,
#1PG Student, #2Assistant Professor
St. Ann's College of Engineering and Technology, Andhra Pradesh, India
69b18d62330711bfd7f01a45f97aaec71e9ea6a5RESEARCH ARTICLE
M-Track: A New Software for Automated
Detection of Grooming Trajectories in Mice
State University of New York Polytechnic Institute, Utica, New York
United States of America, State University of New York Albany, Albany, New York
United States of America, State University of New York Albany, Albany
New York, United States of America
☯ These authors contributed equally to this work.
('35820210', 'Sheldon L. Reeves', 'sheldon l. reeves')
('8626210', 'Kelsey E. Fleming', 'kelsey e. fleming')
('1708615', 'Lin Zhang', 'lin zhang')
('3976998', 'Annalisa Scimemi', 'annalisa scimemi')
* scimemia@gmail.com, ascimemi@albany.edu
69526cdf6abbfc4bcd39616acde544568326d856
Face Verification Using Template Matching
('2627097', 'Anil Kumar Sao', 'anil kumar sao')
690d669115ad6fabd53e0562de95e35f1078dfbbProgressive versus Random Projections for Compressive Capture of Images,
Lightfields and Higher Dimensional Visual Signals
MIT Media Lab
75 Amherst St, Cambridge, MA
MERL
201 Broadway, Cambridge MA
MIT Media Lab
75 Amherst St, Cambridge, MA
('1912905', 'Rohit Pandharkar', 'rohit pandharkar')
('1785066', 'Ashok Veeraraghavan', 'ashok veeraraghavan')
('1717566', 'Ramesh Raskar', 'ramesh raskar')
6993bca2b3471f26f2c8a47adfe444bfc7852484The Do’s and Don’ts for CNN-based Face Verification
Carlos Castillo
University of Maryland, College Park
UMIACS
('2068427', 'Ankan Bansal', 'ankan bansal')
('48467498', 'Rajeev Ranjan', 'rajeev ranjan')
('9215658', 'Rama Chellappa', 'rama chellappa')
{ankan,carlos,rranjan1,rama}@umiacs.umd.edu
69eb6c91788e7c359ddd3500d01fb73433ce2e65CAMGRAPH: Distributed Graph Processing for
Camera Networks
College of Computing
Georgia Institute of Technology
Atlanta, GA, USA
('3427189', 'Steffen Maass', 'steffen maass')
('5540701', 'Kirak Hong', 'kirak hong')
('1751741', 'Umakishore Ramachandran', 'umakishore ramachandran')
steffen.maass@gatech.edu,khong9@cc.gatech.edu,rama@cc.gatech.edu
691964c43bfd282f6f4d00b8b0310c554b613e3bTemporal Hallucinating for Action Recognition with Few Still Images
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
The Chinese University of Hong Kong 3 SenseTime Group Limited
('46696518', 'Lei Zhou', 'lei zhou')
('33427555', 'Yu Qiao', 'yu qiao')
69063f7e0a60ad6ce16a877bc8f11b59e5f7348eClass-Specific Image Deblurring
2, Fatih Porikli1
The Australian National University Canberra ACT 2601, Australia
2NICTA, Locked Bag 8001, Canberra ACT 2601, Australia
('33672969', 'Saeed Anwar', 'saeed anwar')
('1774721', 'Cong Phuoc Huynh', 'cong phuoc huynh')
69a9da55bd20ce4b83e1680fbc6be2c976067631
69c2ac04693d53251500557316c854a625af84ee
Pattern Recognition Letters
journal homepage: www.elsevier.com/locate/patrec
50 years of biometric research: Accomplishments, challenges,
and opportunities
Michigan State University, East Lansing, MI 48824, USA
b IBM Research Singapore, 9 Changi Business Park Central 1, 486048 Singapore
Article info
Abstract
Article history:
Received 4 February 2015
Keywords:
Biometrics
Fingerprints
Face
Iris
Security
Privacy
Forensics
Biometric recognition refers to the automated recognition of individuals based on their biological and behavioral characteristics such as fingerprint, face, iris, and voice. The first scientific paper on automated fingerprint matching was published by Mitchell Trauring in the journal Nature in 1963. The first objective of this paper is to document the significant progress that has been achieved in the field of biometric recognition in the past 50 years since Trauring's landmark paper. This progress has enabled current state-of-the-art biometric systems to accurately recognize individuals based on biometric trait(s) acquired under controlled environmental conditions from cooperative users. Despite this progress, a number of challenging issues continue to inhibit the full potential of biometrics to automatically recognize humans. The second objective of this paper is to enlist such challenges, analyze the solutions proposed to overcome them, and highlight the research opportunities in this field. One of the foremost challenges is the design of robust algorithms for representing and matching biometric samples obtained from uncooperative subjects under unconstrained environmental conditions (e.g., recognizing faces in a crowd). In addition, fundamental questions such as the distinctiveness and persistence of biometric traits need greater attention. Problems related to the security of biometric data and robustness of the biometric system against spoofing and obfuscation attacks also remain unsolved. Finally, larger system-level issues like usability, user privacy concerns, integration with the end application, and return on investment have not been adequately addressed. Unlocking the full potential of biometrics through inter-disciplinary research in the above areas will not only lead to widespread adoption of this promising technology, but will also result in wider user acceptance and societal impact.
© 2016 Published by Elsevier B.V.
1. Introduction
“It is the purpose of this article to present, together with some evidence of its feasibility, a method by which decentralized automatic identity verification, such as might be desired for credit, banking or security purposes, can be accomplished through automatic comparison of the minutiae in finger-ridge patterns.”
– Mitchell Trauring, Nature, March 1963
In modern society, the ability to reliably identify individuals in real-time is a fundamental requirement in many applications including forensics, international border crossing, financial transactions, and computer security. Traditionally, exclusive possession of a token, such as a passport or an ID card, has been extensively used for identifying individuals. In the context of computer systems and applications, knowledge-based schemes based on passwords and PINs are commonly used for person authentication.2 Since both token-based and knowledge-based mechanisms have their own strengths and limitations, the use of two-factor authentication schemes that combine both these authentication mechanisms is also popular.
This paper has been recommended for acceptance by S. Sarkar.
Corresponding author. Tel.: +1 517 355 9282; fax: +1 517 432 1061.
1 IAPR Fellow.
http://dx.doi.org/10.1016/j.patrec.2015.12.013
Biometric recognition, or simply biometrics, refers to the automated recognition of individuals based on their biological and behavioral characteristics [39]. Examples of biometric traits that have been successfully used in practical applications include face, fingerprint, palmprint, iris, palm/finger vein, and voice. The use of DNA, in the context of biometrics (as opposed to just forensics), is also beginning to gain traction. Since biometric traits are generally inherent to an individual, there is a strong and reasonably
2 Authentication involves verifying the claimed identity of a person.
Please cite this article as: A.K. Jain et al., 50 years of biometric research: Accomplishments, challenges, and opportunities, Pattern Recognition Letters (2016), http://dx.doi.org/10.1016/j.patrec.2015.12.013
('6680444', 'Anil K. Jain', 'anil k. jain')
('34633765', 'Karthik Nandakumar', 'karthik nandakumar')
('1698707', 'Arun Ross', 'arun ross')
E-mail addresses: jain@cse.msu.edu (A.K. Jain), nkarthik@sg.ibm.com
(K. Nandakumar), rossarun@cse.msu.edu (A. Ross).
6974449ce544dc208b8cc88b606b03d95c8fd368
69fb98e11df56b5d7ec7d45442af274889e4be52Harnessing the Deep Net Object Models for
enhancing Human Action Recognition
O.V. Ramana Murthy1 and Roland Goecke1,2
Vision and Sensing, HCC Lab, ESTeM, University of Canberra
IHCC, RSCS, CECS, Australian National University
Email: O.V.RamanaMurthy@ieee.org, roland.goecke@ieee.org
3cb2841302af1fb9656f144abc79d4f3d0b27380https://www.researchgate.net/publication/319928941
When 3D-Aided 2D Face Recognition Meets Deep
Learning: An extended UR2D for Pose-Invariant
Face Recognition
Article · September 2017
Xiang Xu, Pengfei Dou, Ha Le, Ioannis A. Kakadiaris
University of Houston
3c78b642289d6a15b0fb8a7010a1fb829beceee2Analysis of Facial Dynamics
Using a Tensor Framework
University of Bristol
Department of Computer Science
Bristol, United Kingdom
University of Bristol
Department of Experimental Psychology
Bristol, United Kingdom
('2903159', 'Lisa Gralewski', 'lisa gralewski')
('23725787', 'Edward Morrison', 'edward morrison')
('2022210', 'Ian Penton-Voak', 'ian penton-voak')
gralewsk@cs.bris.ac.uk
3cc3cf57326eceb5f20a02aefae17108e8c8ab57BENCHMARK FOR EVALUATING BIOLOGICAL IMAGE ANALYSIS TOOLS
Center for Bio-Image Informatics, Electrical and Computer Engineering Department,
University of California, Santa Barbara
http://www.bioimage.ucsb.edu
Biological images are critical components for a detailed understanding of the structure and functioning of cells and proteins. Image processing and analysis tools increasingly play a significant role in better harvesting this vast amount of data, most of which is currently analyzed manually and qualitatively. A number of image analysis tools have been proposed to automatically extract the image information. As studies relying on image analysis tools have become widespread, the validation of these methods, in particular segmentation methods, has become more critical. There have been very few efforts at creating benchmark datasets in the context of cell and tissue imaging, while there have been successful benchmarks in other fields, such as the Berkeley segmentation dataset [1], the handwritten digit recognition dataset MNIST [2] and face recognition datasets [3, 4]. In the field of biomedical image processing, most standardized benchmark datasets concentrate on macrobiological images such as mammograms and magnetic resonance imaging (MRI) images [5]; however, a standardized dataset for microbiological structures (e.g., cells and tissues) is still lacking, a well-known gap in biomedical imaging [5].
We propose a benchmark for biological images to: 1) provide image collections with well-defined ground truth; and 2) provide image analysis tools and evaluation methods to compare and validate analysis tools. We include a representative dataset of microbiological structures whose scales range from the subcellular level (nm) to the tissue level (µm), inheriting the intrinsic challenges of the domain of biomedical image analysis (Fig. 1). The dataset is acquired through two of the main microscopic imaging techniques: transmitted light microscopy and confocal laser scanning microscopy. The analysis tools1 in the benchmark are designed to obtain different quantitative measures from the dataset, including microtubule tracing, cell segmentation, and retinal layer segmentation.
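The abstract mentions evaluation methods for comparing and validating analysis tools but does not name a metric. A common choice for scoring a segmentation against manual ground truth is the Jaccard (intersection-over-union) index; the sketch below is our own illustration with toy pixel masks, not code or a metric taken from the benchmark.

```python
def jaccard(pred, truth):
    """Jaccard (IoU) score between two binary masks given as sets of pixel coordinates."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly by convention
    return len(pred & truth) / len(pred | truth)

# Toy example: pixels labelled "cell" by a hypothetical tool vs. a manual annotation.
auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = jaccard(auto, manual)  # 3 shared pixels / 5 in the union = 0.6
```

A score of 1.0 means the tool's mask matches the annotation exactly; values near 0 indicate little overlap.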
Fig. 1. Example dataset provided in the benchmark.
This research is supported by NSF ITR-0331697.
1All analysis tools mentioned in this work can be found at http://www.bioimage.ucsb.edu/publications/.
[Fig. 1 panel labels: light- and confocal-microscopy examples spanning subcellular (< 1 µm; microtubules, horizontal cells), cellular (< 10 µm and < 30 µm; photoreceptors, breast cancer cells, COS1 cells), and tissue (< 350 µm; retinal layers, ≈10-50 µm in width) scales.]
('8451780', 'Elisa Drelie Gelasca', 'elisa drelie gelasca')
('3045933', 'Jiyun Byun', 'jiyun byun')
('3064236', 'Boguslaw Obara', 'boguslaw obara')
3cb488a3b71f221a8616716a1fc2b951dd0de549Facial Age Estimation by
Adaptive Label Distribution Learning
School of Computer Science and Engineering
Key Lab of Computer Network and Information Integration, Ministry of Education
Southeast University, Nanjing 211189, China
('1735299', 'Xin Geng', 'xin geng')
('1794816', 'Qin Wang', 'qin wang')
('40228279', 'Yu Xia', 'yu xia')
Email: {xgeng, qinwang, xiayu}@seu.edu.cn
3cfbe1f100619a932ba7e2f068cd4c41505c9f58A Realistic Simulation Tool for Testing Face Recognition
Systems under Real-World Conditions∗
M. Correa, J. Ruiz-del-Solar, S. Parra-Tsunekawa, R. Verschae
Department of Electrical Engineering, Universidad de Chile
Advanced Mining Technology Center, Universidad de Chile
3c563542db664321aa77a9567c1601f425500f94TV-GAN: Generative Adversarial Network Based Thermal to Visible Face
Recognition
The University of Queensland, School of ITEE, QLD 4072, Australia
('50615828', 'Teng Zhang', 'teng zhang')
('2331880', 'Arnold Wiliem', 'arnold wiliem')
('1973322', 'Siqi Yang', 'siqi yang')
('2270092', 'Brian C. Lovell', 'brian c. lovell')
[patrick.zhang, a.williem, siqi.yang]@uq.edu.au, lovell@itee.uq.edu.au
3c03d95084ccbe7bf44b6d54151625c68f6e74d0
3cd7b15f5647e650db66fbe2ce1852e00c05b2e4
3c6cac7ecf546556d7c6050f7b693a99cc8a57b3Robust Facial Landmark Detection in the Wild
Submitted for the Degree of
Doctor of Philosophy
from the
University of Surrey
Centre for Vision, Speech and Signal Processing
Faculty of Engineering and Physical Sciences
University of Surrey
Guildford, Surrey GU2 7XH, U.K.
January 2016
('37705062', 'Zhenhua Feng', 'zhenhua feng')
('37705062', 'Zhenhua Feng', 'zhenhua feng')
3c57e28a4eb463d532ea2b0b1ba4b426ead8d9a0Defeating Image Obfuscation with Deep Learning
The University of Texas at
Austin
Cornell Tech
Cornell Tech
('34861228', 'Richard McPherson', 'richard mcpherson')
('2520493', 'Reza Shokri', 'reza shokri')
('1723945', 'Vitaly Shmatikov', 'vitaly shmatikov')
richard@cs.utexas.edu
shokri@cornell.edu
shmat@cs.cornell.edu
3cd9b0a61bdfa1bb8a0a1bf0369515a76ecd06e3Submitted 2/11; Revised 10/11; Published ??/11
Distance Metric Learning with Eigenvalue Optimization
College of Engineering, Mathematics and Physical Sciences
University of Exeter
Harrison Building, North Park Road
Exeter, EX4 4QF, UK
Department of Engineering Mathematics
University of Bristol
Merchant Venturers Building, Woodland Road
Bristol, BS8 1UB, UK
Editor:
('38954213', 'Yiming Ying', 'yiming ying')
('1695363', 'Peng Li', 'peng li')
y.ying@exeter.ac.uk
lipeng@ieee.org
3c97c32ff575989ef2869f86d89c63005fc11ba9Face Detection with the Faster R-CNN
Erik Learned-Miller
University of Massachusetts Amherst
University of Massachusetts Amherst
Amherst MA 01003
Amherst MA 01003
('40175280', 'Huaizu Jiang', 'huaizu jiang')
hzjiang@cs.umass.edu
elm@cs.umass.edu
3ce2ecf3d6ace8d80303daf67345be6ec33b3a93
3c1aef7c2d32a219bdbc89a44d158bc2695e360aAdversarial Attack Type I: Generating False Positives
Shanghai Jiao Tong University
Shanghai, P.R. China 200240
Shanghai Jiao Tong University
Shanghai, P.R. China 200240
Shanghai Jiao Tong University
Shanghai, P.R. China 200240
Shanghai Jiao Tong University
Shanghai, P.R. China 200240
('51428687', 'Sanli Tang', 'sanli tang')
('13858459', 'Mingjian Chen', 'mingjian chen')
('2182657', 'Xiaolin Huang', 'xiaolin huang')
('1688428', 'Jie Yang', 'jie yang')
tangsanli@sjtu.edu.cn
w179261466@sjtu.edu.cn
xiaolinhuang@sjtu.edu.cn
jieyang@sjtu.edu.cn
3c374cb8e730b64dacb9fbf6eb67f5987c7de3c8Measuring Gaze Orientation for Human-Robot
Interaction
∗ CNRS; LAAS; 7 avenue du Colonel Roche, 31077 Toulouse Cedex, France
† Université de Toulouse; UPS; LAAS-CNRS: F-31077 Toulouse, France
Introduction
In the context of human-robot interaction, estimating gaze orientation provides useful information about a human's focus of attention. This is contextual information: when you point at something, you usually look at it. Estimating gaze orientation requires head pose estimation. There are several techniques to estimate head pose from images; they are mainly based on training [3, 4] or on tracking local face features [6]. The approach described here is based on tracking local face features in image space using online learning; it is a mixed approach, since we track face features using some learning at the feature level. It uses SURF features [2] to guide detection and tracking. Such key features can be matched between images and used for object detection or object tracking [10]. Several approaches work on fixed-size images: training-based techniques mainly operate on low-resolution images because of computation costs, whereas approaches based on tracking local features work on high-resolution images. Tracking face features such as the eyes, nose and mouth is a common problem in many applications, such as facial expression detection or video conferencing [8], but most of those applications focus on frontal face images [9]. We developed an algorithm based on face feature tracking using a parametric model. First we need face detection; then we detect face features in the following order: eyes, mouth, nose. In order to achieve full-profile detection, we use sets of SURF features to learn what the eyes, mouth and nose look like once tracking is initialized. Once those sets of SURF features are known, they are used to detect and track the face features. A SURF feature has a descriptor that is often used to identify a key point, and here we add some global geometry information by using the relative positions between key points. We then use a particle filter to track the face features with those SURF-based detectors, compute the head pose angles from the feature positions, and pass the results through a median filter. This paper is organized as follows. Section 2 describes our modeling of visual features, and Section 3 presents our tracking implementation. Section 4 presents the results we obtain with our implementation, and Section 5 discusses future work.
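The last step of the pipeline, passing the per-frame head-pose angles through a median filter, can be sketched as follows. This is a minimal illustration with made-up yaw values; the window size and class name are our own choices, not taken from the paper.

```python
from collections import deque
from statistics import median

class PoseMedianFilter:
    """Sliding-window median filter for a stream of head-pose angles (degrees)."""
    def __init__(self, window=5):
        self.window = deque(maxlen=window)  # keeps only the last `window` estimates

    def update(self, angle):
        """Add one per-frame estimate and return the median of the current window."""
        self.window.append(angle)
        return median(self.window)

# Noisy per-frame yaw estimates from a hypothetical feature-based tracker;
# the 30-degree spike is an outlier that the median suppresses.
f = PoseMedianFilter(window=5)
smoothed = [f.update(a) for a in [0.0, 1.0, 30.0, 2.0, 1.5, 2.5]]
# smoothed == [0.0, 0.5, 1.0, 1.5, 1.5, 2.0]
```

The filter lets isolated pose-estimation failures pass through without jerking the reported gaze direction, at the cost of a short latency proportional to half the window size.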
2 Visual features
We use some basic properties of facial features to initialize our algorithm: the eyes are dark and circular, the mouth is a horizontal dark line with a specific color, ...
('5253126', 'R. Brochard', 'r. brochard')
('2667229', 'B. Burger', 'b. burger')
('2325221', 'A. Herbulot', 'a. herbulot')
('1797260', 'F. Lerasle', 'f. lerasle')
3c0bbfe664fb083644301c67c04a7f1331d9515fThe Role of Color and Contrast in Facial Age Estimation
Paper ID: 7
No Institute Given
3c4f6d24b55b1fd3c5b85c70308d544faef3f69aA Hybrid Deep Learning Architecture for
Privacy-Preserving Mobile Analytics
Sharif University of Technology, University College London, Queen Mary University of London
('8201306', 'Seyed Ali Ossia', 'seyed ali ossia')
('9920557', 'Ali Shahin Shamsabadi', 'ali shahin shamsabadi')
('2251846', 'Ali Taheri', 'ali taheri')
('1688652', 'Hamid R. Rabiee', 'hamid r. rabiee')
('1763096', 'Hamed Haddadi', 'hamed haddadi')
3cb0ef5aabc7eb4dd8d32a129cb12b3081ef264fAbsolute Head Pose Estimation From Overhead Wide-Angle Cameras
IBM T.J. Watson Research Center
19 Skyline Drive, Hawthorne, NY 10532 USA
('40383812', 'Ying-li Tian', 'ying-li tian')
('1690709', 'Arun Hampapur', 'arun hampapur')
{ yltian,lisabr,jconnell,sharat,arunh,aws,bolle }@us.ibm.com
3cb64217ca2127445270000141cfa2959c84d9e7
3c11a1f2bd4b9ce70f699fb6ad6398171a8ad3bdInternational Journal of Computer Information Systems and Industrial Management Applications (IJCISIM)
ISSN: 2150-7988 Vol.2 (2010), pp.262-278
http://www.mirlabs.org/ijcisim
Simulating Pareidolia of Faces for Architectural Image Analysis
Newcastle Robotics Laboratory
School of Electrical Engineering and Computer Science
The University of Newcastle, Callaghan 2308, Australia
School of Architecture and Built Environment
The University of Newcastle
Callaghan 2308, Australia
('1716539', 'Stephan K. Chalup', 'stephan k. chalup')
('40211094', 'Michael J. Ostwald', 'michael j. ostwald')
Stephan.Chalup@newcastle.edu.au, Kenny.Hong@uon.edu.au
Michael.Ostwald@newcastle.edu.au
3cd8ab6bb4b038454861a36d5396f4787a21cc68 Video‐Based Facial Expression Recognition Using Hough Forest
National Tsing Hua University, Hsin-Chu, Taiwan
Asian University, Taichung, Taiwan
('2790846', 'Shih-Chung Hsu', 'shih-chung hsu')
('1793389', 'Chung-Lin Huang', 'chung-lin huang')
E-mail: d9761817@oz.nthu.edu.tw, clhuang@asia.edu.tw
3cd5da596060819e2b156e8b3a28331ef633036b
3ca5d3b8f5f071148cb50f22955fd8c1c1992719EVALUATING RACE AND SEX DIVERSITY IN THE WORLD’S LARGEST
COMPANIES USING DEEP NEURAL NETWORKS
1 ​Youth Laboratories, Ltd, Diversity AI Group, Skolkovo Innovation Center, Nobel Street 5,
143026, Moscow, Russia
2 ​Insilico Medicine, Emerging Technology Centers, JHU, 1101 33rd Street, Baltimore, MD,
21218, USA
University of Oxford, Oxford, United Kingdom
Computer Engineering and Computer Science, Duthie Center for Engineering, University of
Louisville, Louisville, KY 40292, USA
5 ​Computer Vision Lab, Department of Information Technology and Electrical Engineering, ETH
Zürich, Switzerland
Center for Healthy Aging, University of
Copenhagen, Denmark
7 ​The Biogerontology Research Foundation, 2354 Chynoweth House, Trevissome Park, Truro,
TR4 8UN, UK.
Moscow Institute of Physics and Technology, Institutskiy per., 9, Dolgoprudny, 141701, Russia
('3888942', 'Konstantin Chekanov', 'konstantin chekanov')
('4017984', 'Polina Mamoshina', 'polina mamoshina')
('1976753', 'Roman V. Yampolskiy', 'roman v. yampolskiy')
('1732855', 'Radu Timofte', 'radu timofte')
('40336662', 'Alex Zhavoronkov', 'alex zhavoronkov')
Morten Scheibye-Knudsen: ​mscheibye@sund.ku.dk
Alex Zhavoronkov: ​alex@biogerontology.org
3c56acaa819f4e2263638b67cea1ec37a226691dBody Joint guided 3D Deep Convolutional
Descriptors for Action Recognition
('3201156', 'Congqi Cao', 'congqi cao')
('46867228', 'Yifan Zhang', 'yifan zhang')
('1713887', 'Chunjie Zhang', 'chunjie zhang')
('1694235', 'Hanqing Lu', 'hanqing lu')
3cc46bf79fb9225cf308815c7d41c8dd5625cc29AGE INTERVAL AND GENDER PREDICTION USING PARAFAC2 APPLIED TO SPEECH
UTTERANCES
Aristotle University of Thessaloniki
Thessaloniki 54124, GREECE
Cyprus University of Technology
3040 Limassol, Cyprus
('3352401', 'Evangelia Pantraki', 'evangelia pantraki')
('1736143', 'Constantine Kotropoulos', 'constantine kotropoulos')
('1830709', 'Andreas Lanitis', 'andreas lanitis')
{pantraki@|costas@aiia}.csd.auth.gr
andreas.lanitis@cut.ac.cy
3c8da376576938160cbed956ece838682fa50e9fChapter 4
Aiding Face Recognition with
Social Context Association Rule
based Re-Ranking
Humans are very efficient at recognizing familiar face images even in challenging condi-
tions. One reason for such capabilities is the ability to understand social context between
individuals. Sometimes the identity of the person in a photo can be inferred based on the
identity of other persons in the same photo, when some social context between them is
known. This chapter presents an algorithm to utilize the co-occurrence of individuals as
the social context to improve face recognition. Association rule mining is utilized to infer
multi-level social context among subjects from a large repository of social transactions.
The results are demonstrated on the G-album and on the SN-collection pertaining to 4675
identities prepared by the authors from a social networking website. The results show that
association rules extracted from social context can be used to augment face recognition and
improve the identification performance.
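The re-ranking idea above can be made concrete with a small sketch: mine
pairwise association rules from photo "transactions" (sets of co-occurring
identities), then boost a face matcher's candidate scores when a rule fires
for an identity already recognized in the same photo. The thresholds, the
max-boost choice, and the linear score fusion are illustrative assumptions,
not the chapter's exact algorithm:

```python
from collections import Counter
from itertools import combinations

def mine_rules(transactions, min_support=2, min_conf=0.5):
    """Mine pairwise rules A -> B with confidence = support(A,B)/support(A)
    from transactions, each a set of identity labels seen in one photo."""
    single = Counter()
    pair = Counter()
    for t in transactions:
        for a in t:
            single[a] += 1
        for a, b in combinations(sorted(t), 2):
            pair[(a, b)] += 1  # count both directions so confidence
            pair[(b, a)] += 1  # can differ for A->B and B->A
    rules = {}
    for (a, b), support in pair.items():
        conf = support / single[a]
        if support >= min_support and conf >= min_conf:
            rules[(a, b)] = conf
    return rules

def rerank(face_scores, context_ids, rules, alpha=0.3):
    """Fuse matcher scores with the strongest rule fired by any identity
    already recognized in the photo (context_ids)."""
    boosted = {}
    for cand, score in face_scores.items():
        boost = max((rules.get((c, cand), 0.0) for c in context_ids),
                    default=0.0)
        boosted[cand] = (1 - alpha) * score + alpha * boost
    return boosted
```

With rules mined from a photo collection where "alice" and "bob" frequently
co-occur, a borderline "bob" candidate is promoted over an unrelated identity
whenever "alice" is present in the same photo.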
4.1
Introduction
The face recognition capabilities of humans have inspired several researchers
to understand the science behind them and to use it in developing automated
algorithms. Recently, it has also been argued that encoding the social context
among individuals can be leveraged for improved automatic face recognition
[175]. As shown in Figure 4.1, a person's identity can often be inferred from
the identities of other persons in the same photo, when some social context
between them is known. A subject's face in consumer photos generally co-occurs
with those of socially relevant people. With the advent of social networking
services, the social context between individuals is readily available. Face
recognition performance
56e4dead93a63490e6c8402a3c7adc493c230da5World Journal of Computer Application and Technology 1(2): 41-50, 2013
DOI: 10.13189/wjcat.2013.010204
http://www.hrpub.org
Face Recognition Techniques: A Survey
V. Vijayakumari
Sri krishna College of Technology, Coimbatore, India
Copyright © 2013 Horizon Research Publishing All rights reserved.
*Corresponding Author: ebinviji@rediffmail.com
56e885b9094391f7d55023a71a09822b38b26447FREQUENCY DECODED LOCAL BINARY PATTERN
Face Retrieval using Frequency Decoded Local Descriptor
('34992579', 'Shiv Ram Dubey', 'shiv ram dubey')
56c700693b63e3da3b985777da6d9256e2e0dc21Global Refinement of Random Forest
University of Science and Technology of China
Microsoft Research
('3080683', 'Shaoqing Ren', 'shaoqing ren')
('2032273', 'Xudong Cao', 'xudong cao')
('1732264', 'Yichen Wei', 'yichen wei')
('40055995', 'Jian Sun', 'jian sun')
sqren@mail.ustc.edu.cn
{xudongca,yichenw,jiansun}@microsoft.com
56359d2b4508cc267d185c1d6d310a1c4c2cc8c2Shape Driven Kernel Adaptation in
Convolutional Neural Network for Robust Facial Trait Recognition
1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing, 100190, China
National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, 100190, China
National University of Singapore, Singapore
('1688086', 'Shaoxin Li', 'shaoxin li')
('1757173', 'Junliang Xing', 'junliang xing')
('1773437', 'Zhiheng Niu', 'zhiheng niu')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
56e079f4eb40744728fd1d7665938b06426338e5Bayesian Approaches to Distribution Regression
University of Oxford
University College London
University of Oxford
Imperial College London
('35142231', 'Ho Chung Leon Law', 'ho chung leon law')
('36326783', 'Dougal J. Sutherland', 'dougal j. sutherland')
('1698032', 'Dino Sejdinovic', 'dino sejdinovic')
('2127497', 'Seth Flaxman', 'seth flaxman')
ho.law@spc.ox.ac.uk
dougal@gmail.com
dino.sejdinovic@stats.ox.ac.uk
s.flaxman@imperial.ac.uk
56e6f472090030a6f172a3e2f46ef9daf6cad757Asian Face Image Database PF01
Intelligent Multimedia Lab.
†Department of Computer Science and Engineering
Pohang University of Science and Technology
San 31, Hyoja-Dong, Nam-Gu, Pohang, 790-784, Korea
56a653fea5c2a7e45246613049fb16b1d204fc963287
Quaternion Collaborative and Sparse Representation
With Application to Color Face Recognition
('2888882', 'Cuiming Zou', 'cuiming zou')
('3369665', 'Kit Ian Kou', 'kit ian kou')
('3154834', 'Yulong Wang', 'yulong wang')
56f86bef26209c85f2ef66ec23b6803d12ca6cd6Pyramidal RoR for Image Classification
North China Electric Power University, Baoding, China
('32164792', 'Ke Zhang', 'ke zhang')
('3451321', 'Liru Guo', 'liru guo')
('35038034', 'Ce Gao', 'ce gao')
('2626320', 'Zhenbing Zhao', 'zhenbing zhao')
E-mail: zhangke41616@126.com
5666ed763698295e41564efda627767ee55cc943
Relatively-Paired Space Analysis: Learning a Latent Common
Space from Relatively-Paired Observations
Received: date / Accepted: date
('1874900', 'Zhanghui Kuang', 'zhanghui kuang')
566a39d753c494f57b4464d6bde61bf3593f7cebA Critical Review of Action Recognition Benchmarks
The Open University of Israel
('1756099', 'Tal Hassner', 'tal hassner')hassner@openu.ac.il
56c2fb2438f32529aec604e6fc3b06a595ddbfccMAICS 2016
pp. 97–102
Comparison of Recent Machine Learning Techniques for Gender Recognition
from Facial Images
Computer Science Department
Central Washington University
Ellensburg, WA, USA
Computer Science Department
Central Washington University
Ellensburg, WA, USA
Răzvan Andonie
Computer Science Department
Central Washington University
Computer Science Department
Central Washington University
Ellensburg, WA, USA
Ellensburg, WA, USA
and
Electronics and Computers Department
Transilvania University
Brașov, Romania
('9770023', 'Joseph Lemley', 'joseph lemley')
('40470929', 'Sami Abdul-Wahid', 'sami abdul-wahid')
('35877118', 'Dipayan Banik', 'dipayan banik')
56f231fc40424ed9a7c93cbc9f5a99d022e1d242Age Estimation Based on A Single Network with
Soft Softmax of Aging Modeling
1Center for Biometrics and Security Research & National Laboratory of Pattern
Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
University of Chinese Academy of Sciences
3Faculty of Information Technology,
Macau University of Science and Technology, Macau
('9645431', 'Zichang Tan', 'zichang tan')
('2950852', 'Shuai Zhou', 'shuai zhou')
('1756538', 'Jun Wan', 'jun wan')
('1718623', 'Zhen Lei', 'zhen lei')
('34679741', 'Stan Z. Li', 'stan z. li')
5615d6045301ecbc5be35e46cab711f676aadf3aDiscriminatively Learned Hierarchical Rank Pooling Networks
Received: date / Accepted: date
('1688071', 'Basura Fernando', 'basura fernando')
561ae67de137e75e9642ab3512d3749b34484310December 2017
DeepGestalt - Identifying Rare Genetic Syndromes
Using Deep Learning
1FDNA Inc., Boston, Massachusetts, USA
Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
Recanati Genetic Institute, Rabin Medical Center and Schneider Children's Medical Center, Petah Tikva, Israel
Institute for Genomic Statistic and Bioinformatics, University Hospital Bonn
Rheinische-Friedrich-Wilhelms University, Bonn, Germany
Institute of Human Genetics, University Hospital Magdeburg, Magdeburg, Germany
University of California, San Diego, California, USA
7Division of Genetics/Dysmorphology, Rady Children’s Hospital San Diego, San Diego, California, USA
8Division of Medical Genetics, A. I. du Pont Hospital for Children/Nemours, Wilmington, Delaware,USA
Boston 186 South St. 5th Floor, Boston, MA 02111 U.S.A., Tel: +1 (617) 412-7000
Conflict of interest: YG, YH, OB, NF, DG are employees of FDNA; LBS is an advisor of FDNA;
LBS, PK, LMB, KWG are members of the scientific advisory board of FDNA
('2916582', 'Yaron Gurovich', 'yaron gurovich')
('1917486', 'Yair Hanani', 'yair hanani')
('40142952', 'Omri Bar', 'omri bar')
('40443403', 'Nicole Fleischer', 'nicole fleischer')
('35487552', 'Dekel Gelbman', 'dekel gelbman')
('20717247', 'Lina Basel-Salmon', 'lina basel-salmon')
('4346029', 'Martin Zenker', 'martin zenker')
('6335877', 'Lynne M. Bird', 'lynne m. bird')
('5404116', 'Karen W. Gripp', 'karen w. gripp')
568cff415e7e1bebd4769c4a628b90db293c1717Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16)
Concepts Not Alone: Exploring Pairwise Relationships
for Zero-Shot Video Activity Recognition
IIIS, Tsinghua University, Beijing, China
QCIS, University of Technology Sydney, Sydney, Australia
DCMandB, University of Michigan, Ann Arbor, USA 4 SCS, Carnegie Mellon University, Pittsburgh, USA
('2551285', 'Chuang Gan', 'chuang gan')
('2735055', 'Ming Lin', 'ming lin')
('39033919', 'Yi Yang', 'yi yang')
('1732213', 'Gerard de Melo', 'gerard de melo')
('7661726', 'Alexander G. Hauptmann', 'alexander g. hauptmann')
560e0e58d0059259ddf86fcec1fa7975dee6a868Face Recognition in Unconstrained Videos with Matched Background Similarity
The Blavatnik School of Computer Science, Tel-Aviv University, Israel
Computer Science Division, The Open University of Israel
('1776343', 'Lior Wolf', 'lior wolf')
('3352629', 'Itay Maoz', 'itay maoz')
56a677c889e0e2c9f68ab8ca42a7e63acf986229Mining Spatial and Spatio-Temporal ROIs for Action Recognition
Jiang Wang2 Alan Yuille1,3
University of California, Los Angeles
Baidu Research, USA 3John Hopkins University
('5964529', 'Xiaochen Lian', 'xiaochen lian'){lianxiaochen@,yuille@stat.}ucla.edu
{chenzhuoyuan,yangyi05,wangjiang03}@baidu.com
566038a3c2867894a08125efe41ef0a40824a090978-1-4244-2354-5/09/$25.00 ©2009 IEEE
ICASSP 2009
56dca23481de9119aa21f9044efd7db09f618704Riemannian Dictionary Learning and Sparse
Coding for Positive Definite Matrices
('2691929', 'Anoop Cherian', 'anoop cherian')
('3072326', 'Suvrit Sra', 'suvrit sra')
56ae6d94fc6097ec4ca861f0daa87941d1c10b70Distance Estimation of an Unknown Person
from a Portrait
1 Technicolor - Cesson S´evign´e, France
California Institute of Technology, Pasadena, CA, USA
('2232848', 'Xavier P. Burgos-Artizzu', 'xavier p. burgos-artizzu')
('3339867', 'Matteo Ruggero Ronchi', 'matteo ruggero ronchi')
('1690922', 'Pietro Perona', 'pietro perona')
xavier.burgos@technicolor.com, {mronchi,perona}@caltech.edu
56f812661c3248ed28859d3b2b39e033b04ae6aeMultiple Feature Fusion by Subspace Learning
Beckman Institute
University of Illinois at
Urbana-Champaign
Urbana, IL 61801, USA
Durham, NC 27707, USA
Computer Science
North Carolina Central
University
Beckman Institute
University of Illinois at
Urbana-Champaign
Urbana, IL 61801, USA
('1708679', 'Yun Fu', 'yun fu')
('37575012', 'Liangliang Cao', 'liangliang cao')
('1822413', 'Guodong Guo', 'guodong guo')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
{yunfu2,cao4}@uiuc.edu
gdguo@nccu.edu
huang@ifp.uiuc.edu
516a27d5dd06622f872f5ef334313350745eadc3
Fine-Grained Facial Expression Analysis Using Dimensional Emotion Model
('41179750', 'Feng Zhou', 'feng zhou')
('34362536', 'Shu Kong', 'shu kong')
('3157443', 'Charless C. Fowlkes', 'charless c. fowlkes')
('29889388', 'Tao Chen', 'tao chen')
('40216538', 'Baiying Lei', 'baiying lei')
512befa10b9b704c9368c2fbffe0dc3efb1ba1bfEvidence and a Computational Explanation of Cultural Differences in
Facial Expression Recognition
Matthew N. Dailey
Computer Science and Information Management
Asian Institute of Technology, Pathumthani, Thailand
Computer Science and Engineering
University of California, San Diego, USA
Michael J. Lyons
College of Image Arts and Sciences
Ritsumeikan University, Kyoto, Japan
Faculty of Informatics
Kogakuin University, Tokyo, Japan
Department of Design and Computer Applications
Sendai National College of Technology, Natori, Japan
Department of Psychology
Tohoku University, Sendai, Japan
Garrison W. Cottrell
Computer Science and Engineering
University of California, San Diego, USA
Facial expressions are crucial to human social communication, but the extent
to which they are innate and universal versus learned and culture dependent is
a subject of debate. Two studies explored the effect of culture and learning
on facial expression understanding. In Experiment 1, each cultural group was
better than the other at classifying facial expressions posed by members of
the same culture. In Experiment 2, this reciprocal in-group advantage was
reproduced by a neurocomputational model trained in either a Japanese cultural
context or an American cultural context. The model demonstrates how each of
us, interacting with others in a particular cultural context, learns to
recognize a culture-specific facial expression dialect.
The scientific literature on innate versus culture-specific facial expressions
goes back well over a century: Darwin (1872/1998) argued for innate production
of facial expressions based on cross-cultural comparisons. Landis (1924),
however, found little agreement between participants. Woodworth (1938) and
Schlosberg (1952) found structure in the disagreement in interpretation,
proposing a low-dimensional similarity space characterizing affective facial
expressions.
Starting in the 1960's, researchers found more support for facial expressions
as innate, universal indicators of particular emotions (Tomkins, 1962–1963;
Tomkins & McCarter, 1964). Ekman and colleagues found cross-cultural
consistency in the interpretation of facial expressions in both literate and
preliterate cultures (Ekman, 1972; Ekman, Friesen, O'Sullivan, et al., 1987;
Ekman, Sorensen, & Friesen, 1969).
Today, researchers disagree on the precise degree to which facial expressions
are universal versus culture-specific (Ekman, 1994, 1999b; Fridlund, 1994;
Izard, 1994; Russell, 1994, 1995), but there appears to be consensus that
universal factors interact to some extent with culture-specific learning to
produce differences between cultures. A number of modern theories (Ekman,
1999a; Russell & Bullock, 1986; Scherer, 1992; Russell, 1994) attempt to
account for these universals and culture-specific variations.
Cultural differences in facial expression interpretation
The early cross-cultural studies on facial expression recognition focused
mainly on the question of universality; few sought to analyze and interpret
the cultural differences that came up in those studies. However, a steadily
increasing number of studies have focused on the factors underlying cultural
differences. These studies either compare the facial expression judgments made
by participants from different cultures or attempt to find the relevant
dimensions of culture predicting observed cultural differences. Much of the
research was framed by Ekman's "neuro-cultural" theory, which attributes
cultural differences to differences in elicitors, display rules, and/or
consequences arising from culture-specific learning.
Ekman (1972) and Friesen (1972) proposed display rules
('40533190', 'Miyuki Kamachi', 'miyuki kamachi')
('12030857', 'Hanae Ishi', 'hanae ishi')
('8365437', 'Jiro Gyoba', 'jiro gyoba')
51c3050fb509ca685de3d9ac2e965f0de1fb21ccFantope Regularization in Metric Learning
Marc T. Law
Sorbonne Universit´es, UPMC Univ Paris 06, UMR 7606, LIP6, F-75005, Paris, France
('1728523', 'Nicolas Thome', 'nicolas thome')
('1702233', 'Matthieu Cord', 'matthieu cord')
516d0d9eb08825809e4618ca73a0697137ebabd5Regularizing Long Short Term
Memory with 3D Human-Skeleton
Sequences for Action Recognition
Oregon State University
CVPR 2016
('3112334', 'Behrooz Mahasseni', 'behrooz mahasseni')
('34917793', 'Sinisa Todorovic', 'sinisa todorovic')
519a724426b5d9ad384d38aaf2a4632d3824f243WANG et al.: LEARNING OBJECT RECOGNITION FROM DESCRIPTIONS
Learning Models for Object Recognition
from Natural Language Descriptions
School of Computing
University of Leeds
Leeds, UK
('2635321', 'Josiah Wang', 'josiah wang')
('1686341', 'Katja Markert', 'katja markert')
('3056091', 'Mark Everingham', 'mark everingham')
scs6jwks@comp.leeds.ac.uk
markert@comp.leeds.ac.uk
me@comp.leeds.ac.uk
5180df9d5eb26283fb737f491623395304d57497Scalable Angular Discriminative Deep Metric Learning
for Face Recognition
aCenter for Combinatorics, Nankai University, Tianjin 300071, China
bCenter for Applied Mathematics, Tianjin University, Tianjin 300072, China
('2143751', 'Bowen Wu', 'bowen wu')
51c7c5dfda47647aef2797ac3103cf0e108fdfb4CS 395T: Celebrity Look-Alikes ∗ ('2362854', 'Adrian Quark', 'adrian quark')quark@mail.utexas.edu
519f4eb5fe15a25a46f1a49e2632b12a3b18c94dNon-Lambertian Reflectance Modeling and
Shape Recovery of Faces using Tensor Splines
('9432255', 'Ritwik Kumar', 'ritwik kumar')
('1765280', 'Angelos Barmpoutis', 'angelos barmpoutis')
('3163927', 'Arunava Banerjee', 'arunava banerjee')
('1733005', 'Baba C. Vemuri', 'baba c. vemuri')
518edcd112991a1717856841c1a03dd94a250090Rice University
Endogenous Sparse Recovery
by
A Thesis Submitted
in Partial Fulfillment of the
Requirements for the Degree
Masters of Science
Approved, Thesis Committee:
Dr. Richard G. Baraniuk, Chair
Victor E. Cameron Professor of Electrical
and Computer Engineering
Dr. Don H. Johnson
J.S. Abercrombie Professor Emeritus of
Electrical and Computer Engineering
Dr. Wotao Yin
Assistant Professor of Computational and
Applied Mathematics
Houston, Texas
December 2011
('1746363', 'Eva L. Dyer', 'eva l. dyer')
51683eac8bbcd2944f811d9074a74d09d395c7f3Automatic Analysis of Facial Actions:
Learning from Transductive, Supervised and
Unsupervised Frameworks
CMU-RI-TR-17-01
January 2017
The Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
Thesis Committee:
Fernando De la Torre, Co-chair
Submitted in partial fulfillment of the requirements
for the degree of Doctor of Philosophy in Robotics.
('39336289', 'Wen-Sheng Chu', 'wen-sheng chu')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
('1820249', 'Simon Lucey', 'simon lucey')
('1770537', 'Deva Ramanan', 'deva ramanan')
('1736042', 'Vladimir Pavlovic', 'vladimir pavlovic')
('39336289', 'Wen-Sheng Chu', 'wen-sheng chu')
51faacfa4fb1e6aa252c6970e85ff35c5719f4ffZoom-Net: Mining Deep Feature Interactions for
Visual Relationship Recognition
University of Science and Technology of China, Key Laboratory of Electromagnetic
Space Information, the Chinese Academy of Sciences, 2SenseTime Group Limited,
CUHK-SenseTime Joint Lab, The Chinese University of Hong Kong
SenseTime-NTU Joint AI Research Centre, Nanyang Technological University
('4332039', 'Guojun Yin', 'guojun yin')
('37145669', 'Lu Sheng', 'lu sheng')
('50677886', 'Bin Liu', 'bin liu')
('1708598', 'Nenghai Yu', 'nenghai yu')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('49895575', 'Jing Shao', 'jing shao')
('1717179', 'Chen Change Loy', 'chen change loy')
gjyin@mail.ustc.edu.cn, {flowice,ynh}@ustc.edu.cn, ccloy@ieee.org,
{lsheng,xgwang}@ee.cuhk.edu.hk, shaojing@sensetime.com
51cc78bc719d7ff2956b645e2fb61bab59843d2bFace and Facial Expression Recognition with an
Embedded System for Human-Robot Interaction
School of Computer Engineering, Sejong University, Seoul, Korea
('2241562', 'Yang-Bok Lee', 'yang-bok lee')
('2706430', 'Yong-Guk Kim', 'yong-guk kim')
*ykim@sejong.ac.kr
511b06c26b0628175c66ab70dd4c1a4c0c19aee9International Journal of Engineering Research and General ScienceVolume 2, Issue 5, August – September 2014
ISSN 2091-2730
Face Recognition using Laplace Beltrami Operator by Optimal Linear
Approximations
Institute of Engineering and Technology, Alwar, Rajasthan Technical University, Kota(Raj
Research Scholar (M.Tech, IT), Institute of Engineering and Technology
51528cdce7a92835657c0a616c0806594de7513b
51cb09ee04831b95ae02e1bee9b451f8ac4526e3Beyond Short Snippets: Deep Networks for Video Classification
Matthew Hausknecht2
University of Maryland, College Park
University of Texas at Austin
Google, Inc
('2340579', 'Joe Yue-Hei Ng', 'joe yue-hei ng')
('1689108', 'Oriol Vinyals', 'oriol vinyals')
('3089272', 'Rajat Monga', 'rajat monga')
('2259154', 'Sudheendra Vijayanarasimhan', 'sudheendra vijayanarasimhan')
('1805076', 'George Toderici', 'george toderici')
yhng@umiacs.umd.edu
mhauskn@cs.utexas.edu
svnaras@google.com
vinyals@google.com
rajatmonga@google.com
gtoderici@google.com
514a74aefb0b6a71933013155bcde7308cad2b46CARNEGIE MELLON UNIVERSITY
OPTIMAL CLASSIFIER ENSEMBLES
FOR IMPROVED BIOMETRIC VERIFICATION
A Dissertation
Submitted to the Faculty of Graduate School
In Partial Fulfillment of the Requirements
for The Degree of
DOCTOR OF PHILOSOPHY
in
ELECTRICAL AND COMPUTER ENGINEERING
by
COMMITTEE:
Advisor: Prof. Vijayakumar Bhagavatula
Prof. Tsuhan Chen
Prof. David Casasent
Prof. Arun Ross
Pittsburgh, Pennsylvania
January, 2007
('2202489', 'Krithika Venkataramani', 'krithika venkataramani')
('1794486', 'Marios Savvides', 'marios savvides')
51a8dabe4dae157aeffa5e1790702d31368b9161
International Journal of Pattern Recognition
and Artificial Intelligence
Vol. 19, No. 4 (2005) 513–531
© World Scientific Publishing Company
FACE RECOGNITION UNDER GENERIC ILLUMINATION
BASED ON HARMONIC RELIGHTING
Graduate School of Chinese Academy Sciences
No. 19, Yuquan Road, Beijing, 100039, P.R. China
Institute of Computing Technology, CAS
No. 6 Kexueyuan South Road, Beijing, 100080, P.R. China
The performance of current face recognition systems suffers heavily from
variations in lighting. To deal with this problem, this paper presents an
illumination normalization approach that relights face images to a canonical
illumination based on the harmonic images model. Benefiting from the
observations that human faces share a similar shape and that the albedos of
face surfaces are quasi-constant, we first estimate the nine low-frequency
components of the illumination from the input facial image. The facial image
is then normalized to the canonical illumination by re-rendering it using the
illumination ratio image technique. For the purpose of face recognition, two
kinds of canonical illumination, uniform illumination and a frontal flash with
ambient lights, are considered; the former encodes merely the texture
information, while the latter encodes both the texture and shading
information. Our experiments on the CMU-PIE face database and the Yale B face
database have shown that the proposed relighting normalization can
significantly improve the performance of a face recognition system when the
probes are collected under varying lighting conditions.
Keywords: Face recognition; varying lighting; harmonic images; lighting estimation;
illumination normalization.
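The ratio-image relighting step described in the abstract can be sketched as
follows, under the assumption that per-pixel unit normals and both sets of
nine harmonic-illumination coefficients (estimated and canonical) are already
available; albedo handling and the coefficient estimation itself are omitted:

```python
import numpy as np

def sh9_basis(normals):
    """First nine real spherical-harmonic basis functions evaluated at unit
    surface normals, shape (N, 3) -> (N, 9); constants follow the standard
    real SH convention used for irradiance rendering."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z ** 2 - 1),
        1.092548 * x * z, 0.546274 * (x ** 2 - y ** 2),
    ], axis=1)

def relight(image, normals, est_coeffs, canon_coeffs, eps=1e-6):
    """Illumination ratio-image relighting: scale each pixel by the ratio of
    the canonical harmonic irradiance to the estimated one."""
    basis = sh9_basis(normals)      # (N, 9)
    e_est = basis @ est_coeffs      # irradiance under estimated lighting
    e_canon = basis @ canon_coeffs  # irradiance under canonical lighting
    return image * (e_canon / np.maximum(e_est, eps))
```

Because faces share a similar shape, a generic face normal map can stand in
for the true per-pixel normals, which is what makes this normalization
practical for recognition pipelines.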
1. Introduction
Face recognition has various potential applications in public security, law enforce-
ment and commerce such as mug-shot database matching, identity authentication
for credit card or driver license, access control, information security, and video
surveillance. In addition, there are many emerging fields that can benefit from face
recognition, such as human computer interfaces and e-services, including e-home
online-shopping and online-banking. Related research activities have significantly
increased over the past few years.5,26
('2343895', 'Laiyun Qing', 'laiyun qing')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1698902', 'Wen Gao', 'wen gao')
('1691233', 'Bo Du', 'bo du')
lyqing@jdl.ac.cn
sgshan@jdl.ac.cn
wgao@jdl.ac.cn
bdu@jdl.ac.cn
512b4c8f0f3fb23445c0c2dab768bcd848fa8392 Analysis and Synthesis of Facial Expressions by Feature-Points Tracking and Deformable Model
1- Faculty of Electrical and Computer Eng.,
University of Tabriz, Tabriz, Iran
2- Department of Electrical Eng.,
Tarbiat Modarres University, Tehran, Iran
('3210269', 'H. Seyedarabi', 'h. seyedarabi')
('31092101', 'A. Aghagolzadeh', 'a. aghagolzadeh')
('2052255', 'S. Khanmohammadi', 's. khanmohammadi')
('2922912', 'E. Kabir', 'e. kabir')
seyedarabi@tahoo.com, aghagol@tabrizu.ac.ir, khan@tabrizu.ac.ir
51eba481dac6b229a7490f650dff7b17ce05df73Situation Recognition:
Visual Semantic Role Labeling for Image Understanding
Computer Science and Engineering, University of Washington, Seattle, WA
Allen Institute for Artificial Intelligence (AI2), Seattle, WA
Figure 1. Six images that depict situations where actors, objects, substances, and locations play roles in an activity. Below each image is a
realized frame that summarizes the situation: the left columns (blue) list activity-specific roles (derived from FrameNet, a broad coverage
verb lexicon) while the right columns (green) list values (from ImageNet) for each role. Three different activities are shown, highlighting
that visual properties can vary widely between role values (e.g., clipping a sheep’s wool looks very different from clipping a dog’s nails).
('2064210', 'Mark Yatskar', 'mark yatskar')
('2270286', 'Ali Farhadi', 'ali farhadi')
[my89, lsz, ali]@cs.washington.edu
5173a20304ea7baa6bfe97944a5c7a69ea72530fSensors 2013, 13, 12830-12851; doi:10.3390/s131012830
OPEN ACCESS
sensors
ISSN 1424-8220
www.mdpi.com/journal/sensors
Article
Best Basis Selection Method Using Learning Weights for
Face Recognition
The School of Electrical and Electronic Engineering, Yonsei University, 134 Shinchon-Dong
The School of Electrical Electronic and Control Engineering, Kongju National University
275 Budae-Dong, Seobuk-Gu, Cheonan, Chungnam 331-717, Korea
Tel.: +82-41-521-9168; Fax: +82-41-563-3689.
Received: 24 July 2013; in revised form: 26 August 2013 / Accepted: 16 September 2013/
Published: 25 September 2013
('1801849', 'Wonju Lee', 'wonju lee')
('2840643', 'Minkyu Cheon', 'minkyu cheon')
('2638048', 'Chang-Ho Hyun', 'chang-ho hyun')
('1718637', 'Mignon Park', 'mignon park')
Seodaemun-Gu, Seoul 120-749, Korea; E-Mails: delicado@yonsei.ac.kr (W.L.);
1000minkyu@gmail.com (M.C.); mignpark@yonsei.ac.kr (M.P.)
* Author to whom correspondence should be addressed; E-Mail: hyunch@kongju.ac.kr;
51ed4c92cab9336a2ac41fa8e0293c2f5f9bf3b6Computing and Informatics, Vol. 22, 2003, ??–??
A SURVEY OF FACE DETECTION, EXTRACTION
AND RECOGNITION
National Storage System Laboratory
School of Software Engineering
Huazhong University of Science and Technology
Wuhan, 430074, P. R. China
Manuscript received 23 June 2002; revised 27 January 2003
Communicated by Ladislav Hluch´y
('2366162', 'Yongzhong Lu', 'yongzhong lu')
('1711876', 'Jingli Zhou', 'jingli zhou')
('1714618', 'Shengsheng Yu', 'shengsheng yu')
e-mail: luyongz0@sohu.com
5161e38e4ea716dcfb554ccb88901b3d97778f64SSPP-DAN: DEEP DOMAIN ADAPTATION NETWORK FOR
FACE RECOGNITION WITH SINGLE SAMPLE PER PERSON
School of Computing, KAIST, Republic of Korea
('2487892', 'Sungeun Hong', 'sungeun hong')
('40506942', 'Woobin Im', 'woobin im')
5121f42de7cb9e41f93646e087df82b573b23311CLASSIFYING ONLINE DATING PROFILES ON TINDER USING FACENET FACIAL
EMBEDDINGS
FL
Charles F. Jekel (cjekel@ufl.edu; cj@jekel.me) and Raphael T. Haftka
51d1a6e15936727e8dd487ac7b7fd39bd2baf5eeJOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015
A Fast and Accurate System for Face Detection,
Identification, and Verification
('48467498', 'Rajeev Ranjan', 'rajeev ranjan')
('2068427', 'Ankan Bansal', 'ankan bansal')
('7674316', 'Jingxiao Zheng', 'jingxiao zheng')
('2680836', 'Hongyu Xu', 'hongyu xu')
('35199438', 'Joshua Gleason', 'joshua gleason')
('2927406', 'Boyu Lu', 'boyu lu')
('8435884', 'Anirudh Nanduri', 'anirudh nanduri')
('36407236', 'Jun-Cheng Chen', 'jun-cheng chen')
('38171682', 'Carlos D. Castillo', 'carlos d. castillo')
('9215658', 'Rama Chellappa', 'rama chellappa')
5141cf2e59fb2ec9bb489b9c1832447d3cd93110Learning Person Trajectory Representations for Team Activity Analysis
Simon Fraser University
('10386960', 'Nazanin Mehrasa', 'nazanin mehrasa')
('19198359', 'Yatao Zhong', 'yatao zhong')
('2123865', 'Frederick Tung', 'frederick tung')
('3004771', 'Luke Bornn', 'luke bornn')
('10771328', 'Greg Mori', 'greg mori')
{nmehrasa, yataoz, ftung, lbornn}@sfu.ca, mori@cs.sfu.ca
5185f2a40836a754baaa7419a1abdd1e7ffaf2adA Multimodality Framework for Creating Speaker/Non-Speaker Profile
Databases for Real-World Video
Beckman Institute
University of Illinois
Urbana, IL 61801
Beckman Institute
University of Illinois
Urbana, IL 61801
Beckman Institute
University of Illinois
Urbana, IL 61801
('3082579', 'Jehanzeb Abbas', 'jehanzeb abbas')
('1804874', 'Charlie K. Dagli', 'charlie k. dagli')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
jabbas2@ifp.uiuc.edu
dagli@ifp.uiuc.edu
huang@ifp.uiuc.edu
511a8cdf2127ef8aa07cbdf9660fe9e0e2dfbde7Hindawi
Computational Intelligence and Neuroscience
Volume 2018, Article ID 4512473, 10 pages
https://doi.org/10.1155/2018/4512473
Research Article
A Community Detection Approach to Cleaning Extremely
Large Face Database
Computer School, University of South China, Hengyang, China
National Laboratory for Parallel and Distributed Processing, National University of Defense Technology, Changsha, China
Received 11 December 2017; Accepted 12 March 2018; Published 22 April 2018
Academic Editor: Amparo Alonso-Betanzos
Though it has become easier to build large face datasets by collecting images from the Internet in this Big Data era, the time-consuming
manual annotation process prevents researchers from constructing larger ones, which makes the automatic cleaning of noisy labels
highly desirable. However, identifying mislabeled faces by machine is quite challenging, because the diversity of a person's face
images captured in the wild across all ages is extraordinarily rich. In view of this, we propose a graph-based cleaning method that
mainly employs a community detection algorithm and deep CNN models to delete mislabeled images. As the diversity of faces is
preserved in multiple large communities, our cleaning results have both high cleanness and rich data diversity. With our method, we
clean the extremely large MS-Celeb-1M face dataset (approximately 10 million images with noisy labels) and obtain a clean version
of it called C-MS-Celeb (6,464,018 images of 94,682 celebrities). By training a single-net model on our C-MS-Celeb dataset,
without fine-tuning, we achieve 99.67% at Equal Error Rate on the LFW face recognition benchmark, which is comparable to other
state-of-the-art results. This demonstrates the positive effect of data cleaning on model training. To the best of our knowledge,
C-MS-Celeb is the largest clean face dataset publicly available so far, which will benefit face recognition researchers.
1. Introduction
In the last few years, researchers have witnessed remarkable
progress in face recognition due to the significant success
of deep convolutional neural networks [1] and the emergence
of large-scale face datasets [2]. Although the data explosion
has made it easier to build datasets by collecting real-world
images from the Internet [3], constructing a large-scale face
dataset remains a highly time-consuming and costly task,
because the mislabeled images returned by search engines
need to be manually removed [4]. Thus, automatic cleaning
of noisy labels in the raw dataset is strongly desirable.
However, identifying mislabeled faces automatically by
machine is by no means easy. The main reason is that,
for faces captured in the wild, the variation within one
person's face images can be so large that some of them may
easily be identified as someone else's [5]. Thus, a machine
may be misled by this rich within-person diversity and delete
correctly labeled images. For example, if old faces of a person
are the majority in the dataset, a young face of the same person
may be regarded as someone else's and removed. Another
challenge is that, due to the ambiguity of people's names,
searching for someone's pictures online usually returns images
of multiple people [2], which requires the cleaning method to
tolerate a high proportion of noisy labels in a raw dataset
constructed by online searching.
In order to clean noisy labels while preserving the rich
data diversity of various faces, we propose a three-stage
graph-based method for cleaning large face datasets using
a community detection algorithm. For each image in the
raw dataset, we first use pretrained deep CNN models to
align the face and extract a feature vector representing it.
Second, for features of the same identity, based on the
cosine similarity between different features, we construct an
undirected graph, named the "face similarity graph," to quantify
the similarity between different images. After deleting weak
edges and applying the community detection algorithm, we
delete mislabeled images by removing minor communities. In
the last stage, we try to relabel each previously deleted image
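The graph-based cleaning pipeline this abstract describes (extract deep CNN features per identity, build a cosine-similarity graph, drop weak edges, group the images, discard minor groups) could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function name, the thresholds, and the use of union-find connected components as a simple stand-in for a full community detection algorithm are all assumptions, and the third (relabeling) stage is omitted.

```python
import numpy as np

def clean_identity(features, threshold=0.6, min_size=2):
    """Build a 'face similarity graph' over one identity's feature vectors,
    keep only strong edges, group the images, and keep images that fall in
    sufficiently large groups. Returns the indices of images to keep."""
    feats = np.asarray(features, dtype=float)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T                      # pairwise cosine similarity
    n = len(feats)
    parent = list(range(n))                    # union-find over images

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] >= threshold:         # delete weak edges implicitly
                parent[find(i)] = find(j)      # union the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    # minor groups are treated as likely mislabeled and removed
    return sorted(i for g in groups.values() if len(g) >= min_size for i in g)
```

On a toy identity with one dominant face cluster and one small spurious cluster, the images in the minor group are the ones flagged for removal.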
('3335298', 'Chi Jin', 'chi jin')
('9856301', 'Ruochun Jin', 'ruochun jin')
('38536592', 'Kai Chen', 'kai chen')
('1791001', 'Yong Dou', 'yong dou')
('3335298', 'Chi Jin', 'chi jin')
Correspondence should be addressed to Ruochun Jin; sczjrc@163.com
51d048b92f6680aca4a8adf07deb380c0916c808This is the accepted version of the following article: "State of the Art on Monocular 3D Face Reconstruction, Tracking, and Applications",
which has been published in final form at http://onlinelibrary.wiley.com. This article may be used for non-commercial purposes in accordance
with the Wiley Self-Archiving Policy [http://olabout.wiley.com/WileyCDA/Section/id-820227.html].
EUROGRAPHICS 2018
K. Hildebrandt and C. Theobalt
(Guest Editors)
Volume 37 (2018), Number 2
STAR – State of The Art Report
State of the Art on Monocular 3D Face
Reconstruction, Tracking, and Applications
M. Zollhöfer1,2
J. Thies3 P. Garrido1,5 D. Bradley4 T. Beeler4 P. Pérez5 M. Stamminger6 M. Nießner3 C. Theobalt1
Max Planck Institute for Informatics
Stanford University
Technical University of Munich
4Disney Research
5Technicolor
University of Erlangen-Nuremberg
Figure 1: This state-of-the-art report provides an overview of monocular 3D face reconstruction and tracking, and highlights applications.
5134353bd01c4ea36bd007c460e8972b1541d0adFace Recognition with Multi-Resolution Spectral Feature
Images
1 School of Electrical Engineering and Automation, Anhui University, Hefei, China
2 Hong Kong Polytechnic University, Hong Kong, China
3 Center for Intelligent Electricity Networks, University of Newcastle, Newcastle, Australia
4 School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
('31443079', 'Zhan-Li Sun', 'zhan-li sun')
('1703078', 'Kin-Man Lam', 'kin-man lam')
('50067626', 'Zhao-yang Dong', 'zhao-yang dong')
('40465036', 'Han Wang', 'han wang')
('29927490', 'Qing-wei Gao', 'qing-wei gao')
5160569ca88171d5fa257582d161e9063c8f898dLocal Binary Patterns as an Image Preprocessing for Face Authentication
IDIAP Research Institute, Martigny, Switzerland
École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
('16602458', 'Guillaume Heusch', 'guillaume heusch')
('2820403', 'Yann Rodriguez', 'yann rodriguez')
{heusch, rodrig, marcel}@idiap.ch
5157dde17a69f12c51186ffc20a0a6c6847f1a29Evolutionary Cost-sensitive Extreme Learning
Machine
('40613723', 'Lei Zhang', 'lei zhang')
('1698371', 'David Zhang', 'david zhang')
51dc127f29d1bb076d97f515dca4cc42dda3d25b
3d18ce183b5a5b4dcaa1216e30b774ef49eaa46fFace Alignment Across Large Poses: A 3D Solution
Hailin Shi1
Institute of Automation, Chinese Academy of Sciences
Michigan State University
('8362374', 'Xiangyu Zhu', 'xiangyu zhu')
('1718623', 'Zhen Lei', 'zhen lei')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
('34679741', 'Stan Z. Li', 'stan z. li')
{xiangyu.zhu,zlei,hailin.shi,szli}@nlpr.ia.ac.cn
liuxm@msu.edu
3d143cfab13ecd9c485f19d988242e7240660c86Discriminative Collaborative Representation for
Classification
Academic Center for Computing and Media Studies, Kyoto University, Kyoto 606-8501, Japan
Institute of Scientific and Industrial Research, Osaka University, Ibaraki-shi 567-0047, Japan
3 OMRON Social Solutions Co., LTD, Kyoto 619-0283, Japan
('2549020', 'Yang Wu', 'yang wu')
('40400215', 'Wei Li', 'wei li')
('1707934', 'Masayuki Mukunoki', 'masayuki mukunoki')
('1681266', 'Michihiko Minoh', 'michihiko minoh')
('1710195', 'Shihong Lao', 'shihong lao')
yangwu@mm.media.kyoto-u.ac.jp,seuliwei@126.com,
{minoh,mukunoki}@media.kyoto-u.ac.jp,lao_shihong@oss.omron.co.jp
3daafe6389d877fe15d8823cdf5ac15fd919676fHuman Action Localization
with Sparse Spatial Supervision
('2492127', 'Philippe Weinzaepfel', 'philippe weinzaepfel')
('49142153', 'Xavier Martin', 'xavier martin')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
3dabf7d853769cfc4986aec443cc8b6699136ed0In A. Esposito, N. Bourbakis, N. Avouris, and I. Hatzilygeroudis. (Eds.) Lecture Notes in
Computer Science, Vol 5042: Verbal and Nonverbal Features of Human-human and Human-
machine Interaction, Springer Verlag, p. 1-21.
Data mining spontaneous facial behavior with
automatic expression coding
Institute for Neural Computation, University of California, San Diego, La Jolla, CA
Human Development and Applied Psychology, University of Toronto, Ontario, Canada
0445, USA
Engineering and Natural Science, Sabanci University, Istanbul, Turkey
('2724380', 'Gwen Littlewort', 'gwen littlewort')
('40322754', 'Esra Vural', 'esra vural')
('2855884', 'Kang Lee', 'kang lee')
mbartlett@ucsd.edu; gwen@mpmlab.ucsd.edu, movellan@mplab.ucsd.edu,
vesra@ucsd.edu, kang.lee@utoronto.ca
3db75962857a602cae65f60f202d311eb4627b41
3daf1191d43e21a8302d98567630b0e2025913b0Can Autism be Catered with Artificial Intelligence-Assisted Intervention
Technology? A Literature Review
Faculty of Information Technology, Barrett Hodgson University, Karachi, Pakistan
†Université Claude Bernard Lyon 1, France
('38817141', 'Muhammad Shoaib Jaliawala', 'muhammad shoaib jaliawala')
('1943666', 'Rizwan Ahmed Khan', 'rizwan ahmed khan')
3d36f941d8ec613bb25e80fb8f4c160c1a2848dfOut-of-sample generalizations for supervised
manifold learning for classification
('12636684', 'Elif Vural', 'elif vural')
('1780587', 'Christine Guillemot', 'christine guillemot')
3d5a1be4c1595b4805a35414dfb55716e3bf80d8Hidden Two-Stream Convolutional Networks for
Action Recognition
('1749901', 'Yi Zhu', 'yi zhu')
('7661726', 'Alexander G. Hauptmann', 'alexander g. hauptmann')
3d62b2f9cef997fc37099305dabff356d39ed477Joint Face Alignment and 3D Face
Reconstruction with Application to Face
Recognition
('33320460', 'Feng Liu', 'feng liu')
('7345195', 'Qijun Zhao', 'qijun zhao')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
('39422721', 'Dan Zeng', 'dan zeng')
3dc522a6576c3475e4a166377cbbf4ba389c041f
3dd4d719b2185f7c7f92cc97f3b5a65990fcd5ddEnsemble of Hankel Matrices for
Face Emotion Recognition
DICGIM, Università degli Studi di Palermo,
V.le delle Scienze, Ed. 6, 90128 Palermo, Italy,
DRAFT
To appear in ICIAP 2015
('1711610', 'Liliana Lo Presti', 'liliana lo presti')
('9127836', 'Marco La Cascia', 'marco la cascia')
liliana.lopresti@unipa.it
3d1a6a5fd5915e0efb953ede5af0b23debd1fc7fProceedings of the Pakistan Academy of Sciences 52 (1): 27–38 (2015)
Copyright © Pakistan Academy of Sciences
ISSN: 0377 - 2969 (print), 2306 - 1448 (online)
Pakistan Academy of Sciences
Research Article
Bimodal Human Emotion Classification in the
Speaker-Dependent Scenario
University of Peshawar, Peshawar, Pakistan
University of Engineering and Technology
Sarhad University of Science and Information Technology
University of Peshawar, Peshawar, Pakistan
Peshawar, Pakistan
Peshawar, Pakistan
('34267835', 'Sanaul Haq', 'sanaul haq')
('3124216', 'Tariqullah Jan', 'tariqullah jan')
('1766329', 'Muhammad Asif', 'muhammad asif')
('1710701', 'Amjad Ali', 'amjad ali')
('40332145', 'Naveed Ahmad', 'naveed ahmad')
3d0379688518cc0e8f896e30815d0b5e8452d4cdAutotagging Facebook:
Social Network Context Improves Photo Annotation
Harvard University
Todd Zickler
Harvard University
UC Berkeley EECS & ICSI
('2201347', 'Zak Stone', 'zak stone')
('1753210', 'Trevor Darrell', 'trevor darrell')
zstone@fas.harvard.edu
zickler@seas.harvard.edu
trevor@eecs.berkeley.edu
3dda181be266950ba1280b61eb63ac11777029f9
3d24b386d003bee176a942c26336dbe8f427aaddSequential Person Recognition in Photo Albums with a Recurrent Network∗
The University of Adelaide, Australia
('39948681', 'Yao Li', 'yao li')
('2604251', 'Guosheng Lin', 'guosheng lin')
('3194022', 'Bohan Zhuang', 'bohan zhuang')
('2161037', 'Lingqiao Liu', 'lingqiao liu')
('1780381', 'Chunhua Shen', 'chunhua shen')
('5546141', 'Anton van den Hengel', 'anton van den hengel')
3dcebd4a1d66313dcd043f71162d677761b07a0d Yerel İkili Örüntü Ortamında Yerel Görünüme Dayalı Yüz Tanıma
Local Binary Pattern Domain Local Appearance Face Recognition
Hazım K. Ekenel1, Mika Fischer1, Erkin Tekeli2, Rainer Stiefelhagen1, Aytül Erçil2
1 Institut für Theoretische Informatik, Universität Karlsruhe (TH), Karlsruhe, Germany
2 Faculty of Engineering and Natural Sciences, Sabancı University, İstanbul, Turkey
Abstract
This paper presents a fast face recognition algorithm that combines discrete cosine transform (DCT)-based local appearance face recognition with a local binary pattern (LBP) representation of face images. The aim of this combination is to exploit both the robust image representation capability of local binary patterns and the compact data representation capability of the discrete cosine transform. In the proposed approach, the input face image is first represented with local binary patterns before the local appearance is modeled. The resulting LBP representation is divided into non-overlapping blocks, and the discrete cosine transform is applied to each block to extract local features. The extracted local features are then concatenated to form a global feature vector. The proposed algorithm was tested on face images selected from the CMU PIE and FRGC version 2 databases. The experimental results show that the combined method improves performance significantly.
{ekenel,mika.fischer,stiefel}@ira.uka.de, {erkintekeli,aytulercil}@sabanciuniv.edu
3d0f9a3031bee4b89fab703ff1f1d6170493dc01SVDD-Based Illumination Compensation
for Face Recognition
The Robotics Institute, Carnegie Mellon University
5000 Forbes Ave., Pittsburgh, PA 15213, USA
Center for Artificial Vision Research, Korea University
Anam-dong, Seongbuk-ku, Seoul 136-713, Korea
('2348968', 'Sang-Woong Lee', 'sang-woong lee')
('1703007', 'Seong-Whan Lee', 'seong-whan lee')
rhiephil@cs.cmu.edu
swlee@image.korea.ac.kr
3d6ee995bc2f3e0f217c053368df659a5d14d5b5
3d0c21d4780489bd624a74b07e28c16175df6355Deep or Shallow Facial Descriptors? A Case for
Facial Attribute Classification and Face Retrieval
1 Faculty of Engineering,
Multimedia University, Cyberjaya, Malaysia
2 Faculty of Computing & Informatics,
Multimedia University, Cyberjaya, Malaysia
('3366793', 'Rasoul Banaeeyan', 'rasoul banaeeyan')
('31612015', 'Mohd Haris Lye', 'mohd haris lye')
('4759494', 'Mohammad Faizal Ahmad Fauzi', 'mohammad faizal ahmad fauzi')
('2339975', 'John See', 'john see')
banaeeyan@gmail.com, {haris.lye, faizal1, hezerul, johnsee}@mmu.edu.my
3df8cc0384814c3fb05c44e494ced947a7d43f36The Pose Knows: Video Forecasting by Generating Pose Futures
Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh, PA 15213
('14192361', 'Jacob Walker', 'jacob walker')
('35789996', 'Kenneth Marino', 'kenneth marino')
('1737809', 'Abhinav Gupta', 'abhinav gupta')
('1709305', 'Martial Hebert', 'martial hebert')
{jcwalker, kdmarino, abhinavg, hebert}@cs.cmu.edu
3d42e17266475e5d34a32103d879b13de2366561Proc.4thIEEEInt’lConf.AutomaticFace&GestureRecognition,Grenoble,France,pp264–270
The Global Dimensionality of Face Space
http://venezia.rockefeller.edu/
The Rockefeller University
Laboratory of Computational Neuroscience
Laboratory for Applied Mathematics
Mount Sinai School of Medicine
© IEEE 2000
1230 York Avenue, New York, NY 10021
One Gustave L. Levy Place, New York, NY 10029
('2939761', 'Penio S. Penev', 'penio s. penev')
('3266322', 'Lawrence Sirovich', 'lawrence sirovich')
PenevPS@IEEE.org
chico@camelot.mssm.edu
3dd906bc0947e56d2b7bf9530b11351bbdff2358
3dfd94d3fad7e17f52a8ae815eb9cc5471172bc0Face2Text: Collecting an Annotated Image Description Corpus for the
Generation of Rich Face Descriptions
University of Malta
University of Copenhagen
('1700894', 'Albert Gatt', 'albert gatt')
('32227979', 'Marc Tanti', 'marc tanti')
('35347012', 'Adrian Muscat', 'adrian muscat')
('1782032', 'Patrizia Paggio', 'patrizia paggio')
('2870709', 'Claudia Borg', 'claudia borg')
('3356545', 'Lonneke van der Plas', 'lonneke van der plas')
{albert.gatt, marc.tanti.06, adrian.muscat, patrizia.paggio, reuben.farrugia}@um.edu.mt
{claudia.borg, kenneth.camilleri, mike.rosner, lonneke.vanderplas}@um.edu.mt
paggio@hum.ku.dk
3dbfd2fdbd28e4518e2ae05de8374057307e97b3Improving Face Detection
CISUC, University of Coimbra
Faculty of Computer Science, University of A Coru na, Coru na, Spain
('2045142', 'Penousal Machado', 'penousal machado')
('39583137', 'Juan Romero', 'juan romero')
3030 Coimbra, Portugal machado@dei.uc.pt, jncor@dei.uc.pt
jj@udc.pt
3df7401906ae315e6aef3b4f13126de64b894a54Robust Learning of Discriminative Projection for Multicategory Classification on
the Stiefel Manifold
Curtin University of Technology
GPO Box U1987, Perth, WA 6845, Australia
('1725024', 'Duc-Son Pham', 'duc-son pham')
('1679520', 'Svetha Venkatesh', 'svetha venkatesh')
dspham@ieee.org, svetha@cs.curtin.edu.au
3d68cedd80babfbb04ab197a0b69054e3c196cd9Bimodal Information Analysis for Emotion Recognition
Master of Engineering
Department of Electrical and Computer Engineering
McGill University
Montreal, Quebec
October 2009
Revised: February 2010
A Thesis submitted to McGill University in partial fulfillment of the requirements for the
degree of Master of Engineering
i
('2376514', 'Malika Meghjani', 'malika meghjani')
('2376514', 'Malika Meghjani', 'malika meghjani')
3dfb822e16328e0f98a47209d7ecd242e4211f82Cross-Age LFW: A Database for Studying Cross-Age Face Recognition in
Unconstrained Environments
Beijing University of Posts and Telecommunications
Beijing 100876,China
('15523767', 'Tianyue Zheng', 'tianyue zheng')
('1774956', 'Weihong Deng', 'weihong deng')
('23224233', 'Jiani Hu', 'jiani hu')
2231135739@qq.com, whdeng@bupt.edu.cn, 40902063@qq.com
3d1af6c531ebcb4321607bcef8d9dc6aa9f0dc5a
Random Multispace Quantization as
an Analytic Mechanism for BioHashing
of Biometric and Random Identity Inputs
('2124820', 'Alwyn Goh', 'alwyn goh')
3d6943f1573f992d6897489b73ec46df983d776c
3d948e4813a6856e5b8b54c20e50cc5050e66abeA Smart Phone Image Database for Single
Image Recapture Detection
Institute for Infocomm Research, A*STAR, Singapore
2 Department of Electrical and Computer Engineering
National University of Singapore, Singapore
3 Department of Electrical and Computer Engineering
New Jersey Institute of Technology, USA
('2740420', 'Xinting Gao', 'xinting gao')
('2821964', 'Bo Qiu', 'bo qiu')
('3138499', 'JingJing Shen', 'jingjing shen')
('2475944', 'Tian-Tsong Ng', 'tian-tsong ng')
{xgao, qiubo, ttng}@i2r.a-star.edu.sg
shenjingjing89@gmail.com
shi@njit.edu
3d94f81cf4c3a7307e1a976dc6cb7bf38068a381
Data-Dependent Label Distribution Learning
for Age Estimation
('3276410', 'Zhouzhou He', 'zhouzhou he')
('40613648', 'Xi Li', 'xi li')
('1720488', 'Zhongfei Zhang', 'zhongfei zhang')
('28342797', 'Fei Wu', 'fei wu')
('1735299', 'Xin Geng', 'xin geng')
('2998634', 'Yaqing Zhang', 'yaqing zhang')
('37144787', 'Ming-Hsuan Yang', 'ming-hsuan yang')
('1755711', 'Yueting Zhuang', 'yueting zhuang')
3d9db1cacf9c3bb7af57b8112787b59f45927355Original Research
published: 20 June 2016
doi: 10.3389/fict.2016.00011
Improving Medical Students' Awareness of Their Non-Verbal Communication through Automated Non-Verbal Behavior Feedback
School of Electrical and Information Engineering, The University of Sydney, Sydney, NSW, Australia, 2 Sydney Medical
School, The University of Sydney, Sydney, NSW, Australia
The non-verbal communication of clinicians has an impact on patients’ satisfaction and
health outcomes. Yet medical students are not receiving enough training on the appropriate
non-verbal behaviors in clinical consultations. Computer vision techniques have been
used for detecting different kinds of non-verbal behaviors, and they can be incorporated
in educational systems that help medical students to develop communication skills.
We describe EQClinic, a system that combines a tele-health platform with automated
non-verbal behavior recognition. The system aims to help medical students improve
their communication skills through a combination of human and automatically generated
feedback. EQClinic provides fully automated calendaring and video conferencing features
for doctors or medical students to interview patients. We describe a pilot (18 dyadic
interactions) in which standardized patients (SPs) (i.e., someone acting as a real patient)
were interviewed by medical students and provided assessments and comments about
their performance. After the interview, computer vision and audio processing algorithms
were used to recognize students’ non-verbal behaviors known to influence the quality of
a medical consultation: including turn taking, speaking ratio, sound volume, sound pitch,
smiling, frowning, head leaning, head tilting, nodding, shaking, face-touch gestures and
overall body movements. The results showed that students’ awareness of non-verbal
communication was enhanced by the feedback information, which was both provided
by the SPs and generated by the machines.
Keywords: non-verbal communication, non-verbal behavior, clinical consultation, medical education,
communication skills, non-verbal behavior detection, automated feedback
Introduction
Edited by:
Leman Figen Gul,
Istanbul Technical University, Turkey
Reviewed by:
Marc Aurel Schnabel,
Victoria University of Wellington
New Zealand
Antonella Lotti,
University of Genoa, Italy
*Correspondence:
Specialty section:
This article was submitted
to Digital Education,
a section of the journal
Frontiers in ICT
Received: 28 April 2016
Accepted: 07 June 2016
Published: 20 June 2016
Citation:
Liu C, Calvo RA and Lim R (2016)
Improving Medical Students’
Awareness of Their Non-Verbal
Communication through Automated
Non-Verbal Behavior Feedback.
doi: 10.3389/fict.2016.00011
Over the last 10 years, we have witnessed a dramatic improvement in affective computing (Picard,
2000; Calvo et al., 2015) and behavior recognition techniques (Vinciarelli et al., 2012). These
techniques have progressed from the recognition of person-specific posed behavior to the more
difficult person-independent recognition of behavior in "the wild" (Vinciarelli et al., 2009). They
are considered robust enough that they are being incorporated into new applications. For example,
new learning technologies have been developed that detect a student's emotions and use this to guide
the learning experience (Calvo and D'Mello, 2011). They can also be used to support reflection by
Frontiers in ICT | www.frontiersin.org
June 2016 | Volume 3 | Article 11
('30772945', 'Chunfeng Liu', 'chunfeng liu')
('1742162', 'Rafael A. Calvo', 'rafael a. calvo')
('36807976', 'Renee Lim', 'renee lim')
('1742162', 'Rafael A. Calvo', 'rafael a. calvo')
rafael.calvo@sydney.edu.au
580f86f1ace1feed16b592d05c2b07f26c429b4bDense-Captioning Events in Videos
Stanford University
('2580593', 'Ranjay Krishna', 'ranjay krishna')
('35163655', 'Kenji Hata', 'kenji hata')
('3260219', 'Frederic Ren', 'frederic ren')
('3216322', 'Li Fei-Fei', 'li fei-fei')
('9200530', 'Juan Carlos Niebles', 'juan carlos niebles')
{ranjaykrishna, kenjihata, fren, feifeili, jniebles}@cs.stanford.edu
58d47c187b38b8a2bad319c789a09781073d052dFactorizable Net: An Efficient Subgraph-based
Framework for Scene Graph Generation
The Chinese University of Hong Kong, Hong Kong SAR, China
The University of Sydney, SenseTime Computer Vision Research Group
3 MIT CSAIL, USA
4 Sensetime Ltd, Beijing, China
Samsung Telecommunication Research Institute, Beijing, China
('2180892', 'Yikang Li', 'yikang li')
('3001348', 'Wanli Ouyang', 'wanli ouyang')
('1804424', 'Bolei Zhou', 'bolei zhou')
('1788070', 'Jianping Shi', 'jianping shi')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
{ykli, xgwang}@ee.cuhk.edu.hk, wanli.ouyang@sydney.edu.au,
bzhou@csail.mit.edu, shijianping@sensetime.com, c0502.zhang@samsung.com
582edc19f2b1ab2ac6883426f147196c8306685aDo We Really Need to Collect Millions of Faces
for Effective Face Recognition?
Institute for Robotics and Intelligent Systems, USC, CA, USA
Information Sciences Institute, USC, CA, USA
The Open University of Israel, Israel
('11269472', 'Iacopo Masi', 'iacopo masi')
('2955822', 'Jatuporn Toy Leksut', 'jatuporn toy leksut')
('1756099', 'Tal Hassner', 'tal hassner')
5859774103306113707db02fe2dd3ac9f91f1b9e
5892f8367639e9c1e3cf27fdf6c09bb3247651edEstimating Missing Features to Improve Multimedia Information Retrieval ('2666918', 'Abraham Bagherjeiran', 'abraham bagherjeiran')
('35089151', 'Nicole S. Love', 'nicole s. love')
('1696815', 'Chandrika Kamath', 'chandrika kamath')
5850aab97e1709b45ac26bb7d205e2accc798a87
587f81ae87b42c18c565694c694439c65557d6d5DeepFace: Face Generation using Deep Learning ('31560532', 'Hardie Cate', 'hardie cate')
('6415321', 'Fahim Dalvi', 'fahim dalvi')
('8815003', 'Zeshan Hussain', 'zeshan hussain')
ccate@stanford.edu
fdalvi@cs.stanford.edu
zeshanmh@stanford.edu
580054294ca761500ada71f7d5a78acb0e622f19
A Subspace Model-Based Approach to Face
Relighting Under Unknown Lighting and Poses
('2081318', 'Hyunjung Shim', 'hyunjung shim')
('33642939', 'Jiebo Luo', 'jiebo luo')
('1746230', 'Tsuhan Chen', 'tsuhan chen')
587c48ec417be8b0334fa39075b3bfd66cc29dbeJournal of Vision (2016) 16(15):28, 1–8
Serial dependence in the perception of attractiveness
University of California
Berkeley, CA, USA
University of California
Berkeley, CA, USA
University of California
Berkeley, CA, USA
Helen Wills Neuroscience Institute, University of
California, Berkeley, CA, USA
Vision Science Group, University of California
Berkeley, CA, USA
The perception of attractiveness is essential for choices
of food, object, and mate preference. Like perception of
other visual features, perception of attractiveness is
stable despite constant changes of image properties due
to factors like occlusion, visual noise, and eye
movements. Recent results demonstrate that perception
of low-level stimulus features and even more complex
attributes like human identity are biased towards recent
percepts. This effect is often called serial dependence.
Some recent studies have suggested that serial
dependence also exists for perceived facial
attractiveness, though there is also concern that the
reported effects are due to response bias. Here we used
an attractiveness-rating task to test the existence of
serial dependence in perceived facial attractiveness. Our
results demonstrate that perceived face attractiveness
was pulled by the attractiveness level of facial images
encountered up to 6 s prior. This effect was not due to
response bias and did not rely on the previous motor
response. This perceptual pull increased as the difference
in attractiveness between previous and current stimuli
increased. Our results reconcile previously conflicting
findings and extend previous work, demonstrating that
sequential dependence in perception operates across
different levels of visual analysis, even at the highest
levels of perceptual interpretation.
Introduction
Humans make aesthetic judgments all the time about
the attractiveness or desirability of objects and scenes.
Aesthetic judgments are not merely about judging
works of art; they are constantly involved in our daily
activity, influencing or determining our choices of food,
object (Creusen & Schoormans, 2005), and mate
preference (Rhodes, Simmons, & Peters, 2005).
Aesthetic judgments are based on perceptual pro-
cessing (Arnheim, 1954; Livingstone & Hubel, 2002;
Solso, 1996). These judgments, like other perceptual
experiences, are thought to be relatively stable in spite
of fluctuations in the raw visual input we receive due to
factors like occlusion, visual noise, and eye movements.
One mechanism that allows the visual system to achieve
this stability is serial dependence. Recent results have
revealed that the perception of visual features such as
orientation (Fischer & Whitney, 2014), numerosity
(Cicchini, Anobile, & Burr, 2014), and facial identity
(Liberman, Fischer, & Whitney, 2014) are systemati-
cally assimilated toward visual input from the recent
past. This perceptual pull has been distinguished from
hysteresis in motor responses or decision processes, and
has been shown to be tuned by the magnitude of the
difference between previous and current visual inputs
(Fischer & Whitney, 2014; Liberman, Fischer, &
Whitney, 2014).
Is aesthetic perception similarly stable, like feature
perception? Some previous studies have suggested that
the answer is yes. It has been shown that there is a
positive correlation between observers’ successive
attractiveness ratings of facial images (Kondo, Taka-
hashi, & Watanabe, 2012; Taubert, Van der Burg, &
Alais, 2016). This suggests that there is an assimilative
sequential dependence in attractiveness judgments.
Citation: Xia, Y., Leib, A. Y., & Whitney, D. (2016). Serial dependence in the perception of attractiveness. Journal of Vision,
16(15):28, 1–8, doi:10.1167/16.15.28.
doi: 10.1167/16.15.28
Received July 13, 2016; published December 22, 2016
ISSN 1534-7362
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
('37397364', 'Ye Xia', 'ye xia')
('6931574', 'Allison Yamanashi Leib', 'allison yamanashi leib')
('1821337', 'David Whitney', 'david whitney')
58081cb20d397ce80f638d38ed80b3384af76869Embedded Real-Time Fall Detection Using Deep
Learning For Elderly Care
Samsung Research, Samsung Electronics
('1729858', 'Hyunwoo Lee', 'hyunwoo lee')
('1784186', 'Jooyoung Kim', 'jooyoung kim')
('32671800', 'Dojun Yang', 'dojun yang')
('3443235', 'Joon-Ho Kim', 'joon-ho kim')
{hyun0772.lee, joody.kim, dojun.yang, mythos.kim}@samsung.com
581e920ddb6ecfc2a313a3aa6fed3d933b917ab0Automatic Mapping of Remote Crowd Gaze to
Stimuli in the Classroom
University of Tübingen, Tübingen, Germany
2 Leibniz-Institut für Wissensmedien, Tübingen, Germany
Hector Research Institute of Education Sciences and Psychology, Tübingen, Germany
('2445102', 'Thiago Santini', 'thiago santini')
('24003697', 'Lucas Draghetti', 'lucas draghetti')
('3286609', 'Peter Gerjets', 'peter gerjets')
('2446461', 'Ulrich Trautwein', 'ulrich trautwein')
('1884159', 'Enkelejda Kasneci', 'enkelejda kasneci')
58fa85ed57e661df93ca4cdb27d210afe5d2cdcdCancún Center, Cancún, México, December 4-8, 2016
5860cf0f24f2ec3f8cbc39292976eed52ba2eafdInternational Journal of Automated Identification Technology, 3(2), July-December 2011, pp. 51-60
COMPUTATION EvaBio: A TOOL FOR PERFORMANCE
EVALUATION IN BIOMETRICS
GREYC Laboratory, ENSICAEN - University of Caen Basse Normandie - CNRS
6 Boulevard Maréchal Juin, 14000 Caen Cedex - France
('2774452', 'Julien Mahier', 'julien mahier')
('3356614', 'Baptiste Hemery', 'baptiste hemery')
('2174941', 'Mohamad El-Abed', 'mohamad el-abed')
('1793765', 'Christophe Rosenberger', 'christophe rosenberger')
584909d2220b52c0d037e8761d80cb22f516773fOCR-Free Transcript Alignment
Dept. of Mathematics and Computer Science
School of Computer Science
School of Computer Science
The Open University
Israel
Tel Aviv University
Tel-Aviv, Israel
Tel Aviv University
Tel-Aviv, Israel
('1756099', 'Tal Hassner', 'tal hassner')
('1776343', 'Lior Wolf', 'lior wolf')
('1759551', 'Nachum Dershowitz', 'nachum dershowitz')
Email: hassner@openu.ac.il
Email: wolf@cs.tau.ac.il
Email: nachumd@tau.ac.il
58bf72750a8f5100e0c01e55fd1b959b31e7dbcePyramidBox: A Context-assisted Single Shot
Face Detector.
Baidu Inc.
('48785141', 'Xu Tang', 'xu tang')
('14931829', 'Daniel K. Du', 'daniel k. du')
('31239588', 'Zeqiang He', 'zeqiang he')
('2272123', 'Jingtuo Liu', 'jingtuo liu')
tangxu02@baidu.com,daniel.kang.du@gmail.com,{hezeqiang,liujingtuo}@baidu.com
58542eeef9317ffab9b155579256d11efb4610f2International Journal of Science and Research (IJSR)
ISSN (Online): 2319-7064
Index Copernicus Value (2013): 6.14 | Impact Factor (2014): 5.611
Face Recognition Revisited on Pose, Alignment,
Color, Illumination and Expression-PyTen
Computer Science, BIT Noida, India
58823377757e7dc92f3b70a973be697651089756Technical Report
UCAM-CL-TR-861
ISSN 1476-2986
Number 861
Computer Laboratory
Automatic facial expression analysis
October 2014
15 JJ Thomson Avenue
Cambridge CB3 0FD
United Kingdom
phone +44 1223 763500
http://www.cl.cam.ac.uk/
('1756344', 'Tadas Baltrusaitis', 'tadas baltrusaitis')
580e48d3e7fe1ae0ceed2137976139852b1755dfTHE EFFECTS OF MOTION AND ORIENTATION ON PERCEPTION OF
FACIAL EXPRESSIONS AND FACE RECOGNITION
by
B.S. University of Indonesia
M.S. Brunel University of West London
Submitted to the Graduate Faculty of
Arts and Sciences in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
University of Pittsburgh
2002
('2059653', 'Zara Ambadar', 'zara ambadar')
5865e824e3d8560e07840dd5f75cfe9bf68f9d96RESEARCH ARTICLE
Embodied conversational agents for
multimodal automated social skills training in
people with autism spectrum disorders
Graduate School of Information Science, Nara Institute of Science and Technology, Ikoma-shi, Nara
Japan, 2 Center for Special Needs Education, Nara University of Education, Nara-shi, Nara
Japan, 3 Developmental Center for Child and Adult, Shigisan Hospital, Ikoma-gun, Nara, 636-0815, Japan
('3162048', 'Hiroki Tanaka', 'hiroki tanaka')
('1867578', 'Hideki Negoro', 'hideki negoro')
('35238212', 'Hidemi Iwasaka', 'hidemi iwasaka')
('40285672', 'Satoshi Nakamura', 'satoshi nakamura')
* hiroki-tan@is.naist.jp
58bb77dff5f6ee0fb5ab7f5079a5e788276184ccFacial Expression Recognition with PCA and LBP
Features Extracting from Active Facial Patches
('7895427', 'Yanpeng Liu', 'yanpeng liu')
('16879896', 'Yuwen Cao', 'yuwen cao')
('29275442', 'Yibin Li', 'yibin li')
('1686211', 'Ming Liu', 'ming liu')
('1772484', 'Rui Song', 'rui song')
('1706513', 'Yafang Wang', 'yafang wang')
('40395865', 'Zhigang Xu', 'zhigang xu')
('1708045', 'Xin Ma', 'xin ma')
585260468d023ffc95f0e539c3fa87254c28510bCardea: Context–Aware Visual Privacy Protection
from Pervasive Cameras
HKUST-DT System and Media Laboratory
Hong Kong University of Science and Technology, Hong Kong
('3432205', 'Jiayu Shu', 'jiayu shu')
('2844817', 'Rui Zheng', 'rui zheng')
('2119751', 'Pan Hui', 'pan hui')
Email: ∗jshuaa@ust.hk, †rzhengac@ust.hk, ‡panhui@ust.hk
58cb1414095f5eb6a8c6843326a6653403a0ee17
58db008b204d0c3c6744f280e8367b4057173259International Journal of Current Engineering and Technology
ISSN 2277 - 4106
© 2012 INPRESSCO. All Rights Reserved.
Available at http://inpressco.com/category/ijcet
Research Article
Facial Expression Recognition
Jaipur, Rajasthan, India
Accepted 3June 2012, Available online 8 June 2012
('40621542', 'Riti Kushwaha', 'riti kushwaha')
('2117075', 'Neeta Nain', 'neeta nain')
58628e64e61bd2776a2a7258012eabe3c79ca90cActive Grounding of Visual Situations
Portland State University
Santa Fe Institute
Unpublished Draft
('3438473', 'Max H. Quinn', 'max h. quinn')
('27572284', 'Erik Conser', 'erik conser')
('38388831', 'Jordan M. Witte', 'jordan m. witte')
('4421478', 'Melanie Mitchell', 'melanie mitchell')
676a136f5978783f75b5edbb38e8bb588e8efbbeMatrix Completion for Resolving Label Ambiguity
UMIACS, University of Maryland, College Park, USA
Learning a visual classifier requires a large amount of labeled images
and videos. However, labeling images is expensive and time-consuming
due to the significant amount of human effort involved. As a result, brief
descriptions such as tags, captions and screenplays accompanying the
images and videos become important for training classifiers. Although such
information is publicly available, it is not as explicitly labeled as human
annotation. For instance, names in the caption of a news photo provide
possible candidates for the faces appearing in the image [1], and names in
screenplays are only weakly associated with faces in the shots [4]. The
problem in which, instead of a single label per instance, one is given a
candidate set of labels of which only one is correct is known as ambiguously
labeled learning [2, 6].
Figure 1: MCar reassigns the labels for those ambiguously labeled instances
such that instances of the same subjects cohesively form potentially-
separable convex hulls.
The ambiguously labeled data is denoted as L = {(x_j, L_j), j = 1, 2, . . . , N},
where N is the number of instances. There are c classes, and the class labels
are denoted as Y = {1, 2, . . . , c}. Note that x_j is the feature vector of the jth
instance, and its ambiguous labeling set L_j ⊆ Y consists of the candidate
labels associated with the jth instance. The true label of the jth instance is
l_j ∈ L_j. In other words, one of the labels in L_j is the true label of x_j. The
objective is to resolve the ambiguity in L such that each predicted label l̂_j
of x_j matches its true label l_j.
We interpret the ambiguous labeling set L_j with a soft labeling vector p_j,
where p_{i,j} indicates the probability that instance j belongs to class i. This
allows us to quantitatively assign the likelihood of each class the instance
belongs to if such information is provided. Without any prior knowledge,
we assume equal probability for each candidate label. Let P ∈ R^{c×N} denote
the ambiguous labeling matrix with p_j in its jth column. With this, one can
model the ambiguous labeling as P = P_0 + E_P, where P_0 and E_P denote the
true labeling matrix and the labeling noise, respectively. The jth column
vector of P_0 is p_j^0 = e_{l_j}, where e_{l_j} is the canonical vector corresponding to
the 1-of-K coding of its true label l_j. Similarly, assuming that the feature
vectors are corrupted by some noise or occlusion, the feature matrix X with
x_j in its jth column can be modeled as X = X_0 + E_X, where X ∈ R^{m×N} consists
of N feature vectors of dimension m, X_0 represents the feature matrix
in the absence of noise, and E_X accounts for the noise.
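As a concrete illustration of this soft labeling model, the sketch below builds the ambiguous labeling matrix P from hypothetical candidate label sets (uniform probability over each candidate set, as assumed above) and stacks it on a feature matrix to form the heterogeneous matrix. All sizes and values here are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical example: c = 3 classes, N = 4 instances, each instance j
# carrying a candidate label set L_j of which only one label is correct.
candidate_sets = [{0}, {0, 1}, {1, 2}, {2}]
c, N = 3, len(candidate_sets)

# Ambiguous labeling matrix P: column j is the soft label vector p_j.
# Without prior knowledge, each candidate label gets equal probability.
P = np.zeros((c, N))
for j, L_j in enumerate(candidate_sets):
    for i in L_j:
        P[i, j] = 1.0 / len(L_j)

# Each column sums to 1, as required of a soft labeling vector.
assert np.allclose(P.sum(axis=0), 1.0)

# Feature matrix X (m x N) stacked under P gives the heterogeneous
# feature matrix that MCar takes as input.
m = 5
X = np.random.randn(m, N)
H = np.vstack([P, X])  # shape (c + m, N)
```

The stacking mirrors the paper's construction: in the absence of noise the top block is the 1-of-K labeling matrix and the whole matrix is ideally low-rank.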
Figure 1 shows the geometric interpretation of our proposed method,
Matrix Completion for Ambiguity Resolving (MCar). When each element
in the ambiguous labeling set is trivially treated as the true label, the convex
hulls of each class are erroneously expanded. MCar reassigns the ambiguous
labels so that each over-expanded convex hull shrinks to its actual contour,
and the convex hulls become potentially separable.
In the paper, we show that the heterogeneous feature matrix, which is
the concatenation of the labeling matrix P and the feature matrix X, is ideally
low-rank in the absence of noise (Figure 2), which allows us to convert the
aforementioned label reassignment problem into a matrix completion prob-
lem [5]. The proposed MCar takes the heterogeneous feature matrix as in-
put, and returns the predicted labeling matrix Y by solving the following
optimization problem.
[P; X] = [P_0; X_0] + [E_P; E_X]
Figure 2: Ideal decomposition of heterogeneous feature matrix using MCar.
The underlying low-rank structure and the ambiguous labeling are recovered
simultaneously.
The proposed method inherits the benefits of low-rank recovery and pos-
sesses the capability to resolve the label ambiguity via low-rank approxima-
tion of the heterogeneous matrix. As a result, our method is more robust
than some of the existing discriminative ambiguous learning meth-
ods [3, 7], the sparsity/dictionary-based method [2], and the low-rank
representation-based method [8]. Moreover, we generalize MCar to include
labeling constraints between instances for practical applications. Compared
to the state of the art, our proposed framework achieves a 2.9% improvement
in labeling accuracy on the Lost dataset and performs comparably on the
Labeled Yahoo! News dataset.
[1] T. L. Berg, A. C. Berg, J. Edwards, M. Maire, R. White, Y.-W. Teh,
E. Learned-Miller, and D. A. Forsyth. Names and faces in the news. In
CVPR, 2004.
[2] Y.-C. Chen, V. M. Patel, J. K. Pillai, R. Chellappa, and P. J. Phillips.
Dictionary learning from ambiguously labeled data. In CVPR, 2013.
[3] T. Cour, B. Sapp, C. Jordan, and B. Taskar. Learning from ambiguously
labeled images. In CVPR, 2009.
[4] M. Everingham, J. Sivic, and A. Zisserman. Hello! My name is... Buffy
- Automatic naming of characters in TV video. In BMVC, 2006.
[5] A. B. Goldberg, X. Zhu, B. Recht, J.-M. Xu, and R. D. Nowak. Trans-
duction with matrix completion: Three birds with one stone. In NIPS,
2010.
[6] E. Hüllermeier and J. Beringer. Learning from ambiguously labeled
examples. In Intell. Data Anal., 2006.
[7] J. Luo and F. Orabona. Learning from candidate labeling sets. In NIPS,
2010.
min_{Y, E_X} rank(H) + λ‖E_X‖_0 + γ‖Y‖_0
s.t. H = [Y; Z] = [P; X] − [E_P; E_X],
Y ∈ R_+^{c×N}, 1_c^T Y = 1_N^T,
y_{i,j} = 0 if p_{i,j} = 0,    (1)
where λ ∈ R_+ and γ ∈ R_+ control the sparsity of the data noise and the predicted
labeling matrix, respectively. Consequently, the predicted label of instance
j can be obtained as
l̂_j = arg max_{i∈Y} y_{i,j}.    (2)
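To illustrate the label readout of equation (2) together with the column-sum constraint on Y, here is a minimal sketch with a hypothetical predicted labeling matrix (the values are illustrative only, not output of the actual MCar solver):

```python
import numpy as np

# Hypothetical predicted labeling matrix Y as might be returned by the
# optimization: columns approximately 1-of-K over c = 3 classes, N = 3.
Y = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.2],
              [0.0, 0.1, 0.8]])

# Constraint check: every column of Y sums to 1 (1_c^T Y = 1_N^T).
assert np.allclose(Y.sum(axis=0), 1.0)

# Eq. (2): the predicted label of instance j is the row index of the
# largest entry in the jth column of Y.
predicted = Y.argmax(axis=0)
print(predicted.tolist())  # [0, 1, 2]
```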
('2682056', 'Ching-Hui Chen', 'ching-hui chen')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
('9215658', 'Rama Chellappa', 'rama chellappa')
677585ccf8619ec2330b7f2d2b589a37146ffad7A flexible model for training action localization
with varying levels of supervision
('1902524', 'Guilhem Chéron', 'guilhem chéron')
('2285263', 'Jean-Baptiste Alayrac', 'jean-baptiste alayrac')
('1785596', 'Ivan Laptev', 'ivan laptev')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
676f9eabf4cfc1fd625228c83ff72f6499c67926FACE IDENTIFICATION AND CLUSTERING
A thesis submitted to the
Graduate School—New Brunswick
Rutgers, The State University of New Jersey
in partial fulfillment of the requirements
for the degree of
Master of Science
Graduate Program in Computer Science
Written under the direction of
Dr. Vishal Patel, Dr. Ahmed Elgammal
and approved by
New Brunswick, New Jersey
May, 2017
('34805991', 'Atul Dhingra', 'atul dhingra')
677477e6d2ba5b99633aee3d60e77026fb0b9306
6789bddbabf234f31df992a3356b36a47451efc7Unsupervised Generation of Free-Form and
Parameterized Avatars
('33964593', 'Adam Polyak', 'adam polyak')
('2188620', 'Yaniv Taigman', 'yaniv taigman')
('1776343', 'Lior Wolf', 'lior wolf')
679b7fa9e74b2aa7892eaea580def6ed4332a228Communication and automatic
interpretation of affect from facial
expressions1
University of Amsterdam, the Netherlands
University of Trento, Italy
University of Amsterdam, the Netherlands
('1764521', 'Albert Ali Salah', 'albert ali salah')
('1703601', 'Nicu Sebe', 'nicu sebe')
('1695527', 'Theo Gevers', 'theo gevers')
675b2caee111cb6aa7404b4d6aa371314bf0e647AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions
Carl Vondrick∗
('39599498', 'Chunhui Gu', 'chunhui gu')
('1758054', 'Yeqing Li', 'yeqing li')
('1726241', 'Chen Sun', 'chen sun')
('48536531', 'David A. Ross', 'david a. ross')
('2259154', 'Sudheendra Vijayanarasimhan', 'sudheendra vijayanarasimhan')
('1805076', 'George Toderici', 'george toderici')
('2997956', 'Caroline Pantofaru', 'caroline pantofaru')
('2262946', 'Susanna Ricco', 'susanna ricco')
('1694199', 'Rahul Sukthankar', 'rahul sukthankar')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
('1689212', 'Jitendra Malik', 'jitendra malik')
679b72d23a9cfca8a7fe14f1d488363f2139265f
67484723e0c2cbeb936b2e863710385bdc7d5368Anchor Cascade for Efficient Face Detection
('2425630', 'Baosheng Yu', 'baosheng yu')
('1692693', 'Dacheng Tao', 'dacheng tao')
670637d0303a863c1548d5b19f705860a23e285cFace Swapping: Automatically Replacing Faces in Photographs
Columbia University
Peter Belhumeur
Figure 1: We have developed a system that automatically replaces faces in an input image with ones selected from a large collection of
face images, obtained by applying face detection to publicly available photographs on the internet. In this example, the faces of (a) two
people are shown after (b) automatic replacement with the top three ranked candidates. Our system for face replacement can be used for face
de-identification, personalized face replacement, and creating an appealing group photograph from a set of “burst” mode images. Original
images in (a) used with permission from Retna Ltd. (top) and Getty Images Inc. (bottom).
Rendering, Computational Photography
1 Introduction
Advances in digital photography have made it possible to cap-
ture large collections of high-resolution images and share them
on the internet. While the size and availability of these col-
lections is leading to many exciting new applications, it is
also creating new problems. One of the most important of
these problems is privacy. Online systems such as Google
Street View (http://maps.google.com/help/maps/streetview) and
EveryScape (http://everyscape.com) allow users to interactively
navigate through panoramic images of public places created using
thousands of photographs. Many of the images contain people who
have not consented to be photographed, much less to have these
photographs publicly viewable. Identity protection by obfuscating
the face regions in the acquired photographs using blurring, pixelation,
or simply covering them with black pixels is often undesirable
as it diminishes the visual appeal of the image. Furthermore, many
('2085183', 'Dmitri Bitouk', 'dmitri bitouk')
('40631426', 'Neeraj Kumar', 'neeraj kumar')
('2057606', 'Samreen Dhillon', 'samreen dhillon')
('1750470', 'Shree K. Nayar', 'shree k. nayar')
6742c0a26315d7354ab6b1fa62a5fffaea06da14BAS AND SMITH: WHAT DOES 2D GEOMETRIC INFORMATION REALLY TELL US ABOUT 3D FACE SHAPE?
What does 2D geometric information
really tell us about 3D face shape?
('39180407', 'Anil Bas', 'anil bas')
('1687021', 'William A. P. Smith', 'william a. p. smith')
67a50752358d5d287c2b55e7a45cc39be47bf7d0
67c3c1194ee72c54bc011b5768e153a035068c43StreetScenes: Towards Scene Understanding in
Still Images
by
Stanley Michael Bileschi
Submitted to the Department of Electrical Engineering and Computer Science
in partial fulfillment of the requirements for the degree of
Doctor of Philosophy in Computer Science and Engineering
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
May 2006
© Massachusetts Institute of Technology 2006. All rights reserved.
Author: Department of Electrical Engineering and Computer Science, May 5, 2006
Certified by: Tomaso A. Poggio, McDermott Professor, Thesis Supervisor
Accepted by: Arthur C. Smith, Chairman, Department Committee on Graduate Students
673d4885370b27c863e11a4ece9189a6a45931ccRecurrent Residual Module for Fast Inference in Videos
Shanghai Jiao Tong University, 2Zhejiang University, 3Massachusetts Institute of Technology
networks for video recognition are more challenging. For
example, for Youtube-8M dataset [1] with over 8 million
video clips, it will take 50 years for a CPU to extract the
deep features using a standard CNN model.
('35654996', 'Bowen Pan', 'bowen pan')
('35992009', 'Wuwei Lin', 'wuwei lin')
('2126444', 'Xiaolin Fang', 'xiaolin fang')
('35933894', 'Chaoqin Huang', 'chaoqin huang')
('1804424', 'Bolei Zhou', 'bolei zhou')
('1830034', 'Cewu Lu', 'cewu lu')
†{googletornado,linwuwei13, huangchaoqin}@sjtu.edu.cn, ¶fxlfang@gmail.com
§bzhou@csail.mit.edu; ‡lu-cw@cs.sjtu.edu.cn
67c703a864aab47eba80b94d1935e6d244e00bcb (IJACSA) International Journal of Advanced Computer Science and Applications
Vol. 7, No. 6, 2016
Face Retrieval Based On Local Binary Pattern and Its
Variants: A Comprehensive Study
University of Science, VNU-HCM, Viet Nam
('3911040', 'Phan Khoi', 'phan khoi')
6754c98ba73651f69525c770fb0705a1fae78eb5Joint Cascade Face Detection and Alignment
University of Science and Technology of China
2 Microsoft Research
('39447786', 'Dong Chen', 'dong chen')
('3080683', 'Shaoqing Ren', 'shaoqing ren')
('1732264', 'Yichen Wei', 'yichen wei')
('47300766', 'Xudong Cao', 'xudong cao')
('40055995', 'Jian Sun', 'jian sun')
{chendong,sqren}@mail.ustc.edu.cn
{yichenw,xudongca,jiansun}@microsoft.com
672fae3da801b2a0d2bad65afdbbbf1b2320623ePose-Selective Max Pooling for Measuring Similarity
1Dept. of Computer Science
2Dept. of Electrical & Computer Engineering
Johns Hopkins University, 3400 N. Charles St, Baltimore, MD 21218, USA
('40031188', 'Xiang Xiang', 'xiang xiang')
('1709073', 'Trac D. Tran', 'trac d. tran')
xxiang@cs.jhu.edu
677ebde61ba3936b805357e27fce06c44513a455Facial Expression Recognition Based on Facial
Components Detection and HOG Features
The Hong Kong Polytechnic University, Hong Kong
Chu Hai College of Higher Education, Hong Kong
('2366262', 'Junkai Chen', 'junkai chen')
('1715231', 'Zenghai Chen', 'zenghai chen')
('8590720', 'Zheru Chi', 'zheru chi')
('1965426', 'Hong Fu', 'hong fu')
Email: Junkai.Chen@connect.polyu.hk
67ba3524e135c1375c74fe53ebb03684754aae56978-1-5090-4117-6/17/$31.00 ©2017 IEEE
1767
ICASSP 2017
6769cfbd85329e4815bb1332b118b01119975a95Tied factor analysis for face recognition across
large pose changes
0be43cf4299ce2067a0435798ef4ca2fbd255901Title
A temporal latent topic model for facial expression recognition
Author(s)
Shang, L; Chan, KP
Citation
The 10th Asian Conference on Computer Vision (ACCV 2010),
Queenstown, New Zealand, 8-12 November 2010. In Lecture
Notes in Computer Science, 2010, v. 6495, p. 51-63
Issued Date
2011
URL
http://hdl.handle.net/10722/142604
Rights
Creative Commons: Attribution 3.0 Hong Kong License
0bc53b338c52fc635687b7a6c1e7c2b7191f42e5ZHANG, BHALERAO: LOGLET SIFT FOR PART DESCRIPTION
Loglet SIFT for Part Description in
Deformable Part Models: Application to Face
Alignment
Department of Computer Science
University of Warwick
Coventry, UK
('39900385', 'Qiang Zhang', 'qiang zhang')
('2227351', 'Abhir Bhalerao', 'abhir bhalerao')
q.zhang.13@warwick.ac.uk
abhir.bhalerao@warwick.ac.uk
0b2277a0609565c30a8ee3e7e193ce7f79ab48b0944
Cost-Sensitive Semi-Supervised Discriminant
Analysis for Face Recognition
('1697700', 'Jiwen Lu', 'jiwen lu')
('3353607', 'Xiuzhuang Zhou', 'xiuzhuang zhou')
('1689805', 'Yap-Peng Tan', 'yap-peng tan')
('38152390', 'Yuanyuan Shang', 'yuanyuan shang')
('39491387', 'Jie Zhou', 'jie zhou')
0b9ce839b3c77762fff947e60a0eb7ebbf261e84Proceedings of the IASTED International Conference
Computer Vision (CV 2011)
June 1 - 3, 2011 Vancouver, BC, Canada
LOGARITHMIC FOURIER PCA: A NEW APPROACH TO FACE
RECOGNITION
1 Lakshmiprabha Nattamai Sekar,
omjyoti
Majumder
Surface Robotics Lab
Central Mechanical Engineering Research Institute
Mahatma Gandhi Avenue,
Durgapur - 713209, West Bengal, India.
('9155672', 'Jhilik Bhattacharya', 'jhilik bhattacharya')
email: 1 n prabha mech@cmeri.res.in, 2 bjhilik@cmeri.res.in, 3 sjm@cmeri.res.in
0b8b8776684009e537b9e2c0d87dbd56708ddcb4Adversarial Discriminative Heterogeneous Face Recognition
National Laboratory of Pattern Recognition, CASIA
Center for Research on Intelligent Perception and Computing, CASIA
Center for Excellence in Brain Science and Intelligence Technology, CAS
University of Chinese Academy of Sciences, Beijing 100190, China
('3051419', 'Lingxiao Song', 'lingxiao song')
('2567523', 'Man Zhang', 'man zhang')
('2225749', 'Xiang Wu', 'xiang wu')
('1705643', 'Ran He', 'ran he')
0ba64f4157d80720883a96a73e8d6a5f5b9f1d9b
0b6a5200c33434cbfa9bf24ba482f6e06bf5fff71
The Use of Deep Learning in Image
Segmentation, Classification and Detection
The Image Processing and Analysis Lab (LAPI), Politehnica University of Bucharest, Romania
('33789881', 'Mihai-Sorin Badea', 'mihai-sorin badea')
('3407753', 'Laura Maria Florea', 'laura maria florea')
('2905899', 'Constantin Vertan', 'constantin vertan')
0b605b40d4fef23baa5d21ead11f522d7af1df06Label-Embedding for Attribute-Based Classification
a Computer Vision Group∗, XRCE, France
b LEAR†, INRIA, France
('2893664', 'Zeynep Akata', 'zeynep akata')
('1723883', 'Florent Perronnin', 'florent perronnin')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
0b0eb562d7341231c3f82a65cf51943194add0bb
Facial Image Analysis Based on Local Binary
Patterns: A Survey
('40451093', 'Di Huang', 'di huang')
('10795229', 'Caifeng Shan', 'caifeng shan')
('40703561', 'Mohsen Ardebilian', 'mohsen ardebilian')
('40231048', 'Liming Chen', 'liming chen')
0b3a146c474166bba71e645452b3a8276ac05998Who’s in the Picture?
Berkeley, CA 94720
Computer Science Division
U.C. Berkeley
('1685538', 'Tamara L. Berg', 'tamara l. berg')
('39668247', 'Alexander C. Berg', 'alexander c. berg')
('34497462', 'Jaety Edwards', 'jaety edwards')
millert@cs.berkeley.edu
0b78fd881d0f402fd9b773249af65819e48ad36dANALYSIS AND MODELING OF AFFECTIVE AUDIO VISUAL SPEECH
BASED ON PAD EMOTION SPACE
Tsinghua University
('2180849', 'Shen Zhang', 'shen zhang')
('1856341', 'Yingjin Xu', 'yingjin xu')
('25714033', 'Jia Jia', 'jia jia')
('7239047', 'Lianhong Cai', 'lianhong cai')
{zhangshen05, xuyj03, jiajia}@mails.tsinghua.edu.cn, clh-dcs@tsinghua.edu.cn
0b835284b8f1f45f87b0ce004a4ad2aca1d9e153Cartooning for Enhanced Privacy in Lifelogging and Streaming Videos
David Crandall
School of Informatics and Computing
Indiana University Bloomington
('3053390', 'Eman T. Hassan', 'eman t. hassan')
('2221434', 'Rakibul Hasan', 'rakibul hasan')
('34507388', 'Patrick Shaffer', 'patrick shaffer')
('1996617', 'Apu Kapadia', 'apu kapadia')
{emhassan, rakhasan, patshaff, djcran, kapadia}@indiana.edu
0b5bd3ce90bf732801642b9f55a781e7de7fdde0
0b0958493e43ca9c131315bcfb9a171d52ecbb8aA Unified Neural Based Model for Structured Output Problems
Soufiane Belharbi∗1, Clément Chatelain∗1, Romain Hérault∗1, and Sébastien Adam∗2
1LITIS EA 4108, INSA de Rouen, Saint Étienne du Rouvray 76800, France
2LITIS EA 4108, UFR des Sciences, Université de Rouen, France.
April 13, 2015
0b51197109813d921835cb9c4153b9d1e12a9b34THE UNIVERSITY OF CHICAGO
JOINTLY LEARNING MULTIPLE SIMILARITY METRICS FROM TRIPLET
CONSTRAINTS
A DISSERTATION SUBMITTED TO
THE FACULTY OF THE DIVISION OF THE PHYSICAL SCIENCES
IN CANDIDACY FOR THE DEGREE OF
MASTER OF SCIENCE
DEPARTMENT OF COMPUTER SCIENCE
BY
CHICAGO, ILLINOIS
WINTER, 2015
('40504838', 'LIWEN ZHANG', 'liwen zhang')
0bf3513d18ec37efb1d2c7934a837dabafe9d091Robust Subspace Clustering via Thresholding Ridge Regression
Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore
College of Computer Science, Sichuan University, Chengdu 610065, P.R. China
('8249791', 'Xi Peng', 'xi peng')
('9276020', 'Zhang Yi', 'zhang yi')
('3134548', 'Huajin Tang', 'huajin tang')
pangsaai@gmail.com, zhangyi@scu.edu.cn, htang@i2r.a-star.edu.sg.
0b20f75dbb0823766d8c7b04030670ef7147ccdd1
Feature selection using nearest attributes
('1744784', 'Alex Pappachen James', 'alex pappachen james')
('1697594', 'Sima Dimitrijev', 'sima dimitrijev')
0b5a82f8c0ee3640503ba24ef73e672d93aeebbfOn Learning 3D Face Morphable Model
from In-the-wild Images
('1849929', 'Luan Tran', 'luan tran')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
0b174d4a67805b8796bfe86cd69a967d357ba9b6 Research Journal of Recent Sciences _________________________________________________ ISSN 2277-2502
Vol. 3(4), 56-62, April (2014)
Res.J.Recent Sci.
0ba449e312894bca0d16348f3aef41ca01872383
0b87d91fbda61cdea79a4b4dcdcb6d579f063884The Open Automation and Control Systems Journal, 2015, 7, 569-579
569
Open Access
Research on Theory and Method for Facial Expression Recognition Sys-
tem Based on Dynamic Image Sequence
School of Computer and Information Engineering, Nanyang Institute of Technology, Henan, Nanyang, 473000, P.R
China
Henan University of Traditional Chinese Medicine, Henan, Zhengzhou, 450000, P.R. China
('9296838', 'Yang Xinfeng', 'yang xinfeng')
('2083303', 'Jiang Shan', 'jiang shan')
Send Orders for Reprints to reprints@benthamscience.ae
0be2245b2b016de1dcce75ffb3371a5e4b1e731bOn the Variants of the Self-Organizing Map That Are
Based on Order Statistics
Aristotle University of Thessaloniki
Box 451, Thessaloniki 54124, Greece
('1762248', 'Vassiliki Moschou', 'vassiliki moschou')
('1711062', 'Dimitrios Ververidis', 'dimitrios ververidis')
('1736143', 'Constantine Kotropoulos', 'constantine kotropoulos')
{vmoshou, jimver, costas}@aiia.csd.auth.gr
0b79356e58a0df1d0efcf428d0c7c4651afa140dAppears In: Advances in Neural Information Processing Systems , MIT Press,  .
Bayesian Modeling of Facial Similarity
Mitsubishi Electric Research Laboratory
 Broadway
Cambridge, MA  , USA
Massachusettes Institute of Technology
 Ames St.
Cambridge, MA  , USA
('1780935', 'Baback Moghaddam', 'baback moghaddam')
('1768120', 'Tony Jebara', 'tony jebara')
('1682773', 'Alex Pentland', 'alex pentland')
baback@merl.com
fjebara,sandyg@media.mit.edu
0b572a2b7052b15c8599dbb17d59ff4f02838ff7Automatic Subspace Learning via Principal
Coefficients Embedding
('8249791', 'Xi Peng', 'xi peng')
('1697700', 'Jiwen Lu', 'jiwen lu')
('1709367', 'Zhang Yi', 'zhang yi')
('1680126', 'Rui Yan', 'rui yan')
0b85b50b6ff03a7886c702ceabad9ab8c8748fdchttp://www.journalofvision.org/content/11/3/17
Is there a dynamic advantage for facial expressions?
Institute of Child Health, University College London, UK
Laboratory of Neuromotor Physiology, Santa Lucia
Foundation, Rome, Italy
Some evidence suggests that it is easier to identify facial expressions (FEs) shown as dynamic displays than as photographs
(dynamic advantage hypothesis). Previously, this has been tested by using dynamic FEs simulated either by morphing a
neutral face into an emotional one or by computer animations. For the first time, we tested the dynamic advantage hypothesis
by using high-speed recordings of actors’ FEs. In the dynamic condition, stimuli were graded blends of two recordings
(duration: 4.18 s), each describing the unfolding of an expression from neutral to apex. In the static condition, stimuli (duration:
3 s) were blends of just the apex of the same recordings. Stimuli for both conditions were generated by linearly morphing one
expression into the other. Performance was estimated by a forced-choice task asking participants to identify which prototype
the morphed stimulus was more similar to. Identification accuracy was not different between conditions. Response times (RTs)
measured from stimulus onset were shorter for static than for dynamic stimuli. Yet, most responses to dynamic stimuli were
given before expressions reached their apex. Thus, with a threshold model, we tested whether discriminative information is
integrated more effectively in dynamic than in static conditions. We did not find any systematic difference. In short, neither
identification accuracy nor RTs supported the dynamic advantage hypothesis.
Keywords: facial expressions, dynamic advantage, emotion, identification
1–15, http://www.journalofvision.org/content/11/3/17, doi:10.1167/11.3.17.
Introduction
Research on emotion recognition has relied primarily on static images of
intense facial expressions (FEs), which—despite being accurately identified
(Ekman & Friesen, 1982)—are fairly impoverished representations of real-life
FEs. As a motor behavior determined by facial muscle actions, expressions
are intrinsically dynamic. Insofar as detecting moment-to-moment changes
in others' affective states is fundamental for regulating social interactions
(Yoshikawa & Sato, 2008), visual sensitivity to the dynamic properties of FEs
might be an important aspect of our emotion recognition abilities.
There is considerable evidence that dynamic information is not redundant
and may be beneficial for various aspects of face processing, including age
(Berry, 1990), sex (Hill & Johnston, 2001; Mather & Murdoch, 1994), and
identity (Hill & Johnston, 2001; Lander, Christie, & Bruce, 1999; see
O'Toole, Roark, & Abdi, 2002 for a review) recognition. In real life, static
information—such as the invariant geometrical parameters of the facial
features—and dynamic information describing the contraction of the
expressive muscles are closely intertwined and contribute jointly to the
overall perception. The relative contribution of either type of cue, which
is likely to depend on the meaning that one is asked to extract from the
stimulus, is still poorly understood. Pure motion information is sufficient
to recognize a person's identity and sex (Hill & Johnston, 2001). Other
studies have shown that face identity is better recognized from dynamic
than static displays when the stimuli are degraded (e.g., shown as negatives,
upside down, thresholded, pixelated, or blurred). However, the advantage
disappears with unmodified stimuli (Knight & Johnston, 1997; Lander et al.,
1999). In short, insofar as recognition of identity from complete static
images is already close to perfect, motion appears to be beneficial only when
static information is insufficient or has been manipulated (Katsiri, 2006;
O'Toole et al., 2002).
In comparison to face identity, fewer studies have investigated the role of
dynamic information in FE recognition (see Katsiri, 2006, for a review).
Taken together, they seem to suggest that the process of emotion
identification is facilitated when expressions are dynamic rather than static.
However, because of various methodological issues and conceptual
inconsistencies across studies, this suggestion needs to be qualified. We can
divide the available studies in three main groups.
First, there are studies showing that dynamic information improves
expression recognition in a variety of suboptimal conditions, i.e., when
static information is either unavailable or is only partially accessible. As in
the case of identity recognition, emotions can be inferred from animated
point-light descriptions of the faces that neglect facial features (Bassili,
1978, 1979; see also Bruce & Valentine, 1988). Furthermore, in various
neuropsychological and developmental conditions, there is evidence that
dynamic presentation improves emotion recognition with
doi: 10.1167/11.3.17
Received November 18, 2010; published March 22, 2011
ISSN 1534-7362 * ARVO
('34569930', 'Chiara Fiorentini', 'chiara fiorentini')
('32709245', 'Paolo Viviani', 'paolo viviani')
0b84f07af44f964817675ad961def8a51406dd2ePerson Re-identification in the Wild
3USTC
4UCSD
University of Technology Sydney
2UTSA
('14904242', 'Liang Zheng', 'liang zheng')
('1983351', 'Hengheng Zhang', 'hengheng zhang')
('3141359', 'Shaoyan Sun', 'shaoyan sun')
('1698559', 'Yi Yang', 'yi yang')
('1713616', 'Qi Tian', 'qi tian')
{liangzheng06,manu.chandraker,yee.i.yang,wywqtian}@gmail.com
0b242d5123f79defd5f775d49d8a7047ad3153bcCBMM Memo No. 36
September 15, 2015
How Important is Weight Symmetry in
Backpropagation?
by
Center for Brains, Minds and Machines, McGovern Institute, MIT
('1694846', 'Qianli Liao', 'qianli liao')
('1700356', 'Joel Z. Leibo', 'joel z. leibo')
0ba1d855cd38b6a2c52860ae4d1a85198b304be4Variable-state Latent Conditional Random Fields
for Facial Expression Recognition and Action Unit Detection
Imperial College London, UK
Rutgers University, USA
('2616466', 'Robert Walecki', 'robert walecki')
('1729713', 'Ognjen Rudovic', 'ognjen rudovic')
('1736042', 'Vladimir Pavlovic', 'vladimir pavlovic')
('1694605', 'Maja Pantic', 'maja pantic')
0b50e223ad4d9465bb92dbf17a7b79eccdb997fbImplicit Elastic Matching with Random Projections for Pose-Variant Face
Recognition
Electrical and Computer Engineering
University of Illinois at Urbana-Champaign
Microsoft Live Labs Research
('1738310', 'John Wright', 'john wright')
('1745420', 'Gang Hua', 'gang hua')
ganghua@microsoft.com
jnwright@uiuc.edu
0badf61e8d3b26a0d8b60fe94ba5c606718daf0bRev. Téc. Ing. Univ. Zulia. Vol. 39, Nº 2, 384 - 392, 2016
Facial Expression Recognition Using Deep Belief Network
School of Information Science and Technology, Northwestern University, Xi'an 710127, Shaanxi, China
Teaching Affairs Office, Chongqing Normal University, Chongqing 401331, China
School of Information Science and Technology, Northwestern University, Xi'an 710127, Shaanxi, China
School of Computer and Information Science, Chongqing Normal University 401331, China
Deli Zhu
('3439338', 'Yunong Yang', 'yunong yang')
('2068791', 'Dingyi Fang', 'dingyi fang')
0b02bfa5f3a238716a83aebceb0e75d22c549975Learning Probabilistic Models for Recognizing Faces
under Pose Variations
Computer Vision and Remote Sensing, Berlin University of Technology
Sekr. FR-3-1, Franklinstr. 28/29, Berlin, Germany
('2326207', 'M. Saquib', 'm. saquib')
('2962236', 'Olaf Hellwich', 'olaf hellwich')
{saquib;hellwich}@fpk.tu-berlin.de
0bce54bfbd8119c73eb431559fc6ffbba741e6aaPublished as a conference paper at ICLR 2018
SKIP RNN: LEARNING TO SKIP STATE UPDATES IN
RECURRENT NEURAL NETWORKS
†Barcelona Supercomputing Center, ‡Google Inc,
Universitat Politècnica de Catalunya, Columbia University
('2447185', 'Brendan Jou', 'brendan jou')
('1711068', 'Jordi Torres', 'jordi torres')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
{victor.campos, jordi.torres}@bsc.es, bjou@google.com,
xavier.giro@upc.edu, shih.fu.chang@columbia.edu
0b2966101fa617b90510e145ed52226e79351072Beyond Verbs: Understanding Actions in Videos
with Text
Department of Computer Science
University of Manitoba
Winnipeg, MB, Canada
Department of Computer Science
University of Manitoba
Winnipeg, MB, Canada
('3056962', 'Shujon Naha', 'shujon naha')
('2295608', 'Yang Wang', 'yang wang')
Email: shujon@cs.umanitoba.ca
Email: ywang@cs.umanitoba.ca
0ba0f000baf877bc00a9e144b88fa6d373db2708Facial Expression Recognition Based on Local
Directional Pattern Using SVM Decision-level Fusion
1. Key Laboratory of Education Informatization for Nationalities, Ministry of Education, Yunnan Normal University, Kunming, China
2. College of Information, Yunnan Normal University, Kunming, China
('2535958', 'Juxiang Zhou', 'juxiang zhou')
('3305175', 'Tianwei Xu', 'tianwei xu')
('2411704', 'Jianhou Gan', 'jianhou gan')
{zjuxiang@126.com,xutianwei@ynnu.edu.cn,kmganjh@yahoo.com.cn}
0be80da851a17dd33f1e6ffdd7d90a1dc7475b96Hindawi Publishing Corporation
Computational Intelligence and Neuroscience
Volume 2016, Article ID 7696035, 7 pages
http://dx.doi.org/10.1155/2016/7696035
Research Article
Weighted Feature Gaussian Kernel SVM for
Emotion Recognition
School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
Received 26 June 2016; Revised 14 August 2016; Accepted 14 September 2016
Academic Editor: Francesco Camastra
Copyright © 2016 W. Wei and Q. Jia. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Emotion recognition with weighted features based on facial expression is a challenging research topic that has attracted great attention in the past few years. This paper presents a novel method that uses subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight of each. Then, we build a weighted-feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, experimental results suggest that the approach based on the weighted-feature Gaussian kernel achieves good accuracy in emotion recognition. Experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods.
1. Introduction
Emotion recognition has important applications in the real
world, including but not limited to artificial intelligence and
human-computer interaction, and it remains a challenging and
attractive topic. Many methods have been proposed for handling
problems in emotion recognition: speech [1, 2], physiological
[3–5], and visual signals have all been explored. Speech signals
are discontinuous, since they can be captured only while people
are talking, and acquiring physiological signals requires special
sensors. For these reasons, visual signals are the most practical
choice for emotion recognition. Although the visual information
provided is useful, there are challenges in how to utilize it
reliably and robustly. According to Albert Mehrabian's 7%–
38%–55% rule, facial expression is an important means of
detecting emotions [6].
Many studies of emotion recognition from facial expression
images have been carried out during the last decade [7, 8].
Given a facial expression image, the task is to estimate the
correct emotional state, such as anger, happiness, sadness, or
surprise. The general process has two steps: feature extraction
and classification. For feature extraction, geometric, texture,
motion, and statistical features are in common use. For
classification, machine learning algorithms are frequently used.
Because features differ in importance, applying weighted
features within machine learning algorithms has become an
active research topic.
In recent years, emotion recognition with weighted features
based on facial expression has become a new research topic and
received more and more attention [9, 10]. The aim is to estimate
the emotion type from a facial expression image captured during
a subject's physical facial expression process. However, the
emotion features captured from the image are strongly linked
not to the whole face but to specific facial regions. For instance,
features of the eyebrow, eye, nose, and mouth areas are closely
related to facial expression [11]. Moreover, each feature has a
different effect on the recognition result. To make the best use
of the features, feature weighting can further enhance
recognition performance. While there are several approaches to
determining weights, how to select features and calculate the
corresponding weights effectively remains an open issue.
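As a rough illustration of this idea (the subregion slices, the accuracy-derived weights, and all names below are assumptions for the sketch, not the authors' exact formulation), a weighted-feature Gaussian kernel can be precomputed and handed to an SVM:

```python
import numpy as np
from sklearn.svm import SVC

# Sketch: each sample is a concatenation of per-subregion feature vectors,
# and each subregion r carries a weight w_r (e.g., its standalone
# recognition rate, normalized). The kernel is
#   K(x, y) = exp(-sum_r w_r * ||x_r - y_r||^2 / (2 * sigma^2))

def weighted_gaussian_kernel(X, Y, region_slices, weights, sigma=1.0):
    D = np.zeros((X.shape[0], Y.shape[0]))
    for sl, w in zip(region_slices, weights):
        # squared distances restricted to this subregion's columns
        diff = X[:, None, sl] - Y[None, :, sl]
        D += w * (diff ** 2).sum(axis=-1)
    return np.exp(-D / (2.0 * sigma ** 2))

# Usage: two hypothetical subregions (e.g., eye and mouth descriptors),
# weighted by their per-region recognition rates.
rng = np.random.RandomState(0)
X_train = rng.randn(20, 8)
y_train = (X_train[:, 0] > 0).astype(int)
regions = [slice(0, 4), slice(4, 8)]
weights = [0.7, 0.3]

K_train = weighted_gaussian_kernel(X_train, X_train, regions, weights)
clf = SVC(kernel="precomputed").fit(K_train, y_train)
K_test = weighted_gaussian_kernel(X_train[:5], X_train, regions, weights)
pred = clf.predict(K_test)
```

Since the kernel is precomputed, the test kernel must be evaluated against the training set, which is why `K_test` has shape (n_test, n_train).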
In this paper, a new emotion recognition method based on
weighted facial expression features is presented. It is motivated
by the fact that emotion can be described by facial expression
and that each facial expression feature has a different impact on
recognition results. Different from previous works
('39248132', 'Wei Wei', 'wei wei')
('2301733', 'Qingxuan Jia', 'qingxuan jia')
Correspondence should be addressed to Wei Wei; wei wei@bupt.edu.cn
0b183f5260667c16ef6f640e5da50272c36d599bSpatio-temporal Event Classification Using
Time-Series Kernel Based Structured Sparsity
László A. Jeni1, András Lőrincz2, Zoltán Szabó3,
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
Faculty of Informatics, Eötvös Loránd University, Budapest, Hungary
Gatsby Computational Neuroscience Unit, University College London, London, UK
University of Pittsburgh, Pittsburgh, PA, USA
('1733113', 'Takeo Kanade', 'takeo kanade')
laszlo.jeni@ieee.org, andras.lorincz@elte.hu,
zoltan.szabo@gatsby.ucl.ac.uk, {jeffcohn,tk}@cs.cmu.edu
0b4c4ea4a133b9eab46b217e22bda4d9d13559e6MORF: Multi-Objective Random Forests for Face Characteristic Estimation
MICC - University of Florence
2CVC - Universitat Autonoma de Barcelona
DVMM Lab - Columbia University
('37822746', 'Dario Di Fina', 'dario di fina')
('2602265', 'Svebor Karaman', 'svebor karaman')
('1749498', 'Andrew D. Bagdanov', 'andrew d. bagdanov')
('8196487', 'Alberto Del Bimbo', 'alberto del bimbo')
{dario.difina, alberto.delbimbo}@unifi.it, svebor.karaman@columbia.edu, bagdanov@cvc.uab.es
0be764800507d2e683b3fb6576086e37e56059d1Learning from Geometry
by
Department of Electrical and Computer Engineering
Duke University
Date:
Approved:
Robert Calderbank, Supervisor
Lawrence Carin
Ingrid Daubechies
Galen Reeves
Guillermo Sapiro
Dissertation submitted in partial fulfillment of the requirements for the degree of
Doctor of Philosophy in the Department of Electrical and Computer Engineering
in the Graduate School of Duke University
2016
('34060310', 'Jiaji Huang', 'jiaji huang')
0b642f6d48a51df64502462372a38c50df2051b1A Domain Adaptation Approach to Improve
Speaker Turn Embedding Using Face Representation
Idiap Research Institute, Martigny, Switzerland
École Polytechnique Fédéral de Lausanne, Switzerland
('39560344', 'Nam Le', 'nam le')
('1719610', 'Jean-Marc Odobez', 'jean-marc odobez')
nle@idiap.ch,odobez@idiap.ch
0b7d1386df0cf957690f0fe330160723633d2305Learning American English Accents Using Ensemble Learning with GMMs
Department of Computer Science
Rensselaer Polytechnic Institute
Troy, NY 12180
Department of Computer Science
Rensselaer Polytechnic Institute
Troy, NY 12180
('38769302', 'Jonathan T. Purnell', 'jonathan t. purnell')
('1705107', 'Malik Magdon-Ismail', 'malik magdon-ismail')
purnej@cs.rpi.edu
magdon@cs.rpi.edu
0b6616f3ebff461e4b6c68205fcef1dae43e2a1aRectifying Self Organizing Maps
for Automatic Concept Learning from Web Images
Bilkent University
06800 Ankara/Turkey
Pinar Duygulu
Bilkent University
06800 Ankara/Turkey
('2540074', 'Eren Golge', 'eren golge')
eren.golge@bilkent.edu.tr
pinar.duygulu@gmail.com
0b8c92463f8f5087696681fb62dad003c308ebe2On Matching Sketches with Digital Face Images
('2559473', 'Himanshu S. Bhatt', 'himanshu s. bhatt')
('34173298', 'Samarth Bharadwaj', 'samarth bharadwaj')
('39129417', 'Richa Singh', 'richa singh')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
0bc0f9178999e5c2f23a45325fa50300961e0226Recognizing facial expressions from videos using Deep
Belief Networks
CS 229 Project
('34699434', 'Andrew Ng', 'andrew ng')
Adithya Rao (adithyar@stanford.edu), Narendran Thiagarajan (naren@stanford.edu)
0ba402af3b8682e2aa89f76bd823ddffdf89fa0aSquared Earth Mover’s Distance-based Loss for Training Deep Neural Networks
Computer Science Department
Stony Brook University
Cognitive Neuroscience Lab
Computer Science Department
Harvard University
Stony Brook University
('2321406', 'Le Hou', 'le hou')
('2576295', 'Chen-Ping Yu', 'chen-ping yu')
('1686020', 'Dimitris Samaras', 'dimitris samaras')
lehhou@cs.stonybrook.edu
chenpingyu@fas.harvard.edu
samaras@cs.stonybrook.edu
0bf0029c9bdb0ac61fda35c075deb1086c116956Article
Modelling of Orthogonal Craniofacial Profiles
University of York, Heslington, York YO10 5GH, UK
Received: 20 October 2017; Accepted: 23 November 2017; Published: 30 November 2017
('1694260', 'Hang Dai', 'hang dai')
('1737428', 'Nick Pears', 'nick pears')
('1678859', 'Christian Duncan', 'christian duncan')
nick.pears@york.ac.uk
2 Alder Hey Children’s Hospital, Liverpool L12 2AP, UK; Christian.Duncan@alderhey.nhs.uk
* Correspondence: hd816@york.ac.uk; Tel.: +44-1904-325-643
9391618c09a51f72a1c30b2e890f4fac1f595ebdGlobally Tuned Cascade Pose Regression via
Back Propagation with Application in 2D Face
Pose Estimation and Heart Segmentation in 3D
CT Images
Dalio Institute of Cardiovascular Imaging, Weill Cornell Medical College
April 1, 2015
This work was submitted to ICML 2015 but was rejected. We include the initial
submission "as is" on pages 2-11 and add updated content at the end. The
code of this work is available at https://github.com/pengsun/bpcpr5.
Peng Sun pes2021@med.cornell.edu
James K Min jkm2001@med.cornell.edu
Guanglei Xiong gux2003@med.cornell.edu
93675f86d03256f9a010033d3c4c842a732bf661Université des Sciences et Technologies de Lille, Ecole Doctorale Sciences Pour l'Ingénieur, Université Lille Nord-de-France
Thesis presented at the Université des Sciences et Technologies de Lille to obtain the title of Doctor, specialty: Micro and Nanotechnology
By Tao XU: Localized growth and characterization of silicon nanowires
Defended 25 September 2009
Jury: President: Tuami LASRI; Reviewers: Thierry BARON, Henri MARIETTE; Examiners: Eric BAKKERS, Xavier WALLART; Thesis advisor: Bruno GRANDIDIER
935a7793cbb8f102924fa34fce1049727de865c2AGE ESTIMATION UNDER CHANGES IN IMAGE QUALITY: AN EXPERIMENTAL STUDY
ISLA Lab, Informatics Institute, University of Amsterdam
('1765602', 'Fares Alnajar', 'fares alnajar')
('1695527', 'Theo Gevers', 'theo gevers')
('1968574', 'Sezer Karaoglu', 'sezer karaoglu')
9326d1390e8601e2efc3c4032152844483038f3fLandmark Based Facial Component Reconstruction
for Recognition Across Pose
Department of Mechanical Engineering
National Taiwan University of Science and Technology
Taipei, Taiwan
('38801529', 'Gee-Sern Hsu', 'gee-sern hsu')
('3329222', 'Hsiao-Chia Peng', 'hsiao-chia peng')
('2329565', 'Kai-Hsiang Chang', 'kai-hsiang chang')
Email: ∗jison@mail.ntust.edu.tw
93747de3d40376761d1ef83ffa72ec38cd385833COGNITION AND EMOTION, 2015
http://dx.doi.org/10.1080/02699931.2015.1039494
Team members’ emotional displays as indicators
of team functioning
University of Amsterdam, Amsterdam, The
Netherlands
University of Amsterdam, Amsterdam, The Netherlands
Ross School of Business, University of Michigan, Ann Arbor, MI, USA
(Received 18 August 2014; accepted 6 April 2015)
Emotions are inherent to team life, yet it is unclear how observers use team members’ emotional
expressions to make sense of team processes. Drawing on Emotions as Social Information theory, we
propose that observers use team members’ emotional displays as a source of information to predict the
team’s trajectory. We argue and show that displays of sadness elicit more pessimistic inferences
regarding team dynamics (e.g., trust, satisfaction, team effectiveness, conflict) compared to displays of
happiness. Moreover, we find that this effect is strengthened when the future interaction between the
team members is more ambiguous (i.e., under ethnic dissimilarity; Study 1) and when emotional
displays can be clearly linked to the team members’ collective experience (Study 2). These studies shed
light on when and how people use others’ emotional expressions to form impressions of teams.
Keywords: Emotions as social information; Impression formation; Team functioning; Sense-making.
How do people make sense of social collectives? This
question has a long-standing interest in the social
sciences (Hamilton & Sherman, 1996), because
observers’ understanding of what goes on between
other individuals informs their behavioural responses
(Abelson, Dasgupta, Park, & Banaji, 1998; Magee &
Tiedens, 2006). A special type of social collective is
the team, in which individuals work together on a
joint task (Ilgen, 1999). There are many reasons why
outside observers may want to develop an under-
standing of a team’s functioning and future trajectory,
for instance because their task is to supervise the team
or because they are considering sponsoring or poten-
tially joining the team as a member. However,
making sense of a team’s trajectory is an uncertain
endeavour because explicit information about team
functioning is often not available. This problem is
further exacerbated by the fact that team ventures are
simultaneously potent and precarious. When indivi-
duals join forces in teams, great achievements can be
obtained (Guzzo & Dickson, 1996), but teams are
also a potential breeding ground for myriad negative
outcomes such as intra-team conflicts, social inhibi-
tion, decision-making biases and productivity losses
(Jehn, 1995; Kerr & Tindale, 2004). We propose
that, in their sense-making efforts, observers there-
fore make use of dynamic signals that provide up-to-
date diagnostic information about the likely trajectory
Correspondence should be addressed to: Astrid C. Homan, University of Amsterdam, Weesperplein, 1018 XA Amsterdam, The Netherlands. E-mail: ac.homan@uva.nl
© 2015 Taylor & Francis
('2863272', 'Jeffrey Sanchez-Burks', 'jeffrey sanchez-burks')
936c7406de1dfdd22493785fc5d1e5614c6c28822012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 762–772,
Montréal, Canada, June 3-8, 2012. ©2012 Association for Computational Linguistics
93721023dd6423ab06ff7a491d01bdfe83db7754ROBUST FACE ALIGNMENT USING CONVOLUTIONAL NEURAL
NETWORKS
Orange Labs, 4, Rue du Clos Courtel, 35512 Cesson-Sévigné, France
Keywords:
Face alignment, Face registration, Convolutional Neural Networks.
('1762557', 'Stefan Duffner', 'stefan duffner')
('34798028', 'Christophe Garcia', 'christophe garcia')
{stefan.duffner, christophe.garcia}@orange-ftgroup.com
93971a49ef6cc88a139420349a1dfd85fb5d3f5cScalable Probabilistic Models:
Applied to Face Identification in the Wild
Biometric Person Recognition Group
Idiap Research Institute
Rue Marconi 19 PO Box 592
1920 Martigny
('2121764', 'Laurent El Shafey', 'laurent el shafey')
laurent.el-shafey@idiap.ch
sebastien.marcel@idiap.ch
93420d9212dd15b3ef37f566e4d57e76bb2fab2fAn All-In-One Convolutional Neural Network for Face Analysis
Center for Automation Research, UMIACS, University of Maryland, College Park, MD
('48467498', 'Rajeev Ranjan', 'rajeev ranjan')
('2716670', 'Swami Sankaranarayanan', 'swami sankaranarayanan')
('38171682', 'Carlos D. Castillo', 'carlos d. castillo')
('9215658', 'Rama Chellappa', 'rama chellappa')
{rranjan1,swamiviv,carlos,rama}@umiacs.umd.edu
93af36da08bf99e68c9b0d36e141ed8154455ac2Workshop track - ICLR 2018
ADDITIVE MARGIN SOFTMAX
FOR FACE VERIFICATION
Department of Information and Communication Engineering
University of Electronic Science and Technology of China
Chengdu, Sichuan 611731 China
College of Computing
Georgia Institute of Technology
Atlanta, United States.
Department of Information and Communication Engineering
University of Electronic Science and Technology of China
Chengdu, Sichuan 611731 China
('47939378', 'Feng Wang', 'feng wang')
('51094998', 'Weiyang Liu', 'weiyang liu')
('8424682', 'Haijun Liu', 'haijun liu')
feng.wff@gmail.com
{wyliu, hanjundai}@gatech.edu
haijun liu@126.com chengjian@uestc.edu.cn
93cbb3b3e40321c4990c36f89a63534b506b6dafIEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 35, NO. 3, JUNE 2005
477
Learning From Examples in the Small Sample Case:
Face Expression Recognition
('1822413', 'Guodong Guo', 'guodong guo')
('1724754', 'Charles R. Dyer', 'charles r. dyer')
937ffb1c303e0595317873eda5ce85b1a17f9943Eyes Do Not Lie: Spontaneous versus Posed Smiles
Intelligent Systems Lab Amsterdam, University of Amsterdam
Science Park 107, Amsterdam, The Netherlands
('9301018', 'Roberto Valenti', 'roberto valenti')
('1764521', 'Albert Ali Salah', 'albert ali salah')
('1695527', 'Theo Gevers', 'theo gevers')
h.dibeklioglu@uva.nl, r.valenti@uva.nl, a.a.salah@uva.nl, th.gevers@uva.nl
93f37c69dd92c4e038710cdeef302c261d3a4f92Compressed Video Action Recognition
Philipp Krähenbühl1
The University of Texas at Austin, 2Carnegie Mellon University
University of Southern California, 4A9, 5Amazon
('2978413', 'Chao-Yuan Wu', 'chao-yuan wu')
('1771307', 'Manzil Zaheer', 'manzil zaheer')
('2804000', 'Hexiang Hu', 'hexiang hu')
('1691629', 'Alexander J. Smola', 'alexander j. smola')
('1758550', 'R. Manmatha', 'r. manmatha')
cywu@cs.utexas.edu
manzil@cmu.edu
smola@amazon.com
hexiangh@usc.edu
philkr@cs.utexas.edu
manmatha@a9.com
936227f7483938097cc1cdd3032016df54dbd5b6Learning to generalize to new compositions in image understanding
Gonda Brain Research Center, Bar Ilan University, Israel
3Google Research, Mountain View CA, USA
Tel Aviv University, Israel
('34815079', 'Yuval Atzmon', 'yuval atzmon')
('1750652', 'Jonathan Berant', 'jonathan berant')
('3451674', 'Vahid Kezami', 'vahid kezami')
('1786843', 'Amir Globerson', 'amir globerson')
('1732280', 'Gal Chechik', 'gal chechik')
yuval.atzmon@biu.ac.il
939123cf21dc9189a03671484c734091b240183eWithin- and Cross- Database Evaluations for Gender
Classification via BeFIT Protocols
Idiap Research Institute
Centre du Parc, Rue Marconi 19, CH-1920, Martigny, Switzerland
('2128163', 'Nesli Erdogmus', 'nesli erdogmus')
('2059725', 'Matthias Vanoni', 'matthias vanoni')
Email: nesli.erdogmus, matthias.vanoni, marcel@idiap.ch
938ae9597f71a21f2e47287cca318d4a2113feb2Classifier Learning with Prior Probabilities
for Facial Action Unit Recognition
1National Laboratory of Pattern Recognition, CASIA
University of Chinese Academy of Sciences
Rensselaer Polytechnic Institute
('49889545', 'Yong Zhang', 'yong zhang')
('38690089', 'Weiming Dong', 'weiming dong')
('39495638', 'Bao-Gang Hu', 'bao-gang hu')
('1726583', 'Qiang Ji', 'qiang ji')
zhangyong201303@gmail.com, weiming.dong@ia.ac.cn, hubg@nlpr.ia.ac.cn, qji@ecse.rpi.edu
94b9c0a6515913bad345f0940ee233cdf82fffe1International Journal of Science and Research (IJSR)
ISSN (Online): 2319-7064
Impact Factor (2012): 3.358
Face Recognition using Local Ternary Pattern for
Low Resolution Image
Research Scholar, CGC Group of Colleges, Gharuan, Punjab, India
Chandigarh University, Gharuan, Punjab, India
('40440964', 'Amanpreet Kaur', 'amanpreet kaur')
946017d5f11aa582854ac4c0e0f1b18b06127ef1Tracking Persons-of-Interest
via Adaptive Discriminative Features
Xi'an Jiaotong University
Hanyang University
University of Illinois, Urbana-Champaign
University of California, Merced
http://shunzhang.me.pn/papers/eccv2016/
('2481388', 'Shun Zhang', 'shun zhang')
('1698965', 'Yihong Gong', 'yihong gong')
('3068086', 'Jia-Bin Huang', 'jia-bin huang')
('33047058', 'Jongwoo Lim', 'jongwoo lim')
('32014778', 'Jinjun Wang', 'jinjun wang')
('1752333', 'Narendra Ahuja', 'narendra ahuja')
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
94eeae23786e128c0635f305ba7eebbb89af0023Journal of Machine Learning Research 18 (2018) 1-34
Submitted 01/17; Revised 4/18; Published 6/18
Emergence of Invariance and Disentanglement
in Deep Representations∗
Department of Computer Science
University of California
Los Angeles, CA 90095, USA
Department of Computer Science
University of California
Los Angeles, CA 90095, USA
Editor: Yoshua Bengio
('16163297', 'Alessandro Achille', 'alessandro achille')
('1715959', 'Stefano Soatto', 'stefano soatto')
achille@cs.ucla.edu
soatto@cs.ucla.edu
944faf7f14f1bead911aeec30cc80c861442b610Action Tubelet Detector for Spatio-Temporal Action Localization
('1881509', 'Vicky Kalogeiton', 'vicky kalogeiton')
('2492127', 'Philippe Weinzaepfel', 'philippe weinzaepfel')
('1749692', 'Vittorio Ferrari', 'vittorio ferrari')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
9458c518a6e2d40fb1d6ca1066d6a0c73e1d6b73
A Benchmark and Comparative Study of
Video-Based Face Recognition
on COX Face Database
('7945869', 'Zhiwu Huang', 'zhiwu huang')
('1685914', 'Shiguang Shan', 'shiguang shan')
('3373117', 'Ruiping Wang', 'ruiping wang')
('1705483', 'Haihong Zhang', 'haihong zhang')
('1710195', 'Shihong Lao', 'shihong lao')
('2378840', 'Alifu Kuerban', 'alifu kuerban')
('1710220', 'Xilin Chen', 'xilin chen')
948af4b04b4a9ae4bff2777ffbcb29d5bfeeb494Available online at www.sciencedirect.com
Procedia Engineering 41 (2012) 465–472
International Symposium on Robotics and Intelligent Sensors 2012 (IRIS 2012)
Face Recognition From Single Sample Per Person by Learning of
Generic Discriminant Vectors
aFaculty of Electrical Engineering, University of Technology MARA, Shah Alam, 40450 Selangor, Malaysia
bFaculty of Engineering, International Islamic University, Jalan Gombak, 53100 Kuala Lumpur, Malaysia
('7453141', 'Fadhlan Hafiz', 'fadhlan hafiz')
('2412523', 'Amir A. Shafie', 'amir a. shafie')
('9146253', 'Yasir Mohd Mustafah', 'yasir mohd mustafah')
9487cea80f23afe9bccc94deebaa3eefa6affa99Fast, Dense Feature SDM on an iPhone
Queensland University of Technology, Brisbane, Queensland, Australia
Carnegie Mellon University, Pittsburgh, PA, USA
('3231493', 'Ashton Fagg', 'ashton fagg')
('1820249', 'Simon Lucey', 'simon lucey')
('1729760', 'Sridha Sridharan', 'sridha sridharan')
9441253b638373a0027a5b4324b4ee5f0dffd670A Novel Scheme for Generating Secure Face
Templates Using BDA
P.G. Student, Department of Computer Engineering,
Associate Professor, Department of Computer
MCERC,
Nashik (M.S.), India
('40075681', 'Shraddha S. Shinde', 'shraddha s. shinde')
('2590072', 'Anagha P. Khedkar', 'anagha p. khedkar')
e-mail: shraddhashinde@gmail.com
949699d0b865ef35b36f11564f9a4396f5c9cddbAnders, Ende, Junghofer, Kissler & Wildgruber (Eds.)
ISSN 0079-6123
CHAPTER 18
Processing of facial identity and expression: a
psychophysical, physiological and computational
perspective
Sarah D. Chiller-Glaus2
Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany
University of Zurich, Zurich, Switzerland
('2388249', 'Adrian Schwaninger', 'adrian schwaninger')
('1793750', 'Christian Wallraven', 'christian wallraven')
94e259345e82fa3015a381d6e91ec6cded3971b4Classification of Photometric Factors
Based on Photometric Linearization
The Institute of Scientific and Industrial Research, Osaka University
8-1 Mihogaoka, Ibaraki-shi, Osaka 567-0047, JAPAN
2 Matsushita Electric Industrial Co., Ltd.
Okayama University
Okayama-shi, Okayama 700-8530, JAPAN
('3155610', 'Yasuhiro Mukaigawa', 'yasuhiro mukaigawa')
('2740479', 'Yasunori Ishii', 'yasunori ishii')
('1695509', 'Takeshi Shakunaga', 'takeshi shakunaga')
mukaigaw@am.sanken.osaka-u.ac.jp
0efdd82a4753a8309ff0a3c22106c570d8a84c20LDA WITH SUBGROUP PCA METHOD FOR FACIAL IMAGE RETRIEVAL
Human Computer Interaction Lab., Samsung Advanced Institute of Technology, Korea
('34600044', 'Wonjun Hwang', 'wonjun hwang')
('1700968', 'Tae-Kyun Kim', 'tae-kyun kim')
('37980373', 'Seokcheol Kee', 'seokcheol kee')
wjhwang@sait.samsung.co.kr
0e5dcc6ae52625fd0637c6bba46a973e46d58b9cPareto Models for Multiclass Discriminative Linear
Dimensionality Reduction
University of Alberta, Edmonton, AB T6G 2E8, Canada
bRobotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A
cCentre of Intelligent Machines, McGill University, Montréal, QC H3A 0E9, Canada
('3141839', 'Fernando De La Torre', 'fernando de la torre')
('1701344', 'Frank P. Ferrie', 'frank p. ferrie')
0e73d2b0f943cf8559da7f5002414ccc26bc77cdSimilarity Comparisons for Interactive Fine-Grained Categorization
California Institute of Technology
vision.caltech.edu
Serge Belongie4
Toyota Technological Institute at Chicago
ttic.edu
4 Cornell Tech
tech.cornell.edu
Approach
1) Image
Database w/
Class Labels
2) Collect Similarity
Comparisons
3) Learn Perceptual
Embedding
A
Mallard
Cardinal
?
1) Query Image
2) Computer
Vision
B
3) Human-in-the-Loop Categorization
C
(cid:1876)
(cid:1855)
(cid:1868)
C
D
perceptual space
where
(cid:1826) True location of (cid:1876) in
(cid:1872) Time step
(cid:1847)(cid:3047) User responses at (cid:1872)
(cid:1876) Query image
(cid:1855) Class
INTERACTIVE
CATEGORIZATION
• Compute per-class probabilities as:
(cid:1826)
(cid:1868)(cid:1855),|(cid:1876),(cid:1847)(cid:3047) (cid:1503)(cid:1868)(cid:1855),(cid:1847)(cid:3047)|(cid:1876) = (cid:3505) (cid:1868)(cid:1855),(cid:1826),(cid:1847)(cid:3047)|(cid:1876)(cid:1856)(cid:1826)
(cid:1875)(cid:3047)=(cid:1868)(cid:1855),(cid:1826),(cid:1847)(cid:3047)|(cid:1876) =(cid:1868)(cid:1847)(cid:3047)| (cid:1855),(cid:1826),(cid:1876) (cid:1868)(cid:1855),(cid:1826)(cid:1876)
(cid:1868)(cid:1855),|(cid:1876),(cid:1847)(cid:3047) (cid:3406)(cid:963)
(cid:1875)(cid:3038)(cid:3047)
(cid:3038),(cid:3030)(cid:3286)(cid:2880)(cid:3030)(cid:963) (cid:1875)(cid:3038)(cid:3047)
i.e. sum of weights of examples of class (cid:1855)
(cid:3038)
where (cid:1863) enumerates training examples
• Weight (cid:1875)(cid:3038) represents how likely (cid:1826)(cid:3038) is
true location (cid:1826):
(cid:1875)(cid:3038)(cid:3047)=(cid:1868)(cid:1855)(cid:3038),(cid:1826)(cid:3038),(cid:1847)(cid:3047)|(cid:1876) =(cid:1868)(cid:1847)(cid:3047)| (cid:1855)(cid:3038),(cid:1826)(cid:3038),(cid:1876) (cid:1868)(cid:1855)(cid:3038),(cid:1826)(cid:3038)(cid:1876)
Efficient computation
• Approximate per-class probabilities as:
such that
(cid:1875)(cid:3038)(cid:3047)(cid:2878)(cid:2869)=(cid:1868)(cid:1873)(cid:3047)(cid:2878)(cid:2869)(cid:1826)(cid:3038)(cid:1875)(cid:3038)(cid:3047)
= (cid:2038)(cid:1845)(cid:3036)(cid:3038)
(cid:1875)(cid:3038)(cid:3047)
(cid:963)
(cid:2038)(cid:1845)(cid:3037)(cid:3038)
(cid:3037)(cid:1488)(cid:3005)
(cid:3513) Initialize weights (cid:1875)(cid:3038)(cid:2868)= (cid:1868)(cid:1855)(cid:3038),(cid:1826)(cid:3038)(cid:1876)
(cid:3514) Update weights (cid:1875)(cid:3038)(cid:3047)(cid:2878)(cid:2869) when user answers
Efficient update rule:
a similarity question
(cid:3515) Update per-class probabilities
?
(cid:3047)
(cid:1847)
(cid:1876)
(cid:1855)
(cid:1868)
D
A
Learning a Metric
• Given set of triplet comparisons (cid:2286), learn
embedding (cid:1800) of (cid:1840) training images with
From (cid:1800), generate similarity matrix
(cid:1845)(cid:1488)(cid:1840)×(cid:1840)
stochastic triplet embedding [van der Maaten
& Weinberger 2012]
B
D
D
Computer Vision
• Easy to map off-the-shelf CV
algorithms into framework, e.g.,
multiclass classification scores
(cid:1868)(cid:1855),(cid:1826)(cid:1876) (cid:1503)(cid:1868)(cid:1855)|(cid:1876)
Incorporate independent user
response as:
Incorporating Users
• (cid:1830) is grid of images for each question
(cid:1868)(cid:1873)(cid:1826) = (cid:2038)(cid:1871)((cid:1826),(cid:1826)(cid:3036))
(cid:963)
(cid:2038)(cid:1871)((cid:1826),(cid:1826)(cid:3037))
(cid:3037)(cid:1488)(cid:3005)
entropy of (cid:1868)(cid:1855),(cid:1826)(cid:3038),(cid:1847)(cid:3047)|(cid:1876)
largest (cid:1875)(cid:3038)(cid:3047)
Selecting the Display
• Approximate solution: maximizes
[Fang & Geman 2005]
From each cluster, select image with
expected information gain in terms of
• Group images into equal-weight clusters
Results
Learned Embedding
Learn category-level embedding of
• Category-level embedding requires
(cid:1840)=200 nodes
Simulated noisy users
With computer vision
Deterministic users
No computer vision
Deterministic users
With computer vision
Interactive Categorization
• Using computer vision reduces the burden on the user
• The system is robust to user noise
much fewer comparisons compared to
at the instance-level
Similarity comparisons are advantageous compared to part/attribute questions
Intelligently selecting image displays reduces effort
System supports multiple similarity
metrics as different types of
questions
Simulate perceptual spaces using
CUB-200-2011 attribute
annotations
Multiple Metrics
CV, Color Similarity
CV, Shape Similarity
CV, Pattern Similarity
No CV, Color/Shape/Pattern Similarity
CV, Color/Shape/Pattern Similarity
Method
Avg. #Qs
2.70
2.67
2.67
2.64
4.21
Qualitative Results
Vermilion
Fly-
catcher
Query Image
Q1: Most Similar?
Q2: Most Similar?
Query Image
Q1: Most Similar By Color?
Q2: Most Similar By Pattern?
Hooded
Merganser
University of California, San Diego
vision.ucsd.edu
Overview
Problem
• Parts and attributes exhibit weaknesses
(cid:190) Scalability issues; costly; reliance on experts, but experts are scarce
Proposed Solution
• Use relative similarity comparisons to reduce dependence on expert-
derived part and attribute vocabularies
Contributions
• We present an efficient, flexible, and scalable system for interactive
fine-grained visual categorization
(cid:190) Based on perceptual similarity
(cid:190) Combines similarity metrics and computer vision methods in a
unified framework
• Outperforms state-of-the-art relevance feedback-based and
part/attribute-based approaches
Similarity Comparisons
A. Collect grid-based similarity comparisons that do not require prior expertise
B. Broadcast grid-based comparisons to triplet comparisons

τ = (i, j, l): x_i is more similar to x_j than to x_l
UI prompt: "Is this (x_i) more similar to this one (x_j), or this one (x_l)?"

Each grid selection broadcasts to a set of triplet constraints of the form s(i, j) > s(i, l),
where s(i, j) denotes the perceptual similarity between images x_i and x_j.
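The broadcasting step can be sketched in a few lines. This is an illustrative reading of the poster, not the authors' code: every image the user selects from a grid as similar to the query is assumed more similar than every unselected grid image, and the function name and arguments are ours.

```python
# Hypothetical sketch: broadcast one grid-based selection into triplet
# constraints (query, j, l), meaning "query is more similar to j than to l".
def broadcast_grid_to_triplets(query, selected, grid):
    """Every selected image j is judged more similar to the query than
    every unselected image l shown in the same grid."""
    unselected = [g for g in grid if g not in selected]
    return [(query, j, l) for j in selected for l in unselected]

triplets = broadcast_grid_to_triplets(query=0, selected=[3, 7], grid=[1, 3, 5, 7])
# 2 selected x 2 unselected grid images = 4 triplet constraints
```

A single grid screen thus yields many triplets at once, which is why grid-based collection is cheaper than asking one triplet question at a time.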
('2367820', 'Catherine Wah', 'catherine wah')
('2996914', 'Grant Van Horn', 'grant van horn')
('3251767', 'Steve Branson', 'steve branson')
('35208858', 'Subhransu Maji', 'subhransu maji')
('1690922', 'Pietro Perona', 'pietro perona')
{sbranson,perona}@caltech.edu
smaji@ttic.edu
sjb@cs.cornell.edu
{cwah@cs,gvanhorn@}ucsd.edu
0ed0e48b245f2d459baa3d2779bfc18fee04145bSemi-Supervised Dimensionality Reduction∗
1National Laboratory for Novel Software Technology
Nanjing University, Nanjing 210093, China
2Department of Computer Science and Engineering
Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
('1772283', 'Daoqiang Zhang', 'daoqiang zhang')
('1692625', 'Zhi-Hua Zhou', 'zhi-hua zhou')
('1680768', 'Songcan Chen', 'songcan chen')
dqzhang@nuaa.edu.cn
zhouzh@nju.edu.cn
s.chen@nuaa.edu.cn
0eac652139f7ab44ff1051584b59f2dc1757f53bEfficient Branching Cascaded Regression
for Face Alignment under Significant Head Rotation
University of Wisconsin Madison
('2721523', 'Brandon M. Smith', 'brandon m. smith')
('1724754', 'Charles R. Dyer', 'charles r. dyer')
bmsmith@cs.wisc.edu
dyer@cs.wisc.edu
0ef96d97365899af797628e80f8d1020c4c7e431Improving the Speed of Kernel PCA on Large Scale Datasets
Institute for Vision Systems Engineering
Monash University, Victoria, Australia
('2451050', 'Tat-Jun Chin', 'tat-jun chin')
('2220700', 'David Suter', 'david suter')
{ tat.chin | d.suter }@eng.monash.edu.au
0e7f277538142fb50ce2dd9179cffdc36b794054Combining Image Captions and Visual Analysis
for Image Concept Classification
Department of Information and
Knowledge Engineering
Faculty of Informatics and
Statistics, University of
Economics, Prague
Multimedia and Vision
Research Group
Queen Mary University
Mile End Road, London
United Kingdom
Department of Information and
Knowledge Engineering
Faculty of Informatics and
Statistics, University of
Economics, Prague
Department of Information and
Knowledge Engineering
Faculty of Informatics and
Statistics, University of
Economics, Prague
Multimedia and Vision
Research Group
Queen Mary University
Mile End Road, London
United Kingdom
('2005670', 'Tomas Kliegr', 'tomas kliegr')
('3183509', 'Krishna Chandramouli', 'krishna chandramouli')
('2073485', 'Jan Nemrava', 'jan nemrava')
('1740821', 'Vojtech Svatek', 'vojtech svatek')
('1732655', 'Ebroul Izquierdo', 'ebroul izquierdo')
tomas.kliegr@vse.cz
krishna.c@ieee.org
nemrava@vse.cz
svatek@vse.cz
ebroul.izquierdo@elec.qmul.ac.uk
0e8760fc198a7e7c9f4193478c0e0700950a86cd
0ec0fc9ed165c40b1ef4a99e944abd8aa4e38056HHS Public Access
Author manuscript
Curr Res Psychol. Author manuscript; available in PMC 2017 January 17.
Published in final edited form as:
Curr Res Psychol. 2016 ; 6(2): 22–30. doi:10.3844/crpsp.2015.22.30.
The Role of Perspective-Taking on Ability to Recognize Fear
Virginia Polytechnic Institute and State University, Blacksburg
Virginia, USA
Virginia Polytechnic Institute and State University, Blacksburg, Virginia
USA
Virginia Tech Carilion Research Institute
Roanoke, Virginia, USA
Virginia Polytechnic Institute and State University, Blacksburg
Virginia, USA
('2974674', 'Andrea Trubanova', 'andrea trubanova')
('2359365', 'Inyoung Kim', 'inyoung kim')
('3712207', 'Marika C. Coffman', 'marika c. coffman')
('6057482', 'Martha Ann Bell', 'martha ann bell')
('2294952', 'Stephen M. LaConte', 'stephen m. laconte')
('1709677', 'Denis Gracanin', 'denis gracanin')
('2197231', 'Susan W. White', 'susan w. white')
0e652a99761d2664f28f8931fee5b1d6b78c2a82BERGSTRA, YAMINS, AND COX: MAKING A SCIENCE OF MODEL SEARCH
Making a Science of Model Search
J. Bergstra1
D. Yamins2
D. D. Cox1
Rowland Institute at Harvard
100 Edwin H. Land Boulevard
Cambridge, MA 02142, USA
2 Department of Brain and Cognitive
Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139, USA
bergstra@rowland.harvard.edu
yamins@mit.edu
davidcox@fas.harvard.edu
0e50fe28229fea45527000b876eb4068abd6ed8cProceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)
0eff410cd6a93d0e37048e236f62e209bc4383d1Anchorage Convention District
May 3-8, 2010, Anchorage, Alaska, USA
978-1-4244-5040-4/10/$26.00 ©2010 IEEE
0ea7b7fff090c707684fd4dc13e0a8f39b300a97Integrated Face Analytics Networks through
Cross-Dataset Hybrid Training
School of Computing, National University of Singapore, Singapore
Electrical and Computer Engineering, National University of Singapore, Singapore
Beijing Institute of Technology University, P. R. China
4 SAP Innovation Center Network Singapore, Singapore
('2757639', 'Jianshu Li', 'jianshu li')
('2052311', 'Jian Zhao', 'jian zhao')
('1715286', 'Terence Sim', 'terence sim')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('3124720', 'Shengtao Xiao', 'shengtao xiao')
('33221685', 'Jiashi Feng', 'jiashi feng')
('40345914', 'Fang Zhao', 'fang zhao')
('1943724', 'Jianan Li', 'jianan li')
{jianshu,xiao_shengtao,zhaojian90}@u.nus.edu,lijianan15@gmail.com
{elezhf,elefjia,eleyans}@nus.edu.sg,tsim@comp.nus.edu.sg
0ee737085af468f264f57f052ea9b9b1f58d7222SiGAN: Siamese Generative Adversarial Network
for Identity-Preserving Face Hallucination
('3192517', 'Chih-Chung Hsu', 'chih-chung hsu')
('1685088', 'Chia-Wen Lin', 'chia-wen lin')
('3404171', 'Weng-Tai Su', 'weng-tai su')
('1705205', 'Gene Cheung', 'gene cheung')
0ee661a1b6bbfadb5a482ec643573de53a9adf5eJOURNAL OF LATEX CLASS FILES, VOL. X, NO. X, MONTH YEAR
On the Use of Discriminative Cohort Score
Normalization for Unconstrained Face Recognition
('1725688', 'Massimo Tistarelli', 'massimo tistarelli')
('2384894', 'Yunlian Sun', 'yunlian sun')
('2404207', 'Norman Poh', 'norman poh')
0e36ada8cb9c91f07c9dcaf196d036564e117536Much Ado About Time: Exhaustive Annotation of Temporal Data
Carnegie Mellon University
2Inria
University of Washington 4The Allen Institute for AI
http://allenai.org/plato/charades/
('34280810', 'Gunnar A. Sigurdsson', 'gunnar a. sigurdsson')
('2192178', 'Olga Russakovsky', 'olga russakovsky')
('2270286', 'Ali Farhadi', 'ali farhadi')
('1785596', 'Ivan Laptev', 'ivan laptev')
('1737809', 'Abhinav Gupta', 'abhinav gupta')
0e986f51fe45b00633de9fd0c94d082d2be51406Face Detection, Pose Estimation, and Landmark Localization in the Wild
University of California, Irvine
('32542103', 'Xiangxin Zhu', 'xiangxin zhu')
{xzhu,dramanan}@ics.uci.edu
0ebc50b6e4b01eb5eba5279ce547c838890b1418Similarity-Preserving Binary Signature for Linear Subspaces
∗State Key Laboratory of Intelligent Technology and Systems,
Tsinghua National Laboratory for Information Science and Technology (TNList),
Tsinghua University, Beijing 100084, China
National University of Singapore, Singapore
('1901939', 'Jianqiu Ji', 'jianqiu ji')
('38376468', 'Jianmin Li', 'jianmin li')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('1713616', 'Qi Tian', 'qi tian')
('34997537', 'Bo Zhang', 'bo zhang')
jijq10@mails.tsinghua.edu.cn, {lijianmin, dcszb}@mail.tsinghua.edu.cn
‡Department of Computer Science, University of Texas at San Antonio, qi.tian@utsa.edu
eleyans@nus.edu.sg
0e49a23fafa4b2e2ac097292acf00298458932b4Theory and Applications of Mathematics & Computer Science 3 (1) (2013) 13–31
Unsupervised Detection of Outlier Images Using Multi-Order
Image Transforms
aLawrence Technological University, 21000 W Ten Mile Rd., Southfield, MI 48075, United States
0ec1673609256b1e457f41ede5f21f05de0c054fBlessing of Dimensionality: High-dimensional Feature and Its Efficient
Compression for Face Verification
University of Science and Technology of China
Microsoft Research Asia
('39447786', 'Dong Chen', 'dong chen')
('2032273', 'Xudong Cao', 'xudong cao')
('1716835', 'Fang Wen', 'fang wen')
('40055995', 'Jian Sun', 'jian sun')
chendong@mail.ustc.edu.cn
{xudongca,fangwen,jiansun}@microsoft.com
0e3840ea3227851aaf4633133dd3cbf9bbe89e5b
0e5dad0fe99aed6978c6c6c95dc49c6dca601e6a
0ea38a5ba0c8739d1196da5d20efb13406bb6550Relative Attributes
Toyota Technological Institute at Chicago (TTIC)
University of Texas at Austin
('1713589', 'Devi Parikh', 'devi parikh')
('1794409', 'Kristen Grauman', 'kristen grauman')
dparikh@ttic.edu
grauman@cs.utexas.edu
0e21c9e5755c3dab6d8079d738d1188b03128a31Constrained Clustering and Its Application to Face Clustering in Videos
1NLPR, CASIA, Beijing 100190, China
Rensselaer Polytechnic Institute, Troy, NY 12180, USA
('2040015', 'Baoyuan Wu', 'baoyuan wu')
('40382978', 'Yifan Zhang', 'yifan zhang')
('39495638', 'Bao-Gang Hu', 'bao-gang hu')
('1726583', 'Qiang Ji', 'qiang ji')
0e78af9bd0f9a0ce4ceb5f09f24bc4e4823bd698Spontaneous Subtle Expression Recognition:
Imbalanced Databases & Solutions
1 Faculty of Engineering,
Multimedia University (MMU), Cyberjaya, Malaysia
2 Faculty of Computing & Informatics,
Multimedia University (MMU), Cyberjaya, Malaysia
('2339975', 'John See', 'john see')
lengoanhcat@gmail.com, raphael@mmu.edu.my
johnsee@mmu.edu.my
0e93a5a7f6dbdb3802173dca05717d27d72bfec0Attribute Recognition by Joint Recurrent Learning of Context and Correlation
Queen Mary University of London
Vision Semantics Ltd.2
('48093957', 'Jingya Wang', 'jingya wang')
('2171228', 'Xiatian Zhu', 'xiatian zhu')
('2073354', 'Shaogang Gong', 'shaogang gong')
('47113208', 'Wei Li', 'wei li')
{jingya.wang, s.gong, wei.li}@qmul.ac.uk
eddy@visionsemantics.com
0e2ea7af369dbcaeb5e334b02dd9ba5271b10265
0ed1c1589ed284f0314ed2aeb3a9bbc760dcdeb5Max-Margin Early Event Detectors
Minh Hoai
Robotics Institute, Carnegie Mellon University
('1707876', 'Fernando De la Torre', 'fernando de la torre')
0e7c70321462694757511a1776f53d629a1b38f3NIST Special Publication 1136
2012 Proceedings of the
Performance Metrics for Intelligent
Systems (PerMI ‘12) Workshop
http://dx.doi.org/10.6028/NIST.SP.1136
('39737545', 'Rajmohan Madhavan', 'rajmohan madhavan')
('2105056', 'Elena R. Messina', 'elena r. messina')
('31797581', 'Brian A. Weiss', 'brian a. weiss')
0ec2049a1dd7ae14c7a4c22c5bcd38472214f44dFast Subspace Search via Grassmannian Based Hashing
University of Minnesota
Proto Labs, Inc
Columbia University
University of Minnesota
('1712593', 'Xu Wang', 'xu wang')
('1734862', 'Stefan Atev', 'stefan atev')
('1738310', 'John Wright', 'john wright')
('1919996', 'Gilad Lerman', 'gilad lerman')
wang1591@umn.edu
stefan.atev@gmail.com
johnwright@ee.columbia.edu
lerman@umn.edu
0ec67c69e0975cfcbd8ba787cc0889aec4cc5399Locating Salient Object Features
K.N.Walker, T.F.Cootes and C.J.Taylor
Dept. Medical Biophysics,
Manchester University, UK
knw@sv1.smb.man.ac.uk
0e1983e9d0e8cb4cbffef7af06f6bc8e3f191a64Estimating Illumination Parameters In Real Space
With Application To Image Relighting
Key Laboratory of Pervasive Computing (Tsinghua University), Ministry of Education
Guangyou Xu
Tsinghua University, Beijing 100084, P.R.China
Categories and Subject Descriptors
I.4.8 [Image Processing and Computer Vision]: Scene Analysis
– photometry, shading, shape.
General Terms
Algorithms
Keywords
Illumination parameters estimation, spherical harmonic, image
relighting.
1. INTRODUCTION
Estimating the illumination condition is a fundamental problem in both computer
vision and graphics. For instance, estimating the lighting condition is important
in face relighting and recognition, since synthesized realistic images can
alleviate the small-sample problem in face recognition applications.
Recently, Basri [2] and Ramamoorthi [3] independently applied spherical
harmonics techniques to explain the low dimensionality of differently
illuminated images of convex Lambertian objects. Ramamoorthi further derived
analytically the principal components of this low-dimensional image subspace.
This method has already been widely applied to inverse rendering,
image relighting, face recognition, and related areas.
One limitation of this method is that cast shadows are ignored. In the
experimental results of [1], cast shadows improve face recognition for the
most extreme light directions. Overcoming this limitation is one motivation
of our work. Furthermore, rendering a realistic image requires the real light
direction. Although the spherical harmonics coefficients of the illumination
can easily be estimated, recovering the real light direction from these
coefficients remains a problem.
We propose a novel algorithm for estimating the illumination parameters,
namely the direction and strength of a point light together with the strength
of the ambient illumination. Images are projected into the analytical subspace
derived in [3] according to a known 3D geometry, and the illumination
parameters are then estimated from the projected coefficients. Our preliminary
experiments demonstrate the stability and effectiveness of this method.
Copyright is held by the author/owner(s).
MM'05, November 6-11, 2005, Singapore.
ACM 1-59593-044-2/05/0011.
2. METHODOLOGY
Consider a convex Lambertian object of known geometry with uniform albedo,
illuminated by distant isotropic light sources; the irradiance can be expressed
as a linear combination of spherical harmonic basis functions. In fact, 99% of
the energy of the Lambertian BRDF filter is captured by the first 9 basis
functions [3]. In this paper we consider a simple illumination model consisting
of one distant directional point light source plus ambient illumination. The
illumination coefficients can then be written as functions of four illumination
parameters: the azimuth and elevation angles of the point light direction, the
point light strength Sp, and the ambient illumination strength Sa.
One problem is that, although the spherical harmonic basis functions are
orthogonal in spherical coordinates, they are not orthogonal in image space.
This property makes the algorithm unstable in some cases. We therefore choose
the analytical subspace constructed in [3], which requires no training data.
The image is projected onto this subspace and the PCA coefficients are
computed. The illumination parameters are then estimated from these PCA
coefficients by solving a nonlinear least-squares problem. Finding a global
extremum of a nonlinear problem is very difficult. We use the popular
Gauss-Newton method to solve this minimization problem, which may converge to
a local minimum. The experimental results show that if enough PCA coefficients
are chosen, the energy surface guarantees that the local minimum coincides with
the global minimum. (Note that only a subset of the PCA coefficients is needed
to solve this nonlinear problem; in practice, the first five PCA coefficients
are sufficient to estimate the parameters stably. Due to the limited length of
this paper, the equations and stability analysis are omitted.)
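The estimation step above (fit four illumination parameters to projected coefficients via Gauss-Newton) can be sketched as follows. The forward model here is a toy stand-in, since the paper omits its spherical-harmonic expressions; only the fitting machinery is illustrated, and all names are ours.

```python
import numpy as np

# Toy forward model mapping (azimuth, elevation, Sp, Sa) to five "projected
# coefficients". The paper's real model uses spherical-harmonic expressions.
def forward_model(p):
    az, el, sp, sa = p
    d = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
    return np.array([sa + sp, sp * d[0], sp * d[1], sp * d[2], sp * d[2] ** 2])

def gauss_newton(f, y, p0, iters=50):
    """Fit p so that f(p) ~ y, using Gauss-Newton with a numerical Jacobian."""
    p = np.asarray(p0, dtype=float)
    eps = 1e-6
    for _ in range(iters):
        r = f(p) - y                       # current residual
        J = np.empty((len(r), len(p)))
        for k in range(len(p)):            # central-difference Jacobian
            dp = np.zeros_like(p)
            dp[k] = eps
            J[:, k] = (f(p + dp) - f(p - dp)) / (2 * eps)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
    return p

true_p = np.array([0.4, 0.7, 0.8, 0.2])    # azimuth, elevation, Sp, Sa
observed = forward_model(true_p)            # noiseless projected coefficients
est = gauss_newton(forward_model, observed, p0=[0.1, 0.3, 0.5, 0.5])
```

With noiseless coefficients and a reasonable starting point, the fitted parameters reproduce the observed coefficients; as the text notes, with too few coefficients or a poor initialization the method can stall in a local minimum.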
3. RESULTS
We experimented on both synthesized sphere images and real face
images in CMU PIE database [4] and Yale Database B [1].
3.1 Synthesized sphere images result
First, we randomly select the four illumination parameters and synthesize 600
sphere images under the resulting illumination conditions, in which the
incident directions are limited to the upper hemisphere and the light strength
parameters are normalized to sum to unity. We then test our algorithm on these
synthesized sphere images. Similar to the Yale Database B, we divide the images
into 5 subsets (12°, 25°, 55°, 77°, 90°) according to the angle that the
light source direction makes with the camera's axis.
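The sampling and binning just described can be sketched as below. This is an illustrative reconstruction of the setup, not the authors' code; the angle conventions and strength distribution are our assumptions.

```python
import numpy as np

# Sample random illumination parameters: upper-hemisphere light directions,
# point and ambient strengths normalized to sum to unity, then bin each image
# by the angle between the light direction and the camera axis (taken as +z).
rng = np.random.default_rng(0)
n = 600
azimuth = rng.uniform(0.0, 2.0 * np.pi, n)
elevation = rng.uniform(0.0, np.pi / 2.0, n)   # upper hemisphere only
sp = rng.uniform(0.1, 1.0, n)                  # point light strength
sa = 1.0 - sp                                  # strengths sum to unity
angle_deg = np.degrees(np.pi / 2.0 - elevation)  # angle from the +z camera axis
subsets = np.digitize(angle_deg, bins=[12, 25, 55, 77])  # 5 subsets, as in Yale B
```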
('13801076', 'Feng Xie', 'feng xie')
('3265275', 'Linmi Tao', 'linmi tao')
xiefeng97@mails.tsinghua.edu.cn
{linmi, xgy-dcs}@tsinghua.edu.cn
0ee5c4112208995bf2bb0fb8a87efba933a94579Understanding Clothing Preference Based on Body Shape From Online Sources
Fashion is Taking Shape:
1Scalable Learning and Perception Group, 2Real Virtual Humans
Max Planck Institute for Informatics, Saarbrücken, Germany
('26879574', 'Hosnieh Sattar', 'hosnieh sattar')
('1739548', 'Mario Fritz', 'mario fritz')
('2635816', 'Gerard Pons-Moll', 'gerard pons-moll')
{sattar,mfritz,gpons}@mpi-inf.mpg.de
0e1a18576a7d3b40fe961ef42885101f4e2630f8Automated Detection and Identification of
Persons in Video
Visual Geometry Group
Department of Engineering Science
University of Oxford
September 24, 2004
('3056091', 'Mark Everingham', 'mark everingham')
{me|az}@robots.ox.ac.uk
6080f26675e44f692dd722b61905af71c5260af8
60a006bdfe5b8bf3243404fae8a5f4a9d58fa892A Reference-Based Framework for
Pose Invariant Face Recognition
1 HP Labs, Palo Alto, CA 94304, USA
2 Google Inc., Mountain View, CA 94043, USA
BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
Center for Research in Intelligent Systems, University of California, Riverside, CA 92521, USA
('1784929', 'Mehran Kafai', 'mehran kafai')
('1745657', 'Kave Eshghi', 'kave eshghi')
('39776603', 'Le An', 'le an')
('1707159', 'Bir Bhanu', 'bir bhanu')
mehran.kafai@hp.com, kave@google.com, lan004@unc.edu, bhanu@cris.ucr.edu
6043006467fb3fd1e9783928d8040ee1f1db1f3aFace Recognition with Learning-based Descriptor
The Chinese University of Hong Kong
ITCS, Tsinghua University
Shenzhen Institutes of Advanced Technology
4Microsoft Research Asia
Chinese Academy of Sciences, China
('2695115', 'Zhimin Cao', 'zhimin cao')
('2274228', 'Qi Yin', 'qi yin')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
('40055995', 'Jian Sun', 'jian sun')
600025c9a13ff09c6d8b606a286a79c823d89db8Machine Learning and Applications: An International Journal (MLAIJ) Vol.1, No.1, September 2014
A REVIEW ON LINEAR AND NON-LINEAR
DIMENSIONALITY REDUCTION
TECHNIQUES
1Arunasakthi. K, 2KamatchiPriya. L
1 Assistant Professor
Department of Computer Science and Engineering
Ultra College of Engineering and Technology for Women, India
2Assistant Professor
Department of Computer Science and Engineering
Vickram College of Engineering, Enathi, Tamil Nadu, India
60d765f2c0a1a674b68bee845f6c02741a49b44e
60c24e44fce158c217d25c1bae9f880a8bd19fc3Controllable Image-to-Video Translation:
A Case Study on Facial Expression Generation
MIT CSAIL
Wenbing Huang
Tencent AI Lab
MIT-Watson Lab
Tencent AI Lab
Tencent AI Lab
('2548303', 'Lijie Fan', 'lijie fan')
('2551285', 'Chuang Gan', 'chuang gan')
('1768190', 'Junzhou Huang', 'junzhou huang')
('40206014', 'Boqing Gong', 'boqing gong')
60e2b9b2e0db3089237d0208f57b22a3aac932c1Frankenstein: Learning Deep Face Representations
using Small Data
('38819702', 'Guosheng Hu', 'guosheng hu')
('1766837', 'Xiaojiang Peng', 'xiaojiang peng')
('2653152', 'Yongxin Yang', 'yongxin yang')
('1697755', 'Timothy M. Hospedales', 'timothy m. hospedales')
('34602236', 'Jakob Verbeek', 'jakob verbeek')
60542b1a857024c79db8b5b03db6e79f74ec8f9fLearning to Detect Human-Object Interactions
University of Michigan, Ann Arbor
Washington University in St. Louis
('2820136', 'Yu-Wei Chao', 'yu-wei chao')
('1860829', 'Yunfan Liu', 'yunfan liu')
('9539636', 'Xieyang Liu', 'xieyang liu')
('9344937', 'Huayi Zeng', 'huayi zeng')
('8342699', 'Jia Deng', 'jia deng')
{ywchao,yunfan,lxieyang,jiadeng}@umich.edu
{zengh}@wustl.edu
60d4cef56efd2f5452362d4d9ac1ae05afa970d1Learning End-to-end Video Classification with Rank-Pooling
Research School of Engineering, The Australian National University, ACT 2601, Australia
Research School of Computer Science, The Australian National University, ACT 2601, Australia
('1688071', 'Basura Fernando', 'basura fernando')
('2377076', 'Stephen Gould', 'stephen gould')
BASURA.FERNANDO@ANU.EDU.AU
STEPHEN.GOULD@ANU.EDU.AU
60ce4a9602c27ad17a1366165033fe5e0cf68078TECHNICAL NOTE
DIGITAL & MULTIMEDIA SCIENCES
J Forensic Sci, 2015
doi: 10.1111/1556-4029.12800
Available online at: onlinelibrary.wiley.com
Ph.D.
Combination of Face Regions in Forensic
Scenarios*
('1808344', 'Pedro Tome', 'pedro tome')
('1701431', 'Julian Fierrez', 'julian fierrez')
('1692626', 'Ruben Vera-Rodriguez', 'ruben vera-rodriguez')
('1732220', 'Javier Ortega-Garcia', 'javier ortega-garcia')
6097ea6fd21a5f86a10a52e6e4dd5b78a436d5bf
60c699b9ec71f7dcbc06fa4fd98eeb08e915eb09Long-Term Video Interpolation with Bidirectional
Predictive Network
Peking University
('8082703', 'Xiongtao Chen', 'xiongtao chen')
('1788029', 'Wenmin Wang', 'wenmin wang')
('3258842', 'Jinzhuo Wang', 'jinzhuo wang')
60970e124aa5fb964c9a2a5d48cd6eee769c73efSubspace Clustering for Sequential Data
School of Computing and Mathematics
Charles Sturt University
Bathurst, NSW 2795, Australia
Division of Computational Informatics
CSIRO
North Ryde, NSW 2113, Australia
('40635684', 'Stephen Tierney', 'stephen tierney')
('1750488', 'Junbin Gao', 'junbin gao')
('1767638', 'Yi Guo', 'yi guo')
{stierney, jbgao}@csu.edu.au
yi.guo@csiro.au
60efdb2e204b2be6701a8e168983fa666feac1beInt J Comput Vis
DOI 10.1007/s11263-017-1043-5
Transferring Deep Object and Scene Representations for Event
Recognition in Still Images
Received: 31 March 2016 / Accepted: 1 September 2017
© Springer Science+Business Media, LLC 2017
('33345248', 'Limin Wang', 'limin wang')
('1915826', 'Zhe Wang', 'zhe wang')
60824ee635777b4ee30fcc2485ef1e103b8e7af9Cascaded Collaborative Regression for Robust Facial
Landmark Detection Trained using a Mixture of Synthetic and
Real Images with Dynamic Weighting
Life Member, IEEE, William Christmas, and Xiao-Jun Wu
('2976854', 'Zhen-Hua Feng', 'zhen-hua feng')
('38819702', 'Guosheng Hu', 'guosheng hu')
('1748684', 'Josef Kittler', 'josef kittler')
60643bdab1c6261576e6610ea64ea0c0b200a28d
60a20d5023f2bcc241eb9e187b4ddece695c2b9bInvertible Nonlinear Dimensionality Reduction
via Joint Dictionary Learning
Department of Electrical and Computer Engineering
Technische Universit¨at M¨unchen, Germany
('30013158', 'Xian Wei', 'xian wei')
('1744239', 'Martin Kleinsteuber', 'martin kleinsteuber')
('36559760', 'Hao Shen', 'hao shen')
{xian.wei, kleinsteuber, hao.shen}@tum.de.
60cdcf75e97e88638ec973f468598ae7f75c59b486
Face Annotation Using Transductive
Kernel Fisher Discriminant
('1704030', 'Jianke Zhu', 'jianke zhu')
('1681775', 'Michael R. Lyu', 'michael r. lyu')
60040e4eae81ab6974ce12f1c789e0c05be00303Center for Energy Harvesting
Materials and Systems (CEHMS),
Bio-Inspired Materials and
Devices Laboratory (BMDL),
Center for Intelligent Material
Systems and Structure (CIMSS),
Department of Mechanical Engineering,
Virginia Tech,
Blacksburg, VA 24061
Graphical Facial Expression
Analysis and Design Method:
An Approach to Determine
Humanoid Skin Deformation
The architecture of the human face is complex, consisting of 268 voluntary muscles that perform
coordinated actions to create real-time facial expressions. In order to replicate facial expressions
on a humanoid face using discrete actuators, the first and foremost step is the identification
of pairs of origin and sinking points (SPs). In this paper, we address this issue and
present a graphical analysis technique that can be used to design expressive robotic faces.
The underlying design criterion is the deformation of a soft elastomeric skin
through tension in anchoring wires attached at one end to the skin through the sinking point
and at the other end to the actuator. The paper also addresses the singularity problem of facial
control points and important phenomena such as slacking of actuators. Experimental characterization
of a prototype humanoid face was performed to validate the model and demonstrate
the applicability on a generic platform. [DOI: 10.1115/1.4006519]
Keywords: humanoid prototype, facial expression, artificial skin, contractile actuator,
graphical analysis
Introduction
Facial expression in humanoids has become a key research topic in recent years
in the area of social robotics. Embodying a robotic head akin to that of a
human being promotes friendlier communication between the humanoid and the
user. There are many challenges in realizing a human-like face, such as
materials suitable for artificial skin, muscles, sensors, supporting
structures, machine elements, and vision and audio systems. In addition to
materials and their integration, computational tools and static and dynamic
analyses are required to fully understand the effect of each parameter on the
overall performance of a prototype humanoid face and to provide optimum
conditions.
This paper is organized in eight sections. First, we introduce the
background and methodology for creating facial expression in
robotic heads. A thorough description of the overall problem asso-
ciated with expression analysis is presented along with pictorial
representation of the muscle arrangement on a prototype face.
Second, a literature survey is presented on facial expression analy-
sis techniques applied to humanoid head. Third, the description of
graphical facial expression analysis and design (GFEAD) method
is presented focusing on two generic cases. Fourth, application
of the GFEAD method on a prototype skull is presented and
important manifestations that could not be obtained with other
techniques are discussed. Fifth, results from experimental charac-
terization of facial movement with a skin layer are discussed.
Sixth, the effect of the skin properties and associated issues will
be discussed. Section 7 discusses the significance of GFEAD
method on practical platforms. Finally, the summary of this study
is presented in Sec. 8.
In the last few years, we have demonstrated humanoid heads using a variety of
actuation technologies, including: piezoelectric ultrasonic motors for
actuation and macrofiber composites for sensing [1]; electromagnetic RC servo
motors for actuation and embedded unimorphs for sensing [2,3]; and, recently,
shape memory alloy (SMA) based actuation for a baby humanoid robot focusing on
face and jaw movement [4]. We have also reported facial muscles based on
conducting polymer actuators to overcome the high power requirements of
current actuation technologies, including polypyrrole–polyvinylidene
difluoride composite stripe and zigzag actuators [5] and axial-type helically
wound polypyrrole–platinum composite actuators [6]. All these studies have
identified issues related to the design of the facial structure and artificial
muscle requirements. Other types of actuators, such as dielectric elastomers,
have also been studied for general robotics applications [7].

University of Texas at Dallas, 800 West Campbell Rd., Richardson, TX 75080.
2Corresponding author.
Contributed by the Mechanisms and Robotics Committee of ASME for publication
in the JOURNAL OF MECHANISMS AND ROBOTICS. Manuscript received October 10,
2010; final manuscript received February 23, 2012; published online April 25, 2012.
Assoc. Editor: Qiaode Jeffrey Ge.
Several other studies related to humanoid facial expression have been reported
in the literature: facial expression generation and gesture synthesis from
sign language applied to the animation of an avatar [8]; the expressive
humanoid robot Albert-HUBO with a 31 degree-of-freedom (DOF) head and 35 DOF
body motions based on servo motors [9]; a facial expression imitation system
for face recognition implemented on a mascot-type robotic system [10]; the
facially expressive humanoid robot SAYA based on McKibben pneumatic actuators
[11]; and the android robot Repliee for studying psychological aspects [12].
However, none of these studies addresses the design strategy for a humanoid
head based on discrete actuators. Computational tools for precise analysis of
the effect of actuator arrangement on facial expression are missing.
Even though significant efforts have been made, there is little fundamental
understanding of the structural design questions. How can these facial
expressions be precisely designed? How are the terminating points on the skull
determined? What is the effect of variation in the arrangement of actuators?
Answering these questions requires the development of an accurate mathematical
model that can be easily coded and visualized. For this purpose, we present a
GFEAD method for application in humanoid head development. The method is
briefly discussed for generic cases to illustrate all the computational steps.
The prime motivation behind using the graphical approach is
that it provides both visual information as well as quantitative
data required for the design and analysis of humanoid face. The
deformation analysis and design is performed directly on the skull
surface, which ultimately forms the platform for actuation. The
graphical approach is simple to implement as it is conducted in
2D. Generally, the skull is created from a scanned model; thus,
Journal of Mechanisms and Robotics
Copyright VC 2012 by ASME
MAY 2012, Vol. 4 / 021010-1
('2248772', 'Yonas Tadesse', 'yonas tadesse')
('25310631', 'Shashank Priya', 'shashank priya')
e-mail: yonas@vt.edu;
yonas.tadesse@utdallas.edu
e-mail: spriya@vt.edu
60b3601d70f5cdcfef9934b24bcb3cc4dde663e7SUBMITTED TO IEEE TRANS. ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Binary Gradient Correlation Patterns
for Robust Face Recognition
('1739171', 'Weilin Huang', 'weilin huang')
('1709042', 'Hujun Yin', 'hujun yin')
60737db62fb5fab742371709485e4b2ddf64b7b2Crowdsourced Selection on Multi-Attribute Data
Tsinghua University
('39163188', 'Xueping Weng', 'xueping weng')
('23492509', 'Guoliang Li', 'guoliang li')
('1802748', 'Huiqi Hu', 'huiqi hu')
('33091680', 'Jianhua Feng', 'jianhua feng')
wxp15@mails.tsinghua.edu.cn, liguoliang@tsinghua.edu.cn, hqhu@sei.ecnu.edu.cn, fengjh@tsinghua.edu.cn
60496b400e70acfbbf5f2f35b4a49de2a90701b5Avoiding Boosting Overfitting by Removing Confusing
Samples
Moscow State University, dept. of Computational Mathematics and Cybernetics
Graphics and Media Lab
119992 Moscow, Russia
('2918740', 'Alexander Vezhnevets', 'alexander vezhnevets')
('3319972', 'Olga Barinova', 'olga barinova')
{avezhnevets, obarinova}@graphics.cs.msu.ru
60bffecd79193d05742e5ab8550a5f89accd8488PhD Thesis Proposal
Classification using sparse representation and applications to skin
lesion diagnosis
In only a few decades, sparse representation modeling has undergone a tremendous expansion, with
successful applications in many fields including signal and image processing, computer science,
machine learning, and statistics. Mathematically, it can be considered as the problem of finding the
sparsest solution (the one with the fewest non-zero entries) to an underdetermined linear system
of equations [1]. Based on the observation that, in natural images (or images rich in textures),
small-scale structures tend to repeat themselves within an image or across a group of similar images,
a signal source can be sparsely represented over some well-chosen redundant basis (a dictionary).
In other words, it can be approximately represented by a linear combination of a few elements
(also called atoms or basis vectors) of a redundant/over-complete dictionary.
Such models have been proven successful in many tasks including denoising [2]-[5], compression
[6],[7], super-resolution [8],[9], classification and pattern recognition [10]-[16]. In the context of
classification, the objective is to find the class to which a test signal belongs, given training data
from multiple classes. Sparse representation has become a powerful technique in classification and
applications, including texture classification [16], face recognition [12], object detection [10], and
segmentation of medical images [17], [18]. In conventional Sparse Representation Classification
(SRC) schemes, learned dictionaries and sparse representations are used to classify image pixels
(the image is divided into patches surrounding each pixel). The performance of an SRC scheme relies
on a good dictionary and on the sparse representation optimization model. Typically, a dictionary
is learned for each signal class using training data, and a new signal is classified by associating
it with the class whose dictionary yields the best approximation of the signal, via an optimization
problem that minimizes the reconstruction error under constraints including sparsity. Note that the
dictionary need not be a trained one [12]: in [12], the dictionary used for face recognition is
simply composed of many face images. Generally, the
classification methods consider sparse modeling of natural high-dimensional signals and assume
that the data belonging to the same class lie in the same subspace of a much lower dimension. Thus,
the data can be modeled as a union of low-dimensional linear subspaces, and each class is then
modeled by a union of a small subset of these subspaces [19]. More advanced methods
take into account the multi-subspace structure of the data of a high dimensional space. That is the
case when data in multiple classes lie in multiple low-dimensional subspaces. Then, the
classification problem can be formulated via a structured sparsity-based model, or group sparsity
one [13, 20]. Another approach proposes to increase classification performance by using multiple
disjoint sparse representations for each class dictionary instead of a single signal
representation [21].
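As a minimal sketch of the classification-by-reconstruction principle described above (our own toy illustration in NumPy, not from the cited works: the greedy matching-pursuit solver, the per-class dictionaries, and all names are assumptions of ours), a test signal is assigned to the class whose dictionary gives the smallest sparse reconstruction error:

```python
import numpy as np

def omp(D, y, k):
    """Greedy (orthogonal) matching pursuit: approximate y with at most
    k atoms, i.e. columns of D (assumed unit-norm)."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        # re-fit coefficients on the current support (orthogonal step)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def src_classify(dictionaries, y, k=3):
    """SRC decision rule: pick the class whose dictionary yields the
    smallest k-sparse reconstruction error for the test signal y."""
    errors = [np.linalg.norm(y - D @ omp(D, y, k)) for D in dictionaries]
    return int(np.argmin(errors))
```

For instance, with one random unit-norm dictionary per class, a signal synthesized from two atoms of class 0's dictionary should be reconstructed almost exactly by that dictionary and poorly by the others, so `src_classify` should return 0.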
II. Objective
In this study, we focus on highly accurate classification methods based on sparse representation,
with the goal of improving existing methods. More specifically, we aim to improve the classification
results for
601834a4150e9af028df90535ab61d812c45082cA short review and primer on using video for
psychophysiological observations in
human-computer interaction applications
Quantified Employee unit, Finnish Institute of Occupational Health
POBox 40, 00250, Helsinki, Finland
('2612057', 'Teppo Valtonen', 'teppo valtonen') teppo.valtonen@ttl.fi
346dbc7484a1d930e7cc44276c29d134ad76dc3f
On: 21 November 2007
Access Details: [subscription number 785020433]
Publisher: Informa Healthcare
Informa Ltd Registered in England and Wales Registered Number: 1072954
Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK
Systems
Publication details, including instructions for authors and subscription information
http://www.informaworld.com/smpp/title~content=t713663148
Artists portray human faces with the Fourier statistics of
complex natural scenes
a Institute of Anatomy I, School of Medicine, Friedrich Schiller University, Germany
Friedrich Schiller University, D-07740 Jena
Germany
First Published on: 28 August 2007
To cite this Article: Redies, Christoph, Hänisch, Jan, Blickhan, Marko and Denzler,
Joachim (2007) 'Artists portray human faces with the Fourier statistics of complex natural scenes'
To link to this article: DOI: 10.1080/09548980701574496
URL: http://dx.doi.org/10.1080/09548980701574496
('2485437', 'Christoph Redies', 'christoph redies')
34a41ec648d082270697b9ee264f0baf4ffb5c8d
34b3b14b4b7bfd149a0bd63749f416e1f2fc0c4cThe AXES submissions at TrecVid 2013
University of Twente 2Dublin City University 3Oxford University
4KU Leuven 5Fraunhofer Sankt Augustin 6INRIA Grenoble
('3157479', 'Robin Aly', 'robin aly')
('3271933', 'Matthijs Douze', 'matthijs douze')
('1688071', 'Basura Fernando', 'basura fernando')
('9401491', 'Zaid Harchaoui', 'zaid harchaoui')
('1767756', 'Kevin McGuinness', 'kevin mcguinness')
('3095774', 'Dan Oneata', 'dan oneata')
('3188342', 'Omkar M. Parkhi', 'omkar m. parkhi')
('2319574', 'Danila Potapov', 'danila potapov')
('3428663', 'Jérôme Revaud', 'jérôme revaud')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
('1809436', 'Jochen Schwenninger', 'jochen schwenninger')
('1783430', 'David Scott', 'david scott')
('1704728', 'Tinne Tuytelaars', 'tinne tuytelaars')
('34602236', 'Jakob Verbeek', 'jakob verbeek')
('40465030', 'Heng Wang', 'heng wang')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
34bb11bad04c13efd575224a5b4e58b9249370f3Towards Good Practices for Action Video Encoding
National Key Laboratory for Novel Software Technology
Nanyang Technological University
Shanghai Jiao Tong University
Nanjing University, China
Singapore
China
('1808816', 'Jianxin Wu', 'jianxin wu')
('22183596', 'Yu Zhang', 'yu zhang')
('8131625', 'Weiyao Lin', 'weiyao lin')
wujx2001@nju.edu.cn
roykimbly@hotmail.com
wylin@sjtu.edu.cn
3411ef1ff5ad11e45106f7863e8c7faf563f4ee1Image Retrieval and Ranking via Consistently
Reconstructing Multi-attribute Queries
School of Computer Science and Technology, Tianjin University, Tianjin, China
2 State Key Laboratory of Information Security, IIE, Chinese Academy of Sciences, China
National University of Singapore
4 State Key Laboratory of Virtual Reality Technology and Systems School of Computer Science
and Engineering, Beihang University, Beijing, China
('1719250', 'Xiaochun Cao', 'xiaochun cao')
('38188331', 'Hua Zhang', 'hua zhang')
('33465926', 'Xiaojie Guo', 'xiaojie guo')
('2705801', 'Si Liu', 'si liu')
('33610144', 'Xiaowu Chen', 'xiaowu chen')
caoxiaochun@iie.ac.cn, huazhang@tju.edu.cn, xj.max.guo@gmail.com,
dcslius@nus.edu.sg, chen@buaa.edu.cn
345cc31c85e19cea9f8b8521be6a37937efd41c2Deep Manifold Traversal: Changing Labels with
Convolutional Features
Cornell University, Washington University in St. Louis
*Authors contributing equally
('31693738', 'Jacob R. Gardner', 'jacob r. gardner')
('3222840', 'Paul Upchurch', 'paul upchurch')
('1940272', 'Matt J. Kusner', 'matt j. kusner')
('7769997', 'Yixuan Li', 'yixuan li')
('1706504', 'John E. Hopcroft', 'john e. hopcroft')
34d484b47af705e303fc6987413dc0180f5f04a9RI:Medium: Unsupervised and Weakly-Supervised
Discovery of Facial Events
1 Introduction
The face is one of the most powerful channels of nonverbal communication. Facial expression has been a
focus of emotion research for over a hundred years [11]. It is central to several leading theories of emotion
[16, 28, 44] and has been the focus of at times heated debate about issues in emotion science [17, 23, 40].
Facial expression figures prominently in research on almost every aspect of emotion, including
psychophysiology [30], neural correlates [18], development [31], perception [4], addiction [24], social processes [26],
depression [39] and other emotion disorders [46], to name a few. In general, facial expression provides cues
about emotional response, regulates interpersonal behavior, and communicates aspects of psychopathology.
While people have believed for centuries that facial expressions can reveal what people are thinking and
feeling, it is relatively recently that the face has been studied scientifically for what it can tell us about
internal states, social behavior, and psychopathology.
Faces possess their own language. Beginning with Darwin and his contemporaries, extensive efforts
have been made to manually describe this language. A leading approach, the Facial Action Coding System
(FACS) [19], segments the visible effects of facial muscle activation into "action units." Because of its
descriptive power, FACS has become the state of the art in manual measurement of facial expression and is
widely used in studies of spontaneous facial behavior. The FACS taxonomy was developed by manually
observing graylevel variation between expressions in images and, to a lesser extent, by recording the electrical
activity of underlying facial muscles [9]. Because of its importance to human social dynamics, person
perception, and the biological bases of behavior, extensive efforts have been made to automatically detect this language
(i.e., facial expression) using computer vision and machine learning. In part for these reasons, much effort
in automatic facial image analysis seeks to automatically recognize FACS action units [5, 45, 38, 42]. With
few exceptions, previous work on facial expression has been supervised in nature (i.e. event categories are
defined in advance in labeled training data, see [5, 45, 38, 42] for a review of state-of-the-art algorithms)
using either FACS or emotion labels (e.g. angry, surprise, sad). Because manual coding is highly labor
intensive, progress in automated facial image analysis has been limited by lack of sufficient training data
especially with respect to human behavior in naturally occurring settings (as opposed to posed facial be-
havior). Little attention has been paid to the problem of unsupervised or weakly-supervised discovery of
facial events prior to recognition. In this proposal we question whether the reliance on supervised learning
is necessary. Specifically, can unsupervised or weakly-supervised learning algorithms discover useful and
meaningful facial events in video sequences of naturally occurring behavior? This proposal makes three
main contributions:
• We ask whether unsupervised or weakly-supervised learning algorithms can discover useful and
meaningful facial events in video sequences of one or more persons exhibiting naturally occurring behavior.
Several issues contribute to the challenge of discovery of facial events; these include the large vari-
ability in the temporal scale and periodicity of facial expressions, illumination and fast pose changes,
the complexity of decoupling rigid and non-rigid motion from video, the exponential nature of all
possible facial movement combinations, and characterization of subtle facial behavior.
• We propose two novel non-parametric algorithms for unsupervised and weakly-supervised time-series
analysis. In preliminary experiments these algorithms were able to discover meaningful facial events
341002fac5ae6c193b78018a164d3c7295a495e4von Mises-Fisher Mixture Model-based Deep
learning: Application to Face Verification
('1773090', 'Md. Abul Hasnat', 'md. abul hasnat')
('34767162', 'Jonathan Milgram', 'jonathan milgram')
('34086868', 'Liming Chen', 'liming chen')
34ce703b7e79e3072eed7f92239a4c08517b0c55What impacts skin color in digital photos?
Advanced Digital Sciences Center, University of Illinois at Urbana-Champaign, Singapore
('3213946', 'Albrecht Lindner', 'albrecht lindner')
('1702224', 'Stefan Winkler', 'stefan winkler')
345bea5f7d42926f857f395c371118a00382447fTransfiguring Portraits
Computer Science and Engineering, University of Washington
Figure 1: Our system’s goal is to let people imagine and explore how they might look in a different country, era, hair style, hair color, age,
and anything else that can be queried in an image search engine. The examples above show a single input photo (left) and automatically
synthesized appearances of the input person with "curly hair" (top row), in "india" (2nd row), and at "1930" (3rd row).
('2419955', 'Ira Kemelmacher-Shlizerman', 'ira kemelmacher-shlizerman')
34ec83c8ff214128e7a4a4763059eebac59268a6Action Anticipation By Predicting Future
Dynamic Images
Australian Centre for Robotic Vision, ANU, Canberra, Australia
('46771280', 'Cristian Rodriguez', 'cristian rodriguez')
('1688071', 'Basura Fernando', 'basura fernando')
('40124570', 'Hongdong Li', 'hongdong li')
{cristian.rodriguez, basura.fernando, hongdong.li}@anu.edu.au
3463f12ad434d256cd5f94c1c1bfd2dd6df36947Article
Facial Expression Recognition with Fusion Features
Extracted from Salient Facial Areas
School of Control Science and Engineering, Shandong University, Jinan 250061, China
Academic Editors: Xue-Bo Jin; Shuli Sun; Hong Wei and Feng-Bao Yang
Received: 23 January 2017; Accepted: 24 March 2017; Published: 29 March 2017
('7895427', 'Yanpeng Liu', 'yanpeng liu')
('29275442', 'Yibin Li', 'yibin li')
('1708045', 'Xin Ma', 'xin ma')
('1772484', 'Rui Song', 'rui song')
liuyanpeng@sucro.org (Y.L.); liyb@sdu.edu.cn (Y.L.); maxin@sdu.edu.cn (X.M.)
* Correspondence: rsong@sdu.edu.cn
346c9100b2fab35b162d7779002c974da5f069eePhoto Search by Face Positions and Facial Attributes
on Touch Devices
National Taiwan University, Taipei, Taiwan
('2476032', 'Yu-Heng Lei', 'yu-heng lei')
('35081710', 'Yan-Ying Chen', 'yan-ying chen')
('2817570', 'Lime Iida', 'lime iida')
('33970300', 'Bor-Chun Chen', 'bor-chun chen')
('1776110', 'Hsiao-Hang Su', 'hsiao-hang su')
('1716836', 'Winston H. Hsu', 'winston h. hsu')
{limeiida, siriushpa}@gmail.com, b95901019@ntu.edu.tw, winston@csie.ntu.edu.tw
{ryanlei, yanying}@cmlab.csie.ntu.edu.tw,
34863ecc50722f0972e23ec117f80afcfe1411a9An Efficient Face Recognition Algorithm Based
on Robust Principal Component Analysis
TNLIST and Department of Automation
Tsinghua University
Beijing, China
('2860279', 'Ziheng Wang', 'ziheng wang')
('2842970', 'Xudong Xie', 'xudong xie')
zihengwang.thu@gmail.com, xdxie@tsinghua.edu.cn
34b7e826db49a16773e8747bc8dfa48e344e425d
34c594abba9bb7e5813cfae830e2c4db78cf138cTransport-Based Single Frame Super Resolution of Very Low Resolution Face Images
Carnegie Mellon University
We describe a single-frame super-resolution method for reconstructing high-
resolution (abbr. high-res) faces from very low-resolution (abbr. low-res)
face images (e.g. smaller than 16× 16 pixels) by learning a nonlinear La-
grangian model for the high-res face images. Our technique is based on the
mathematics of optimal transport, and hence we denote it as transport-based
SFSR (TB-SFSR). In the training phase, a nonlinear model of high-res fa-
cial images is constructed based on transport maps that morph a reference
image into the training face images. In the testing phase, the resolution of
a degraded image is enhanced by finding the model parameters that best fit
the given low resolution data.
Generally speaking, most SFSR methods [2, 3, 4, 5] are based on a
linear model for the high-res images. Hence, ultimately, the majority of
SFSR models in the literature can be written as, Ih(x) = ∑i wiψi(x), where
Ih is a high-res image or a high-res image patch, w’s are weight coefficients,
and ψ’s are high-res images (or image patches), which are learned from the
training images using a specific model. Here we propose a fundamentally
different approach toward modeling high-res images. In our approach the
high-res image is modeled as a mass preserving mapping of a high-res tem-
plate image, I0, as follows
Ih(x) = det(I + ∑i αi Dvi(x)) I0(x + ∑i αi vi(x)),   (1)
where I is the identity matrix, αi is the weight coefficient of displacement
field vi (i.e. a smooth vector field), and Dvi(x) is the Jacobian matrix of the
displacement field vi, evaluated at x. The proposed method can be viewed
as a linear modeling in the space of mass-preserving mappings, which cor-
responds to a non-linear model in the image space. Thus (through the use of
the optimal mapping function f(x) = x +∑i αivi(x)) our modeling approach
can also displace pixels, in addition to changing their intensities.
Given a training set of high-res face images, I1, ..., IN : Ω → R with
Ω = [0,1]², the image intensities are first normalized to integrate to 1. This
is done so the images can be treated as distributions of a fixed amount of in-
tensity values (i.e. fixed amount of mass). Next, the reference face is defined
to be the average image, I0 = (1/N) ∑_{i=1}^{N} Ii, and the optimal transport distance
between the reference image and the i-th training image, Ii, is defined to be
dOT(I0, Ii) = min_{ui} ∫ |ui(x)|² Ii(x) dx
s.t. det(I + Dui(x)) I0(x + ui(x)) = Ii(x)   (2)
where (f(x) = x + u(x)) : Ω → Ω is a mass preserving transform from Ii to
I0, u is the optimal displacement field, and Dui is the Jacobian matrix of
u. The optimization problem above is well posed and has a unique minimizer [1]. Having the optimal
displacement fields ui for i = 1, ..., N, a subspace V is learned for these displacement fields. Let vj,
for j = 1, ..., M, be a basis for subspace V. Then any combination of the basis displacement fields can
be used to construct an arbitrary deformation field, fα(x) = x + ∑_{j=1}^{M} αj vj(x), which can then
be used to construct a given image Iα(x) = det(Dfα(x)) I0(fα(x)). Hence, subspace V provides a generative model for
the high-res face image.
In the testing phase, we constrain the space of possible high-res solutions to those representable as
Iα for some α ∈ R^M. Hence, for a degraded input image Il, and assuming that the degradation
operator φ(·) is known, following the MAP criterion we can write
α* = argmin_α ‖Il − φ(Iα)‖²
s.t. Iα(x) = det(Dfα(x)) I0(fα(x))   (3)
where a gradient descent approach is used to obtain a local optimum α*. Note that images of faces
(and other deformable objects) differ from each other
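As a sanity check on the mass-preserving construction in (1), the following 1D toy sketch (our own illustration, not from the paper; the bump template and the sinusoidal displacement field are arbitrary choices of ours) verifies numerically that the warped image fα′(x) I0(fα(x)) carries the same total mass as the template, since in 1D the Jacobian determinant det(Dfα) reduces to the derivative fα′:

```python
import numpy as np

# 1D analogue of I_alpha(x) = det(Df_alpha(x)) I0(f_alpha(x)):
# in 1D the Jacobian determinant is just the derivative f'(x).
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

I0 = np.exp(-((x - 0.5) ** 2) / 0.01)   # toy "template face": a Gaussian bump
I0 /= I0.sum() * dx                     # normalize to unit total mass

v = x * (1.0 - x) * np.sin(np.pi * x)   # smooth displacement, zero at both ends
alpha = 0.4
f = x + alpha * v                       # deformation f_alpha(x) = x + alpha v(x)
fprime = np.gradient(f, dx)             # 1D "Jacobian determinant" det(Df_alpha)

I_alpha = fprime * np.interp(f, x, I0)  # mass-preserving warp of the template

mass_in = I0.sum() * dx
mass_out = I_alpha.sum() * dx           # equals mass_in up to discretization error
```

Because fα maps [0,1] onto itself monotonically here (fα′ > 0 for this α), the change-of-variables identity gives ∫ Iα = ∫ I0; the same argument in 2D underlies the constraint in (2).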
('2062432', 'Soheil Kolouri', 'soheil kolouri')
('1818350', 'Gustavo K. Rohde', 'gustavo k. rohde')
34108098e1a378bc15a5824812bdf2229b938678Reconstructive Sparse Code Transfer for
Contour Detection and Semantic Labeling
1TTI Chicago
California Institute of Technology
University of California at Berkeley / ICSI
('1965929', 'Michael Maire', 'michael maire')
('2251428', 'Stella X. Yu', 'stella x. yu')
('1690922', 'Pietro Perona', 'pietro perona')
mmaire@ttic.edu, stellayu@berkeley.edu, perona@caltech.edu
341ed69a6e5d7a89ff897c72c1456f50cfb23c96DAGER: Deep Age, Gender and Emotion
Recognition Using Convolutional Neural
Networks
Computer Vision Lab, Sighthound Inc., Winter Park, FL
('1707795', 'Afshin Dehghan', 'afshin dehghan')
('16131262', 'Enrique G. Ortiz', 'enrique g. ortiz')
('37574860', 'Guang Shu', 'guang shu')
('2234898', 'Syed Zain Masood', 'syed zain masood')
{afshindehghan, egortiz, guangshu, zainmasood}@sighthound.com
348a16b10d140861ece327886b85d96cce95711eFinding Good Features for Object Recognition
by
B.S. (Cornell University)
M.S. (University of California, Berkeley)
A dissertation submitted in partial satisfaction
of the requirements for the degree of
Doctor of Philosophy
in
Computer Science
in the
GRADUATE DIVISION
of the
UNIVERSITY OF CALIFORNIA, BERKELEY
Committee in charge:
Professor Jitendra Malik, Chair
Spring 2005
('3236352', 'Andras David Ferencz', 'andras david ferencz')
('1744452', 'David A. Forsyth', 'david a. forsyth')
('1678771', 'Peter J. Bickel', 'peter j. bickel')
3419af6331e4099504255a38de6f6b7b3b1e5c14Modified Eigenimage Algorithm for Painting
Image Retrieval
Stanford University
('12833413', 'Qun Feng Tan', 'qun feng tan')
34c8de02a5064e27760d33b861b7e47161592e65Video Action Recognition based on Deeper Convolution Networks with
Pair-Wise Frame Motion Concatenation
School of Computer Science, Northwestern Polytechnical University, China
Sensor-enhanced Social Media (SeSaMe) Centre, National University of Singapore, Singapore
School of Information Engineering, Nanchang University, China
('9229148', 'Yamin Han', 'yamin han')
('40188000', 'Peng Zhang', 'peng zhang')
('2628886', 'Tao Zhuo', 'tao zhuo')
('1730584', 'Wei Huang', 'wei huang')
('1801395', 'Yanning Zhang', 'yanning zhang')
340d1a9852747b03061e5358a8d12055136599b0Audio-Visual Recognition System Insusceptible
to Illumination Variation over Internet Protocol
('1968167', 'Yee Wan Wong', 'yee wan wong')
34ccdec6c3f1edeeecae6a8f92e8bdb290ce40fdProceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)
A Virtual Assistant to Help Dysphagia Patients Eat Safely at Home
SRI International, Menlo Park California / *Brooklyn College, Brooklyn New York
('6647218', 'Michael Freed', 'michael freed')
('1936842', 'Brian Burns', 'brian burns')
('39451362', 'Aaron Heller', 'aaron heller')
('3431324', 'Sharon Beaumont-Bowman', 'sharon beaumont-bowman')
{first name, last name}@sri.com, sharonb@brooklyn.cuny.edu
34b42bcf84d79e30e26413f1589a9cf4b37076f9Learning Sparse Representations of High
Dimensional Data on Large Scale Dictionaries
Princeton University
Princeton, NJ 08544, USA
('1730249', 'Zhen James Xiang', 'zhen james xiang')
('1693135', 'Peter J. Ramadge', 'peter j. ramadge')
{zxiang,haoxu,ramadge}@princeton.edu
5a3da29970d0c3c75ef4cb372b336fc8b10381d7CNN-based Real-time Dense Face Reconstruction
with Inverse-rendered Photo-realistic Face Images
('8280113', 'Yudong Guo', 'yudong guo')
('2938279', 'Juyong Zhang', 'juyong zhang')
('1688642', 'Jianfei Cai', 'jianfei cai')
('15679675', 'Boyi Jiang', 'boyi jiang')
('48510441', 'Jianmin Zheng', 'jianmin zheng')
5a93f9084e59cb9730a498ff602a8c8703e5d8a5HUSSAIN ET. AL: FACE RECOGNITION USING LOCAL QUANTIZED PATTERNS
Face Recognition using Local Quantized
Patterns
Fréderic Jurie
GREYC — CNRS UMR 6072,
University of Caen Basse-Normandie
Caen, France
('2695106', 'Sibt ul Hussain', 'sibt ul hussain')
('3423479', 'Thibault Napoléon', 'thibault napoléon')
Sibt.ul.Hussain@gmail.com
Thibault.Napoleon@unicaen.fr
Frederic.Jurie@unicaen.fr
5a87bc1eae2ec715a67db4603be3d1bb8e53ace2A Novel Convergence Scheme for Active Appearance Models
School of Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, GA 30332
('38410822', 'Aziz Umit Batur', 'aziz umit batur')
('2583044', 'Monson H. Hayes', 'monson h. hayes')
{batur, mhh3}@ece.gatech.edu
5aad56cfa2bac5d6635df4184047e809f8fecca2A VISUAL DICTIONARY ATTACK ON PICTURE PASSWORDS
Cornell University
('1803066', 'Amir Sadovnik', 'amir sadovnik')
('1746230', 'Tsuhan Chen', 'tsuhan chen')
5a8ca0cfad32f04449099e2e3f3e3a1c8f6541c0Available online at www.sciencedirect.com
ScienceDirect
Procedia Computer Science 87 (2016) 300–305
4th International Conference on Recent Trends in Computer Science &Engineering
Automatic Frontal Face Reconstruction Approach for Pose Invariant Face
Recognition
Kavitha.J, Mirnalinee.T.T
Research Scholar, Anna University, Chennai, India
SSN College of Engineering, Kalavakkam, Tamil Nadu, India
5ac80e0b94200ee3ecd58a618fe6afd077be0a00Unifying Geometric Features and Facial Action Units for Improved
Performance of Facial Expression Analysis
Kent State University
Keywords:
Facial Action Unit, Facial Expression, Geometric features.
('1688430', 'Mehdi Ghayoumi', 'mehdi ghayoumi'){mghayoum,akbansal}@kent.edu
5aadd85e2a77e482d44ac2a215c1f21e4a30d91bFace Recognition using Principle Components and Linear
Discriminant Analysis
HATIM A.
ABOALSAMH 1,2
HASSAN I.
MATHKOUR 1,2
GHAZY M.R.
ASSASSA 1,2
MONA F.M.
MURSI 1,3
1 Center of Excellence in Information Assurance (CoEIA),
2 Department of Computer Science
3 Department of Information Technology
College of Computer and Information Sciences
King Saud University, Riyadh
SAUDI ARABIA
hatim@ksu.edu.sa
mathkour@ksu.edu.sa
gassassa@coeia.edu.sa
monmursi@coeia.edu.sa
5a34a9bb264a2594c02b5f46b038aa1ec3389072Label-Embedding for Image Classification ('2893664', 'Zeynep Akata', 'zeynep akata')
('1723883', 'Florent Perronnin', 'florent perronnin')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
5a5f9e0ed220ce51b80cd7b7ede22e473a62062cVideos as Space-Time Region Graphs
Robotics Institute, Carnegie Mellon University
Figure 1. How do you recognize simple actions such as opening a book? We argue action
understanding requires not only appearance modeling but also capturing temporal dynamics
(how shape of book changes) and functional relationships. We propose to represent
videos as space-time region graphs followed by graph convolutions for inference.
('39849136', 'Xiaolong Wang', 'xiaolong wang')
('1737809', 'Abhinav Gupta', 'abhinav gupta')
5ac946fc6543a445dd1ee6d5d35afd3783a31353FEATURELESS: BYPASSING FEATURE EXTRACTION IN ACTION CATEGORIZATION
S. L. Pinteaa, P. S. Mettesa
J. C. van Gemerta,b, A. W. M. Smeuldersa
aIntelligent Sensory Information Systems,
University of Amsterdam
Amsterdam, Netherlands
5a4c6246758c522f68e75491eb65eafda375b701
ICASSP 2010
5aad5e7390211267f3511ffa75c69febe3b84cc7Driver Gaze Estimation
Without Using Eye Movement
MIT AgeLab
('2145054', 'Lex Fridman', 'lex fridman')
('2180983', 'Philipp Langhans', 'philipp langhans')
('7137846', 'Joonbum Lee', 'joonbum lee')
('1901227', 'Bryan Reimer', 'bryan reimer')
fridman@mit.edu, philippl@mit.edu, joonbum@mit.edu, reimer@mit.edu
5a029a0b0ae8ae7fc9043f0711b7c0d442bfd372
5ae970294aaba5e0225122552c019eb56f20af74International Journal of Computer and Electrical Engineering
Establishing Dense Correspondence of High Resolution 3D
Faces via Möbius Transformations
College of Electronic Science and Engineering, National University of Defense Technology, Changsha, China
Manuscript submitted July 14, 2014; accepted November 2, 2014.
doi: 10. 17706/ijcee.2014.v6.866
('30373915', 'Jian Liu', 'jian liu')
('37509862', 'Quan Zhang', 'quan zhang')
('3224964', 'Chaojing Tang', 'chaojing tang')
* Corresponding author. Email: cjtang@263.net
5a86842ab586de9d62d5badb2ad8f4f01eada885International Journal of Engineering Research and General Science Volume 3, Issue 3, May-June, 2015
ISSN 2091-2730
Facial Emotion Recognition and Classification Using Hybridization
Method
Chandigarh Engg. College, Mohali, Punjab, India
('6010530', 'Anchal Garg', 'anchal garg')
('9744572', 'Rohit Bajaj', 'rohit bajaj')
anchalgarg949@gmail.com, 07696449500
5a4ec5c79f3699ba037a5f06d8ad309fb4ee682cDownloaded From: https://www.spiedigitallibrary.org/journals/Journal-of-Electronic-Imaging on 12/17/2017 Terms of Use: https://www.spiedigitallibrary.org/terms-of-use
Automatic age and gender classification using supervised appearance model. Ali Maina Bukar, Hassan Ugail, David Connah. J. Electron. Imaging 25(6), 061605 (2016), doi:10.1117/1.JEI.25.6.061605.
5aa57a12444dbde0f5645bd9bcec8cb2f573c6a0The International Arab Journal of Information Technology, Vol. 11, No. 2, March 2014
Face Recognition using Adaptive Margin Fisher’s
Criterion and Linear Discriminant Analysis

(AMFC-LDA)
COMSATS Institute of Information Technology, Pakistan
('2151799', 'Marryam Murtaza', 'marryam murtaza')
('33088042', 'Muhammad Sharif', 'muhammad sharif')
('36739230', 'Mudassar Raza', 'mudassar raza')
('1814986', 'Jamal Hussain Shah', 'jamal hussain shah')
5aed0f26549c6e64c5199048c4fd5fdb3c5e69d6International Journal of Computer Applications® (IJCA) (0975 – 8887)
International Conference on Knowledge Collaboration in Engineering, ICKCE-2014
Human Expression Recognition using Facial Features
G.Saranya
Post graduate student, Dept. of ECE
Parisutham Institute of Technology and Science
Thanjavur.
Affiliated to Anna university, Chennai
5a7520380d9960ff3b4f5f0fe526a00f63791e99The Indian Spontaneous Expression
Database for Emotion Recognition
('38657440', 'Priyadarshi Patnaik', 'priyadarshi patnaik')
('2680543', 'Aurobinda Routray', 'aurobinda routray')
('2730256', 'Rajlakshmi Guha', 'rajlakshmi guha')
5a07945293c6b032e465d64f2ec076b82e113fa6Pulling Actions out of Context: Explicit Separation for Effective Combination
Stony Brook University, Stony Brook, NY 11794, USA
('50874742', 'Yang Wang', 'yang wang'){wang33, minhhoai}@cs.stonybrook.edu
5fff61302adc65d554d5db3722b8a604e62a8377Additive Margin Softmax for Face Verification
UESTC
Georgia Tech
UESTC
UESTC
('47939378', 'Feng Wang', 'feng wang')
('51094998', 'Weiyang Liu', 'weiyang liu')
('8424682', 'Haijun Liu', 'haijun liu')
('1709439', 'Jian Cheng', 'jian cheng')
feng.wff@gmail.com
wyliu@gatech.edu
haijun liu@126.com
chengjian@uestc.edu.cn
5f771fed91c8e4b666489ba2384d0705bcf75030Understanding Humans in Crowded Scenes: Deep Nested Adversarial Learning
and A New Benchmark for Multi-Human Parsing
National University of Singapore
National University of Defense Technology
Qihoo 360 AI Institute
('46509484', 'Jian Zhao', 'jian zhao')
('2757639', 'Jianshu Li', 'jianshu li')
('48207454', 'Li Zhou', 'li zhou')
('1715286', 'Terence Sim', 'terence sim')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('33221685', 'Jiashi Feng', 'jiashi feng')
chengyu996@gmail.com zhouli2025@gmail.com
{eleyans, elefjia}@nus.edu.sg
{zhaojian90, jianshu}@u.nus.edu
tsim@comp.nus.edu.sg
5fa04523ff13a82b8b6612250a39e1edb5066521Dockerface: an Easy to Install and Use Faster R-CNN Face Detector in a Docker
Container
Center for Behavioral Imaging
College of Computing
Georgia Institute of Technology
('31601235', 'Nataniel Ruiz', 'nataniel ruiz')
('1692956', 'James M. Rehg', 'james m. rehg')
nataniel.ruiz@gatech.edu
rehg@gatech.edu
5fa6e4a23da0b39e4b35ac73a15d55cee8608736IJCV special issue (Best papers of ECCV 2016) manuscript No.
(will be inserted by the editor)
RED-Net:
A Recurrent Encoder-Decoder Network for Video-based Face Alignment
Submitted: April 19 2017 / Revised: December 12 2017
('4340744', 'Xi Peng', 'xi peng')
5f871838710a6b408cf647aacb3b198983719c31
Locally Linear Regression for Pose-Invariant
Face Recognition
('1695600', 'Xiujuan Chai', 'xiujuan chai')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
('1698902', 'Wen Gao', 'wen gao')
5f64a2a9b6b3d410dd60dc2af4a58a428c5d85f9
5f344a4ef7edfd87c5c4bc531833774c3ed23542c Copyright by Ira Cohen, 2003
5f6ab4543cc38f23d0339e3037a952df7bcf696bVideo2Vec: Learning Semantic Spatial-Temporal
Embeddings for Video Representation
School of Computer Engineering
School of Electrical Engineering
School of Computer Science
Arizona State University
Tempe, Arizona 85281
Arizona State University
Tempe, Arizona 85281
Arizona State University
Tempe, Arizona 85281
('8060096', 'Sheng-hung Hu', 'sheng-hung hu')
('2180892', 'Yikang Li', 'yikang li')
('2913552', 'Baoxin Li', 'baoxin li')
Email:shenghun@asu.edu
Email:yikangli@asu.edu
Email:Baoxin.Li@asu.edu
5f7c4c20ae2731bfb650a96b69fd065bf0bb950eTurk J Elec Eng & Comp Sci
(2016) 24: 1797 – 1814
© TÜBİTAK
doi:10.3906/elk-1310-253
A new fuzzy membership assignment and model selection approach based on
dynamic class centers for fuzzy SVM family using the firefly algorithm
Young Researchers and Elite Club, Mashhad Branch, Islamic Azad University, Mashhad, Iran
Faculty of Engineering, Ferdowsi University, Mashhad, Iran
Received: 01.11.2013 • Accepted/Published Online: 30.06.2014 • Final Version: 23.03.2016
('9437627', 'Omid Naghash Almasi', 'omid naghash almasi')
('4945660', 'Modjtaba Rouhani', 'modjtaba rouhani')
5f94969b9491db552ffebc5911a45def99026afeMultimodal Learning and Reasoning for Visual
Question Answering
Integrative Sciences and Engineering
National University of Singapore
Electrical and Computer Engineering
National University of Singapore
('3393294', 'Ilija Ilievski', 'ilija ilievski')
('33221685', 'Jiashi Feng', 'jiashi feng')
ilija.ilievski@u.nus.edu
elefjia@nus.edu.sg
5f758a29dae102511576c0a5c6beda264060a401Fine-grained Video Attractiveness Prediction Using Multimodal
Deep Learning on a Large Real-world Dataset
Wuhan University, Tencent AI Lab, National University of Singapore, University of Rochester
('3179887', 'Xinpeng Chen', 'xinpeng chen')
('47740660', 'Jingyuan Chen', 'jingyuan chen')
('34264361', 'Lin Ma', 'lin ma')
('1849993', 'Jian Yao', 'jian yao')
('46641573', 'Wei Liu', 'wei liu')
('33642939', 'Jiebo Luo', 'jiebo luo')
('38144094', 'Tong Zhang', 'tong zhang')
5fa0e6da81acece7026ac1bc6dcdbd8b204a5f0a
5feb1341a49dd7a597f4195004fe9b59f67e6707A Deep Ranking Model for Spatio-Temporal Highlight Detection
from a 360◦ Video
Seoul National University
('7877122', 'Youngjae Yu', 'youngjae yu')
('1693291', 'Sangho Lee', 'sangho lee')
('35272603', 'Joonil Na', 'joonil na')
('35365676', 'Jaeyun Kang', 'jaeyun kang')
('1743920', 'Gunhee Kim', 'gunhee kim')
{yj.yu, sangho.lee, joonil}@vision.snu.ac.kr, {kjy13411}@gmail.com, gunhee@snu.ac.kr
5f0d4a0b5f72d8700cdf8cb179263a8fa866b59bCBMM Memo No. 85
06/2018
Deep Regression Forests for Age Estimation
Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Shanghai University
Johns Hopkins University
College of Computer and Control Engineering, Nankai University 4: Hikvision Research
('41187410', 'Wei Shen', 'wei shen')
('9544564', 'Yilu Guo', 'yilu guo')
('46394340', 'Yan Wang', 'yan wang')
('1681247', 'Kai Zhao', 'kai zhao')
('46172451', 'Bo Wang', 'bo wang')
('35922327', 'Alan Yuille', 'alan yuille')
5f57a1a3a1e5364792b35e8f5f259f92ad561c1fImplicit Sparse Code Hashing
Institute of Information Science
Academia Sinica, Taiwan
('2144284', 'Tsung-Yu Lin', 'tsung-yu lin')
('2301765', 'Tsung-Wei Ke', 'tsung-wei ke')
('1805102', 'Tyng-Luh Liu', 'tyng-luh liu')
5f27ed82c52339124aa368507d66b71d96862cb7Semi-supervised Learning of Classifiers: Theory, Algorithms
and Their Application to Human-Computer Interaction
This work has been partially funded by NSF Grant IIS 00-85980.
DRAFT
('1774778', 'Ira Cohen', 'ira cohen')
('1703601', 'Nicu Sebe', 'nicu sebe')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
Ira Cohen: Hewlett-Packard Labs, Palo Alto, CA, USA, ira.cohen@hp.com
Fabio G. Cozman and Marcelo C. Cirelo: Escola Politécnica, Universidade de São Paulo, São Paulo, Brazil. fgcozman@usp.br,
marcelo.cirelo@poli.usp.br
Nicu Sebe: Faculty of Science, University of Amsterdam, The Netherlands. nicu@science.uva.nl
Thomas S. Huang: Beckman Institute, University of Illinois at Urbana-Champaign, USA. huang@ifp.uiuc.edu
5fa932be4d30cad13ea3f3e863572372b915bec8
5fea26746f3140b12317fcf3bc1680f2746e172eSemantic Jitter:
Dense Supervision for Visual Comparisons via Synthetic Images
University of Texas at Austin
University of Texas at Austin
Distinguishing subtle differences in attributes is valuable, yet
learning to make visual comparisons remains non-trivial. Not
only is the number of possible comparisons quadratic in the
number of training images, but also access to images adequately
spanning the space of fine-grained visual differences is limited.
We propose to overcome the sparsity of supervision problem
via synthetically generated images. Building on a state-of-the-
art image generation engine, we sample pairs of training images
exhibiting slight modifications of individual attributes. Augment-
ing real training image pairs with these examples, we then train
attribute ranking models to predict the relative strength of an
attribute in novel pairs of real images. Our results on datasets of
faces and fashion images show the great promise of bootstrapping
imperfect image generators to counteract sample sparsity for
learning to rank.
INTRODUCTION
Fine-grained analysis of images often entails making visual
comparisons. For example, given two products in a fashion
catalog, a shopper may judge which shoe appears more pointy
at the toe. Given two selfies, a teen may gauge in which one he
is smiling more. Given two photos of houses for sale on a real
estate website, a home buyer may analyze which facade looks
better maintained. Given a series of MRI scans, a radiologist
may judge which pair exhibits the most shape changes.
In these and many other such cases, we are interested in
inferring how a pair of images compares in terms of a par-
ticular property, or “attribute”. That is, which is more pointy,
smiling, well-maintained, etc. Importantly, the distinctions of
interest are often quite subtle. Subtle comparisons arise both
in image pairs that are very similar in almost every regard
(e.g., two photos of the same individual wearing the same
clothing, yet smiling more in one photo than the other), as
well as image pairs that are holistically different yet exhibit
only slight differences in the attribute in question (e.g., two
individuals different in appearance, and one is smiling slightly
more than the other).
A growing body of work explores computational models
for visual comparisons [1], [2], [3], [4], [5], [6], [7], [8], [9],
[10], [11], [12]. In particular, ranking models for “relative
attributes” [2], [3], [4], [5], [9], [11] use human-ordered pairs
of images to train a system to predict the relative ordering in
novel image pairs.
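Ranking models of this kind are typically trained with a pairwise objective: each human-ordered pair contributes a loss that penalizes scoring the "less" image above the "more" image. The following is an illustrative margin-based ranking perceptron on synthetic features, not the specific learners of the cited relative-attribute methods; the feature dimension, learning rate, and data are invented for the example.

```python
import numpy as np

# Illustrative margin-based ranking perceptron for relative attributes.
# All data here is synthetic; w_true plays the role of the ground-truth
# attribute direction that human annotators implicitly encode in ordered pairs.
rng = np.random.default_rng(0)
dim = 5
w_true = rng.normal(size=dim)

# Ordered pairs (x_more, x_less): the first image shows the attribute more.
pairs = []
for _ in range(200):
    a, b = rng.normal(size=dim), rng.normal(size=dim)
    pairs.append((a, b) if w_true @ a > w_true @ b else (b, a))

# Train a linear ranker w so that w @ x_more exceeds w @ x_less by a margin.
w = np.zeros(dim)
lr, margin = 0.1, 1.0
for _ in range(50):
    for x_more, x_less in pairs:
        if w @ x_more - w @ x_less < margin:   # margin violated
            w += lr * (x_more - x_less)        # subgradient step on hinge loss

# Count remaining misranked training pairs.
violations = sum(w @ a <= w @ b for a, b in pairs)
```

At test time, the learned `w` orders a novel pair by comparing the two scores, which is exactly the prediction task described above.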
A major challenge in training a ranking model is the sparsity
of supervision. That sparsity stems from two factors: label
availability and image availability. Because training instances
consist of pairs of images—together with the ground truth
human judgment about which exhibits the property more
Fig. 1: Our method “densifies” supervision for training ranking functions to
make visual comparisons, by generating ordered pairs of synthetic images.
Here, when learning the attribute smiling, real training images need not be
representative of the entire attribute space (e.g., Web photos may cluster
around commonly photographed expressions, like toothy smiles). Our idea
“fills in” the sparsely sampled regions to enable fine-grained supervision.
Given a novel pair (top), the nearest synthetic pairs (right) may present better
training data than the nearest real pairs (left).
or less—the space of all possible comparisons is quadratic
in the number of potential
training images. This quickly
makes it intractable to label an image collection exhaustively
for its comparative properties. At the same time, attribute
comparisons entail a greater cognitive load than, for example,
object category labeling. Indeed, the largest existing relative
attribute datasets sample less than 0.1% of all image pairs
for ground truth labels [11], and there is a major size gap
between standard datasets labeled for classification (now in
the millions [13]) and those for comparisons (at best in the
thousands [11]). A popular shortcut is to propagate category-
level comparisons down to image instances [4], [14]—e.g.,
deem all ocean scenes as “more open” than all forest scenes—
but this introduces substantial label noise and in practice
underperforms training with instance-level comparisons [2].
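The quadratic growth in comparisons can be made concrete: n images admit n(n-1)/2 unordered pairs, so exhaustive pairwise labeling quickly becomes infeasible. The helper below is purely illustrative:

```python
def num_pairs(n: int) -> int:
    """Unordered image pairs among n images: n choose 2."""
    return n * (n - 1) // 2

# Even a modest collection of 1,000 images yields ~half a million
# comparisons to label; 10,000 images yield ~50 million.
counts = {n: num_pairs(n) for n in (100, 1_000, 10_000)}
```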
Perhaps more insidious than the annotation cost, however,
is the problem of even curating training images that suf-
ficiently illustrate fine-grained differences. Critically, sparse
supervision arises not simply because 1) we lack resources
to get enough image pairs labeled, but also because 2) we
lack a direct way to curate photos demonstrating all sorts
of subtle attribute changes. For example, how might we
gather unlabeled image pairs depicting all subtle differences
('2206630', 'Aron Yu', 'aron yu')
('1794409', 'Kristen Grauman', 'kristen grauman')
aron.yu@utexas.edu
grauman@cs.utexas.edu
5f5906168235613c81ad2129e2431a0e5ef2b6e4Noname manuscript No.
(will be inserted by the editor)
A Unified Framework for Compositional Fitting of
Active Appearance Models
Received: date / Accepted: date
('2575567', 'Joan Alabort-i-Medina', 'joan alabort-i-medina')
5fb5d9389e2a2a4302c81bcfc068a4c8d4efe70cMultiple Facial Attributes Estimation based on
Weighted Heterogeneous Learning
H. Fukui*, T. Yamashita*, Y. Kato*, R. Matsui*
Chubu University
**Abeja Inc.
1200, Matuoto-cho, Kasugai,
4-1-20, Toranomon, Minato-ku,
Aichi, Japan
Tokyo, Japan
('2531207', 'T. Ogata', 't. ogata')
5f676d6eca4c72d1a3f3acf5a4081c29140650fbTo Skip or not to Skip? A Dataset of Spontaneous Affective Response
of Online Advertising (SARA) for Audience Behavior Analysis
College of Electronics and Information Engineering, Sichuan University, Chengdu 610064, China
BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
3 HP Labs, Palo Alto, CA 94304, USA
Center for Research in Intelligent Systems, University of California, Riverside, CA 92521, USA
('1803478', 'Songfan Yang', 'songfan yang')
('39776603', 'Le An', 'le an')
('1784929', 'Mehran Kafai', 'mehran kafai')
('1707159', 'Bir Bhanu', 'bir bhanu')
syang@scu.edu.cn, lan004@unc.edu, mehran.kafai@hp.com, bhanu@cris.ucr.edu
5f453a35d312debfc993d687fd0b7c36c1704b16Clemson University
TigerPrints
All Theses
12-2015
Theses
A Training Assistant Tool for the Automated Visual
Inspection System
Follow this and additional works at: http://tigerprints.clemson.edu/all_theses
Part of the Electrical and Computer Engineering Commons
Recommended Citation
Ramaraj, Mohan Karthik, "A Training Assistant Tool for the Automated Visual Inspection System" (2015). All Theses. Paper 2285.
This Thesis is brought to you for free and open access by the Theses at TigerPrints. It has been accepted for inclusion in All Theses by an authorized administrator of TigerPrints. For more information, please contact awesole@clemson.edu.
('4154752', 'Mohan Karthik Ramaraj', 'mohan karthik ramaraj')Clemson University, rmohankarthik91@gmail.com
5fc664202208aaf01c9b62da5dfdcd71fdadab29arXiv:1504.05308v1 [cs.CV] 21 Apr 2015
5fac62a3de11125fc363877ba347122529b5aa50AMTnet: Action-Micro-Tube Regression by
End-to-end Trainable Deep Architecture
Oxford Brookes University, Oxford, United Kingdom
('3017538', 'Suman Saha', 'suman saha')
('1754181', 'Fabio Cuzzolin', 'fabio cuzzolin')
('1931660', 'Gurkirt Singh', 'gurkirt singh')
{suman.saha-2014, gurkirt.singh-2015, fabio.cuzzolin}@brookes.ac.uk
5fa1724a79a9f7090c54925f6ac52f1697d6b570Proceedings of the Workshop on Grammar and Lexicon: Interactions and Interfaces,
pages 41–47, Osaka, Japan, December 11 2016.
41
5fba1b179ac80fee80548a0795d3f72b1b6e49cdVirtual U: Defeating Face Liveness Detection by Building Virtual Models
From Your Public Photos
University of North Carolina at Chapel Hill
('1734114', 'Yi Xu', 'yi xu')
('39310157', 'True Price', 'true price')
('40454588', 'Jan-Michael Frahm', 'jan-michael frahm')
('1792232', 'Fabian Monrose', 'fabian monrose')
{yix, jtprice, jmf, fabian}@cs.unc.edu
33f7e78950455c37236b31a6318194cfb2c302a4Parameterizing Object Detectors
in the Continuous Pose Space
Boston University, USA
2 Disney Research Pittsburgh, USA
('1702188', 'Kun He', 'kun he')
('14517812', 'Leonid Sigal', 'leonid sigal')
{hekun,sclaroff}@cs.bu.edu, lsigal@disneyresearch.com
33548531f9ed2ce6f87b3a1caad122c97f1fd2e9International Journal of Computer Applications (0975 – 8887)
Volume 104 – No.2, October 2014
Facial Expression Recognition in Video using
Adaboost and SVM
Surabhi Prabhakar
Department of CSE
Amity University
Noida, India
Jaya Sharma
Shilpi Gupta
Department of CSE
Department of CSE
Amity University
Noida, India
Amity University
Noida, India
33ac7fd3a622da23308f21b0c4986ae8a86ecd2bBuilding an On-Demand Avatar-Based Health Intervention for Behavior Change
School of Computing and Information Sciences
Florida International University
Miami, FL, 33199, USA
Department of Computer Science
University of Miami
Coral Gables, FL, 33146, USA
('2671668', 'Ugan Yasavur', 'ugan yasavur')
('2782570', 'Claudia de Leon', 'claudia de leon')
('1809087', 'Reza Amini', 'reza amini')
('1765935', 'Ubbo Visser', 'ubbo visser')
33030c23f6e25e30b140615bb190d5e1632c3d3bToward a General Framework for Words and
Pictures
Stony Brook University
Stony Brook University
Hal Daumé III
University of Maryland
Jesse Dodge
University of Washington
University of Maryland
Stony Brook University
Alyssa Mensch
M.I.T.
University of Aberdeen
Karl Stratos
Columbia University
Stony Brook University
('39668247', 'Alexander C. Berg', 'alexander c. berg')
('1685538', 'Tamara L. Berg', 'tamara l. berg')
('2694557', 'Amit Goyal', 'amit goyal')
('1682965', 'Xufeng Han', 'xufeng han')
('38390487', 'Margaret Mitchell', 'margaret mitchell')
('1721910', 'Kota Yamaguchi', 'kota yamaguchi')
33ba256d59aefe27735a30b51caf0554e5e3a1dfEarly Active Learning via Robust
Representation and Structured Sparsity
†Department of Computer Science and Engineering
University of Texas at Arlington, Arlington, Texas 76019, USA
‡Department of Electrical Engineering and Computer Science
Colorado School of Mines, Golden, Colorado 80401, USA
('1688370', 'Feiping Nie', 'feiping nie')
('1683402', 'Hua Wang', 'hua wang')
('1748032', 'Heng Huang', 'heng huang')
feipingnie@gmail.com, huawangcs@gmail.com, heng@uta.edu, chqding@uta.edu
33c3702b0eee6fc26fc49f79f9133f3dd7fa3f13Imperial College London
Department of Computing
Machine Learning Techniques
for Automated Analysis of Facial
Expressions
December, 2013
Supervised by Prof. Maja Pantic
Submitted in part fulfilment of the requirements for the degree of PhD in Computing and
the Diploma of Imperial College London. This thesis is entirely my own work, and, except
where otherwise indicated, describes my own research.
('1729713', 'Ognjen Rudovic', 'ognjen rudovic')
33aff42530c2fd134553d397bf572c048db12c28From Emotions to Action Units with Hidden and Semi-Hidden-Task Learning
Universitat Pompeu Fabra
Centre de Visio per Computador
Universitat Pompeu Fabra
Barcelona
Barcelona
Barcelona
('40097226', 'Adria Ruiz', 'adria ruiz')
('2820687', 'Joost van de Weijer', 'joost van de weijer')
('1692494', 'Xavier Binefa', 'xavier binefa')
adria.ruiz@upf.es
joost@cvc.uab.es
xavier.binefa@upf.es
33a1a049d15e22befc7ddefdd3ae719ced8394bfFULL PAPER
International Journal of Recent Trends in Engineering, Vol 2, No. 1, November 2009
An Efficient Approach to Facial Feature Detection
for Expression Recognition
S.P. Khandait1, P.D. Khandait2 and Dr.R.C.Thool2
1Deptt. of Info.Tech., K.D.K.C.E., Nagpur, India
2Deptt.of Electronics Engg., K.D.K.C.E., Nagpur, India, 2Deptt. of Info.Tech., SGGSIET, Nanded
Prapti_khandait@yahoo.co.in
prabhakark_117@yahoo.co.in , rcthool@yahoo.com,
334e65b31ad51b1c1f84ce12ef235096395f1ca7Emotion in Human-Computer Interaction
Emotion in Human-Computer Interaction
Brave, S. & Nass, C. (2002). Emotion in human-computer interaction. In J. Jacko & A.
Sears (Eds.), Handbook of human-computer interaction (pp. 251-271). Hillsdale, NJ:
Lawrence Erlbaum Associates.
Scott Brave and Clifford Nass
Department of Communication
Stanford University
Stanford, CA 94305-2050
Phone: 650-428-1805,650-723-5499
Fax: 650-725-2472
brave,nass@stanford.edu
3328413ee9944de1cc7c9c1d1bf2fece79718ba1Co-Training of Audio and Video Representations
from Self-Supervised Temporal Synchronization
Dartmouth College
Facebook Research
Dartmouth College
('3443095', 'Bruno Korbar', 'bruno korbar')
('1687325', 'Du Tran', 'du tran')
('1732879', 'Lorenzo Torresani', 'lorenzo torresani')
bruno.18@dartmouth.edu
trandu@fb.com
LT@dartmouth.edu
3399f8f0dff8fcf001b711174d29c9d4fde89379Face R-CNN
Tencent AI Lab, China
('39049654', 'Hao Wang', 'hao wang'){hawelwang,michaelzfli,denisji,yitongwang}@tencent.com
333aa36e80f1a7fa29cf069d81d4d2e12679bc67Suggesting Sounds for Images
from Video Collections
1Computer Science Department, ETH Z¨urich, Switzerland
2Disney Research, Switzerland
('39231399', 'Oliver Wang', 'oliver wang')
('1734448', 'Andreas Krause', 'andreas krause')
('2893744', 'Alexander Sorkine-Hornung', 'alexander sorkine-hornung')
{msoler,krausea}@ethz.ch
{jean-charles.bazin,owang,alex}@disneyresearch.com
3312eb79e025b885afe986be8189446ba356a507This is a post-print of the original paper published in ECCV 2016 (SpringerLink).
MOON : A Mixed Objective Optimization
Network for the Recognition of Facial Attributes
Vision and Security Technology (VAST) Lab,
University of Colorado at Colorado Springs
('39886114', 'Ethan M. Rudd', 'ethan m. rudd')
('1760117', 'Terrance E. Boult', 'terrance e. boult')
{erudd,mgunther,tboult}@vast.uccs.edu
33792bb27ef392973e951ca5a5a3be4a22a0d0c6Two-dimensional Whitening Reconstruction for
Enhancing Robustness of Principal Component
Analysis
('2766473', 'Xiaoshuang Shi', 'xiaoshuang shi')
('1759643', 'Zhenhua Guo', 'zhenhua guo')
('1688370', 'Feiping Nie', 'feiping nie')
('1705066', 'Lin Yang', 'lin yang')
('1748883', 'Jane You', 'jane you')
('1692693', 'Dacheng Tao', 'dacheng tao')
3328674d71a18ed649e828963a0edb54348ee598IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 34, NO. 6, DECEMBER 2004
2405
A Face and Palmprint Recognition Approach Based
on Discriminant DCT Feature Extraction
('15132338', 'Xiao-Yuan Jing', 'xiao-yuan jing')
('1698371', 'David Zhang', 'david zhang')
339937141ffb547af8e746718fbf2365cc1570c8Facial Emotion Recognition in Real Time ('1849233', 'Dan Duncan', 'dan duncan')
('3133285', 'Gautam Shine', 'gautam shine')
('3158339', 'Chris English', 'chris english')
duncand@stanford.edu
gshine@stanford.edu
chriseng@stanford.edu
33402ee078a61c7d019b1543bb11cc127c2462d2Self-Supervised Video Representation Learning With Odd-One-Out Networks
ACRV, The Australian National University University of Oxford QUVA Lab, University of Amsterdam
('1688071', 'Basura Fernando', 'basura fernando')
33aa980544a9d627f305540059828597354b076c
33ae696546eed070717192d393f75a1583cd8e2c
33f2b44742cc828347ccc5ec488200c25838b664Pooling the Convolutional Layers in Deep ConvNets for Action Recognition
School of Computer Science and Technology, Tianjin University, China
School of Computer and Information, Hefei University of Technology, China
('2905510', 'Shichao Zhao', 'shichao zhao')
('1732242', 'Yanbin Liu', 'yanbin liu')
('2302512', 'Yahong Han', 'yahong han')
('2248826', 'Richang Hong', 'richang hong')
{zhaoshichao, csyanbin, yahong}@tju.edu.cn, hongrc.hfut@gmail.com
3393459600368be2c4c9878a3f65a57dcc0c2cfaEigen-PEP for Video Face Recognition
Stevens Institute of Technology Adobe Systems Inc
('3131569', 'Haoxiang Li', 'haoxiang li')
('1745420', 'Gang Hua', 'gang hua')
('1720987', 'Xiaohui Shen', 'xiaohui shen')
('1721019', 'Jonathan Brandt', 'jonathan brandt')
3352426a67eabe3516812cb66a77aeb8b4df4d1bJOURNAL OF LATEX CLASS FILES, VOL. 4, NO. 5, APRIL 2015
Joint Multi-view Face Alignment in the Wild
('3234063', 'Jiankang Deng', 'jiankang deng')
('2814229', 'George Trigeorgis', 'george trigeorgis')
('47943220', 'Yuxiang Zhou', 'yuxiang zhou')
334d6c71b6bce8dfbd376c4203004bd4464c2099BICONVEX RELAXATION FOR SEMIDEFINITE PROGRAMMING IN
COMPUTER VISION
('36861219', 'Sohil Shah', 'sohil shah')
('1746575', 'Christoph Studer', 'christoph studer')
('1962083', 'Tom Goldstein', 'tom goldstein')
33695e0779e67c7722449e9a3e2e55fde64cfd99Riemannian Coding and Dictionary Learning: Kernels to the Rescue
Australian National University and NICTA
While sparse coding on non-flat Riemannian manifolds has recently become
increasingly popular, existing solutions either are dedicated to specific man-
ifolds, or rely on optimization problems that are difficult to solve, especially
when it comes to dictionary learning. In this paper, we propose to make use
of kernels to perform coding and dictionary learning on Riemannian man-
ifolds. To this end, we introduce a general Riemannian coding framework
with its kernel-based counterpart. This lets us (i) generalize beyond the spe-
cial case of sparse coding; (ii) introduce efficient solutions to two coding
schemes; (iii) learn the kernel parameters; (iv) perform unsupervised and
supervised dictionary learning in a much simpler manner than previous Rie-
mannian coding approaches.
More specifically, let $D = \{d_i\}_{i=1}^{N}$, $d_i \in \mathcal{M}$, be a dictionary on a Riemannian manifold $\mathcal{M}$, and $x \in \mathcal{M}$ be a query point on the manifold. We define a general Riemannian coding formulation as
$$\min_{\alpha} \; \delta^2\Big(x, \biguplus_{j=1}^{N} \alpha_j d_j\Big) + \lambda\, \gamma(\alpha; x, D) \quad \text{s.t.} \; \alpha \in \mathcal{C}, \qquad (1)$$
where $\delta : \mathcal{M} \times \mathcal{M} \to \mathbb{R}^{+}$ is a metric on $\mathcal{M}$, $\alpha \in \mathbb{R}^{N}$ is the vector of Riemannian codes, $\gamma$ is a prior on the codes $\alpha$, and $\mathcal{C}$ is a set of constraints on $\alpha$. Moreover, $\biguplus : \mathcal{M} \times \cdots \times \mathcal{M} \times \mathbb{R} \times \cdots \times \mathbb{R} \to \mathcal{M}$ is an operator that combines multiple dictionary atoms $\{d_j \in \mathcal{M}\}$ with weights $\{\alpha_j\}$ and generates a point $\hat{x}$ on $\mathcal{M}$. This general formulation encapsulates intrinsic sparse coding [2, 5], but also lets us derive an intrinsic version of Locality-constrained Linear Coding [10]. Such intrinsic formulations, however, depend on the logarithm map, which may be highly nonlinear, or not even have an analytic solution.
To overcome these weaknesses and obtain a general formulation of Rie-
mannian coding, we propose to perform coding in RKHS. This has the
twofold advantage of yielding simple solutions to several popular coding
techniques and of resulting in a potentially better representation than stan-
dard coding techniques due to the nonlinearity of the approach. To this
end, let φ : M → H be a mapping to an RKHS induced by the kernel
$k(x,y) = \phi(x)^{T} \phi(y)$. Coding in $\mathcal{H}$ can then be formulated as
$$\min_{\alpha} \; \Big\|\phi(x) - \sum_{j=1}^{N} \alpha_j\, \phi(d_j)\Big\|^{2} + \lambda\, \gamma\big(\alpha; \phi(x), \phi(D)\big) \quad \text{s.t.} \; \alpha \in \mathcal{C}. \qquad (2)$$
As shown in the paper, the reconstruction term in (2) can be kernelized.
More importantly, after kernelization, this term remains quadratic, convex
and similar to its counterpart in Euclidean space. This lets us derive efficient
solutions to two coding schemes: kernel Sparse Coding (kSC) and kernel
Locality Constrained Coding (kLCC).
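The closed-form flavor of the kernelized reconstruction term can be sketched numerically. The snippet below is a minimal illustration under stated simplifications, not the paper's kSC/kLCC solvers: it uses an RBF kernel and replaces the sparsity/locality prior with a plain ridge penalty so the codes have an analytic solution; all data and parameter values are invented.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2) between rows."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_codes(x, D, lam=1e-2, gamma=0.5):
    """Codes minimizing ||phi(x) - sum_j alpha_j phi(d_j)||^2 + lam ||alpha||^2.

    Kernelizing the reconstruction term gives the convex quadratic
        alpha^T K_DD alpha - 2 k_xD^T alpha + k(x, x) + lam alpha^T alpha,
    whose minimizer is alpha = (K_DD + lam I)^{-1} k_xD.
    """
    K_DD = rbf_kernel(D, D, gamma)
    k_xD = rbf_kernel(x[None, :], D, gamma).ravel()
    return np.linalg.solve(K_DD + lam * np.eye(len(D)), k_xD)

rng = np.random.default_rng(1)
D = rng.normal(size=(8, 3))            # 8 dictionary atoms (Euclidean stand-in)
x = D[0] + 0.01 * rng.normal(size=3)   # query close to atom 0
alpha = kernel_codes(x, D)             # dominated by the code of atom 0
```

Here Euclidean points stand in for manifold-valued data; on an actual Riemannian manifold only the kernel evaluation changes, which is the point of the kernelization.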
In many cases, it is beneficial not only to compute the codes for a given
dictionary, but also to optimize the dictionary to best suit the problem at
hand. Given training data, and for fixed codes, we then show that, by relying
on the Representer theorem [8], the dictionary update has an analytic form.
Furthermore, we introduce an approach to supervised dictionary learning,
which, given labeled data, jointly learns the dictionary and a classifier acting
on the codes. The resulting supervised coding schemes are referred to as
kSSC and kSLCC.
We demonstrate the effectiveness of our approach on three different
types of non-flat manifolds, as well as illustrate its generality by also ap-
plying it to Euclidean space, which simply is a special type of Rieman-
nian manifold. In particular, we evaluated our different techniques on two
challenging classification datasets where the images are represented with
region covariance descriptors (RCovDs) [9], which lie on SPD manifolds.
('2862871', 'Mathieu Salzmann', 'mathieu salzmann')
334ac2a459190b41923be57744aa6989f9a54a51Apples to Oranges: Evaluating Image Annotations from Natural Language
Processing Systems
Brown Laboratory for Linguistic Information Processing (BLLIP)
Brown University, Providence, RI
('2139196', 'Rebecca Mason', 'rebecca mason')
('1749837', 'Eugene Charniak', 'eugene charniak')
{rebecca,ec}@cs.brown.edu
33e20449aa40488c6d4b430a48edf5c4b43afdabTRANSACTIONS ON AFFECTIVE COMPUTING
The Faces of Engagement: Automatic
Recognition of Student Engagement from Facial
Expressions
('1775637', 'Jacob Whitehill', 'jacob whitehill')
('3089406', 'Zewelanji Serpell', 'zewelanji serpell')
('3267606', 'Yi-Ching Lin', 'yi-ching lin')
('39687351', 'Aysha Foster', 'aysha foster')
('1741200', 'Javier R. Movellan', 'javier r. movellan')
333e7ad7f915d8ee3bb43a93ea167d6026aa3c22This is the author's version of an article that has been published in this journal. Changes were made to this version by the publisher prior to publication.
The final version of record is available at http://dx.doi.org/10.1109/TIFS.2014.2309851
DRAFT
3D Assisted Face Recognition: Dealing With
Expression Variations
('2128163', 'Nesli Erdogmus', 'nesli erdogmus')
('1709849', 'Jean-Luc Dugelay', 'jean-luc dugelay')
334166a942acb15ccc4517cefde751a381512605 International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 04 Issue: 10 | Oct -2017 www.irjet.net p-ISSN: 2395-0072
Facial Expression Analysis using Deep Learning
M.Tech Student, SSG Engineering College, Odisha, India
examination structures need to analyse the facial exercises
('13518951', 'Raman Patel', 'raman patel')
33403e9b4bbd913ae9adafc6751b52debbd45b0e
33ef419dffef85443ec9fe89a93f928bafdc922eSelfKin: Self Adjusted Deep Model For
Kinship Verification
Faculty of Engineering, Bar-Ilan University, Israel
('32450996', 'Eran Dahan', 'eran dahan')
('1926432', 'Yosi Keller', 'yosi keller')
33ad23377eaead8955ed1c2b087a5e536fecf44eAugmenting CRFs with Boltzmann Machine Shape Priors for Image Labeling
∗ indicates equal contribution
('2177037', 'Andrew Kae', 'andrew kae')
('1729571', 'Kihyuk Sohn', 'kihyuk sohn')
('1697141', 'Honglak Lee', 'honglak lee')
1 University of Massachusetts, Amherst, MA, USA, {akae,elm}@cs.umass.edu
2 University of Michigan, Ann Arbor, MI, USA, {kihyuks,honglak}@umich.edu
053b263b4a4ccc6f9097ad28ebf39c2957254dfbCost-Effective HITs for Relative Similarity Comparisons
Cornell University
University of California, San Diego
Cornell University
('3035230', 'Michael J. Wilber', 'michael j. wilber')
('2064392', 'Iljung S. Kwak', 'iljung s. kwak')
('1769406', 'Serge J. Belongie', 'serge j. belongie')
05b8673d810fadf888c62b7e6c7185355ffa4121(will be inserted by the editor)
A Comprehensive Survey to Face Hallucination
Received: date / Accepted: date
('2870173', 'Nannan Wang', 'nannan wang')
056d5d942084428e97c374bb188efc386791e36dTemporally Robust Global Motion
Compensation by Keypoint-based Congealing
Michigan State University
('2447931', 'Yousef Atoum', 'yousef atoum')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
05e658fed4a1ce877199a4ce1a8f8cf6f449a890
05ad478ca69b935c1bba755ac1a2a90be6679129Attribute Dominance: What Pops Out?
Georgia Tech
('3169410', 'Naman Turakhia', 'naman turakhia')nturakhia@gatech.edu
0595d18e8d8c9fb7689f636341d8a55cc15b3e6aDiscriminant Analysis on Riemannian Manifold of Gaussian Distributions
for Face Recognition with Image Sets
1Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing, 100190, China
University of Chinese Academy of Sciences, Beijing, 100049, China
('39792743', 'Wen Wang', 'wen wang')
('39792743', 'Ruiping Wang', 'ruiping wang')
('7945869', 'Zhiwu Huang', 'zhiwu huang')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
{wen.wang, zhiwu.huang}@vipl.ict.ac.cn, {wangruiping, sgshan, xlchen}@ict.ac.cn
0573f3d2754df3a717368a6cbcd940e105d67f0bEmotion Recognition In The Wild Challenge 2013∗
Res. School of Computer
Science
Australian National University
Roland Goecke
Vision & Sensing Group
University of Canberra
Australian National University
Vision & Sensing Group
University of Canberra
HCC Lab
University of Canberra
Australian National University
('1735697', 'Abhinav Dhall', 'abhinav dhall')
('2942991', 'Jyoti Joshi', 'jyoti joshi')
('1743035', 'Michael Wagner', 'michael wagner')
jyoti.joshi@canberra.edu.au
abhinav.dhall@anu.edu.au
roland.goecke@ieee.org
michael.wagner@canberra.edu.au
05a0d04693b2a51a8131d195c68ad9f5818b2ce1Dual-reference Face Retrieval
School of Computing Sciences, University of East Anglia, Norwich, UK
University of Pittsburgh, Pittsburgh, USA
3JD Artificial Intelligence Research (JDAIR), Beijing, China
('19285980', 'BingZhang Hu', 'bingzhang hu')
('40255667', 'Feng Zheng', 'feng zheng')
('40799321', 'Ling Shao', 'ling shao')
bingzhang.hu@uea.ac.uk, feng.zheng@pitt.edu, ling.shao@ieee.org
0562fc7eca23d47096472a1d42f5d4d086e21871
054738ce39920975b8dcc97e01b3b6cc0d0bdf32Towards the Design of an End-to-End Automated
System for Image and Video-based Recognition
('9215658', 'Rama Chellappa', 'rama chellappa')
('36407236', 'Jun-Cheng Chen', 'jun-cheng chen')
('26988560', 'Rajeev Ranjan', 'rajeev ranjan')
('2716670', 'Swami Sankaranarayanan', 'swami sankaranarayanan')
('40080979', 'Amit Kumar', 'amit kumar')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
('38171682', 'Carlos D. Castillo', 'carlos d. castillo')
05e03c48f32bd89c8a15ba82891f40f1cfdc7562Scalable Robust Principal Component
Analysis using Grassmann Averages
('2142792', 'Søren Hauberg', 'søren hauberg')
('1808965', 'Aasa Feragen', 'aasa feragen')
('2105795', 'Michael J. Black', 'michael j. black')
05a312478618418a2efb0a014b45acf3663562d7Accelerated Sampling for the Indian Buffet Process
Cambridge University, Trumpington Street, Cambridge CB21PZ, UK
('2292194', 'Finale Doshi-Velez', 'finale doshi-velez')
('1983575', 'Zoubin Ghahramani', 'zoubin ghahramani')
finale@alum.mit.edu
zoubin@eng.cam.ac.uk
056ba488898a1a1b32daec7a45e0d550e0c51ae4Cascaded Continuous Regression for Real-time
Incremental Face Tracking
Enrique Sánchez-Lozano, Brais Martinez,
Computer Vision Laboratory. University of Nottingham
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos'){psxes1,yorgos.tzimiropoulos,michel.valstar}@nottingham.ac.uk
050fdbd2e1aa8b1a09ed42b2e5cc24d4fe8c7371Contents
Scale Space and PDE Methods
Spatio-Temporal Scale Selection in Video Data . . . . . . . . . . . . . . . . . . . . .
Dynamic Texture Recognition Using Time-Causal Spatio-Temporal
Scale-Space Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   16
Corner Detection Using the Affine Morphological Scale Space . . . . . . . . . . .   29
Luis Alvarez
Nonlinear Spectral Image Fusion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   41
Martin Benning, Michael Möller, Raz Z. Nossek, Martin Burger,
Daniel Cremers, Guy Gilboa, and Carola-Bibiane Schönlieb
Tubular Structure Segmentation Based on Heat Diffusion. . . . . . . . . . . . . . .   54
Fang Yang and Laurent D. Cohen
Analytic Existence and Uniqueness Results for PDE-Based Image
Reconstruction with the Laplacian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   66
Laurent Hoeltgen, Isaac Harris, Michael Breuß, and Andreas Kleefeld
Combining Contrast Invariant L1 Data Fidelities with Nonlinear
Spectral Image Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   80
Leonie Zeune, Stephan A. van Gils, Leon W.M.M. Terstappen,
and Christoph Brune
An Efficient and Stable Two-Pixel Scheme for 2D
Forward-and-Backward Diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   94
Restoration and Reconstruction
Blind Space-Variant Single-Image Restoration of Defocus Blur. . . . . . . . . . .  109
Leah Bar, Nir Sochen, and Nahum Kiryati
Denoising by Inpainting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  121
Robin Dirk Adam, Pascal Peter, and Joachim Weickert
Stochastic Image Reconstruction from Local Histograms
of Gradient Orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  133
Agnès Desolneux and Arthur Leclaire
('3205375', 'Tony Lindeberg', 'tony lindeberg')
('3205375', 'Tony Lindeberg', 'tony lindeberg')
056294ff40584cdce81702b948f88cebd731a93e
052880031be0a760a5b606b2ad3d22f237e8af70Datasets on object manipulation and interaction: a survey ('3112203', 'Yongqiang Huang', 'yongqiang huang')
('35760122', 'Yu Sun', 'yu sun')
055de0519da7fdf27add848e691087e0af166637Joint Unsupervised Face Alignment
and Behaviour Analysis
Imperial College London, UK
('1786302', 'Lazaros Zafeiriou', 'lazaros zafeiriou')
('2788012', 'Epameinondas Antonakos', 'epameinondas antonakos')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1694605', 'Maja Pantic', 'maja pantic')
{l.zafeiriou12,e.antonakos,s.zafeiriou,m.pantic}@imperial.ac.uk
0515e43c92e4e52254a14660718a9e498bd61cf5MACHINE LEARNING SYSTEMS FOR DETECTING DRIVER DROWSINESS
Sabanci University
Faculty of
Engineering and Natural Sciences
Orhanli, Istanbul
University Of California San Diego
Institute of
Neural Computation
La Jolla, San Diego
('40322754', 'Esra Vural', 'esra vural')
('2724380', 'Gwen Littlewort', 'gwen littlewort')
('1858421', 'Marian Bartlett', 'marian bartlett')
('29794862', 'Javier Movellan', 'javier movellan')
053c2f592a7f153e5f3746aa5ab58b62f2cf1d21International Journal of Research in
Engineering & Technology (IJRET)
ISSN 2321-8843
Vol. 1, Issue 2, July 2013, 11-20
© Impact Journals
PERFORMANCE EVALUATION OF ILLUMINATION NORMALIZATION TECHNIQUES
FOR FACE RECOGNITION
PSG College of Technology, Coimbatore, Tamil Nadu, India
05891725f5b27332836cf058f04f18d74053803fOne-shot Action Localization by Learning Sequence Matching Network
The Australian National University
ShanghaiTech University
Fatih Porikli
The Australian National University
('51050729', 'Hongtao Yang', 'hongtao yang')
('33913193', 'Xuming He', 'xuming he')
u5226028@anu.edu.au
hexm@shanghaitech.edu.cn
fatih.porikli@anu.edu.au
0568fc777081cbe6de95b653644fec7b766537b2Learning Expressionlets on Spatio-Temporal Manifold for Dynamic Facial
Expression Recognition
1Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing, 100190, China
University of Chinese Academy of Sciences (UCAS), Beijing, 100049, China
University of Oulu, Finland
('1730228', 'Mengyi Liu', 'mengyi liu')
('1685914', 'Shiguang Shan', 'shiguang shan')
('3373117', 'Ruiping Wang', 'ruiping wang')
('1710220', 'Xilin Chen', 'xilin chen')
mengyi.liu@vipl.ict.ac.cn, {sgshan, wangruiping, xlchen}@ict.ac.cn
05d80c59c6fcc4652cfc38ed63d4c13e2211d944On Sampling-based Approximate Spectral Decomposition
Google Research, New York, NY
Courant Institute of Mathematical Sciences and Google Research, New York, NY
Courant Institute of Mathematical Sciences, New York, NY
('2794322', 'Sanjiv Kumar', 'sanjiv kumar')
('1709415', 'Mehryar Mohri', 'mehryar mohri')
('8395559', 'Ameet Talwalkar', 'ameet talwalkar')
sanjivk@google.com
mohri@cs.nyu.edu
ameet@cs.nyu.edu
05ea7930ae26165e7e51ff11b91c7aa8d7722002Learning And-Or Model to Represent Context and
Occlusion for Car Detection and Viewpoint Estimation
('3198440', 'Tianfu Wu', 'tianfu wu')
('40479452', 'Bo Li', 'bo li')
('3133970', 'Song-Chun Zhu', 'song-chun zhu')
055530f7f771bb1d5f352e2758d1242408d34e4dA Facial Expression Recognition System from
Depth Video
Department of Computer Education
Sungkyunkwan University
Seoul, Republic of Korea
('3241032', 'Md. Zia Uddin', 'md. zia uddin')
Email: ziauddin@skku.edu
050eda213ce29da7212db4e85f948b812a215660Combining Models and Exemplars for Face Recognition:
An Illuminating Example
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
('1715286', 'Terence Sim', 'terence sim')
('1733113', 'Takeo Kanade', 'takeo kanade')
051a84f0e39126c1ebeeb379a405816d5d06604dCogn Comput (2009) 1:257–267
DOI 10.1007/s12559-009-9018-7
Biometric Recognition Performing in a Bioinspired System
Joan Fàbregas, Marcos Faundez-Zanuy
Published online: 20 May 2009
© Springer Science+Business Media, LLC 2009
05e3acc8afabc86109d8da4594f3c059cf5d561fActor-Action Semantic Segmentation with Grouping Process Models
Department of Electrical Engineering and Computer Science
University of Michigan, Ann Arbor
CVPR 2016
OBJECTIVE
We seek to label each pixel in a video with a pair of actor (e.g. adult, baby and
dog) and action (e.g. eating, walking and jumping) labels.
Overview of the Grouping Process Model
- We propose a novel grouping process model (GPM) that adaptively adds long-ranging interactions from the supervoxel hierarchy to the labeling CRF.
- We incorporate video-level recognition into segment-level labeling by means of a global labeling cost and the GPM.
[Overview diagram: input video feeds segment-level labeling; the tree slice problem and the video labeling problem alternate, with selected nodes feeding back into the labeling. Diagram itself not recoverable from extraction.]
Definition & Joint Modeling
Segment-Level:
$V = \{q_1, q_2, \ldots, q_N\}$: a video segmentation with $N$ segments.
$L = \{l_1, l_2, \ldots, l_N\}$: a set of random variables defined on the segments, taking labels from both actor space and action space, e.g. adult-eating, dog-crawling.
Supervoxel Hierarchy:
$\mathcal{T} = \{T_1, T_2, \ldots, T_S\}$: a segmentation tree extracted from a supervoxel hierarchy with $S$ total supervoxels.
$s = \{s_1, s_2, \ldots, s_S\}$: a set of binary random variables defined on the supervoxels, each denoting whether its supervoxel is active or not.
The Overall Objective Function:
$(L^*, s^*) = \arg\min_{L,s} E(L, s \mid V, \mathcal{T})$
$E(L, s \mid V, \mathcal{T}) = E_v(L \mid V) + E_h(s \mid \mathcal{T}) + \sum_{t \in \mathcal{T}} \big( E_h(L_t \mid s_t) + E_h(s_t \mid L_t) \big)$
Grouping Cues from Segment Labeling. The GPM uses evidence directly from the segment-level CRF to locate supervoxels across various scales that best correspond to the actor and its action.
$E_h(s_t \mid L_t) = \big( H(L_t)\,|L_t| + \theta_h \big)\, s_t$
The Tree Slice Constraint. We seek a single labeling over the video. Each node in the CRF is associated with one and only one supervoxel in the hierarchy. This constraint is the same as in our previous work (Xu et al., ICCV 2013).
$E_h(s \mid \mathcal{T}) = \sum_{p=1}^{P} \delta\big( P_p^{\mathsf{T}} s \neq 1 \big)\, \theta_\tau$
Labeling Cues from Supervoxel Hierarchy. Once the supervoxels are selected, they provide strong labeling cues to the segment-level CRF. The CRF nodes connected to the same active supervoxel are encouraged to have the same label.
$E_h(L_t \mid s_t) = \begin{cases} \sum_{i \in L_t} \sum_{j \neq i,\, j \in L_t} \psi^h_{ij}(l_i, l_j) & \text{if } s_t = 1 \\ 0 & \text{otherwise} \end{cases}$
$\psi^h_{ij}(l_i, l_j) = \begin{cases} \theta_t & \text{if } l_i \neq l_j \\ 0 & \text{otherwise} \end{cases}$
Segment-Level CRF
The segment-level CRF considers the interplay of actors and actions.
$\mathcal{X}$: the set of actor labels (e.g. adult, baby and dog).
$\mathcal{Y}$: the set of action labels (e.g. eating, running and crawling).
$E_v(L \mid V) = \sum_{i \in V} \xi^v_i(l_i) + \sum_{i \in V} \sum_{j \in E(i)} \xi^v_{ij}(l_i, l_j)$
$\xi^v_i(l_i) = \psi^v_i(l^x_i) + \phi^v_i(l^y_i) + \varphi^v_i(l^x_i, l^y_i)$
$\xi^v_{ij}(l_i, l_j) = \begin{cases} \psi^v_{ij}(l^x_i, l^x_j) & \text{if } l^x_i \neq l^x_j \wedge l^y_i = l^y_j \\ \phi^v_{ij}(l^y_i, l^y_j) & \text{if } l^x_i = l^x_j \wedge l^y_i \neq l^y_j \\ \psi^v_{ij}(l^x_i, l^x_j) + \phi^v_{ij}(l^y_i, l^y_j) & \text{if } l^x_i \neq l^x_j \wedge l^y_i \neq l^y_j \\ 0 & \text{if } l^x_i = l^x_j \wedge l^y_i = l^y_j. \end{cases}$
Iterative Inference
Directly solving the overall objective function is hard. We use an iterative inference scheme to solve it efficiently.
The Video Labeling Problem. Given a tree slice, we find the best labeling.
$L^* = \arg\min_L E(L \mid s, V, \mathcal{T}) = \arg\min_L \Big[ E_v(L \mid V) + \sum_{t \in \mathcal{T}} E_h(L_t \mid s_t) \Big]$
- The optimization depends on the given tree slice $s$.
- Solvable by graph-cuts multi-label inference.
The Tree Slice Problem. Given a labeling, we find the best tree slice.
$s^* = \arg\min_s E(s \mid L, V, \mathcal{T}) = \arg\min_s \Big[ E_h(s \mid \mathcal{T}) + \sum_{t \in \mathcal{T}} E_h(s_t \mid L_t) \Big]$
- Rewritten as a binary linear program:
$\min_s \sum_{t \in \mathcal{T}} \alpha_t s_t \quad \text{s.t.} \quad P s = \mathbf{1}_P \ \text{and}\ s \in \{0, 1\}^S$
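Because the slice constraint requires exactly one selected supervoxel on every root-to-leaf path, this binary linear program can also be solved exactly by a bottom-up dynamic program on the hierarchy: at each node, either select the node itself or take the best slices of its subtrees. A small sketch; the toy tree and the $\alpha_t$ values below are illustrative, not from the paper:

```python
def best_slice(tree, alpha, root):
    """Exactly solve: minimize sum_t alpha[t] * s[t] subject to every
    root-to-leaf path containing exactly one selected node, via
    bottom-up dynamic programming on the hierarchy."""
    def solve(v):
        children = tree.get(v, [])
        if not children:  # leaf: selecting it is the only valid option
            return alpha[v], {v}
        sub_cost, sub_sel = 0.0, set()
        for c in children:
            cost, sel = solve(c)
            sub_cost += cost
            sub_sel |= sel
        # either cut the slice at v itself, or inside its subtrees
        return (alpha[v], {v}) if alpha[v] <= sub_cost else (sub_cost, sub_sel)
    return solve(root)

# toy hierarchy: root over supervoxels a (with children a1, a2) and b
tree = {"root": ["a", "b"], "a": ["a1", "a2"]}
alpha = {"root": 10.0, "a": 1.0, "a1": 3.0, "a2": 4.0, "b": 2.0}
cost, selected = best_slice(tree, alpha, "root")
print(cost, sorted(selected))  # 3.0 ['a', 'b']
```

Selecting {a, b} costs 3.0, which beats both cutting at the root (10.0) and descending fully to {a1, a2, b} (9.0).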
Experiments: The Actor-Action Semantic Segmentation
- Dataset: the A2D large-scale video labeling dataset. It consists of 3782 YouTube videos with an average length of 136 frames. One-third of the videos have more than one actor performing different actions.
- Two different hierarchies: TSP and GBH.
- Video-level recognition is added through both the global labeling cost and the GPM.
[Bar charts: actor, action, and joint actor-action labeling accuracies comparing the baseline methods and the GPM variants; axis, legend, and caption text were garbled in extraction and are not recoverable.]
Visual examples of the actor-action video labelings for all methods (Ground-Truth, AHRF, FCRF, Trilayer, GPM (TSP), GPM (GBH)). (a)-(c) are videos where most methods get correct labelings; (d)-(e) are videos where the GPM models outperform; (h)-(i) are videos with partially correct labelings.
[Figure grid of per-video actor-action labels, e.g. adult-eating, car-running, baby-rolling, dog-crawling; individual cells omitted.]
$\alpha_t = H(L_t)\,|L_t| + \theta_h$
Acknowledgements. This work has been supported in part by Google, Samsung, DARPA W32P4Q-15-C-0070
and ARO W911NF-15-1-0354.
('2026123', 'Chenliang Xu', 'chenliang xu')
('3587688', 'Jason J. Corso', 'jason j. corso')
05f4d907ee2102d4c63a3dc337db7244c570d067
0559fb9f5e8627fecc026c8ee6f7ad30e54ee9294
Facial Expression Recognition
ADSIP Research Centre, University of Central Lancashire
UK
1. Introduction
Facial expressions are visible signs of a person’s affective state, cognitive activity and
personality. Humans can perform expression recognition with a remarkable robustness
without conscious effort even under a variety of adverse conditions such as partially
occluded faces, different appearances and poor illumination. Over the last two decades, the
advances in imaging technology and ever increasing computing power have opened up a
possibility of automatic facial expression recognition and this has led to significant research
efforts from the computer vision and pattern recognition communities. One reason for this
growing interest is the wide spectrum of possible applications in diverse areas, such as
more engaging human-computer interaction (HCI) systems, video conferencing, and augmented
reality. Additionally, from the biometric perspective, automatic recognition of facial
expressions has been investigated in the context of monitoring patients in the intensive care
and neonatal units for signs of pain and anxiety, behavioural research, identifying level of
concentration, and improving face recognition.
Automatic facial expression recognition is a difficult task due to its inherent subjective
nature, which is additionally hampered by usual difficulties encountered in pattern
recognition and computer vision research. The vast majority of the current state-of-the-art
facial expression recognition systems are based on 2-D facial images or videos, which offer
good performance only for the data captured under controlled conditions. As a result, there
is currently a shift towards the use of 3-D facial data to yield better recognition performance.
However, it requires more expensive data acquisition systems and sophisticated processing
algorithms. The aim of this chapter is to provide an overview of the existing methodologies
and recent advances in the facial expression recognition, as well as present a systematic
description of the authors’ work on the use of 3-D facial data for automatic recognition of
facial expressions, starting from data acquisition and database creation to data processing
algorithms and performance evaluation.
1.1 Facial expression
Expressions shown on the face are produced by a combination of contraction
activities made by facial muscles, with the most noticeable temporal deformation around the
nose, lips, eyelids, and eyebrows, as well as in facial skin texture patterns (Pantic &
Rothkrantz, 2000). Typical facial expressions last for a few seconds, normally between 250 milliseconds
and five seconds (Fasel & Luettin, 2003). According to psychologists Ekman and Friesen
('2647218', 'Bogdan J. Matuszewski', 'bogdan j. matuszewski')
('2343120', 'Wei Quan', 'wei quan')
052f994898c79529955917f3dfc5181586282cf8Unsupervised Domain Adaptation for Face Recognition in Unlabeled Videos
1NEC Labs America
2UC Merced
Dalian University of Technology
4UC San Diego
('1729571', 'Kihyuk Sohn', 'kihyuk sohn')
05a7be10fa9af8fb33ae2b5b72d108415519a698Multilayer and Multimodal Fusion of Deep Neural Networks
for Video Classification
NVIDIA
('2214162', 'Xiaodong Yang', 'xiaodong yang'){xiaodongy, pmolchanov, jkautz}@nvidia.com
050a149051a5d268fcc5539e8b654c2240070c82MAGISTERSKÉ A DOKTORSKÉSTUDIJNÍ PROGRAMY31. 5. 2018SBORNÍKSTUDENTSKÁ VĚDECKÁ KONFERENCE
05318a267226f6d855d83e9338eaa9e718b2a8dd_______________________________________________________PROCEEDING OF THE 16TH CONFERENCE OF FRUCT ASSOCIATION
Age Estimation from Face Images: Challenging
Problem for Audience Measurement Systems
Yaroslavl State University
Russia
('1857299', 'Alexander Ganin', 'alexander ganin')
('39942308', 'Olga Stepanova', 'olga stepanova')
('39635716', 'Anton Lebedev', 'anton lebedev')
vhr@yandex.ru, angnn@mail.ru, dcslab@uniyar.ac.ru, lebedevdes@gmail.com
057d5f66a873ec80f8ae2603f937b671030035e6Newtonian Image Understanding:
Unfolding the Dynamics of Objects in Static Images
Allen Institute for Artificial Intelligence (AI2)
University of Washington
('3012475', 'Roozbeh Mottaghi', 'roozbeh mottaghi')
('2456400', 'Hessam Bagherinezhad', 'hessam bagherinezhad')
('2563325', 'Mohammad Rastegari', 'mohammad rastegari')
('2270286', 'Ali Farhadi', 'ali farhadi')
0580edbd7865414c62a36da9504d1169dea78d6fBaseline CNN structure analysis for facial expression recognition ('2448391', 'Minchul Shin', 'minchul shin')
('1702520', 'Munsang Kim', 'munsang kim')
('1750864', 'Dong-Soo Kwon', 'dong-soo kwon')
050a3346e44ca720a54afbf57d56b1ee45ffbe49Multi-Cue Zero-Shot Learning with Strong Supervision
Max-Planck Institute for Informatics
('2893664', 'Zeynep Akata', 'zeynep akata')
('34070834', 'Mateusz Malinowski', 'mateusz malinowski')
('1739548', 'Mario Fritz', 'mario fritz')
('1697100', 'Bernt Schiele', 'bernt schiele')
0517d08da7550241fb2afb283fc05d37fce5d7b7Sensors & Transducers, Vol. 153, Issue 6, June 2013, pp. 92-99

© 2013 by IFSA
http://www.sensorsportal.com
Combination of Local Multiple Patterns and Exponential
Discriminant Analysis for Facial Recognition
College of Computer Science, Chongqing University, Chongqing, 400030, China
College of software, Chongqing University of Posts and Telecommunications Chongqing
Institute of Computer Science and Technology, Chongqing University of Posts and
400065, China
Telecommunications, Chongqing 400065, China
1 Tel.: 023-65112784, fax: 023-65112784
Received: 26 April 2013 /Accepted: 14 June 2013 /Published: 25 June 2013
('2623870', 'Lifang Zhou', 'lifang zhou')
('1713814', 'Bin Fang', 'bin fang')
('1964987', 'Weisheng Li', 'weisheng li')
('2103166', 'Lidou Wang', 'lidou wang')
1 E-mail: zhoulf@cqupt.edu.cn
053931267af79a89791479b18d1b9cde3edcb415Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)
Attributes for Improved Attributes: A Multi-Task Network
Utilizing Implicit and Explicit Relationships for Facial Attribute Classification
University of Maryland, College Park
College Park, MD
('3351637', 'Emily M. Hand', 'emily m. hand')
('9215658', 'Rama Chellappa', 'rama chellappa')
{emhand, rama}@umiacs.umd.edu
05f3d1e9fb254b275354ca69018e9ed321dd8755Face Recognition using Optimal Representation
Ensemble
NICTA , Queensland Research Laboratory, QLD, Australia
Griffith University, QLD, Australia
University of Adelaide, SA, Australia
29 April 2013
('1711119', 'Hanxi Li', 'hanxi li')
('1780381', 'Chunhua Shen', 'chunhua shen')
('1744926', 'Yongsheng Gao', 'yongsheng gao')
05e96d76ed4a044d8e54ef44dac004f796572f1a
051f03bc25ec633592aa2ff5db1d416b705eac6cTo appear in the International Joint Conference on Biometrics (IJCB 2011), Washington D.C., October 2011
Partial Face Recognition: An Alignment Free Approach
Department of Computer Science and Engineering
Michigan State University, East Lansing, MI 48824, U.S.A
('40397682', 'Shengcai Liao', 'shengcai liao')
('6680444', 'Anil K. Jain', 'anil k. jain')
{scliao,jain}@cse.msu.edu
9d58e8ab656772d2c8a99a9fb876d5611fe2fe20Beyond Temporal Pooling: Recurrence and Temporal
Convolutions for Gesture Recognition in Video
Ghent University
February 11, 2016
('2660640', 'Lionel Pigou', 'lionel pigou')
('48373216', 'Sander Dieleman', 'sander dieleman')
('10182287', 'Mieke Van Herreweghe', 'mieke van herreweghe')
{lionel.pigou,aaron.vandenoord,sander.dieleman,mieke.vanherreweghe,joni.dambre}@ugent.be
9d8ff782f68547cf72b7f3f3beda9dc3e8ecfce6International Journal of Pattern Recognition
and Artificial Intelligence
Vol. 26, No. 1 (2012) 1250002 (9 pages)
© World Scientific Publishing Company
DOI: 10.1142/S0218001412500024
IMPROVED PSEUDOINVERSE LINEAR
DISCRIMINANT ANALYSIS METHOD FOR
DIMENSIONALITY REDUCTION
*Signal Processing Laboratory, School of Engineering
Griffith University, QLD-4111, Brisbane, Australia
University of the South Pacific, Fiji
‡Laboratory of DNA Information Analysis
Human Genome Center, Institute of Medical Science
University of Tokyo, 4-6-1 Shirokanedai
Minato-ku, Tokyo 108-8639, Japan
Received 4 November 2010
Accepted 22 September 2011
Published 11 May 2012
Pseudoinverse linear discriminant analysis (PLDA) is a classical method for solving the small
sample size problem. However, its performance is limited. In this paper, we propose an improved
PLDA method which is faster and produces better classification accuracy when experimented on
several datasets.
Keywords: Pseudoinverse; linear discriminant analysis; dimensionality reduction; computational complexity.
1. Introduction
Dimensionality reduction is an important aspect of pattern classification. It helps in
improving the robustness (or generalization capability) of the pattern classifier and
in reducing its computational complexity. The linear discriminant analysis (LDA)
method [5] is a well-known dimensionality reduction technique studied in the literature.
The LDA technique finds an orientation matrix $W$ that transforms high-dimensional
feature vectors belonging to different classes to lower-dimensional feature vectors such
that the projected feature vectors of a class are well separated from the feature vectors
of other classes. The orientation $W$ is obtained by maximizing Fisher's criterion
function $J_1(W) = |W^T S_B W| / |W^T S_W W|$, where $S_B$ is the between-class scatter
matrix and $S_W$ is the within-class scatter matrix. It has been shown in the literature
that the modified version of Fisher's criterion $J_2(W) = |W^T S_B W| / |W^T S_T W|$
produces similar results, where $S_T$ is the total scatter matrix [6].
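The orientation maximizing Fisher's criterion can be obtained from the leading eigenvectors of $S_W^{-1} S_B$; the sketch below uses a pseudoinverse in place of the inverse, which is the device PLDA relies on when $S_W$ is singular in the small-sample-size setting. This is a generic NumPy illustration of standard (pseudoinverse) LDA, not the authors' improved method:

```python
import numpy as np

def lda_orientation(X, y, dim):
    """Orientation W maximizing |W^T S_B W| / |W^T S_W W|, taken from
    the top eigenvectors of pinv(S_W) @ S_B (pseudoinverse LDA)."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    S_W = np.zeros((d, d))  # within-class scatter
    S_B = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_W += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mu)[:, None]
        S_B += len(Xc) * (diff @ diff.T)
    vals, vecs = np.linalg.eig(np.linalg.pinv(S_W) @ S_B)
    order = np.argsort(-vals.real)  # largest eigenvalues first
    return vecs.real[:, order[:dim]]

# two well-separated Gaussian classes in 3-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(4, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
W = lda_orientation(X, y, 1)  # at most C-1 useful directions
z = X @ W                     # 1-D projections separate the classes
```

With $C$ classes, $S_B$ has rank at most $C - 1$, so only $C - 1$ directions carry discriminative information.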
('3150542', 'Kuldip K. Paliwal', 'kuldip k. paliwal')
('40532633', 'Alok Sharma', 'alok sharma')
§aloks@ims.u-tokyo.ac.jp
¶sharma_al@usp.ac.fj
9d42df42132c3d76e3447ea61e900d3a6271f5feInternational Journal of Computer Applications (0975 – 8887)
Advanced Computing and Communication Techniques for High Performance Applications (ICACCTHPA-2014)
AutoCAP: An Automatic Caption Generation System
based on the Text Knowledge Power Series
Representation Model
M.Tech Dept of CSE
NSS College of Engineering
Palakkad, Kerala
('24326432', 'Krishnapriya P S', 'krishnapriya p s')
9d55ec73cab779403cd933e6eb557fb04892b634Kernel principal component analysis network for image classification1
Key Laboratory of Computer Network and Information Integration of Ministry of Education, Southeast University, Nanjing
210096, China)
(2 Institut National de la Santé et de la Recherche Médicale U 1099, Rennes 35000, France)
(3 Laboratoire Traitement du Signal et de l’Image, Université de Rennes 1, Rennes 35000, France)
(4Centre de Recherche en Information Biomédicale Sino-français, Nanjing 210096, China)
('1684465', 'Lotfi Senhadji', 'lotfi senhadji')
9d8fd639a7aeab0dd1bc6eef9d11540199fd6fe2Workshop track - ICLR 2018
LEARNING TO CLUSTER
ZHAW Datalab, Zurich University of Applied Sciences
Winterthur, Switzerland
('40087403', 'Benjamin B. Meier', 'benjamin b. meier')
('2793787', 'Thilo Stadelmann', 'thilo stadelmann')
benjamin.meier70@gmail.com, stdm@zhaw.ch, oliver.duerr@gmail.com
9d357bbf014289fb5f64183c32aa64dc0bd9f454Face Identification by Fitting a 3D Morphable Model
using Linear Shape and Texture Error Functions
University of Freiburg, Institut für Informatik
Georges-Köhler-Allee 52, 79110 Freiburg, Germany
('3293655', 'Sami Romdhani', 'sami romdhani')
('2880906', 'Volker Blanz', 'volker blanz')
('1687079', 'Thomas Vetter', 'thomas vetter')
{romdhani, volker, vetter}@informatik.uni-freiburg.de
9d66de2a59ec20ca00a618481498a5320ad38481POP: Privacy-preserving Outsourced Photo Sharing
and Searching for Mobile Devices
School of Software, Tsinghua University
Illinois Institute of Technology
('1718343', 'Lan Zhang', 'lan zhang')
('8645024', 'Taeho Jung', 'taeho jung')
('1773806', 'Cihang Liu', 'cihang liu')
('1752660', 'Xuan Ding', 'xuan ding')
('34569491', 'Xiang-Yang Li', 'xiang-yang li')
('10258874', 'Yunhao Liu', 'yunhao liu')
9d839dfc9b6a274e7c193039dfa7166d3c07040bAugmented Faces
1ETH Z¨urich
2Kooaba AG
3K.U. Leuven
('1727791', 'Matthias Dantone', 'matthias dantone')
('1696393', 'Lukas Bossard', 'lukas bossard')
('1726249', 'Till Quack', 'till quack')
('1681236', 'Luc Van Gool', 'luc van gool')
{dantone,bossard,tquack,vangool}@vision.ee.ethz.ch
9dcc6dde8d9f132577290d92a1e76b5decc6d755Journal of Trends in the Development of Machinery
and Associated Technology
Vol. 16, No. 1, 2012, ISSN 2303-4009 (online), p.p. 175-178
FACIAL EXPRESSION ANALYSIS BASED
ON OPTIMIZED GABOR FEATURES
Istanbul University
Avcilar, 34320 Istanbul
Turkey
Yalçın Çekiç
Bahcesehir University
Besiktas, 34349 Istanbul
Turkey
('40701205', 'Aydın Akan', 'aydın akan')
9d36c81b27e67c515df661913a54a797cd1260bbApplications (IJERA) ISSN: 2248-9622 www.ijera.com
Vol. 2, Issue 1,Jan-Feb 2012, pp.787-793
3D FACE RECOGNITION TECHNIQUES - A REVIEW
Gujarat Technological University, India
Gujarat Technological University, India
('9318822', 'Mahesh M. Goyani', 'mahesh m. goyani')
('9198701', 'Preeti B. Sharma', 'preeti b. sharma')
('9318822', 'Mahesh M. Goyani', 'mahesh m. goyani')
9d757c0fede931b1c6ac344f67767533043cba14Search Based Face Annotation Using PCA and
Unsupervised Label Refinement Algorithms
Savitribai Phule Pune University
D.Y.Patil Institute of Engineering and Technology, Pimpri, Pune
Mahatma Phulenagar, 120/2 Mahaganpati soc, Chinchwad, Pune-19, MH, India
D.Y.Patil Institute of Engineering and Technology, Pimpri, Pune
Computer Department, D.Y.PIET, Pimpri, Pune-18, MH, India
('15731441', 'Shital Shinde', 'shital shinde')
('3392505', 'Archana Chaugule', 'archana chaugule')
9d57c4036a0e5f1349cd11bc342ac515307b6720Landmark Weighting for 3DMM Shape Fitting
aSchool of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China
bCVSSP, University of Surrey, Guildford, GU2 7XH, UK
A B S T R A C T
('51232704', 'Yu Yanga', 'yu yanga')
('37020604', 'Xiao-Jun Wu', 'xiao-jun wu')
('1748684', 'Josef Kittler', 'josef kittler')
9d941a99e6578b41e4e32d57ece580c10d578b22Sensors 2015, 15, 4326-4352; doi:10.3390/s150204326
OPEN ACCESS
sensors
ISSN 1424-8220
www.mdpi.com/journal/sensors
Article
Illumination-Invariant and Deformation-Tolerant Inner Knuckle
Print Recognition Using Portable Devices
School of Computer Science and Engineering, South China University of Technology
Higher Education Mega Center, Panyu, Guangzhou 510006, China;
2 National-Regional Key Technology Engineering Laboratory for Medical Ultrasound,
School of Medicine, Shenzhen University, Shenzhen 518060, China
The Chinese University of Hong Kong
Shenzhen Research Institute, The Chinese University of Hong Kong, Shenzhen 518057, China
Academic Editor: Vittorio M.N. Passaro
Received: 6 January 2015 / Accepted: 6 February 2015 / Published: 12 February 2015
('2884662', 'Xuemiao Xu', 'xuemiao xu')
('35636977', 'Qiang Jin', 'qiang jin')
('3041338', 'Le Zhou', 'le zhou')
('38166238', 'Jing Qin', 'jing qin')
('1720633', 'Tien-Tsin Wong', 'tien-tsin wong')
('2513505', 'Guoqiang Han', 'guoqiang han')
E-Mails: jin.q@mail.scut.edu.cn (Q.J.); z.le02@mail.scut.edu.cn (L.Z.); csgqhan@scut.edu.cn (G.H.)
Hong Kong 999077, China; E-Mail: ttwong@cse.cuhk.edu.hk
* Authors to whom correspondence should be addressed; E-Mails: xuemx@scut.edu.cn (X.X.);
jqin@szu.edu.cn (J.Q.); Tel.:+86-20-39380285 (X.X.); +86-755-86392117 (J.Q.).
9d60ad72bde7b62be3be0c30c09b7d03f9710c5fA Survey: Face Recognition Techniques
Assistant Professor, ITM GOI
M Tech, ITM GOI
('4122158', 'Arun Agrawal', 'arun agrawal')
('3731551', 'Ranjana Sikarwar', 'ranjana sikarwar')
9d896605fbf93315b68d4ee03be0770077f84e40Baby Talk: Understanding and Generating Image Descriptions
Stony Brook University
Stony Brook University, NY 11794, USA
('2170826', 'Girish Kulkarni', 'girish kulkarni')
('1699545', 'Yejin Choi', 'yejin choi')
('40305780', 'Siming Li', 'siming li')
('1685538', 'Tamara L Berg', 'tamara l berg')
('3128210', 'Visruth Premraj', 'visruth premraj')
('2985883', 'Sagnik Dhar', 'sagnik dhar')
('39668247', 'Alexander C Berg', 'alexander c berg')
{tlberg}@cs.stonybrook.edu
9d61b0beb3c5903fc3032655dc0fd834ec0b2af3Learning a Locality Preserving Subspace for Visual Recognition
Microsoft Research Asia, Beijing 100080, China
School of Mathematical Science, Peking University, China
('3945955', 'Xiaofei He', 'xiaofei he')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('1689532', 'Yuxiao Hu', 'yuxiao hu')
*Department of Computer Science, University of Chicago (xiaofei@cs.uchicago.edu)
9d24179aa33a94c8c61f314203bf9e906d6b64deSearching for People through
Textual and Visual Attributes
Institute of Computing
University of Campinas (Unicamp
Campinas-SP, Brazil
Fig. 1. The proposed approach aims at searching for people using textual and visual attributes. Given an image database of faces, we extract the points of
interest (PoIs) to construct a visual dictionary that allows us to obtain the feature vectors by a quantization process (top). Then we train attribute classifiers to
generate a score for each image (middle). Finally, given a textual query (e.g., male), we fuse the obtained scores to return a unique final rank (bottom).
('37811966', 'Junior Fabian', 'junior fabian')
('1820089', 'Ramon Pires', 'ramon pires')
('2145405', 'Anderson Rocha', 'anderson rocha')
9d3aa3b7d392fad596b067b13b9e42443bbc377cFacial Biometric Templates and Aging:
Problems and Challenges for Artificial
Intelligence
Cyprus University of Technology
P.O Box 50329, Lemesos, 3066, Cyprus
('1830709', 'Andreas Lanitis', 'andreas lanitis')andreas.lanitis@cut.ac.cy
9db4b25df549555f9ffd05962b5adf2fd9c86543Nonlinear 3D Face Morphable Model
Department of Computer Science and Engineering
Michigan State University, East Lansing MI
('1849929', 'Luan Tran', 'luan tran')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
{tranluan, liuxm}@msu.edu
9d06d43e883930ddb3aa6fe57c6a865425f28d44Clustering Appearances of Objects Under Varying Illumination Conditions
David Kriegman
Computer Science & Engineering, University of California at San Diego, La Jolla, CA 92093
Honda Research Institute, 800 California Street, Mountain View, CA 94041
Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801
('1788818', 'Jeffrey Ho', 'jeffrey ho')
('33047058', 'Jongwoo Lim', 'jongwoo lim')
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
('2457452', 'Kuang-chih Lee', 'kuang-chih lee')
jho@cs.ucsd.edu myang@honda-ri.com jlim1@uiuc.edu
klee10@uiuc.edu
kriegman@cs.ucsd.edu
9c1305383ce2c108421e9f5e75f092eaa4a5aa3cSPEAKER RETRIEVAL FOR TV SHOW VIDEOS BY ASSOCIATING AUDIO SPEAKER
RECOGNITION RESULT TO VISUAL FACES∗
School of Electrical and Information Engineering, Xi'an Jiaotong University, Xi'an, China
CNRS-LTCI, TELECOM-ParisTech, Paris, France
('1859487', 'Yina Han', 'yina han')
('2485487', 'Joseph Razik', 'joseph razik')
('1693574', 'Gerard Chollet', 'gerard chollet')
('1774346', 'Guizhong Liu', 'guizhong liu')
9cfb3a68fb10a59ec2a6de1b24799bf9154a8fd1
9c1860de6d6e991a45325c997bf9651c8a9d716f3D Reconstruction and Face Recognition Using Kernel-Based
ICA and Neural Networks
Chi-Yung Lee
Dept. of Electrical Engineering, National University of Kaohsiung
Dept. of CSIE, Chaoyang University of Technology
Dept. of CSIE, Nankai Institute of Technology
('1734467', 'Cheng-Jian Lin', 'cheng-jian lin')
cjlin@nuk.edu.tw, s9527618@cyut.edu.tw, cylee@nkc.edu.tw
9c9ef6a46fb6395702fad622f03ceeffbada06e5EUROGRAPHICS 2004 / M.-P. Cani and M. Slater
(Guest Editors)
Volume 23 (2004), Number 3
Exchanging Faces in Images
1 Max-Planck-Institut für Informatik, Saarbrücken, Germany
University of Basel, Departement Informatik, Basel, Switzerland
('2880906', 'Volker Blanz', 'volker blanz')
('2658043', 'Kristina Scherbaum', 'kristina scherbaum')
('1687079', 'Thomas Vetter', 'thomas vetter')
('1746884', 'Hans-Peter Seidel', 'hans-peter seidel')
9c1cdb795fd771003da4378f9a0585730d1c3784Stacked Deformable Part Model with Shape
Regression for Object Part Localization
Center for Biometrics and Security Research & National Laboratory
of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China
('1721677', 'Junjie Yan', 'junjie yan')
('1718623', 'Zhen Lei', 'zhen lei')
('1708973', 'Yang Yang', 'yang yang')
('34679741', 'Stan Z. Li', 'stan z. li')
{jjyan,zlei,yang.yang,szli}@nlpr.ia.ac.cn
9ca7899338129f4ba6744f801e722d53a44e4622Deep Neural Networks Regularization for Structured
Output Prediction
Soufiane Belharbi∗
INSA Rouen, LITIS, 76000 Rouen, France
Normandie Univ, UNIROUEN, UNIHAVRE
('1712446', 'Clément Chatelain', 'clément chatelain')
('1782268', 'Romain Hérault', 'romain hérault')
('37078795', 'Sébastien Adam', 'sébastien adam')
soufiane.belharbi@insa-rouen.fr
romain.herault@insa-rouen.fr
clement.chatelain@insa-rouen.fr
sebastien.adam@univ-rouen.fr
9c1664f69d0d832e05759e8f2f001774fad354d6Action representations in robotics: A
taxonomy and systematic classification
Journal Title
XX(X):1–32
© The Author(s) 2016
Reprints and permission:
sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/ToBeAssigned
www.sagepub.com/
('33237072', 'Philipp Zech', 'philipp zech')
('2898615', 'Erwan Renaudo', 'erwan renaudo')
('36081156', 'Simon Haller', 'simon haller')
('46447747', 'Xiang Zhang', 'xiang zhang')
9c25e89c80b10919865b9c8c80aed98d223ca0c6GENDER PREDICTION BY GAIT ANALYSIS BASED ON TIME SERIES VARIATION OF
JOINT POSITIONS
Dept. of Computer Science
School of Science and Technology
Meiji University
Dept. of Fundamental Science and Technology
Graduate School of Science and Technology
Meiji University
1-1-1 Higashimita Tama-ku
Kawasaki Kanagawa Japan
1-1-1 Higashimita Tama-ku
Kawasaki Kanagawa Japan
('1800246', 'Ryusuke Miyamoto', 'ryusuke miyamoto')
('8187964', 'Risako Aoki', 'risako aoki')
E-mail: miya@cs.meiji.ac.jp
E-mail: aori@cs.meiji.ac.jp
9c7444c6949427994b430787a153d5cceff46d5cJournal of Computer Science 5 (11): 801-810, 2009
ISSN 1549-3636
© 2009 Science Publications
Boosting Kernel Discriminative Common Vectors for Face Recognition
1Department of Computer Science and Engineering,
SRM University, Kattankulathur, Chennai-603 203, Tamilnadu, India
Bharathidasan University, Trichy, India
('34608395', 'C. Lakshmi', 'c. lakshmi')
('2594379', 'M. Ponnavaikko', 'm. ponnavaikko')
9c065dfb26ce280610a492c887b7f6beccf27319Learning from Video and Text via Large-Scale Discriminative Clustering
1 ´Ecole Normale Sup´erieure
2Inria
3CIIRC
('19200186', 'Antoine Miech', 'antoine miech')
('2285263', 'Jean-Baptiste Alayrac', 'jean-baptiste alayrac')
('2329288', 'Piotr Bojanowski', 'piotr bojanowski')
('1785596', 'Ivan Laptev', 'ivan laptev')
('1782755', 'Josef Sivic', 'josef sivic')
9c781f7fd5d8168ddae1ce5bb4a77e3ca12b40b6 International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395 -0056
Volume: 03 Issue: 07 | July-2016 www.irjet.net p-ISSN: 2395-0072
Attribute Based Face Classification Using Support Vector Machine
Research Scholar, PSGR Krishnammal College for Women, Coimbatore
PSGR Krishnammal College for Women, Coimbatore
9c373438285101d47ab9332cdb0df6534e3b93d1Occupancy Detection in Vehicles Using Fisher Vector
Image Representation
Xerox Research Center
Webster, NY 14580
Xerox Research Center
Webster, NY 14580
('1762503', 'Yusuf Artan', 'yusuf artan')
('5942563', 'Peter Paul', 'peter paul')
Yusuf.Artan@xerox.com
Peter.Paul@xerox.com
9cbb6e42a35f26cf1d19f4875cd7f6953f10b95dExpression Recognition with Ri-HOG Cascade
Graduate School of System Informatics, Kobe University, Kobe, 657-8501, Japan
RIEB, Kobe University, Kobe, 657-8501, Japan
('2866465', 'Jinhui Chen', 'jinhui chen')
('2834542', 'Zhaojie Luo', 'zhaojie luo')
('1744026', 'Tetsuya Takiguchi', 'tetsuya takiguchi')
('1678564', 'Yasuo Ariki', 'yasuo ariki')
9ce0d64125fbaf625c466d86221505ad2aced7b1Saliency Based Framework for Facial Expression
Recognition
To cite this version:
Facial Expression Recognition. Frontiers of Computer Science, 2017, <10.1007/s11704-017-6114-9>.

HAL Id: hal-01546192
https://hal.archives-ouvertes.fr/hal-01546192
Submitted on 23 Jun 2017
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
('1943666', 'Rizwan Ahmed Khan', 'rizwan ahmed khan')
('39469581', 'Alexandre Meyer', 'alexandre meyer')
('1971616', 'Hubert Konik', 'hubert konik')
('1768560', 'Saïda Bouakaz', 'saïda bouakaz')
9c4cc11d0df2de42d6593f5284cfdf3f05da402aAppears in the 14th International Conference on Pattern Recognition, ICPR’98, Queensland, Australia, August 17-20, 1998.
Enhanced Fisher Linear Discriminant Models for Face Recognition
George Mason University
University Drive, Fairfax, VA 22030-4444, USA
('39664966', 'Chengjun Liu', 'chengjun liu')
('1781577', 'Harry Wechsler', 'harry wechsler')
{cliu, wechsler}@cs.gmu.edu
9cd6a81a519545bf8aa9023f6e879521f85d4cd1Domain-invariant Face Recognition using Learned Low-rank
Transformation
Duke University
Durham, NC, 27708
Duke University
Durham, NC, 27708
University of Maryland
College Park, MD
May 11, 2014
('2077648', 'Qiang Qiu', 'qiang qiu')
('1699339', 'Guillermo Sapiro', 'guillermo sapiro')
('2682056', 'Ching-Hui Chen', 'ching-hui chen')
qiang.qiu@duke.edu
guillermo.sapiro@duke.edu
ching@umd.edu
9cadd166893f1b8aaecb27280a0915e6694441f5Appl. Math. Inf. Sci. 7, No. 2, 455-462 (2013)
Applied Mathematics & Information Sciences
An International Journal
© 2013 NSP
Natural Sciences Publishing Cor.
Multi-Modal Emotion Recognition Fusing Video and
Audio
School of Computer Software, Tianjin University, 300072 Tianjin, China
School of Computer Science and Technology, Tianjin University, 300072 Tianjin, China
Received: 7 Sep. 2012; Revised 15 Nov. 2012; Accepted 18 Nov. 2012
Published online: 1 Mar. 2013
('29962190', 'Chao Xu', 'chao xu')
('2531641', 'Pufeng Du', 'pufeng du')
('38465490', 'Zhiyong Feng', 'zhiyong feng')
('1889014', 'Zhaopeng Meng', 'zhaopeng meng')
('2375971', 'Tianyi Cao', 'tianyi cao')
('36675950', 'Caichao Dong', 'caichao dong')
02601d184d79742c7cd0c0ed80e846d95def052eGraphical Representation for Heterogeneous
Face Recognition
('2299758', 'Chunlei Peng', 'chunlei peng')
('10699750', 'Xinbo Gao', 'xinbo gao')
('2870173', 'Nannan Wang', 'nannan wang')
('38158055', 'Jie Li', 'jie li')
02cc96ad997102b7c55e177ac876db3b91b4e72cMuseumVisitors: a dataset for pedestrian and group detection, gaze estimation
and behavior understanding
('36971654', 'Federico Bartoli', 'federico bartoli')
('2973738', 'Giuseppe Lisanti', 'giuseppe lisanti')
('2831602', 'Lorenzo Seidenari', 'lorenzo seidenari')
('2602265', 'Svebor Karaman', 'svebor karaman')
('8196487', 'Alberto Del Bimbo', 'alberto del bimbo')
1{firstname.lastname}@unifi.it, University of Florence
2sk4089@columbia.edu, Columbia University
02e43d9ca736802d72824892c864e8cfde13718eTransferring a Semantic Representation for Person Re-Identification and
Search
Shi, Z; Yang, Y; Hospedales, T; XIANG, T; IEEE Conference on Computer Vision and
Pattern Recognition
© 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be
obtained for all other uses, in any current or future media, including reprinting/republishing
this material for advertising or promotional purposes, creating new collective works, for resale
or redistribution to servers or lists, or reuse of any copyrighted component of this work in
other works.
For additional information about this publication click this link.
http://qmro.qmul.ac.uk/xmlui/handle/123456789/10075
Information about this research object was correct at the time of download; we occasionally
make corrections to records, please therefore check the published record when citing. For
more information contact scholarlycommunications@qmul.ac.uk
02fda07735bdf84554c193811ba4267c24fe2e4aIllumination Invariant Face Recognition
Using Near-Infrared Images
('34679741', 'Stan Z. Li', 'stan z. li')
('1724841', 'Rufeng Chu', 'rufeng chu')
('40397682', 'Shengcai Liao', 'shengcai liao')
('39306651', 'Lun Zhang', 'lun zhang')
023ed32ac3ea6029f09b8c582efbe3866de7d00aCENTER FOR
MACHINE PERCEPTION
Discriminative learning from
partially annotated examples
CZECH TECHNICAL
UNIVERSITY IN PRAGUE
Study Programme: Electrical Engineering and
Information Technology
Branch of Study: Artificial Intelligence and Biocybernetics
CTU–CMP–2016–07
June 14, 2016
ftp://cmp.felk.cvut.cz/pub/cvl/articles/antoniuk/Antoniuk-TR-2016-07.pdf
Available at
Thesis Advisors: Ing. Vojtěch Franc, Ph.D.,
prof. Ing. Václav Hlaváč, CSc.
Acknowledgements: SGS15/201/OHK3/3T/13, CAK/TE01020197,
UP-Driving/688652, GACR/P103/12/G084.
Research Reports of CMP, Czech Technical University in Prague, No
Published by
Center for Machine Perception, Department of Cybernetics
Faculty of Electrical Engineering, Czech Technical University
Technická 2, 166 27 Prague 6, Czech Republic
fax +420 2 2435 7385, phone +420 2 2435 7637, www: http://cmp.felk.cvut.cz
('2742026', 'Kostiantyn Antoniuk', 'kostiantyn antoniuk')
antonkos@fel.cvut.cz
0241513eeb4320d7848364e9a7ef134a69cbfd55Supervised Translation-Invariant Sparse
Coding
University of Illinois at Urbana Champaign
²NEC Laboratories America at Cupertino
('1706007', 'Jianchao Yang', 'jianchao yang')
('38701713', 'Kai Yu', 'kai yu')
02dd0af998c3473d85bdd1f77254ebd71e6158c6PPP: Joint Pointwise and Pairwise Image Label Prediction
1Department of Computer Science, Arizona State Univerity
2Yahoo Research
('33513248', 'Yilin Wang', 'yilin wang')
('1736632', 'Jiliang Tang', 'jiliang tang')
{yilinwang,suhang.wang,huan.liu,baoxin.li}@asu.edu
jlt@yahoo-inc.com
0290523cabea481e3e147b84dcaab1ef7a914612Generated Motion Maps
Tokyo Denki University
National Institute of Advanced Industrial Science and Technology (AIST)
('20505300', 'Yuta Matsuzaki', 'yuta matsuzaki')
('34935749', 'Kazushige Okayasu', 'kazushige okayasu')
('2462801', 'Akio Nakamura', 'akio nakamura')
('1730200', 'Hirokatsu Kataoka', 'hirokatsu kataoka')
matsuzaki.y, okayasu.k@is.dendai.ac.jp, nkmr-a@cck.dendai.ac.jp
hirokatsu.kataoka@aist.go.jp
0229829e9a1eed5769a2b5eccddcaa7cd9460b92Pooled Motion Features for First-Person Videos
Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA
Figure 1: Overall representation framework of our pooled time series (PoT). Given a sequence of per-frame feature descriptors (e.g., HOF or CNN features) from a video, PoT represents motion information in the video by computing short-term/long-term changes in each descriptor value.
In this paper, we present a new feature representation for first-person videos. In first-person video understanding (e.g., activity recognition [4]), it is very important to capture both entire scene dynamics (i.e., egomotion) and salient local motion observed in videos. We describe a representation framework
('1904850', 'Brandon Rothrock', 'brandon rothrock')
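The caption's core idea, summarizing a descriptor time series by its short-term changes, can be sketched as simple gradient-style pooling. This is only a minimal illustration: the full PoT representation combines multiple pooling operators over temporal scales, and the function name and toy data below are mine, not from the paper.

```python
import numpy as np

def pot_gradient_pooling(series):
    """Pool a per-frame descriptor time series (T x D) into a fixed-size
    vector by summing positive and negative frame-to-frame changes per
    dimension, capturing short-term motion in each descriptor value."""
    diffs = np.diff(series, axis=0)         # (T-1) x D frame-to-frame changes
    pos = np.maximum(diffs, 0).sum(axis=0)  # accumulated increases
    neg = np.maximum(-diffs, 0).sum(axis=0) # accumulated decreases
    return np.concatenate([pos, neg])       # 2D-dimensional video vector

series = np.array([[0.0, 1.0], [2.0, 0.5], [1.0, 1.5]])
print(pot_gradient_pooling(series))  # → [2.  1.  1.  0.5]
```

Splitting positive and negative changes keeps increasing and decreasing motion patterns distinguishable, which a plain sum of differences would cancel out.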
025720574ef67672c44ba9e7065a83a5d6075c36Unsupervised Learning of Video Representations using LSTMs
University of Toronto, 6 Kings College Road, Toronto, ON M5S 3G4 CANADA
('2897313', 'Nitish Srivastava', 'nitish srivastava')
('2711409', 'Elman Mansimov', 'elman mansimov')
('1776908', 'Ruslan Salakhutdinov', 'ruslan salakhutdinov')
NITISH@CS.TORONTO.EDU
EMANSIM@CS.TORONTO.EDU
RSALAKHU@CS.TORONTO.EDU
029317f260b3303c20dd58e8404a665c7c5e7339
Character Identification in Feature-Length Films
Using Global Face-Name Matching
and Yeh-Min Huang, Member, IEEE
('1688633', 'Changsheng Xu', 'changsheng xu')
('1694235', 'Hanqing Lu', 'hanqing lu')
026e4ee480475e63ae68570d73388f8dfd4b4cdeEvaluating gender portrayal in Bangladeshi TV
Department of CSE
Eastern University
Dhaka, Bangladesh
Department of Women and Gender Studies
Rawshan E Fatima
Dhaka University
Dhaka, Bangladesh
Khulna University of Engineering and Technology
Massachusetts Institute of Technology
Department of EEE
Khulna, Bangladesh
Media Lab
Cambridge, MA, USA
('34688479', 'Md. Naimul Hoque', 'md. naimul hoque')
('40081015', 'Manash Kumar Mandal', 'manash kumar mandal')
('1706468', 'Nazmus Saquib', 'nazmus saquib')
naimul.et@easternuni.edu.bd
rawshan.e.fatima@gmail.com
manashmndl@gmail.com
saquib@mit.edu
02e628e99f9a1b295458cb453c09863ea1641b67Two-stage Convolutional Part Heatmap
Regression for the 1st 3D Face Alignment in the
Wild (3DFAW) Challenge
Computer Vision Laboratory, University of Nottingham, Nottingham, UK
('3458121', 'Adrian Bulat', 'adrian bulat')
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
{adrian.bulat,yorgos.tzimiropoulos}@nottingham.ac.uk
0273414ba7d56ab9ff894959b9d46e4b2fef7fd0Photographic home styles in Congress: a
computer vision approach∗
December 1, 2016
('40845190', 'L. Jason Anastasopoulos', 'l. jason anastasopoulos')
('2007721', 'Dhruvil Badani', 'dhruvil badani')
('2647307', 'Crystal Lee', 'crystal lee')
('2361255', 'Shiry Ginosar', 'shiry ginosar')
('40411568', 'Jake Williams', 'jake williams')
02e133aacde6d0977bca01ffe971c79097097b7f
02567fd428a675ca91a0c6786f47f3e35881bcbdACCEPTED BY IEEE TIP
Deep Label Distribution Learning
With Label Ambiguity
('2226422', 'Bin-Bin Gao', 'bin-bin gao')
('1694501', 'Chao Xing', 'chao xing')
('3407628', 'Chen-Wei Xie', 'chen-wei xie')
('1808816', 'Jianxin Wu', 'jianxin wu')
('1735299', 'Xin Geng', 'xin geng')
02f4b900deabbe7efa474f2815dc122a4ddb5b76Local and Global Optimization Techniques in Graph-based Clustering
The University of Tokyo, Japan
('11682769', 'Daiki Ikami', 'daiki ikami')
('2759239', 'Toshihiko Yamasaki', 'toshihiko yamasaki')
('1712839', 'Kiyoharu Aizawa', 'kiyoharu aizawa')
{ikami, yamasaki, aizawa}@hal.t.u-tokyo.ac.jp
029b53f32079063047097fa59cfc788b2b550c4b
02bd665196bd50c4ecf05d6852a4b9ba027cd9d0
026b5b8062e5a8d86c541cfa976f8eee97b30ab8MDLFace: Memorability Augmented Deep Learning for Video Face Recognition
IIIT-Delhi, India
('1931069', 'Gaurav Goswami', 'gaurav goswami')
('1875774', 'Romil Bhardwaj', 'romil bhardwaj')
('39129417', 'Richa Singh', 'richa singh')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
{gauravgs,romil11092,rsingh,mayank}@iiitd.ac.in
0235b2d2ae306b7755483ac4f564044f46387648Recognition of Facial Attributes
using Adaptive Sparse Representations
of Random Patches
1 Department of Computer Science
Pontificia Universidad Católica de Chile
http://dmery.ing.puc.cl
2 Department of Computer Science & Engineering
University of Notre Dame
http://www.nd.edu/~kwb
('1797475', 'Domingo Mery', 'domingo mery')
02467703b6e087799e04e321bea3a4c354c5487dTo appear in the CVPR Workshop on Biometrics, June 2016
Grouper: Optimizing Crowdsourced Face Annotations∗
Noblis
Noblis
Noblis
Noblis
Michigan State University
('9453012', 'Jocelyn C. Adams', 'jocelyn c. adams')
('7996649', 'Kristen C. Allen', 'kristen c. allen')
('15282121', 'Tim Miller', 'tim miller')
('1718102', 'Nathan D. Kalka', 'nathan d. kalka')
('6680444', 'Anil K. Jain', 'anil k. jain')
jocelyn.adams@noblis.org
kristen.allen@noblis.org
timothy.miller@noblis.org
nathan.kalka@noblis.org
jain@cse.msu.edu
02e39f23e08c2cb24d188bf0ca34141f3cc72d47REMOVING ILLUMINATION ARTIFACTS FROM FACE IMAGES USING THE NUISANCE
ATTRIBUTE PROJECTION
Vitomir Štruc, Boštjan Vesnicer, France Mihelič, Nikola Pavešić
Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, SI-1000 Ljubljana, Slovenia
023be757b1769ecb0db810c95c010310d7daf00bYANG, MOU, ZHANG ET AL.: FACE ALIGNMENT ASSISTED BY HEAD POSE ESTIMATION
Face Alignment Assisted by Head Pose
Estimation
1 Computer Laboratory
University of Cambridge
Cambridge, UK
2 School of EECS
Queen Mary University of London
London, UK
3 Faculty of Arts & Sciences
Harvard University
Cambridge, MA, US
('2966679', 'Heng Yang', 'heng yang')
('2734386', 'Wenxuan Mou', 'wenxuan mou')
('40491398', 'Yichi Zhang', 'yichi zhang')
('1744405', 'Ioannis Patras', 'ioannis patras')
('1781916', 'Hatice Gunes', 'hatice gunes')
('39626495', 'Peter Robinson', 'peter robinson')
heng.yang@cl.cam.ac.uk
w.mou@qmul.ac.uk
yichizhang@fas.harvard.edu
i.patras@qmul.ac.uk
h.gunes@qmul.ac.uk
peter.robinson@cl.cam.ac.uk
0278acdc8632f463232e961563e177aa8c6d6833Selective Transfer Machine for Personalized
Facial Expression Analysis
1 INTRODUCTION
Index Terms—Facial expression analysis, personalization, domain adaptation, transfer learning, support vector machine (SVM)
A UTOMATIC facial AU detection confronts a number of
('39336289', 'Wen-Sheng Chu', 'wen-sheng chu')
('3141839', 'Fernando De la Torre', 'fernando de la torre')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
0209389b8369aaa2a08830ac3b2036d4901ba1f1DenseReg: Fully Convolutional Dense Shape Regression In-the-Wild
Rıza Alp Güler 1
1INRIA-CentraleSupélec, France
Imperial College London, UK
University College London, UK
('2814229', 'George Trigeorgis', 'george trigeorgis')
('2788012', 'Epameinondas Antonakos', 'epameinondas antonakos')
('2796644', 'Patrick Snape', 'patrick snape')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('2010660', 'Iasonas Kokkinos', 'iasonas kokkinos')
1riza.guler@inria.fr
2{g.trigeorgis, e.antonakos, p.snape,s.zafeiriou}@imperial.ac.uk
3i.kokkinos@cs.ucl.ac.uk
02c993d361dddba9737d79e7251feca026288c9c
02239ae5e922075a354169f75f684cad8fdfd5abCommonly Uncommon:
Semantic Sparsity in Situation Recognition
Computer Science and Engineering, University of Washington, Seattle, WA
Allen Institute for Artificial Intelligence (AI2), Seattle, WA
University of Virginia, Charlottesville, VA
('2064210', 'Mark Yatskar', 'mark yatskar')
('2004053', 'Vicente Ordonez', 'vicente ordonez')
('2270286', 'Ali Farhadi', 'ali farhadi')
[my89, lsz, ali]@cs.washington.edu, vicente@cs.virginia.edu
02d650d8a3a9daaba523433fbe93705df0a7f4b1How Does Aging Affect Facial Components?
Michigan State University
('40653304', 'Charles Otto', 'charles otto')
('34393045', 'Hu Han', 'hu han')
{ottochar,hhan,jain}@cse.msu.edu
0294f992f8dfd8748703f953925f9aee14e1b2a2Blur-Robust Face Recognition via
Transformation Learning
Beijing University of Posts and Telecommunications, Beijing, China
('40448827', 'Jun Li', 'jun li')
('1690083', 'Chi Zhang', 'chi zhang')
('23224233', 'Jiani Hu', 'jiani hu')
('1774956', 'Weihong Deng', 'weihong deng')
02820c1491b10a1ff486fed32c269e4077c36551Active User Authentication for Smartphones: A Challenge
Data Set and Benchmark Results
1Department of Electrical and Computer Engineering and the Center for Automation Research,
UMIACS, University of Maryland, College Park, MD
Rutgers, The State University of New Jersey, 508 CoRE, 94 Brett Rd, Piscataway, NJ
('3152615', 'Upal Mahbub', 'upal mahbub')
('40599829', 'Sayantan Sarkar', 'sayantan sarkar')
{umahbub, ssarkar2, rama}@umiacs.umd.edu
vishal.m.patel@rutgers.edu∗
a40edf6eb979d1ddfe5894fac7f2cf199519669fImproving Facial Attribute Prediction using Semantic Segmentation
Center for Research in Computer Vision
University of Central Florida
('3222250', 'Mahdi M. Kalayeh', 'mahdi m. kalayeh')
('40206014', 'Boqing Gong', 'boqing gong')
('1745480', 'Mubarak Shah', 'mubarak shah')
Mahdi@eecs.ucf.edu
bgong@crcv.ucf.edu
shah@crcv.ucf.edu
a46283e90bcdc0ee35c680411942c90df130f448
a4a5ad6f1cc489427ac1021da7d7b70fa9a770f2Yudistira and Kurita EURASIP Journal on Image and Video
Processing (2017) 2017:85
DOI 10.1186/s13640-017-0235-9
EURASIP Journal on Image
and Video Processing
RESEARCH
Open Access
Gated spatio and temporal convolutional
neural network for activity recognition:
towards gated multimodal deep learning
('2035597', 'Novanto Yudistira', 'novanto yudistira')
('1742728', 'Takio Kurita', 'takio kurita')
a4876b7493d8110d4be720942a0f98c2d116d2a0Multi-velocity neural networks for gesture recognition in videos
Massachusetts Institute of Technology
Cambridge, MA
('37381309', 'Otkrist Gupta', 'otkrist gupta')
('2283049', 'Dan Raviv', 'dan raviv')
('1717566', 'Ramesh Raskar', 'ramesh raskar')
otkrist@mit.edu
raviv@mit.edu
raskar@media.mit.edu
a40f8881a36bc01f3ae356b3e57eac84e989eef0End-to-end semantic face segmentation with conditional
random fields as convolutional, recurrent and adversarial
networks
('3038211', 'Umut Güçlü', 'umut güçlü')
('1920611', 'Meysam Madadi', 'meysam madadi')
('7855312', 'Sergio Escalera', 'sergio escalera')
('1857280', 'Xavier Baró', 'xavier baró')
('38485168', 'Rob van Lier', 'rob van lier')
('2052286', 'Marcel van Gerven', 'marcel van gerven')
a4a0b5f08198f6d7ea2d1e81bd97fea21afe3fc3Efficient Recurrent Residual Networks Improved by
Feature Transfer
MSc Thesis
written by
degree of
Master of Science
at the Delft University of Technology
Date of the public defense: August 31, 2017
Members of the Thesis Committee:
Prof. Marcel Reinders
Dr. Julian Urbano Merino
Dr. Adriana Gonzalez (Bosch)
('1694101', 'Yue Liu', 'yue liu')
('37806314', 'Silvia-Laura Pintea', 'silvia-laura pintea')
('30445013', 'Jan van Gemert', 'jan van gemert')
('2372050', 'Ildiko Suveg', 'ildiko suveg')
a46086e210c98dcb6cb9a211286ef906c580f4e8Fusing Multi-Stream Deep Networks for Video Classification
Fudan University, Shanghai, China
Alibaba Group, Seattle, USA
('3099139', 'Zuxuan Wu', 'zuxuan wu')
('1717861', 'Yu-Gang Jiang', 'yu-gang jiang')
('31825486', 'Xi Wang', 'xi wang')
('1743864', 'Hao Ye', 'hao ye')
('1713721', 'Xiangyang Xue', 'xiangyang xue')
('1715001', 'Jun Wang', 'jun wang')
zxwu, ygj, xwang10, haoye10, xyxue@fudan.edu.cn
wongjun@gmail.com
a44590528b18059b00d24ece4670668e86378a79Learning the Hierarchical Parts of Objects by Deep
Non-Smooth Nonnegative Matrix Factorization
('19275690', 'Jinshi Yu', 'jinshi yu')
('1764724', 'Guoxu Zhou', 'guoxu zhou')
('1747156', 'Andrzej Cichocki', 'andrzej cichocki')
('1795838', 'Shengli Xie', 'shengli xie')
a472d59cff9d822f15f326a874e666be09b70cfdVISUAL LEARNING WITH WEAKLY LABELED VIDEO
A DISSERTATION
SUBMITTED TO THE DEPARTMENT OF COMPUTER SCIENCE
AND THE COMMITTEE ON GRADUATE STUDIES
OF STANFORD UNIVERSITY
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
May 2015
('3355264', 'Kevin Tang', 'kevin tang')
a4c430b7d849a8f23713dc283794d8c1782198b2Video Concept Embedding
1. Introduction
In the area of natural language processing, there has been much success in learning distributed representations for words as vectors. Doing so has an advantage over using simple labels, or a one-hot coding scheme, for representing individual words. In learning distributed vector representations for words, we manage to capture the semantic relatedness of words in vector distance. For example, the word vectors for "car" and "road" should end up closer together in the vector space representation than those for "car" and "penguin". This has been very useful in the NLP areas of machine translation and semantic understanding.
In the computer vision domain, video understanding is a very important topic. It is made hard by the large amount of high-dimensional data in videos. One strategy to address this is to summarize a video into concepts (e.g., running, climbing, cooking). This allows us to represent a video in a way that is very natural to humans, as a sequence of semantic events. However, this has the same shortcomings that one-hot coding of words has.
The goal of this project is to find a meaningful way to embed video concepts into a vector space. The hope is to capture the semantic relatedness of concepts in a vector representation, essentially doing for videos what word2vec did for text. Having a vector representation for video concepts would help in areas such as semantic video retrieval and video classification, as it would provide a statistically meaningful and robust way of representing videos as lower-dimensional vectors. It would be interesting to observe whether such a vector representation results in analogical reasoning using simple vector arithmetic.
Figure 1 shows an example of concepts detected at different snapshots in the same video. For example, consider the scenario where the concepts Kicking a ball, Soccer and Running are detected in the three snapshots respectively (from left to right). Since these snapshots belong to the same video, we expect that these concepts are semantically similar and that they should lie close in the resulting embedding space. The aim of this project is to find a vector space embedding for the space of concepts such that the vector representations of semantically similar concepts (in this case, Running, Kicking and Soccer) lie in the vicinity of each other.
Figure 1. Example snapshots from the same video
2. Related Work
(Mikolov et al., 2013a) introduces the popular skip-gram model to learn distributed representations of words from very large linguistic datasets. Specifically, it uses each word as the input to a log-linear classifier and predicts the words within a certain range before and after the current word in the dataset. (Mikolov et al., 2013b) extends this model to learn representations for phrases in addition to words, and also improves the quality of the vectors and the training speed. These works also show that the skip-gram model exhibits a linear structure that enables it to perform reasoning using basic vector arithmetic. The skip-gram model from these works is the basis of our model for learning representations for concepts.
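The skip-gram objective described above, each token predicting its neighbors within a fixed window, is easiest to see in the (center, context) pairs it trains on. A minimal sketch, assuming per-video concept sequences play the role of sentences; `skipgram_pairs` and the toy concepts are illustrative names, not from the source.

```python
def skipgram_pairs(sequence, window=2):
    """Generate the (center, context) training pairs of the skip-gram model:
    each token is paired with every token within `window` positions of it."""
    pairs = []
    for i, center in enumerate(sequence):
        lo = max(0, i - window)
        hi = min(len(sequence), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # skip the center token itself
                pairs.append((center, sequence[j]))
    return pairs

# Treat a per-video concept sequence like a sentence of words.
video_concepts = ["kicking", "soccer", "running"]
print(skipgram_pairs(video_concepts, window=2))
```

In the full model each pair feeds a log-linear classifier that maximizes the probability of the context token given the center token; here only the pair generation is shown.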
(Le & Mikolov, 2014) extends the concept of word vectors to sentences and paragraphs. Their approach is more involved than a simple bag-of-words approach, in that it tries to capture the nature of the words in the paragraph. They construct the paragraph vector in such a way that it can be used to predict the word vectors contained inside the paragraph. They do this by first learning word vectors such that the probability of a word vector given its context is maximized. To learn paragraph vectors, the paragraph is essentially treated as a word, and the words it contains become the context. This provides a key insight into how a set of concept vectors can be used together to provide a more meaningful vector representation for videos, which can then be used for retrieval.
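The paragraph-vector construction above, where the paragraph acts as an extra "word" whose contained words become the context, can be sketched for videos: a video id joins each concept's context. A minimal sketch under that reading; `pvdm_examples` and `"vid42"` are hypothetical names, and the real model jointly trains the id's vector rather than just generating examples.

```python
def pvdm_examples(video_id, concepts, window=1):
    """Paragraph-vector-style training examples: the video id acts as an
    always-present context token that helps predict each concept from
    its neighboring concepts."""
    examples = []
    for i, target in enumerate(concepts):
        context = [video_id]  # the "paragraph" token, shared across the video
        for j in range(max(0, i - window), min(len(concepts), i + window + 1)):
            if j != i:
                context.append(concepts[j])
        examples.append((context, target))
    return examples

print(pvdm_examples("vid42", ["kicking", "soccer", "running"]))
```

Because the id appears in every example of its video, its learned vector absorbs video-level information that pure concept windows miss, which is what makes it usable for retrieval.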
(Hu et al.) utilizes structured knowledge in the data to learn distributed representations that improve semantic relatedness
('2387189', 'Anirudh Vemula', 'anirudh vemula')
('32203964', 'Rahul Nallamothu', 'rahul nallamothu')
('9619757', 'Syed Zahir Bokhari', 'syed zahir bokhari')
AVEMULA1@ANDREW.CMU.EDU
RNALLAMO@ANDREW.CMU.EDU
SBOKHARI@ANDREW.CMU.EDU
a4cc626da29ac48f9b4ed6ceb63081f6a4b304a2
a4f37cfdde3af723336205b361aefc9eca688f5cRecent Advances
in Face Recognition
a481e394f58f2d6e998aa320dad35c0d0e15d43cSelectively Guiding Visual Concept Discovery
Colorado State University
Fort Collins, Colorado
('2857477', 'Maggie Wigness', 'maggie wigness')
('1694404', 'Bruce A. Draper', 'bruce a. draper')
('1757322', 'J. Ross Beveridge', 'j. ross beveridge')
mwigness,draper,ross@cs.colostate.edu
a30869c5d4052ed1da8675128651e17f97b87918Fine-Grained Comparisons with Attributes
('2206630', 'Aron Yu', 'aron yu')
('1794409', 'Kristen Grauman', 'kristen grauman')
a3ebacd8bcbc7ddbd5753935496e22a0f74dcf7bFirst International Workshop on Adaptive Shot Learning
for Gesture Understanding and Production
ASL4GUP 2017
Held in conjunction with IEEE FG 2017, in May 30, 2017,
Washington DC, USA
a3d8b5622c4b9af1f753aade57e4774730787a00Pose-Aware Person Recognition
Anoop Namboodiri *
* CVIT, IIIT Hyderabad, India
† Facebook AI Research
('37956314', 'Vijay Kumar', 'vijay kumar')
('2210374', 'Manohar Paluri', 'manohar paluri')
('1694502', 'C. V. Jawahar', 'c. v. jawahar')
a322479a6851f57a3d74d017a9cb6d71395ed806Towards Pose Invariant Face Recognition in the Wild
National University of Singapore
National University of Defense Technology
Nanyang Technological University
4Panasonic R&D Center Singapore
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Qihoo 360 AI Institute
('2668358', 'Sugiri Pranata', 'sugiri pranata')
('3493398', 'Shengmei Shen', 'shengmei shen')
('1757173', 'Junliang Xing', 'junliang xing')
('46509407', 'Jian Zhao', 'jian zhao')
('5524736', 'Yu Cheng', 'yu cheng')
('33419682', 'Lin Xiong', 'lin xiong')
('2757639', 'Jianshu Li', 'jianshu li')
('40345914', 'Fang Zhao', 'fang zhao')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('33221685', 'Jiashi Feng', 'jiashi feng')
a3017bb14a507abcf8446b56243cfddd6cdb542bFace Localization and Recognition in Varied
Expressions and Illumination
Hui-Yu Huang, Shih-Hang Hsu
a3c8c7da177cd08978b2ad613c1d5cb89e0de741A Spatio-temporal Approach for Multiple
Object Detection in Videos Using Graphs
and Probability Maps
University of São Paulo, São Paulo, Brazil
2 Institut Mines-Télécom, Télécom ParisTech, CNRS LTCI, Paris, France
('1863046', 'Henrique Morimitsu', 'henrique morimitsu')
('1695917', 'Isabelle Bloch', 'isabelle bloch')
henriquem87@gmail.com
a378fc39128107815a9a68b0b07cffaa1ed32d1fDetermining a Suitable Metric When using Non-negative Matrix Factorization∗
Computer Vision Center, Dept. Inform`atica
Universitat Aut`onoma de Barcelona
08193 Bellaterra, Barcelona, Spain
('1761407', 'David Guillamet', 'david guillamet')
{davidg,jordi}@cvc.uab.es
a34d75da87525d1192bda240b7675349ee85c123Naive-Deep Face Recognition: Touching the Limit of LFW Benchmark or Not?
Face++, Megvii Inc.
Face++, Megvii Inc.
Face++, Megvii Inc.
('1848243', 'Erjin Zhou', 'erjin zhou')
('2695115', 'Zhimin Cao', 'zhimin cao')
('2274228', 'Qi Yin', 'qi yin')
zej@megvii.com
czm@megvii.com
yq@megvii.com
a301ddc419cbd900b301a95b1d9e4bb770afc6a3Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)
DECK: Discovering Event Composition Knowledge from
Web Images for Zero-Shot Event Detection and Recounting in Videos
University of Southern California
IIIS, Tsinghua University
‡ Google Research
('2551285', 'Chuang Gan', 'chuang gan')
('1726241', 'Chen Sun', 'chen sun')
a3dc109b1dff3846f5a2cc1fe2448230a76ad83fJ.Savitha et al, International Journal of Computer Science and Mobile Computing, Vol.4 Issue.4, April- 2015, pg. 722-731
Available Online at www.ijcsmc.com
International Journal of Computer Science and Mobile Computing
A Monthly Journal of Computer Science and Information Technology
ISSN 2320–088X
IJCSMC, Vol. 4, Issue. 4, April 2015, pg.722 – 731
RESEARCH ARTICLE
ACTIVE APPEARANCE MODEL AND PCA
BASED FACE RECOGNITION SYSTEM
Mrs. J.Savitha M.Sc., M.Phil.
Ph.D Research Scholar, Karpagam University, Coimbatore, Tamil Nadu, India
Dr. A.V.Senthil Kumar
Director, Hindustan College of Arts and Science, Coimbatore, Tamil Nadu, India
Email: savitha.sanjay1@gmail.com
Email: avsenthilkumar@gmail.com
a3f69a073dcfb6da8038607a9f14eb28b5dab2dbProceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18)
a38045ed82d6800cbc7a4feb498e694740568258UNLV Theses, Dissertations, Professional Papers, and Capstones
5-2010
African American and Caucasian males' evaluation
of racialized female facial averages
Rhea M. Watson
University of Nevada Las Vegas
Follow this and additional works at: http://digitalscholarship.unlv.edu/thesesdissertations
Part of the Cognition and Perception Commons, Race and Ethnicity Commons, and the Social
Psychology Commons
Repository Citation
Watson, Rhea M., "African American and Caucasian males' evaluation of racialized female facial averages" (2010). UNLV Theses,
Dissertations, Professional Papers, and Capstones. 366.
http://digitalscholarship.unlv.edu/thesesdissertations/366
This Thesis is brought to you for free and open access by Digital Scholarship@UNLV. It has been accepted for inclusion in UNLV Theses, Dissertations,
Professional Papers, and Capstones by an authorized administrator of Digital Scholarship@UNLV. For more information, please contact
digitalscholarship@unlv.edu.
a3f684930c5c45fcb56a2b407d26b63879120cbfLPM for Fast Action Recognition with Large Number of Classes
School of Electrical Engineering and Computer Scinece
University of Ottawa, Ottawa, On, Canada
Department of Electronics and Information Engineering
Huazhong University of Science and Technology, Wuhan, China
1. Introduction
In this paper, we provide an overview of the Local Part Model system for the THUMOS 2013: Action Recognition with a Large Number of Classes1 evaluations. Our system uses a combination of fast random sampling feature extraction and local part model feature representation.
Over the last decade, advances in the areas of computer vision and pattern recognition have fuelled a large amount of research with great progress in human action recognition. Much of the early progress [1, 5, 14] was reported on atomic actions with several categories, based on staged videos captured under controlled settings such as KTH [14] and Weizmann [1]. More recently, there is emerging interest in sophisticated algorithms for recognizing actions in realistic video. This interest involves two prospects: 1) In comparison to image classification, which evaluates millions of images with over one thousand categories, action recognition is still at its initial stage. It is important to develop reliable, automatic methods which scale to large numbers of action categories captured in realistic settings. 2) With over 100 hours of video uploaded to YouTube every minute2, and millions of surveillance cameras all over the world, efficient recognition of visual events in video is crucial for real-world applications.
Recent studies [5, 10, 11, 21] have shown that local spatio-temporal features can achieve remarkable performance when represented by the popular bag-of-features method. A recent trend is the use of densely sampled points [16, 21] and trajectories [7, 19] to improve performance. The Local Part Model [15] achieved state-of-the-art performance on real-life datasets with high efficiency when combined with random sampling over high-density sampling grids. In this paper, we focus on recognizing human actions "in the wild" with a large number of classes. More specifically, we aim to improve the state-of-the-art Local Part Model method on large-scale real-life action datasets.
The paper is organized as follows: The next section reviews the LPM algorithm. Section 3 introduces the four different descriptors we will use. In Section 4, we present some experimental results and analysis. The paper is completed with a brief conclusion. The code for computing random sampling with the Local Part Model is available online3.
1http://crcv.ucf.edu/ICCV13-Action-Workshop/index.html
2http://www.youtube.com/yt/press/statistics.html
2. LPM algorithm
Inspired by the multiscale, deformable part model [6] for object classification, we proposed a 3D multiscale part model in [16]. However, instead of adopting deformable "parts", we used "parts" with fixed size and location, in order to maintain both structural information and the ordering of local events for action recognition. As shown in Figure 1, the local part model includes both a coarse primitive-level root feature covering event-content statistics and higher-resolution overlapping part filters incorporating local structural and temporal relations.
More recently, we [15] applied a random sampling method with the local part model over a very dense sampling grid and achieved state-of-the-art performance on realistic large-scale datasets, with potential for real-time recognition. Under the local part model, a feature consists of a coarse global root filter and several fine overlapped part filters. The root filter is extracted from the video at half the resolution. This way, a high-density grid can be defined with far fewer samples. For every coarse root filter, a group of fine part filters is computed at full video resolution and at locations relative to their root filter reference position. These part filters
3https://github.com/fshi/actionMBH
('36925389', 'Feng Shi', 'feng shi')
('1745632', 'Emil Petriu', 'emil petriu')
fshi98@gmail.com, {laganier, petriu}@site.uottawa.ca
zhenhaiyu@mail.hust.edu.cn
a3f78cc944ac189632f25925ba807a0e0678c4d5Action Recognition in Realistic Sports Videos ('1799979', 'Khurram Soomro', 'khurram soomro')
('40029556', 'Amir Roshan Zamir', 'amir roshan zamir')
a33f20773b46283ea72412f9b4473a8f8ad751ae
a3a6a6a2eb1d32b4dead9e702824375ee76e3ce7Multiple Local Curvature Gabor Binary
Patterns for Facial Action Recognition
Signal Processing Laboratory (LTS5),
´Ecole Polytechnique F´ed´erale de Lausanne, Switzerland
('2383305', 'Nuri Murat Arar', 'nuri murat arar')
('1710257', 'Jean-Philippe Thiran', 'jean-philippe thiran')
{anil.yuce,murat.arar,jean-philippe.thiran}@epfl.ch
a32c5138c6a0b3d3aff69bcab1015d8b043c91fbShagan Sah, Ameya Shringi, Raymond Ptucha, Aaron Burry, Robert Loce, “Video redaction: a survey and comparison of enabling technologies,” J. Electron. Imaging 26(5), 051406 (2017), doi: 10.1117/1.JEI.26.5.051406.
a32d4195f7752a715469ad99cb1e6ebc1a099de6Hindawi Publishing Corporation
The Scientific World Journal
Volume 2014, Article ID 749096, 10 pages
http://dx.doi.org/10.1155/2014/749096
Research Article
The Potential of Using Brain Images for Authentication
College of Mechatronic Engineering and Automation, National University of Defense Technology
Changsha, Hunan 410073, China
Received 6 May 2014; Accepted 19 June 2014; Published 10 July 2014
Academic Editor: Wangmeng Zuo
Biometric recognition (also known as biometrics) refers to the automated recognition of individuals based on their biological or
behavioral traits. Examples of biometric traits include fingerprint, palmprint, iris, and face. The brain is the most important and
complex organ in the human body. Can it be used as a biometric trait? In this study, we analyze the uniqueness of the brain and
try to use the brain for identity authentication. The proposed brain-based verification system operates in two stages: gray matter
extraction and gray matter matching. A modified brain segmentation algorithm is implemented for extracting gray matter from
an input brain image. Then, an alignment-based matching algorithm is developed for brain matching. Experimental results on two
data sets show that the proposed brain recognition system meets the high accuracy requirement of identity authentication. Although acquiring brain images is currently still time consuming and expensive, they are highly unique and, from a pattern recognition perspective, hold real potential for authentication.
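The two-stage flow described in the abstract can be sketched as a small verification routine; `segment`, `align`, `similarity`, and the threshold are placeholders standing in for the paper's actual algorithms, not its implementation:

```python
# Minimal sketch of a two-stage brain-based verification flow:
# stage 1 extracts gray matter, stage 2 aligns and matches it against
# the enrolled template. All stage functions are supplied by the caller.

def verify(probe_scan, enrolled_gray_matter,
           segment, align, similarity, threshold=0.8):
    """Return True when the probe brain image matches the enrolled one."""
    gm = segment(probe_scan)                    # stage 1: gray matter extraction
    aligned = align(gm, enrolled_gray_matter)   # stage 2a: spatial alignment
    return similarity(aligned, enrolled_gray_matter) >= threshold
```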
1. Introduction
Identity authentication is an important task for different
applications including access control, ATM card verification,
and forensic affairs. Compared with conventional methods
(e.g., key, ID card, and password), biometric recognition
is more resistant to social engineering attacks (e.g., theft).
Biometric traits are also intrinsic to the individual, which makes them impossible to forget. During the past few decades, biometric technologies have become more and more important in various
applications [1, 2]. Among them, recognition technologies
based on fingerprint [3, 4], palmprint [5, 6], iris [7, 8], and
face [9, 10] are the most popular.
The brain is the center of the nervous system and the most
important and complex organ in the human body. Though
different brains may be alike in the way they act and have
similar traits, scientists have confirmed that no two brains are
or will ever be the same [11]. Both genes (what we inherit)
and experience (what we learn) could allow individual brains
to develop in distinctly different ways. Recent studies show
that so-called jumping genes, which ensure that even identical twins differ, may also shape the brain [12]. All
these studies show that the human brain is a work of genius in
its design and capabilities, and it is unique. Though brain gray
matter will change with age or disease, it shows steadiness in
adulthood [13, 14]. The question we address in this study is: can we use the brain for identity authentication?
This paper analyzes the uniqueness of the human brain and proposes using the brain for personal identification
(authentication). Compared with other biometric techniques,
brain recognition is more resistant to forgery (e.g., fake
fingerprints [15]) and spoofing (e.g., face disguise [16]). Brain
recognition is also more reliable for identifying fugitives, since one's brain can hardly be modified, whereas other biological traits can be altered, such as fingerprints [17].
Palaniappan and Mandic [18] established a Visual Evoked
Potential- (VEP-) based biometrics, and simulations have
indicated the significant potential of brain electrical activity
as a biometric tool. However, VEP signals are not robust to ongoing brain activity. Aloui et al. [19] extracted characteristics of
brain images and used them in an application as a biometric
tool to identify individuals. Their method just uses a single
slice of the brain and thus suffers from the influence of
noise. Another drawback of this method is that it only uses
('40326124', 'Fanglin Chen', 'fanglin chen')
('8526311', 'Zongtan Zhou', 'zongtan zhou')
('1730001', 'Hui Shen', 'hui shen')
('2517668', 'Dewen Hu', 'dewen hu')
('40326124', 'Fanglin Chen', 'fanglin chen')
Correspondence should be addressed to Dewen Hu; dwhu@nudt.edu.cn
a3d78bc94d99fdec9f44a7aa40c175d5a106f0b9Recognizing Violence in Movies
CIS400/401 Project Final Report
Univ. of Pennsylvania
Philadelphia, PA
Univ. of Pennsylvania
Philadelphia, PA
Ben Sapp
Univ. of Pennsylvania
Philadelphia, PA
Univ. of Pennsylvania
Philadelphia, PA
('1908780', 'Lei Kang', 'lei kang')
('1685978', 'Ben Taskar', 'ben taskar')
kanglei@seas.upenn.edu
mjiawei@seas.upenn.edu
bensapp@cis.upenn.edu
taskar@cis.upenn.edu
a3eab933e1b3db1a7377a119573ff38e780ea6a3978-1-4244-4296-6/10/$25.00 ©2010 IEEE
ICASSP 2010
a308077e98a611a977e1e85b5a6073f1a9bae6f0Hindawi Publishing Corporation
The Scientific World Journal
Volume 2014, Article ID 810368, 15 pages
http://dx.doi.org/10.1155/2014/810368
Review Article
Intelligent Screening Systems for Cervical Cancer
Faculty of Engineering Building, University of Malaya, 50603 Kuala Lumpur, Malaysia
Received 24 December 2013; Accepted 11 February 2014; Published 11 May 2014
Academic Editors: S. Balochian, V. Bhatnagar, and Y. Zhang
The advent of medical image digitization has led to image processing and computer-aided diagnosis systems in numerous clinical applications. These technologies can be used to diagnose patients automatically or to serve as a second opinion for pathologists. This paper briefly reviews cervical screening techniques along with their advantages and disadvantages. The digital data produced by these screening techniques serve as input to the computer screening system in place of expert analysis. The four stages of such a system, namely enhancement, feature extraction, feature selection, and classification, are reviewed in detail. Computer systems based on cytology data and electromagnetic spectra achieved better accuracy than those based on other data.
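The four review stages named above can be viewed as a simple pipeline; the stage functions below are placeholders to be filled with concrete methods (e.g., histogram equalization, texture features, a filter-based selector, and an SVM), not any particular system from the review:

```python
# Sketch of the four-stage computer screening pipeline as function
# composition. Each stage is supplied by the caller; the names here
# are illustrative only.

def screen(image, enhance, extract_features, select_features, classify):
    """Run one image through the four review stages and return a label."""
    enhanced = enhance(image)
    features = extract_features(enhanced)
    selected = select_features(features)
    return classify(selected)
```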
1. Introduction
Cervical cancer is a leading cause of mortality and morbidity, comprising approximately 12% of all cancers in women worldwide according to the World Health Organization (WHO).
In fact, the annual global statistics of WHO estimated 470
600 new cases and 233 400 deaths from cervical cancer
around the year 2000. As reported in National Cervical
Cancer Coalition (NCCC) in 2010, cervical cancer is a cancer
of the cervix which is commonly caused by a virus named
Human Papillomavirus (HPV) [1]. The virus can damage
cells in the cervix, namely, squamous cells and glandular
cells that may develop into squamous cell carcinoma (cancer
of the squamous cells) and adenocarcinoma (cancer of the
glandular cells), respectively. Squamous cell carcinoma can
be thought of as similar to skin cancer because it begins on
the surface of the ectocervix. Adenocarcinoma begins further
inside the uterus, in the mucus-producing gland cells of the
endocervix [2].
Cervical cancer develops from normal to precancerous
cells (dysplasia) over a period of two to three decades [3].
Even though the dysplasia cells look like cancer cells, they
are not malignant cells. These cells are known as cervical
intraepithelial neoplasia (CIN) which is usually of low grade,
and they only affect the surface of the cervical tissue. The
majority will regress back to normal spontaneously. Over
time, a small proportion will continue to develop into cancer.
Based on WHO system, the level of CIN growth can be
divided into grades 1, 2, and 3. It should be noted that at least
two-thirds of the CIN 1 lesions, half of the CIN 2 lesions, and
one-third of the CIN 3 lesions will regress back to normal [3].
The median ages of patients with these different precursor
grades are 25, 29, and 34 years, respectively. Ultimately, a
small proportion will develop into infiltrating cancer, usually
from the age of 45 years onwards.
In 1994, the Bethesda system was introduced to simplify
the WHO system. This system divided all cervical epithelial
precursor lesions into two groups: the Low-grade Squamous
Intraepithelial Lesion (LSIL) and High-grade Squamous
Intraepithelial Lesion (HSIL). The LSIL corresponds to CIN1,
while the HSIL includes CIN2 and CIN3 [4].
Since a period of two to three decades is needed for
cervical cancer to reach an invasive state, the incidence and
mortality related to this disease can be significantly reduced
through early detection and proper treatment. Realizing
this fact, a variety of screening tests have therefore been
developed in attempting to be implemented as early cervical
precancerous screening tools.
2. Methodology
This paper reviews 103 journal papers. The papers are
obtained electronically through 2 major scientific databases:
('2905656', 'Yessi Jusman', 'yessi jusman')
('33102280', 'Siew Cheok Ng', 'siew cheok ng')
('2784667', 'Noor Azuan Abu Osman', 'noor azuan abu osman')
('2905656', 'Yessi Jusman', 'yessi jusman')
Correspondence should be addressed to Siew Cheok Ng; siewcng@um.edu.my and Noor Azuan Abu Osman; azuan@um.edu.my
a35dd69d63bac6f3296e0f1d148708cfa4ba80f6Audio Visual Emotion Recognition with Temporal Alignment and Perception
Attention
National Laboratory of Pattern Recognition Institute of Automation, Chinese Academy of Sciences
Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain
Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, CAS
('1850313', 'Linlin Chao', 'linlin chao')
('37670752', 'Jianhua Tao', 'jianhua tao')
('2740129', 'Minghao Yang', 'minghao yang')
('1704841', 'Ya Li', 'ya li')
('1718662', 'Zhengqi Wen', 'zhengqi wen')
{linlin.chao, jhtao, mhyang, yli, zqwen}@nlpr.ia.ac.cn
a3a34c1b876002e0393038fcf2bcb00821737105Face Identification across Different Poses and Illuminations
with a 3D Morphable Model
V. Blanz, S. Romdhani, and T. Vetter
University of Freiburg
Georges-K¨ohler-Allee 52, 79110 Freiburg, Germany
fvolker, romdhani, vetterg@informatik.uni-freiburg.de
a3f1db123ce1818971a57330d82901683d7c2b67Poselets and Their Applications in High-Level
Computer Vision
Lubomir Bourdev
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2012-52
http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-52.html
May 1, 2012
a36c8a4213251d3fd634e8893ad1b932205ad1caVideos from the 2013 Boston Marathon:
An Event Reconstruction Dataset for
Synchronization and Localization
CMU-LTI-018
Language Technologies Institute
School of Computer Science
Carnegie Mellon University
5000 Forbes Ave., Pittsburgh, PA 15213
www.lti.cs.cmu.edu
© October 1, 2016
('1915796', 'Junwei Liang', 'junwei liang')
('47896638', 'Han Lu', 'han lu')
('2927024', 'Shoou-I Yu', 'shoou-i yu')
('7661726', 'Alexander G. Hauptmann', 'alexander g. hauptmann')
a3a97bb5131e7e67316b649bbc2432aaa1a6556eCogn Affect Behav Neurosci
DOI 10.3758/s13415-013-0170-x
Role of the hippocampus and orbitofrontal cortex
during the disambiguation of social cues in working memory
Chantal E. Stern
Psychonomic Society, Inc
('2973557', 'Karin Schon', 'karin schon')
a35d3ba191137224576f312353e1e0267e6699a1Increasing security in DRM systems
through biometric authentication.
Securing the exchange
of intellectual property
and providing protection
to multimedia contents in
distribution systems have enabled the
advent of digital rights management
(DRM) systems [5], [14], [21], [47],
[51], [53]. Rights holders should be able to
license, monitor, and track the usage of rights
in a dynamic digital trading environment, espe-
cially in the near future when universal multimedia
access (UMA) becomes a reality, and any multimedia
content will be available anytime, anywhere. In such
DRM systems, encryption algorithms, access control,
key management strategies, identification and tracing
of contents, or copy control will play a prominent role
to supervise and restrict access to multimedia data,
avoiding unauthorized or fraudulent operations.
A key component of any DRM system, also known
as intellectual property management and protection
(IPMP) systems in the MPEG-21 framework, is user
authentication to ensure that
only those with specific rights are
able to access the digital informa-
tion. It is here that biometrics can
play an essential role, reinforcing securi-
ty at all stages where customer authentica-
tion is needed. The ubiquity of users and
devices, where the same user might want to
access multimedia content from different
environments (home, car, work, jogging, etc.) and
also from different devices or media (CD, DVD,
home computer, laptop, PDA, 2G/3G mobile phones,
game consoles, etc.) strengthens the need for reliable
and universal authentication of users.
Classical user authentication systems have been
based on something that you have (like a key, an identification card, etc.) and/or something that you know
(like a password, or a PIN). With biometrics, a new
user authentication paradigm is added: something that
you are (e.g., fingerprints or face) or something that
you do or produce (e.g., handwritten signature or
IEEE SIGNAL PROCESSING MAGAZINE
1053-5888/04/$20.00©2004IEEE
MARCH 2004
('1732220', 'Javier Ortega-Garcia', 'javier ortega-garcia')
('5058247', 'Josef Bigun', 'josef bigun')
('3127386', 'Douglas Reynolds', 'douglas reynolds')
('1775227', 'Joaquin Gonzalez-Rodriguez', 'joaquin gonzalez-rodriguez')
a3a2f3803bf403262b56ce88d130af15e984fff0Building a Compact Relevant Sample Coverage
for Relevance Feedback in Content-Based Image
Retrieval
Tsinghua University, Beijing, China
2 Sensing & Control Technology Laboratory, Omron Corporation, Kyoto, Japan
('38916673', 'Bangpeng Yao', 'bangpeng yao')
('1679380', 'Haizhou Ai', 'haizhou ai')
('1710195', 'Shihong Lao', 'shihong lao')
b56f3a7c50bfcd113d0ba84e6aa41189e262d7aeHarvesting Motion Patterns in Still Images from the Internet
ITCS, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing
University of California, San Diego, La Jolla
Jiajun Wu (jiajunwu.cs@gmail.com)
Yining Wang (ynwang.yining@gmail.com)
Zhulin Li (li-zl12@mails.tsinghua.edu.cn)
Zhuowen Tu (ztu@ucsd.edu)
b5968e7bb23f5f03213178c22fd2e47af3afa04cMulti-Human Parsing in the Wild
National University of Singapore
Beijing Jiaotong University
March 16, 2018
('2757639', 'Jianshu Li', 'jianshu li')
('2263674', 'Yidong Li', 'yidong li')
('46509407', 'Jian Zhao', 'jian zhao')
('1715286', 'Terence Sim', 'terence sim')
('33221685', 'Jiashi Feng', 'jiashi feng')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
b5cd9e5d81d14868f1a86ca4f3fab079f63a366dTag-based Video Retrieval by Embedding Semantic Content in a Continuous
Word Space
University of Southern California
Ram Nevatia
Cees G.M. Snoek
University of Amsterdam
('3407713', 'Arnav Agharwal', 'arnav agharwal')
('3407447', 'Rama Kovvuri', 'rama kovvuri')
{agharwal,nkovvuri,nevatia}@usc.edu
cgmsnoek@uva.nl
b558be7e182809f5404ea0fcf8a1d1d9498dc01aBottom-up and top-down reasoning with convolutional latent-variable models
UC Irvine
UC Irvine
('2894848', 'Peiyun Hu', 'peiyun hu')
('1770537', 'Deva Ramanan', 'deva ramanan')
peiyunh@ics.uci.edu
dramanan@ics.uci.edu
b5cd8151f9354ee38b73be1d1457d28e39d3c2c6Finding Celebrities in Video
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2006-77
http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-77.html
May 23, 2006
('3317048', 'Nazli Ikizler', 'nazli ikizler')
('1865836', 'Jai Vasanth', 'jai vasanth')
('1744452', 'David Forsyth', 'david forsyth')
b5fc4f9ad751c3784eaf740880a1db14843a85baSIViP (2007) 1:225–237
DOI 10.1007/s11760-007-0016-5
ORIGINAL PAPER
Significance of image representation for face verification
Received: 29 August 2006 / Revised: 28 March 2007 / Accepted: 28 March 2007 / Published online: 1 May 2007
© Springer-Verlag London Limited 2007
('2627097', 'Anil Kumar Sao', 'anil kumar sao')
('1783087', 'B. V. K. Vijaya Kumar', 'b. v. k. vijaya kumar')
b562def2624f59f7d3824e43ecffc990ad780898
b506aa23949b6d1f0c868ad03aaaeb5e5f7f6b57UNIVERSITY OF CALIFORNIA
RIVERSIDE
Modeling Social and Temporal Context for Video Analysis
A Dissertation submitted in partial satisfaction
of the requirements for the degree of
Doctor of Philosophy
in
Computer Science
by
June 2015
Dissertation Committee:
Dr. Christian R. Shelton, Chairperson
Dr. Tao Jiang
Dr. Stefano Lonardi
Dr. Amit Roy-Chowdhury
('12561781', 'Zhen Qin', 'zhen qin')
b599f323ee17f12bf251aba928b19a09bfbb13bbAUTONOMOUS QUADCOPTER VIDEOGRAPHER
by
REY R. COAGUILA
B.S. Universidad Peruana de Ciencias Aplicadas, 2009
A thesis submitted in partial fulfillment of the requirements
for the degree of Master of Science in Computer Science
in the Department of Electrical Engineering and Computer Science
in the College of Engineering and Computer Science
at the University of Central Florida
Orlando, Florida
Spring Term
2015
Major Professor: Gita R. Sukthankar
b5f2846a506fc417e7da43f6a7679146d99c5e96UCF101: A Dataset of 101 Human Actions
Classes From Videos in The Wild
CRCV-TR-12-01
November 2012
Keywords: Action Dataset, UCF101, UCF50, Action Recognition
Center for Research in Computer Vision
University of Central Florida
4000 Central Florida Blvd.
Orlando, FL 32816-2365 USA
('1799979', 'Khurram Soomro', 'khurram soomro')
('40029556', 'Amir Roshan Zamir', 'amir roshan zamir')
('1745480', 'Mubarak Shah', 'mubarak shah')
b5da4943c348a6b4c934c2ea7330afaf1d655e79Facial Landmarks Detection by Self-Iterative Regression based
Landmarks-Attention Network
University of Chinese Academy of Sciences, Beijing, China
2 Microsoft Research Asia, Beijing, China
('33325349', 'Tao Hu', 'tao hu')
('3245785', 'Honggang Qi', 'honggang qi')
('1697982', 'Jizheng Xu', 'jizheng xu')
('1689702', 'Qingming Huang', 'qingming huang')
hutao16@mails.ucas.ac.cn, hgqi@ucas.ac.cn
b5402c03a02b059b76be829330d38db8e921e4b5Mei, et al, Hybridized KNN and SVM for gene expression data classification
Hybridized KNN and SVM for gene expression data classification
Zhengzhou University, Zhengzhou, Henan 450052, China
Received October 22, 2008
('39156927', 'Zhen Mei', 'zhen mei')
('2380760', 'Qi Shen', 'qi shen')
('35476967', 'Baoxian Ye', 'baoxian ye')
b5160e95192340c848370f5092602cad8a4050cdIEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, TO APPEAR
Video Classification With CNNs: Using The Codec
As A Spatio-Temporal Activity Sensor
('33998511', 'Aaron Chadha', 'aaron chadha')
('2822935', 'Alhabib Abbas', 'alhabib abbas')
('2747620', 'Yiannis Andreopoulos', 'yiannis andreopoulos')
b52c0faba5e1dc578a3c32a7f5cfb6fb87be06adJournal of Applied Research and
Technology
ISSN: 1665-6423
Centro de Ciencias Aplicadas y
Desarrollo Tecnológico
México
Hussain Shah, Jamal; Sharif, Muhammad; Raza, Mudassar; Murtaza, Marryam; Ur-Rehman, Saeed
Robust Face Recognition Technique under Varying Illumination
Journal of Applied Research and Technology, vol. 13, núm. 1, febrero, 2015, pp. 97-105
Centro de Ciencias Aplicadas y Desarrollo Tecnológico
Distrito Federal, México
Available in: http://www.redalyc.org/articulo.oa?id=47436895009
How to cite
Complete issue
More information about this article
Journal's homepage in redalyc.org
Scientific Information System
Network of Scientific Journals from Latin America, the Caribbean, Spain and Portugal
Non-profit academic project, developed under the open access initiative
jart@aleph.cinstrum.unam.mx
b56530be665b0e65933adec4cc5ed05840c37fc4IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2007
©IEEE
Reducing correspondence ambiguity in loosely labeled training data
University of Arizona
Tucson Arizona
('1728667', 'Kobus Barnard', 'kobus barnard')kobus@cs.arizona.edu
b5f4e617ac3fc4700ec8129fcd0dcf5f71722923Hierarchical Wavelet Networks for Facial Feature Localization
Rog·erio S. Feris
Microsoft Research
Redmond, WA 98052
U.S.A.
Volker Kr¤uger
University of Maryland, CFAR
College Park, MD
U.S.A.
('1936061', 'Jim Gemmell', 'jim gemmell')
b52886610eda6265a2c1aaf04ce209c047432b6dMicroexpression Identification and Categorization
using a Facial Dynamics Map
('1684875', 'Feng Xu', 'feng xu')
('2247926', 'Junping Zhang', 'junping zhang')
b51b4ef97238940aaa4f43b20a861eaf66f67253Hindawi Publishing Corporation
EURASIP Journal on Image and Video Processing
Volume 2008, Article ID 184618, 16 pages
doi:10.1155/2008/184618
Research Article
Unsupervised Modeling of Objects and Their Hierarchical
Contextual Interactions
Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA
Received 11 June 2008; Accepted 2 September 2008
Recommended by Simon Lucey
A successful representation of objects in the literature is as a collection of patches, or parts, with a certain appearance and position.
The relative locations of the different parts of an object are constrained by the geometry of the object. Going beyond a single
object, consider a collection of images of a particular scene category containing multiple (recurring) objects. The parts belonging
to different objects are not constrained by such a geometry. However, the objects themselves, arguably due to their semantic
relationships, demonstrate a pattern in their relative locations. Hence, analyzing the interactions among the parts across the
collection of images can allow for extraction of the foreground objects, and analyzing the interactions among these objects
can allow for a semantically meaningful grouping of these objects, which characterizes the entire scene. These groupings are
typically hierarchical. We introduce hierarchical semantics of objects (hSO) that captures this hierarchical grouping. We propose
an approach for the unsupervised learning of the hSO from a collection of images of a particular scene. We also demonstrate the
use of the hSO in providing context for enhanced object localization in the presence of significant occlusions, and show its superior
performance over a fully connected graphical model for the same task.
Copyright © 2008 D. Parikh and T. Chen. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
1. INTRODUCTION
Objects that tend to cooccur in scenes are often semantically
related. Hence, they demonstrate a characteristic grouping
behavior according to their relative positions in the scene.
Some groupings are tighter than others, and thus a hierarchy
of these groupings among these objects can be observed in a
collection of images of similar scenes. It is this hierarchy that
we refer to as the hierarchical semantics of objects (hSO).
This can be better understood with an example.
Consider an office scene. Most offices, as seen in Figure 1,
are likely to have, for instance, a chair, a phone, a monitor,
and a keyboard. If we analyze a collection of images taken
from such office settings, we would observe that across
images, the monitor and keyboard are more or less in the
same position with respect to each other, and hence can be
considered to be part of the same super object at a lower level
in the hSO structure, say a computer. Similarly, the computer
may usually be somewhere in the vicinity of the phone, and
so the computer and the phone belong to the same super
object at a higher level, say the desk area. But the chair and
the desk area may be placed relatively arbitrarily in the scene
with respect to each other, more so than any of the other
objects, and hence belong to a common super object only
at the highest level in the hierarchy, that is, the scene itself.
A possible hSO that would describe such an office scene is
shown in Figure 1. Along with the structure, the hSO may
also store other information such as the relative position of
the objects and their cooccurrence counts as parameters.
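As an illustration only, the office-scene hSO described above can be encoded as a nested tree whose leaves are objects and whose internal nodes are super objects; the node names follow the example, while the structure and helper are a sketch, not the paper's representation:

```python
# Illustrative encoding of the office-scene hSO as a nested tree.
# Leaves (None children) are objects; internal nodes are super objects.
# Co-occurrence counts and relative-position parameters are omitted.

hso = {
    "scene": {
        "chair": None,                 # loosely grouped, highest level
        "desk_area": {                 # super object
            "phone": None,
            "computer": {              # tighter grouping, lower level
                "monitor": None,
                "keyboard": None,
            },
        },
    },
}

def leaf_objects(node):
    """Collect the leaf objects (the actual detected objects) of an hSO."""
    leaves = []
    for name, child in node.items():
        if child is None:
            leaves.append(name)
        else:
            leaves.extend(leaf_objects(child))
    return leaves
```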
The hSO is motivated by an interesting thought
exercise: at what scale is an object defined? Are the individual
keys on a keyboard objects, or the entire keyboard, or is
the entire computer an object? The definition of an object
is blurry, and the hSO exploits this to allow incorporation
of semantic information of the scene layout. The leaves of
the hSO are a collection of parts and represent the objects,
while the various levels in the hSO represent the super objects
('1713589', 'Devi Parikh', 'devi parikh')
('1746230', 'Tsuhan Chen', 'tsuhan chen')
Correspondence should be addressed to Devi Parikh, dparikh@andrew.cmu.edu
b5d7c5aba7b1ededdf61700ca9d8591c65e84e88INTERSPEECH 2010
Data Pruning for Template-based Automatic Speech Recognition
ESAT, Katholieke Universiteit Leuven, Leuven, Belgium
('1717646', 'Dino Seppi', 'dino seppi')dino.seppi@esat.kuleuven.be, dirk.vancompernolle@esat.kuleuven.be
b5c749f98710c19b6c41062c60fb605e1ef4312aEvaluating Two-Stream CNN for Video Classification
School of Computer Science, Shanghai Key Lab of Intelligent Information Processing,
Fudan University, Shanghai, China
('1743864', 'Hao Ye', 'hao ye')
('3099139', 'Zuxuan Wu', 'zuxuan wu')
('3066866', 'Rui-Wei Zhao', 'rui-wei zhao')
('31825486', 'Xi Wang', 'xi wang')
('1717861', 'Yu-Gang Jiang', 'yu-gang jiang')
('1713721', 'Xiangyang Xue', 'xiangyang xue')
{haoye10, zxwu,rwzhao14, xwang10, ygj, xyxue}@fudan.edu.cn
b5857b5bd6cb72508a166304f909ddc94afe53e3SSIG and IRISA at Multimodal Person Discovery
1Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
2IRISA & Inria Rennes , CNRS, Rennes, France
('2823797', 'Cassio E. dos Santos', 'cassio e. dos santos')
('1708671', 'Guillaume Gravier', 'guillaume gravier')
('1679142', 'William Robson Schwartz', 'william robson schwartz')
cass@dcc.ufmg.br, guig@irisa.fr, william@dcc.ufmg.br
b59f441234d2d8f1765a20715e227376c7251cd7
b51e3d59d1bcbc023f39cec233f38510819a2cf9CBMM Memo No. 003
March 27, 2014
Can a biologically-plausible hierarchy effectively
replace face detection, alignment, and
recognition pipelines?
by
('1694846', 'Qianli Liao', 'qianli liao')
('2211263', 'Youssef Mroueh', 'youssef mroueh')
b54c477885d53a27039c81f028e710ca54c83f111201
Semi-Supervised Kernel Mean Shift Clustering
('34817359', 'Saket Anand', 'saket anand')
('3323332', 'Sushil Mittal', 'sushil mittal')
('2577513', 'Oncel Tuzel', 'oncel tuzel')
('1729185', 'Peter Meer', 'peter meer')
b503f481120e69b62e076dcccf334ee50559451eRecognition of Facial Action Units with Action
Unit Classifiers and An Association Network
1Department of Electronic and Information Engineering, The Hong Kong Polytechnic
University, Hong Kong
Chu Hai College of Higher Education, Hong Kong
('2366262', 'JunKai Chen', 'junkai chen')
('1715231', 'Zenghai Chen', 'zenghai chen')
('8590720', 'Zheru Chi', 'zheru chi')
('1965426', 'Hong Fu', 'hong fu')
Junkai.Chen@connect.polyu.hk, Zenghai.Chen@connect.polyu.hk
chi.zheru@polyu.edu.hk, hongfu@chuhai.edu.hk
b55d0c9a022874fb78653a0004998a66f8242cadHybrid Facial Representations
for Emotion Recognition
Automatic facial expression recognition is a widely
studied problem in computer vision and human-robot
interaction. There has been a range of studies for
representing facial descriptors for facial expression
recognition. Some prominent descriptors were presented
in the first facial expression recognition and analysis
challenge (FERA2011). In that competition, the Local
Gabor Binary Pattern Histogram Sequence descriptor
showed the most powerful description capability. In this
paper, we introduce hybrid facial representations for facial
expression recognition, which have more powerful
description capability with lower dimensionality. Our
descriptors consist of a block-based descriptor and a pixel-
based descriptor. The block-based descriptor represents
the micro-orientation and micro-geometric structure
information. The pixel-based descriptor represents texture
information. We validate our descriptors on two public
databases, and the results show that our descriptors
perform well with a relatively low dimensionality.
Keywords: Facial expression recognition, Histograms of
Oriented Gradients, HOG, Local Binary Pattern, LBP,
Rotated Local Binary Pattern, RLBP, Gabor filter, GF.
Manuscript received Mar. 31, 2013; revised Aug. 29, 2013; accepted Sept. 23, 2013.
This work was supported by the R&D program of the Korea Ministry of Knowledge and
Economy (MKE) and the Korea Evaluation Institute of Industrial Technology (KEIT
[10041826, Development of emotional features sensing, diagnostics and distribution s/w
platform for measurement of multiple intelligence from young children].
Jaehong Kim
Daejeon, Rep. of Korea.
and
I. Introduction
Facial expression is a natural and intuitive means for humans
to express and sense their emotions and intentions. For this
reason, automatic facial expression recognition has been an
active research field in computer vision and human-robot
interaction for a long time [1], [2]. In the case of robots living
with a family, it is very useful to sense the family members’
emotions through facial expressions and respond appropriately.
There are three stages in the general automatic facial
expression recognition systems. The first stage is to detect the
faces and normalize the photographic images of the faces. This
stage may be based on a holistic facial region or on facial
components such as the eyes, nose, and mouth. The next stage
is to extract the facial expression descriptors from the
normalized faces. Finally, the system classifies the facial
descriptors into the proper expression categories.
In this paper, we introduce new facial expression descriptors.
These descriptors adopt two representations, a block-based
representation and a pixel-based representation, to reflect the
micro-orientation, micro-geometric structure, and texture
information. The descriptors offer more powerful description capability at a lower dimensionality than the state-of-the-art descriptors.
II. Previous Work
Many researchers have shown a range of approaches to construct an automatic facial expression recognition system. Geometric approaches and texture-based approaches are the most prominent types. Texture-based approaches have generally shown a better performance than geometric approaches in previous research [3], [4]. In texture-based
ETRI Journal, Volume 35, Number 6, December 2013 © 2013
http://dx.doi.org/10.4218/etrij.13.2013.0054
Woo-han Yun et al.
('36034086', 'DoHyung Kim', 'dohyung kim')
Woo-han Yun (phone: +82 42 860 5804, yochin@etri.re.kr), DoHyung Kim (dhkim008@etri.re.kr), Chankyu Park (parkck@etri.re.kr), and Jaehong Kim (jhkim504@etri.re.kr) are with the IT Convergence Technology Research Laboratory, ETRI,
b5930275813a7e7a1510035a58dd7ba7612943bcJOURNAL OF INFORMATION SCIENCE AND ENGINEERING 26, 1525-1537 (2010)
Short Paper__________________________________________________
Face Recognition Using L-Fisherfaces*
Institute of Information Science
Beijing Jiaotong University
Beijing, 100044 China
College of Information and Electrical Engineering
Shandong University of Science and Technology
Qingdao, 266510 China
An appearance-based face recognition approach called the L-Fisherfaces is proposed in this paper. By using Local Fisher Discriminant Embedding (LFDE), the face
images are mapped into a face subspace for analysis. Different from Linear Discriminant
Analysis (LDA), which effectively sees only the Euclidean structure of face space, LFDE
finds an embedding that preserves local information, and obtains a face subspace that
best detects the essential face manifold structure. Different from Locality Preserving
Projections (LPP) and Unsupervised Discriminant projections (UDP), which ignore the
class label information, LFDE searches for the project axes on which the data points of
different classes are far from each other while requiring data points of the same class to
be close to each other. We compare the proposed L-Fisherfaces approach with PCA,
LDA, LPP, and UDP on three different face databases. Experimental results suggest that
the proposed L-Fisherfaces provides a better representation and achieves higher accuracy
in face recognition.
Keywords: face recognition, local Fisher discriminant embedding, manifold learning, lo-
cality preserving projections, unsupervised discriminant projections
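The LFDE objective summarized in the abstract, keeping same-class neighbours close while pushing different-class points apart, can be illustrated with a small numpy sketch. This is an illustrative simplification under assumed graph weights (heat-kernel affinities, k-nearest-neighbour within-class graph), not the authors' exact formulation:

```python
import numpy as np

def lfde_projection(X, y, n_components=2, k=3):
    """Sketch of a supervised locality-preserving projection in the
    spirit of LFDE (illustrative, not the paper's exact objective).

    Same-class k-nearest neighbours are pulled together via a
    within-class graph Laplacian, while different-class pairs are
    pushed apart via a between-class Laplacian; the projection comes
    from the resulting generalized eigenproblem.
    """
    n, d = X.shape
    diff = X[:, None, :] - X[None, :, :]
    d2 = (diff ** 2).sum(-1)                 # pairwise squared distances
    heat = np.exp(-d2 / d2.mean())           # heat-kernel affinities
    order = np.argsort(d2, axis=1)
    knn = np.zeros((n, n), dtype=bool)
    for i in range(n):
        knn[i, order[i, 1:k + 1]] = True     # k nearest, excluding self
    knn |= knn.T
    same = y[:, None] == y[None, :]
    Ww = np.where(knn & same, heat, 0.0)     # within-class neighbour graph
    Wb = np.where(~same, heat, 0.0)          # between-class graph
    Lw = np.diag(Ww.sum(1)) - Ww             # graph Laplacians
    Lb = np.diag(Wb.sum(1)) - Wb
    Sw = X.T @ Lw @ X + 1e-6 * np.eye(d)     # local within-class scatter
    Sb = X.T @ Lb @ X                        # between-class separation
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    idx = np.argsort(-vals.real)[:n_components]
    return vecs.real[:, idx]                 # (d, n_components) projection
```

On two well-separated Gaussian clusters, the learned projection keeps the class means far apart while preserving local neighbourhoods.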
1. INTRODUCTION
Face recognition has aroused wide concerns over the past few decades due to its
potential applications, such as criminal identification, credit card verification, and secu-
rity system and scene surveillance. In the literature, various algorithms have been proposed
for this problem [1, 2]. PCA and LDA are two well-known linear subspace-learning tech-
niques and have become the most popular methods for face recognition [3-5]. Recently, He
et al. [6, 7] and Yang et al. [8, 9] proposed two manifold learning based methods,
namely, Locality Preserving Projections (LPP) and unsupervised discriminant projection
(UDP), for face recognition. LPP is a linear subspace method derived from Laplacian
Eigenmap [10]. It results in a linear map that optimally preserves local neighborhood
information and its objective function is to minimize the local scatter of the projected
data. Unlike LPP, UDP finds a linear map based on the criterion that seeks to maximize
Received July 29, 2008; revised October 30, 2008; accepted January 8, 2009.
Communicated by H. Y. Mark Liao.
* This work was partially supported by the National Natural Science Foundation of China (NSFC, No. 60672062)
and the Major State Basic Research Development Program of China (973 Program No. 2004CB318005).
('7924002', 'Cheng-Yuan Zhang', 'cheng-yuan zhang')
('2383779', 'Qiu-Qi Ruan', 'qiu-qi ruan')
b59c8b44a568587bc1b61d130f0ca2f7a2ae3b88An Enhanced Intelligent Agent with Image Description
Generation
Department of Computer Science and Digital Technologies, Facutly of Engineering and
Environment, Northumbria University, Newcastle, NE1 8ST, United Kingdom
('29695322', 'Ben Fielding', 'ben fielding')
('1921534', 'Philip Kinghorn', 'philip kinghorn')
('2801063', 'Kamlesh Mistry', 'kamlesh mistry')
('1712838', 'Li Zhang', 'li zhang')
{ben.fielding, philip.kinghorn, kamlesh.mistry, li.zhang (corr. author)}@northumbria.ac.uk
b59cee1f647737ec3296ccb3daa25c890359c307Continuously Reproducing Toolchains in Pattern
Recognition and Machine Learning Experiments
A. Anjos
Idiap Research Institute
Martigny, Switzerland
M. Günther
Vision and Security Technology
University of Colorado
Colorado Springs, USA
andre.anjos@idiap.ch
mgunther@vast.uccs.edu
b249f10a30907a80f2a73582f696bc35ba4db9e2Improved graph-based SFA: Information preservation
complements the slowness principle
Institut f¨ur Neuroinformatik
Ruhr-University Bochum, Germany
('2366497', 'Alberto N. Escalante', 'alberto n. escalante')
('1736245', 'Laurenz Wiskott', 'laurenz wiskott')
b2a0e5873c1a8f9a53a199eecae4bdf505816ecbHybrid VAE: Improving Deep Generative Models
using Partial Observations
Snap Research
Microsoft Research
('1715440', 'Sergey Tulyakov', 'sergey tulyakov')
('2388416', 'Sebastian Nowozin', 'sebastian nowozin')
stulyakov@snap.com
{awf,Sebastian.Nowozin}@microsoft.com
b2cd92d930ed9b8d3f9dfcfff733f8384aa93de8HyperFace: A Deep Multi-task Learning Framework for Face Detection,
Landmark Localization, Pose Estimation, and Gender Recognition
University of Maryland
College Park, MD
('26988560', 'Rajeev Ranjan', 'rajeev ranjan')rranjan1@umd.edu
b216040f110d2549f61e3f5a7261cab128cab361
IEICE TRANS. INF. & SYST., VOL.E100–D, NO.11 NOVEMBER 2017
LETTER
Weighted Voting of Discriminative Regions for Face Recognition∗
SUMMARY
This paper presents a strategy, Weighted Voting of Dis-
criminative Regions (WVDR), to improve the face recognition perfor-
mance, especially in Small Sample Size (SSS) and occlusion situations.
In WVDR, we extract the discriminative regions according to facial key
points and abandon the rest parts. Considering different regions of face
make different contributions to recognition, we assign weights to regions
for weighted voting. We construct a decision dictionary according to the
recognition results of selected regions in the training phase, and this dic-
tionary is used in a self-defined loss function to obtain the weights. The final identity of the test sample is given by the weighted voting of the selected regions. In this
paper, we combine the WVDR strategy with CRC and SRC separately, and
extensive experiments show that our method outperforms the baseline and
some representative algorithms.
key words: discriminative regions, small sample size, occlusion, weighted
strategy, face recognition
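At test time, the WVDR decision rule described in the abstract reduces to a weighted vote over per-region classifier outputs. A minimal illustration follows, with the region predictions and region weights assumed given (the paper learns the weights from a decision dictionary, which is omitted here):

```python
import numpy as np

def weighted_region_vote(region_preds, weights, n_classes):
    """Weighted voting over facial regions (simplified stand-in for the
    WVDR decision rule; inputs are assumptions, not the paper's exact API).

    region_preds: predicted class index from each region's classifier
                  (e.g. CRC or SRC run on that region).
    weights:      non-negative region weights learned in training.
    Returns the class accumulating the largest total weight.
    """
    scores = np.zeros(n_classes)
    for pred, w in zip(region_preds, weights):
        scores[pred] += w                  # each region votes with its weight
    return int(np.argmax(scores))
```

For example, regions predicting classes [2, 2, 0, 1] with weights [0.4, 0.3, 0.2, 0.1] give class 2 a total of 0.7, so class 2 wins even without a majority of unweighted votes.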
1. Introduction
Face recognition is one of the most popular and challenging
problems in computer vision. Many representative methods,
such as SRC [1] and CRC [2], have achieved good results in
the controlled condition. However, face recognition with
occlusion or small training size is still challenging.
Wright et al. [1] first apply the Sparse Representation
based Classification (SRC) for face recognition (FR). Zhang
et al. [2] propose Collaborative Representation based Clas-
sification (CRC) and claim that it is the CR instead of the
l1-norm sparsity that truly improves the FR performance.
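The CRC classifier of Zhang et al. [2] admits a compact sketch: the test sample is coded over the whole training dictionary by ridge regression, then assigned to the class with the smallest reconstruction residual. This is a minimal version that omits the paper's residual-regularization details:

```python
import numpy as np

def crc_classify(A, labels, y, lam=1e-3):
    """Minimal Collaborative Representation based Classification sketch.

    A:      (d, n) training samples as columns (assumed L2-normalized).
    labels: (n,) class label of each column.
    y:      (d,) test sample.
    """
    n = A.shape[1]
    # collaborative code: x = argmin ||y - A x||^2 + lam ||x||^2
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        # class-wise reconstruction residual using only class-c atoms
        res = np.linalg.norm(y - A[:, mask] @ x[mask])
        if res < best_res:
            best, best_res = c, res
    return best
```

Because the code is shared across all classes, the class whose atoms absorb most of the representation yields the smallest residual.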
However, the performance of classifiers (e.g. SVM [3], SRC
and CRC) declines dramatically if the training sample size
is small. Some works have been done to tackle the Small
Sample Size (SSS) problem. The Extended SRC [4] algo-
rithm constructs an auxiliary intra-class variant dictionary
to represent the variations between training and test images,
while the construction of the dictionary needs extra data.
Patch-based methods are another effective way to solve the
SSS problem.
In [5], Zhu et al. propose the patch-based
CRC and multi-scale ensemble. Gao et al. [6] propose the
Regularized Patch-based Representation to solve the SSS
problem. However, patch-based methods are sensitive to the patch size [7] and do not take the texture distribution of a face image into account.
Images with disguise or occlusion are hard to clas-
sify. The recognition rate of many classifiers (e.g. SVM and
SRC) decreases rapidly when images are occluded. Local Con-
tourlet Combined Patterns (LCCP) [8] reports a good performance on non-occluded images, but the recognition rate decreases under occlusion. There are some improvements [9], [10] for the occlusion problem. The recent prob-
abilistic collaborative representation (ProCRC) [10] jointly
maximizes the likelihood of test samples with multiple
classes.
Instead of splitting the image into patches of the same size, we extract the face regions according to an alignment algorithm [11]. Some regions, such as eyes and nose, are discriminative for recognition. In addition, different regions have different representation abilities. As Fig. 1 shows, the discriminative ability of regions is affected by the type of region
and training size. So it’s reasonable that the regions are as-
signed with different weights.
In this paper, we propose a method termed Weighted
Voting of Discriminative Regions (WVDR), in which discriminative regions are extracted from face images and
weights are learned from a decision dictionary in training
Manuscript received June 5, 2017.
Manuscript revised July 16, 2017.
Manuscript publicized August 4, 2017.
The authors are with Shenzhen Key Lab. of Information Sci
& Tech, Shenzhen Engineering Lab. of IS & DCP Department of
Electronic Engineering, Graduate School at Shenzhen, Tsinghua
University, China
This work was supported by the Natural Science Foundation of China (No. 61471216, No. 61771276), the National Key Research and Development Program of China (No. 2016YFB0101001 and 2017YFC0112500) and the Special Foundation for the Development of Strategic Emerging Industries of Shenzhen (No. JCYJ20170307153940960 and No. JCYJ20150831192224146).
DOI: 10.1587/transinf.2017EDL8124
Fig. 1  Recognition rates (AR database) when using only a single region.
The s represents the number of training samples per person. The X-axis
represents the regions extracted from face, and the image means the whole
face image.
Copyright © 2017 The Institute of Electronics, Information and Communication Engineers
('3196016', 'Wenming Yang', 'wenming yang')
('2183412', 'Riqiang Gao', 'riqiang gao')
('2883861', 'Qingmin Liao', 'qingmin liao')
a) E-mail: grq15@mails.tsinghua.edu.cn (Corresponding author)
b261439b5cde39ec52d932a222450df085eb5a91International Journal of Computer Trends and Technology (IJCTT) – volume 24 Number 2 – June 2015
Facial Expression Recognition using Analytical Hierarchy
Process
MTech Student, Disha Institute of Management and Technology, Raipur, Chhattisgarh, India
b234cd7788a7f7fa410653ad2bafef5de7d5ad29Unsupervised Temporal Ensemble Alignment
For Rapid Annotation
1 CSIRO, Brisbane, QLD, Australia
Queensland University of Technology, Brisbane, QLD, Australia
Carnegie Mellon University, Pittsburgh, PA, USA
('3231493', 'Ashton Fagg', 'ashton fagg')
('1729760', 'Sridha Sridharan', 'sridha sridharan')
('1820249', 'Simon Lucey', 'simon lucey')
ashton@fagg.id.au, s.sridharan@qut.edu.au, slucey@cs.cmu.edu
b2c60061ad32e28eb1e20aff42e062c9160786beDiverse and Controllable Image Captioning with
Part-of-Speech Guidance
University of Illinois at Urbana-Champaign
('2118997', 'Aditya Deshpande', 'aditya deshpande')
('29956361', 'Jyoti Aneja', 'jyoti aneja')
('46659761', 'Liwei Wang', 'liwei wang')
{ardeshp2, janeja2, lwang97, aschwing, daf}@illinois.edu
b2b535118c5c4dfcc96f547274cdc05dde629976JOURNAL OF IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, VOL. XX, NO. X, XXX 2017
Automatic Recognition of Facial Displays of
Unfelt Emotions
Escalera, Xavier Baró, Sylwia Hyniewska, Member, IEEE, Jüri Allik,
('38370357', 'Kaustubh Kulkarni', 'kaustubh kulkarni')
('22197083', 'Ciprian Adrian Corneanu', 'ciprian adrian corneanu')
('22211769', 'Ikechukwu Ofodile', 'ikechukwu ofodile')
('47608164', 'Gholamreza Anbarjafari', 'gholamreza anbarjafari')
b235b4ccd01a204b95f7408bed7a10e080623d2eRegularizing Flat Latent Variables with Hierarchical Structures ('7246002', 'Rongcheng Lin', 'rongcheng lin')
('2703486', 'Huayu Li', 'huayu li')
('38472218', 'Xiaojun Quan', 'xiaojun quan')
('2248826', 'Richang Hong', 'richang hong')
('2737890', 'Zhiang Wu', 'zhiang wu')
('1874059', 'Yong Ge', 'yong ge')
UNC Charlotte. Email: {rlin4, hli38, yong.ge}@uncc.edu,
Hefei University of Technology. Email: hongrc@hfut.edu.cn
† Institute for Infocomm Research. Email: quanx@i2r.a-star.edu.sg
∓ Nanjing University of Finance and Economics. Email: zawu@seu.edu.cn
b29b42f7ab8d25d244bfc1413a8d608cbdc51855EFFECTIVE FACE LANDMARK LOCALIZATION VIA SINGLE DEEP NETWORK
1National Key Laboratory of Fundamental Science on Synthetic Vision
School of Computer Science, Sichuan University, Chengdu, China
('3471145', 'Zongping Deng', 'zongping deng')
('1691465', 'Ke Li', 'ke li')
('7345195', 'Qijun Zhao', 'qijun zhao')
('40600345', 'Yi Zhang', 'yi zhang')
('1715100', 'Hu Chen', 'hu chen')
3huchen@scu.edu.cn
b2e5df82c55295912194ec73f0dca346f7c113f6CUHK&SIAT Submission for THUMOS15 Action Recognition Challenge
The Chinese University of Hong Kong, Hong Kong
Shenzhen key lab of Comp. Vis. and Pat. Rec., Shenzhen Institutes of Advanced Technology, CAS, China
('39060754', 'Limin Wang', 'limin wang')
('40184588', 'Zhe Wang', 'zhe wang')
('3331521', 'Yuanjun Xiong', 'yuanjun xiong')
('40612284', 'Yu Qiao', 'yu qiao')
07wanglimin@gmail.com, buptwangzhe2012@gmail.com, yjxiong@ie.cuhk.edu.hk, yu.qiao@siat.ac.cn
b2e6944bebab8e018f71f802607e6e9164ad3537Mixed Error Coding for
Face Recognition with Mixed Occlusions
Zhejiang University of Technology
Hangzhou, China
('4487395', 'Ronghua Liang', 'ronghua liang')
('34478462', 'Xiao-Xin Li', 'xiao-xin li')
{rhliang, mordekai}@zjut.edu.cn
b2c25af8a8e191c000f6a55d5f85cf60794c2709
A Novel Dimensionality Reduction Technique based on
Kernel Optimization Through Graph Embedding
N. Vretos, A. Tefas and I. Pitas
the date of receipt and acceptance should be inserted later
b239a756f22201c2780e46754d06a82f108c1d03Robust Multimodal Recognition via Multitask
Multivariate Low-Rank Representations
Center for Automation Research, UMIACS, University of Maryland, College Park, MD 20742 USA
('9033105', 'Heng Zhang', 'heng zhang')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
('9215658', 'Rama Chellappa', 'rama chellappa')
{hzhang98, pvishalm, rama}@umiacs.umd.edu
b20cfbb2348984b4e25b6b9174f3c7b65b6aed9eLearning with Ambiguous Label Distribution for
Apparent Age Estimation
Department of Signal Processing
Tampere University of Technology
Tampere 33720, Finland
('40394658', 'Ke Chen', 'ke chen')firstname.lastname@tut.fi
d904f945c1506e7b51b19c99c632ef13f340ef4cA scalable 3D HOG model for fast object detection and viewpoint estimation
KU Leuven, ESAT/PSI - iMinds
Kasteelpark Arenberg 10 B-3001 Leuven, Belgium
('3048367', 'Marco Pedersoli', 'marco pedersoli')
('1704728', 'Tinne Tuytelaars', 'tinne tuytelaars')
firstname.lastname@esat.kuleuven.be
d949fadc9b6c5c8b067fa42265ad30945f9caa99Rethinking Feature Discrimination and
Polymerization for Large-scale Recognition
The Chinese University of Hong Kong
('1715752', 'Yu Liu', 'yu liu')
('46382329', 'Hongyang Li', 'hongyang li')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
{yuliu, yangli, xgwang}@ee.cuhk.edu.hk
d93baa5ecf3e1196b34494a79df0a1933fd2b4ecPrecise Temporal Action Localization by
Evolving Temporal Proposals
East China Normal University
Shanghai, China
University of Washington
Seattle, WA, USA
Shanghai Advanced Research
Institute, CAS, China
East China Normal University
Shanghai, China
Shanghai Advanced Research
Institute, CAS, China
Liang He
East China Normal University
Shanghai, China
('31567595', 'Haonan Qiu', 'haonan qiu')
('1803391', 'Yao Lu', 'yao lu')
('3015119', 'Yingbin Zheng', 'yingbin zheng')
('47939010', 'Feng Wang', 'feng wang')
('1743864', 'Hao Ye', 'hao ye')
hnqiu@ica.stc.sh.cn
luyao@cs.washington.edu
zhengyb@sari.ac.cn
fwang@cs.ecnu.edu.cn
yeh@sari.ac.cn
lhe@cs.ecnu.edu.cn
d961617db4e95382ba869a7603006edc4d66ac3bExperimenting Motion Relativity for Action Recognition
with a Large Number of Classes
East China Normal University
500 Dongchuan Rd., Shanghai, China
('39586279', 'Feng Wang', 'feng wang')
('38755510', 'Xiaoyan Li', 'xiaoyan li')
d9810786fccee5f5affaef59bc58d2282718af9bAdaptive Frame Selection for
Enhanced Face Recognition in
Low-Resolution Videos
by
Thesis submitted to the
College of Engineering and Mineral Resources
at West Virginia University
in partial fulfillment of the requirements
for the degree of
Master of Science
in
Electrical Engineering
Arun Ross, PhD., Chair
Xin Li, PhD.
Donald Adjeroh, PhD.
Lane Department of Computer Science and Electrical Engineering
Morgantown, West Virginia
2008
Keywords: Face Biometrics, Super-Resolution, Optical Flow, Super-Resolution using
Optical Flow, Adaptive Frame Selection, Inter-Frame Motion Parameter, Image Quality,
Image-Level Fusion, Score-Level Fusion
('2531952', 'Raghavender Reddy Jillela', 'raghavender reddy jillela')
d94d7ff6f46ad5cab5c20e6ac14c1de333711a0c
ICASSP 2017
d930ec59b87004fd172721f6684963e00137745fFace Pose Estimation using a
Tree of Boosted Classifiers
Signal Processing Institute
École Polytechnique Fédérale de Lausanne (EPFL)
September 11, 2006
('1768663', 'Julien Meynet', 'julien meynet')
('1710257', 'Jean-Philippe Thiran', 'jean-philippe thiran')
d9739d1b4478b0bf379fe755b3ce5abd8c668f89
d9c4586269a142faee309973e2ce8cde27bda718Contextual Visual Similarity
The Robotics Institute
Carnegie Mellon University
('2461523', 'Xiaofang Wang', 'xiaofang wang')
('37991449', 'Kris M. Kitani', 'kris m. kitani')
('1709305', 'Martial Hebert', 'martial hebert')
xiaofan2@andrew.cmu.edu {kkitani,hebert}@cs.cmu.edu
d912b8d88d63a2f0cb5d58164e7414bfa6b41dfaFacial identification problem: A tracking based approach
Department of Information Technology
University of Milan
via Bramante, 65 - 26013, Crema (CR), Italy
Telephone: +390373898047, Fax: 0373899010
AST Group, ST Microelectronics
via Olivetti, 5 - 20041,
Agrate Brianza (MI), Italy
Telephone: +390396037234
('3330245', 'Marco Anisetti', 'marco anisetti')
('2061298', 'Valerio Bellandi', 'valerio bellandi')
('1746044', 'Ernesto Damiani', 'ernesto damiani')
('2666794', 'Fabrizio Beverina', 'fabrizio beverina')
Email: {anisetti,bellandi,damiani}@dti.unimi.it
Email: fabrizio.beverina@st.com
d9318c7259e394b3060b424eb6feca0f71219179
Face Matching and Retrieval Using Soft Biometrics
('2222919', 'Unsang Park', 'unsang park')
('6680444', 'Anil K. Jain', 'anil k. jain')
d9a1dd762383213741de4c1c1fd9fccf44e6480d
d963e640d0bf74120f147329228c3c272764932bInternational Journal of Advanced Science and Technology
Vol.64 (2014), pp.1-10
http://dx.doi.org/10.14257/ijast.2014.64.01
Image Processing for Face Recognition Rate Enhancement
School of Computer and Information, Hefei University of Technology, Hefei, People's Republic of China
University of Technology, Baghdad, Iraq
Israa_ameer@yahoo.com
d9ef1a80738bbdd35655c320761f95ee609b8f49 Volume 5, Issue 4, 2015 ISSN: 2277 128X
International Journal of Advanced Research in
Computer Science and Software Engineering
Research Paper
Available online at: www.ijarcsse.com
A Research - Face Recognition by Using Near Set Theory
Department of Computer Science and Engineering
Abha Gaikwad -Patil College of Engineering, Nagpur, Maharashtra, India
('9231464', 'Bhakti Kurhade', 'bhakti kurhade')
d9c4b1ca997583047a8721b7dfd9f0ea2efdc42cLearning Inference Models for Computer Vision
d9bad7c3c874169e3e0b66a031c8199ec0bc2c1fIt All Matters:
Reporting Accuracy, Inference Time and Power Consumption
for Face Emotion Recognition on Embedded Systems
Institute of Telecommunications, TU Wien
Movidius an Intel Company
Dexmont Peña
Movidius an Intel Company
Movidius an Intel Company
ALaRI, Faculty of Informatics, USI
('48802034', 'Jelena Milosevic', 'jelena milosevic')
('51129064', 'Andrew Forembsky', 'andrew forembsky')
('9151916', 'David Moloney', 'david moloney')
('1697550', 'Miroslaw Malek', 'miroslaw malek')
jelena.milosevic@tuwien.ac.at
andrew.forembsky2@mail.dcu.ie
dexmont.pena@intel.com
david.moloney@intel.com
miroslaw.malek@usi.ch
d9327b9621a97244d351b5b93e057f159f24a21eSCIENCE CHINA
Information Sciences
. RESEARCH PAPERS .
December 2010 Vol. 53 No. 12: 2415–2428
doi: 10.1007/s11432-010-4099-1
Laplacian smoothing transform for face recognition
GU SuiCheng, TAN Ying & HE XinGui
Key Laboratory of Machine Perception (MOE); Department of Machine Intelligence,
School of Electronics Engineering and Computer Science; Peking University, Beijing 100871, China
Received March 16, 2009; accepted April 1, 2010
d915e634aec40d7ee00cbea96d735d3e69602f1aTwo-Stream convolutional nets for action recognition in untrimmed video
Stanford University
Stanford University
('3308619', 'Kenneth Jung', 'kenneth jung')
('5590869', 'Song Han', 'song han')
kjung@stanford.edu
songhan@stanford.edu
aca232de87c4c61537c730ee59a8f7ebf5ecb14fEBGM VS SUBSPACE PROJECTION FOR FACE RECOGNITION
19.5 Km Markopoulou Avenue, P.O. Box 68, Peania, Athens, Greece
Athens Information Technology
Keywords:
Human-Machine Interfaces, Computer Vision, Face Recognition.
('40089976', 'Andreas Stergiou', 'andreas stergiou')
('1702943', 'Aristodemos Pnevmatikakis', 'aristodemos pnevmatikakis')
('1725498', 'Lazaros Polymenakos', 'lazaros polymenakos')
ac1d97a465b7cc56204af5f2df0d54f819eef8a6A Look at Eye Detection for Unconstrained
Environments
Key words: Unconstrained Face Recognition, Eye Detection, Machine Learning,
Correlation Filters, Photo-head Testing Protocol
1 Introduction
Eye detection is a necessary processing step for many face recognition algorithms.
For some of these algorithms, the eye coordinates are required for proper geomet-
ric normalization before recognition. For others, the eyes serve as reference points
to locate other significant features on the face, such as the nose and mouth. The
eyes, containing significant discriminative information, can even be used by them-
selves as features for recognition. Eye detection is a well studied problem for the
constrained face recognition problem, where we find controlled distances, lighting,
and limited pose variation. A far more difficult scenario for eye detection is the un-
constrained face recognition problem, where we do not have any control over the
environment or the subject. In this chapter, we will take a look at eye detection for
the latter, which encompasses problems of flexible authentication, surveillance, and
intelligence collection.
A multitude of problems affect the acquisition of face imagery in unconstrained
environments, with major problems related to lighting, distance, motion and pose.
Existing work on lighting [14, 7] has focused on algorithmic issues (specifically,
normalization), and not the direct impact of acquisition. Under difficult acquisition
Vision and Security Technology Lab, University of Colorado at Colorado Springs, Colorado, USA, e-mail: lastname@uccs.edu
Anderson Rocha
Institute of Computing, University of Campinas (Unicamp), Campinas, Brazil, e-mail: anderson.rocha@ic.unicamp.br
('2613438', 'Walter J. Scheirer', 'walter j. scheirer')
('1760117', 'Terrance E. Boult', 'terrance e. boult')
ac2e44622efbbab525d4301c83cb4d5d7f6f0e55A 3D Morphable Model learnt from 10,000 faces
Imperial College London, UK
†Great Ormond Street Hospital, UK
Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Finland
('1848903', 'James Booth', 'james booth')
('2931390', 'Anastasios Roussos', 'anastasios roussos')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('5137183', 'Allan Ponniah', 'allan ponniah')
('2421231', 'David Dunaway', 'david dunaway')
⋆{james.booth,troussos,s.zafeiriou}@imperial.ac.uk, †{allan.ponniah,david.dunaway}@gosh.nhs.uk
ac6c3b3e92ff5fbcd8f7967696c7aae134bea209Deep Cascaded Bi-Network for
Face Hallucination
The Chinese University of Hong Kong
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
University of California, Merced
('2226254', 'Shizhan Zhu', 'shizhan zhu')
('2391885', 'Sifei Liu', 'sifei liu')
('1717179', 'Chen Change Loy', 'chen change loy')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
ac855f0de9086e9e170072cb37400637f0c9b735Fast Geometrically-Perturbed Adversarial Faces
West Virginia University
('35477977', 'Ali Dabouei', 'ali dabouei')
('30319988', 'Sobhan Soleymani', 'sobhan soleymani')
('8147588', 'Nasser M. Nasrabadi', 'nasser m. nasrabadi')
{ad0046, ssoleyma}@mix.wvu.edu, {jeremy.dawson, nasser.nasrabadi}@mail.wvu.edu
ac21c8aceea6b9495574f8f9d916e571e2fc497fPose-Independent Identity-based Facial Image
Retrieval using Contextual Similarity
King Abdullah University of Science and Technology 4700, Thuwal, Saudi Arabia
('3036634', 'Islam Almasri', 'islam almasri')
ac6a9f80d850b544a2cbfdde7002ad5e25c05ac6
Privacy-Protected Facial Biometric Verification
Using Fuzzy Forest Learning
('1690116', 'Ahmed Bouridane', 'ahmed bouridane')
('1691478', 'Danny Crookes', 'danny crookes')
('1739563', 'M. Emre Celebi', 'm. emre celebi')
('39486168', 'Hua-Liang Wei', 'hua-liang wei')
aca273a9350b10b6e2ef84f0e3a327255207d0f5
aca75c032cfb0b2eb4c0ae56f3d060d8875e43f9Co-Regularized Ensemble for Feature Selection
School of Computer Science and Technology, Tianjin University, China
School of Information Technology and Electrical Engineering, The University of Queensland
3Tianjin Key Laboratory of Cognitive Computing and Application
('2302512', 'Yahong Han', 'yahong han')
('1698559', 'Yi Yang', 'yi yang')
('1720932', 'Xiaofang Zhou', 'xiaofang zhou')
yahong@tju.edu.cn, yee.i.yang@gmail.com, zxf@itee.uq.edu.au
accbd6cd5dd649137a7c57ad6ef99232759f7544FACIAL EXPRESSION RECOGNITION WITH LOCAL BINARY PATTERNS
AND LINEAR PROGRAMMING
1 Machine Vision Group, Infotech Oulu and Dept. of Electrical and Information Engineering
P. O. Box 4500 Fin-90014 University of Oulu, Finland
College of Electronics and Information, Northwestern Polytechnic University
710072 Xi’an, China
In this work, we propose a novel approach to recognize facial expressions from static
images. First, the Local Binary Patterns (LBP) are used to efficiently represent the facial
images and then the Linear Programming (LP) technique is adopted to classify the seven
facial expressions anger, disgust, fear, happiness, sadness, surprise and neutral.
Experimental results demonstrate an average recognition accuracy of 93.8% on the JAFFE
database, which outperforms the rates of all other reported methods on the same database.
Introduction
Facial expression recognition from static
images is a more challenging problem
than from image sequences because less
information for expression actions
is
available. However, information in a
single image is sometimes enough for
expression recognition, and
in many
applications it is also useful to recognize
single image’s facial expression.
In the recent years, numerous approaches to facial expression analysis from static images have been proposed [1], [2]. These methods generally differ in face representation and similarity measure. For instance, Zhang [3] used two types of features: the geometric position of 34 manually selected fiducial points and a set of Gabor wavelet coefficients at these points. These two types of features were used both independently and jointly with a multi-layer perceptron for classification. Guo and Dyer [4] also adopted a similar face representation, combined with the linear programming technique to carry out simultaneous feature selection and classifier training, and they reported a better result. Lyons et al. used a similar face representation with a simple LDA-based classification scheme [5]. All the above methods required the manual selection of fiducial points. Buciu et al. used ICA and Gabor representation for facial expression recognition and reported good results on the same database [6]. However, a suitable combination of feature extraction and classification is still one imperative question for expression recognition.
In this paper, we propose a novel method for facial
expression recognition. In the feature extraction step,
the Local Binary Pattern (LBP) operator is used to
describe facial expressions. In the classification step,
seven expressions (anger, disgust, fear, happiness,
sadness, surprise and neutral) are decomposed into 21
expression pairs such as anger-fear, happiness-sadness, etc. Twenty-one classifiers are produced by the Linear
Programming (LP) technique, each corresponding to
one of the 21 expression pairs. A simple binary tree
tournament scheme with pairwise comparisons is
Face Representation with Local Binary Patterns

Fig.1 shows the basic LBP operator [7], in which the
original 3×3 neighbourhood at the left is thresholded
by the value of the centre pixel, and a binary pattern
('4729239', 'Xiaoyi Feng', 'xiaoyi feng')
('1714724', 'Matti Pietikäinen', 'matti pietikäinen')
('1751372', 'Abdenour Hadid', 'abdenour hadid')
{xiaoyi,mkp,hadid}@ee.oulu.fi
fengxiao@nwpu.edu.cn
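The basic 3×3 LBP operator described above, thresholding the eight neighbours by the centre pixel and reading the result off as an 8-bit code, can be sketched as follows (the neighbour ordering is a common convention, not necessarily the one used in the paper):

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 LBP operator: each interior pixel's eight neighbours
    are thresholded by the centre value and packed into an 8-bit code."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbour offsets, clockwise from the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offs):
                if img[i + di, j + dj] >= c:   # threshold by centre pixel
                    code |= 1 << bit
            out[i - 1, j - 1] = code
    return out
```

A face is then typically represented by histograms of these codes computed over image blocks, which is the representation the LP classifiers operate on.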
ac51d9ddbd462d023ec60818bac6cdae83b66992Hindawi Publishing Corporation
Computational Intelligence and Neuroscience
Volume 2015, Article ID 709072, 10 pages
http://dx.doi.org/10.1155/2015/709072
Research Article
An Efficient Robust Eye Localization by Learning
the Convolution Distribution Using Eye Template
1Science and Technology on Parallel and Distributed Processing Laboratory, School of Computer,
National University of Defense Technology, Changsha 410073, China
Informatization Office, National University of Defense Technology, Changsha 410073, China
Received 30 January 2015; Accepted 14 April 2015
Academic Editor: Ye-Sho Chen
Eye localization is a fundamental process in many facial analyses. In practical use, it is often challenged by illumination, head pose,
facial expression, occlusion, and other factors. It remains difficult to achieve high accuracy with short prediction time and low training cost at the same time. This paper presents a novel eye localization approach which explores only a one-layer convolution map, generated with an eye template, using a BP network. Results show that the proposed method is robust in many difficult situations. In experiments, accuracies of 98% and 96% on the BioID and LFPW test sets, respectively, could be achieved at a 10 fps prediction rate with only a 15-minute training cost. In comparison with other robust models, the proposed method obtains similar best results with greatly reduced training time and high prediction speed.
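As a rough illustration of the one-layer convolution map the abstract describes, the sketch below correlates a zero-mean eye template over a face image and takes the peak as the candidate eye location. This is a simplification: the paper feeds such a map into a BP network rather than taking the raw correlation peak, and the template here is a hypothetical toy pattern.

```python
import numpy as np

def eye_template_map(face, template):
    """One-layer correlation map of an eye template over a face image
    (a simplified stand-in for the paper's first stage)."""
    fh, fw = face.shape
    th, tw = template.shape
    t = template - template.mean()          # zero-mean template
    out = np.empty((fh - th + 1, fw - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = face[i:i + th, j:j + tw]
            out[i, j] = ((patch - patch.mean()) * t).sum()
    return out

def locate_eye(face, template):
    """Return the centre of the best-matching window in the map."""
    m = eye_template_map(face, template)
    i, j = np.unravel_index(np.argmax(m), m.shape)
    th, tw = template.shape
    return i + th // 2, j + tw // 2
```

On a synthetic face with the template pattern planted at a known position, the correlation peak recovers that position exactly; a learned network on top of the map is what makes the full method robust to occlusion and illumination.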
1. Introduction
Eye localization is essential to many face analyses. In analysis
of human sentiment, eye focus, and head pose, the location of the eye is indispensable for extracting the corresponding information [1]. In face tracking, eye localization is often
required in real time. In face recognition, many algorithms
ask for the alignment of the face images based on eye location
[2]. Inaccurate location may result in the failure of the
recognition [3, 4].
However, real-world eye localization is filled with challenges. Face pictures are commonly formed by a projection from 3D space onto the 2D plane. The appearance of a face image can be influenced by head pose, facial expression, and illumination. The texture around the eyes is therefore highly variable. Moreover, the eyes may be occluded by objects such as glasses and hair, as shown in Figure 1. To work in unexpected cases, the algorithm should be robust to these factors.
In the design of an eye localization algorithm for practical use, prediction accuracy, prediction rate, and training cost are the factors of greatest concern. A robust algorithm should keep high
prediction accuracy for varying cases with diverse face poses,
facial expressions in complex environment with occlusion,
and illumination changes. For real time applications, high
prediction rate is required. For some online learning systems
like the one used for public security, short training time is
also in demand to quickly adapt the algorithm to different
working places. Low training cost is also of benefit for the
tuning of the algorithm. To improve the accuracy in the diffi-
cult environment, complex model is often applied. However,
the over complicated model will increase the training cost
and the prediction time. How to select an approach with
enough complexity to achieve high prediction accuracy, high
prediction rate, and low training cost at the same time is still
a challenge.
Eye localization approaches can be broadly divided into
texture-based and structure-based methods. Texture-based
methods [5–8] learn features from image textures.
Methods exploring local textures [5, 6] can achieve a high prediction
rate with simple training; however, they
are usually not robust to occlusion and
distortion, owing to the limited information available from a local area.
On the other hand, methods such as [7, 8] learn global texture
features from the entire face image with convolutional networks. High
('1790480', 'Xuan Li', 'xuan li')
('1791001', 'Yong Dou', 'yong dou')
('2223570', 'Xin Niu', 'xin niu')
('2512580', 'Jiaqing Xu', 'jiaqing xu')
('2672701', 'Ruorong Xiao', 'ruorong xiao')
Correspondence should be addressed to Xuan Li; lixuan@nudt.edu.cn
acc548285f362e6b08c2b876b628efceceeb813eHindawi Publishing Corporation
Computational and Mathematical Methods in Medicine
Volume 2014, Article ID 427826, 12 pages
http://dx.doi.org/10.1155/2014/427826
Research Article
Objectifying Facial Expressivity Assessment of Parkinson’s
Patients: Preliminary Study
Vrije Universiteit Brussel, 1050 Brussels, Belgium
Shaanxi Provincial Key Lab on Speech and Image Information Processing, Northwestern Polytechnical University, Xi'an, China
Vrije Universiteit Brussel, 1050 Brussels, Belgium
Vrije Universiteit Brussel, 1050 Brussels, Belgium
Received 9 June 2014; Accepted 22 September 2014; Published 13 November 2014
Academic Editor: Justin Dauwels
Patients with Parkinson’s disease (PD) can exhibit a reduction of spontaneous facial expression, designated as “facial masking,” a
symptom in which facial muscles become rigid. To improve clinical assessment of facial expressivity of PD, this work attempts
to quantify the dynamic facial expressivity (facial activity) of PD by automatically recognizing facial action units (AUs) and
estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To
elicit spontaneous facial expressions resembling those typically triggered by emotions, six emotions (amusement,
sadness, anger, disgust, surprise, and fear) were induced using movie clips. During the movie clips, physiological signals (facial
electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants
were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion
manipulation by evaluating the participants' self-reports. Self-reported disgust was significantly stronger than the other
emotions, so we focused our analysis on the data recorded while participants watched the disgust movie clips. The proposed facial expressivity
assessment approach captured differences in facial expressivity between PD patients and controls. Differences between PD
patients at different stages of disease progression were also observed.
1. Introduction
One of the manifestations of Parkinson’s disease (PD) is the
gradual loss of facial mobility and “mask-like” appearance.
Katsikitis and Pilowsky (1988) [1] stated that PD patients
were rated as significantly less expressive than an aphasic
and control group, on a task designed to assess spontaneous
facial expression. In addition, the spontaneous smiles of PD
patients are often perceived to be “unfelt,” because of the lack
of accompanying cheek raises [2]. Jacobs et al. [3] confirmed
that PD patients show reduced intensity of emotional facial
expression compared to the controls. To assess facial
expressivity, most research relies on subjective coding by the
researchers involved, as in the aforementioned studies. Tickle-
Degnen and Lyons [4] found that decreased facial expressivity
correlated with self-reports of PD patients as well as the
Unified Parkinson’s Disease Rating Scale (UPDRS) [5]. PD
patients, who rated their ability to facially express emotions
as severely affected, did demonstrate less facial expressivity.
In this paper, we investigate automatic measurement
of facial expressivity from video recordings of PD patients and
control populations. To the best of our knowledge,
few attempts have been made to design a
computer-based quantitative analysis of the facial expressivity of
PD patients. To analyze whether Parkinson’s disease affected
voluntary expression of facial emotions, Bowers et al. [6]
videotaped PD patients and healthy control participants
while they made voluntary facial expression (happy, sad, fear,
anger, disgust, and surprise). In their approach, the amount
and timing of facial movement change were quantified by
('40432410', 'Peng Wu', 'peng wu')
('34068333', 'Isabel Gonzalez', 'isabel gonzalez')
('3348420', 'Dongmei Jiang', 'dongmei jiang')
('1970907', 'Hichem Sahli', 'hichem sahli')
('3041213', 'Eric Kerckhofs', 'eric kerckhofs')
('2540163', 'Marie Vandekerckhove', 'marie vandekerckhove')
Correspondence should be addressed to Peng Wu; pwu@etro.vub.ac.be
acee2201f8a15990551804dd382b86973eb7c0a8To Boost or Not to Boost? On the Limits of
Boosted Trees for Object Detection
Computer Vision and Robotics Research Laboratory
University of California San Diego
('1802326', 'Eshed Ohn-Bar', 'eshed ohn-bar'){eohnbar, mtrivedi}@ucsd.edu
ac0d3f6ed5c42b7fc6d7c9e1a9bb80392742ad5e
ac820d67b313c38b9add05abef8891426edd5afb
ac9a331327cceda4e23f9873f387c9fd161fad76Deep Convolutional Neural Network for Age Estimation based on
VGG-Face Model
University of Bridgeport
University of Bridgeport
Technology Building, Bridgeport CT 06604 USA
('7404315', 'Zakariya Qawaqneh', 'zakariya qawaqneh')
('34792425', 'Arafat Abu Mallouh', 'arafat abu mallouh')
('2791535', 'Buket D. Barkana', 'buket d. barkana')
Emails: {zqawaqneh; aabumall@my.bridgeport.edu}, bbarkana@bridgeport.edu
ac26166857e55fd5c64ae7194a169ff4e473eb8bPersonalized Age Progression with Bi-level
Aging Dictionary Learning
('2287686', 'Xiangbo Shu', 'xiangbo shu')
('8053308', 'Jinhui Tang', 'jinhui tang')
('3233021', 'Zechao Li', 'zechao li')
('2356867', 'Hanjiang Lai', 'hanjiang lai')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
ac559873b288f3ac28ee8a38c0f3710ea3f986d9Team DEEP-HRI Moments in Time Challenge 2018 Technical Report
Hikvision Research Institute
('39816387', 'Chao Li', 'chao li')
('48375401', 'Zhi Hou', 'zhi hou')
('35843399', 'Jiaxu Chen', 'jiaxu chen')
('9162532', 'Jiqiang Zhou', 'jiqiang zhou')
('50322310', 'Di Xie', 'di xie')
('3290437', 'Shiliang Pu', 'shiliang pu')
ac8e09128e1e48a2eae5fa90f252ada689f6eae7Leolani: a reference machine with a theory of
mind for social communication
VU University Amsterdam, Computational Lexicology and Terminology Lab, De
Boelelaan 1105, 1081HV Amsterdam, The Netherlands
www.cltl.nl
('50998926', 'Bram Kraaijeveld', 'bram kraaijeveld'){p.t.j.m.vossen,s.baezsantamaria,l.bajcetic,b.kraaijeveld}@vu.nl
ac8441e30833a8e2a96a57c5e6fede5df81794afIEEE TRANSACTIONS ON IMAGE PROCESSING
Hierarchical Representation Learning for Kinship
Verification
('1952698', 'Naman Kohli', 'naman kohli')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
('39129417', 'Richa Singh', 'richa singh')
('2487227', 'Afzel Noore', 'afzel noore')
('2641605', 'Angshul Majumdar', 'angshul majumdar')
ac86ccc16d555484a91741e4cb578b75599147b2Morphable Face Models - An Open Framework
Gravis Research Group, University of Basel
('3277377', 'Thomas Gerig', 'thomas gerig')
('39550224', 'Clemens Blumer', 'clemens blumer')
('34460642', 'Bernhard Egger', 'bernhard egger')
('1687079', 'Thomas Vetter', 'thomas vetter')
ac12ba5bf81de83991210b4cd95b4ad048317681Combining Deep Facial and Ambient Features
for First Impression Estimation
Program of Computational Science and Engineering, Boğaziçi University
Bebek, Istanbul, Turkey
Namık Kemal University
Çorlu, Tekirdağ, Turkey
Boğaziçi University
Bebek, Istanbul, Turkey
('38007788', 'Heysem Kaya', 'heysem kaya')
('1764521', 'Albert Ali Salah', 'albert ali salah')
furkan.gurpinar@boun.edu.tr
hkaya@nku.edu.tr
salah@boun.edu.tr
ac75c662568cbb7308400cc002469a14ff25edfdREGULARIZATION STUDIES ON LDA FOR FACE RECOGNITION
Bell Canada Multimedia Laboratory, The Edward S. Rogers Sr. Department of
Electrical and Computer Engineering, University of Toronto, M5S 3G4, Canada
('1681365', 'Juwei Lu', 'juwei lu')
ac9dfbeb58d591b5aea13d13a83b1e23e7ef1feaFrom Gabor Magnitude to Gabor Phase Features:
Tackling the Problem of Face Recognition under Severe Illumination Changes
Faculty of Electrical Engineering, University of Ljubljana
Slovenia
1. Introduction
Among the numerous biometric systems presented in the literature, face recognition
systems have received a great deal of attention in recent years. The main driving force in the
development of these systems can be found in the enormous potential face recognition
technology has in various application domains ranging from access control, human-machine
interaction and entertainment to homeland security and surveillance (Štruc et al., 2008a).
While contemporary face recognition techniques have made quite a leap in terms of
performance over the last two decades, they still struggle with their performance when
deployed in unconstrained and uncontrolled environments (Gross et al., 2004; Phillips et al.,
2007). In such environments the external conditions present during the image acquisition
stage heavily influence the appearance of a face in the acquired image and consequently
affect the performance of the recognition system. It is said that face recognition techniques
suffer from the so-called PIE problem, which refers to the problem of handling Pose,
Illumination and Expression variations that are typically encountered in real-life operating
conditions. In fact, it was emphasized by numerous researchers that the appearance of the
same face can vary significantly from image to image due to changes of the PIE factors and
that the variability in the images induced by these factors can easily surpass the
variability induced by the subjects’ identity (Gross et al., 2004; Short et al., 2005). To cope
with image variability induced by the PIE factors, face recognition systems have to utilize
feature extraction techniques capable of extracting stable and discriminative features from
facial images regardless of the conditions governing the acquisition procedure. We will
confine ourselves in this chapter to tackling the problem of illumination changes, as it
represents the PIE factor which, in our opinion, is the hardest to control when deploying a
face recognition system, e.g., in access control applications.
Many feature extraction techniques, among them particularly the appearance based
methods, have difficulties extracting stable features from images captured under varying
illumination conditions and, hence, perform poorly when deployed in unconstrained
environments. Researchers have, therefore, proposed a number of alternatives that should
compensate for the illumination changes and thus ensure stable face recognition
performance.
('2011218', 'Vitomir Štruc', 'vitomir štruc')
('1753753', 'Nikola Pavešić', 'nikola pavešić')
acb83d68345fe9a6eb9840c6e1ff0e41fa373229Kernel Methods in Computer Vision:
Object Localization, Clustering,
and Taxonomy Discovery
vorgelegt von
Matthew Brian Blaschko, M.S.
aus La Jolla
Von der Fakult¨at IV - Elektrotechnik und Informatik
der Technischen Universit¨at Berlin
zur Erlangung des akademischen Grades
Doktor der Naturwissenschaften
Dr. rer. nat.
genehmigte Dissertation
Promotionsausschuß:
Vorsitzender: Prof. Dr. O. Hellwich
Berichter: Prof. Dr. T. Hofmann
Berichter: Prof. Dr. K.-R. M¨uller
Berichter: Prof. Dr. B. Sch¨olkopf
Tag der wissenschaftlichen Aussprache: 23.03.2009
Berlin 2009
D83
ade1034d5daec9e3eba1d39ae3f33ebbe3e8e9a7Multimodal Caricatural Mirror
(1) : Université catholique de Louvain, Belgium
(2) Universitat Polytecnica de Barcelona, Spain
(3) Universidad Polytècnica de Madrid, Spain
Aristotle University of Thessaloniki, Greece
Bogazici University, Turkey
(6) Faculté Polytechnique de Mons, Belgium
ad8540379884ec03327076b562b63bc47e64a2c7Int. J. Bio-Inspired Computation, Vol. 5, No. 3, 2013
175
Bee royalty offspring algorithm for improvement of
facial expressions classification model
Department of Computer Science,
Mahshahr Branch,
Islamic Azad University
Mahshahr, Iran
*Corresponding author
Md Jan Nordin
Centre for Artificial Intelligence Technology,
Universiti Kebangsaan Malaysia,
Bangi, Selangor, Malaysia
('1880066', 'Amir Jamshidnezhad', 'amir jamshidnezhad')E-mail: a.jamshidnejad@yahoo.com
E-mail: jan@ftsm.ukm.my
adce9902dca7f4e8a9b9cf6686ec6a7c0f2a0ba6Two Birds, One Stone: Jointly Learning Binary Code for
Large-scale Face Image Retrieval and Attributes Prediction
1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing, 100190, China
University of Chinese Academy of Sciences, Beijing, 100049, China
School of Information Science and Technology, ShanghaiTech University, Shanghai, 200031, China
('38751558', 'Yan Li', 'yan li')
('3373117', 'Ruiping Wang', 'ruiping wang')
('3035576', 'Haomiao Liu', 'haomiao liu')
('3371529', 'Huajie Jiang', 'huajie jiang')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
{yan.li, haomiao.liu, huajie.jiang}@vipl.ict.ac.cn, {wangruiping, sgshan, xlchen}@ict.ac.cn
adf7ccb81b8515a2d05fd3b4c7ce5adf5377d9beMetric Learning Applied to
Web Page Change Detection and
to Relative Attributes
Matthieu Cord*
* Sorbonne Universités, UPMC Univ Paris 06, UMR 7606, LIP6, F-75005, Paris,
France
ABSTRACT. In this article, we propose a new metric learning scheme.
Based on the exploitation of constraints involving quadruplets of images, our approach
aims to model rich or complex semantic similarity relations. We study
how this scheme can be used in contexts such as the detection of important regions
in Web pages or recognition from relative attributes.
('1728523', 'Nicolas Thome', 'nicolas thome')
ada73060c0813d957576be471756fa7190d1e72dVRPBench: A Vehicle Routing Benchmark Tool
October 19, 2016
('7660594', 'Guilherme A. Zeni', 'guilherme a. zeni')
('7809605', 'Mauro Menzori', 'mauro menzori')
('1788152', 'Luis A. A. Meira', 'luis a. a. meira')
add50a7d882eb38e35fe70d11cb40b1f0059c96fHigh-Fidelity Pose and Expression Normalization for Face Recognition in the Wild
Center for Biometrics and Security Research and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Pose and expression normalization is a crucial step to recover the canonical
view of faces under arbitrary conditions, so as to improve the face recogni-
tion performance. Most normalization algorithms can be divided into 2D
and 3D methods. 2D methods either estimate a flow to simulate the 3D
geometry transformation or learn appearance transformations between dif-
ferent poses. 3D methods estimate the depth information with a face model
and normalize faces through 3D transformations.
An ideal normalization is desired to preserve the face appearance with
little artifact and information loss, which we call high-fidelity. However,
most previous methods fail to achieve this. In this paper, we present a 3D
pose and expression normalization method to recover the canonical-view,
expression-free image with high fidelity. It contains three components: pose
adaptive 3D Morphable Model (3DMM) fitting, identity preserving normal-
ization and invisible region filling, which is briefly summarized in Fig. 1.
Figure 1: Overview of the High-Fidelity Pose and Expression Normalization
(HPEN) method
With an input image, the landmarks are detected with the face alignment
algorithm and we mark the corresponding 3D landmarks on the face model.
Then the 3DMM can be fitted by minimizing the distance between the 2D
landmarks and projected 3D landmarks:
arg min_{f, R, t3d, αid, αexp} ‖s2d − f · P · R · (S + Aid αid + Aexp αexp + t3d)‖    (1)
where αid is the shape parameter, αexp is the expression parameter, and f, R, t3d
are pose parameters. However, when faces deviate from the frontal pose, the
correspondence between 2D and 3D landmarks will be broken, which we
model as “landmark marching”: when pose changes, the contour landmarks
move along the parallel to the visibility boundary, see Fig. 2(a). To deal with
the phenomenon we propose an approximation method to adjust contour
landmarks during 3DMM fitting. The 3D model is first projected with
only yaw and pitch to eliminate the in-plane rotation. Then, for each parallel, the
point with the extreme x coordinate is chosen as the marching destination,
see Fig. 2(b).
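With the pose parameters held fixed, the objective in Eq. (1) is linear in the shape and expression coefficients, so that substep of the fitting reduces to ordinary least squares. A minimal sketch of this, using random synthetic stand-ins for the 3DMM mean shape and bases (not a real face model, and not the authors' full alternating solver):

```python
import numpy as np

rng = np.random.default_rng(1)
n_lmk, n_id, n_exp = 68, 10, 5

# Synthetic stand-ins: mean shape S (3N flattened) and stacked bases [A_id | A_exp].
S = rng.standard_normal(3 * n_lmk)
A = rng.standard_normal((3 * n_lmk, n_id + n_exp))
alpha_true = rng.standard_normal(n_id + n_exp) * 0.1

f = 1.5                                   # scale
R = np.eye(3)                             # rotation (frontal, for simplicity)
t3d = np.array([0.2, -0.1, 0.0])          # 3D translation
P = np.array([[1, 0, 0], [0, 1, 0]])      # orthographic projection

def project(alpha):
    """2D landmarks: f * P * R * ((S + A·alpha) per-point + t3d)."""
    pts = (S + A @ alpha).reshape(-1, 3) + t3d
    return (f * (P @ (R @ pts.T))).T      # (N, 2)

s2d = project(alpha_true)                 # "detected" 2D landmarks

# With (f, R, t3d) fixed, Eq. (1) is linear in alpha: build the design matrix
# column by column and solve by least squares.
M = np.zeros((2 * n_lmk, n_id + n_exp))
for j in range(n_id + n_exp):
    M[:, j] = (f * (P @ (R @ A[:, j].reshape(-1, 3).T))).T.ravel()
b = s2d.ravel() - project(np.zeros(n_id + n_exp)).ravel()
alpha_hat, *_ = np.linalg.lstsq(M, b, rcond=None)
print(np.allclose(alpha_hat, alpha_true, atol=1e-6))  # True
```

In the real method, this linear solve alternates with updates of the pose parameters (and with the contour-landmark adjustment described above) until convergence.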
With the fitted 3DMM, the face can be normalized through 3D trans-
formations. In this paper we also normalize the external face region, which
contains discriminative information as well. First, we mark three groups of
anchors located on the face boundary, the face surrounding, and the image
contour, see Fig. 3(a). Then their depths are estimated by enlarging the fitted
('8362374', 'Xiangyu Zhu', 'xiangyu zhu')
('1718623', 'Zhen Lei', 'zhen lei')
('1721677', 'Junjie Yan', 'junjie yan')
('1716143', 'Dong Yi', 'dong yi')
('34679741', 'Stan Z. Li', 'stan z. li')
ad784332cc37720f03df1c576e442c9c828a587aFace Recognition Based on Face-Specific Subspace
JDL, Institute of Computing Technology, CAS, P.O. Box 2704, Beijing, China
Harbin Institute of Technology, Harbin, China
('1685914', 'Shiguang Shan', 'shiguang shan')
('1698902', 'Wen Gao', 'wen gao')
('1725937', 'Debin Zhao', 'debin zhao')
ada42b99f882ba69d70fff68c9ccbaff642d5189Semantic Image Segmentation
and
Web-Supervised Visual Learning
D.Phil Thesis
Robotics Research Group
Department of Engineering Science
University of Oxford
Supervisors:
Professor Andrew Zisserman
Dr. Antonio Criminisi
Florian Schroff
St. Anne s College
Trinity, 2009
ad0d4d5c61b55a3ab29764237cd97be0ebb0ddffWeakly Supervised Action Localization by Sparse Temporal Pooling Network
University of California
Irvine, CA, USA
Google
Venice, CA, USA
Seoul National University
Seoul, Korea
('1998374', 'Phuc Nguyen', 'phuc nguyen')
('40282288', 'Ting Liu', 'ting liu')
('2775959', 'Gautam Prasad', 'gautam prasad')
('40030651', 'Bohyung Han', 'bohyung han')
nguyenpx@uci.edu
{liuti, gautamprasad}@google.com
bhhan@snu.ac.kr
adfaf01773c8af859faa5a9f40fb3aa9770a8aa7LARGE SCALE VISUAL RECOGNITION
A DISSERTATION
PRESENTED TO THE FACULTY
OF PRINCETON UNIVERSITY
IN CANDIDACY FOR THE DEGREE
OF DOCTOR OF PHILOSOPHY
RECOMMENDED FOR ACCEPTANCE
BY THE DEPARTMENT OF
COMPUTER SCIENCE
ADVISER: FEI-FEI LI
JUNE 2012
('8342699', 'JIA DENG', 'jia deng')
adf5caca605e07ee40a3b3408f7c7c92a09b0f70Line-based PCA and LDA approaches for Face Recognition
Kyung Hee University, South Korea
('1687579', 'Vo Dinh Minh Nhat', 'vo dinh minh nhat')
('1700806', 'Sungyoung Lee', 'sungyoung lee')
{vdmnhat, sylee}@oslab.khu.ac.kr
adaf2b138094981edd615dbfc4b7787693dbc396Statistical Methods For Facial
Shape-from-shading and Recognition
Submitted for the degree of Doctor of Philosophy
Department of Computer Science
20th February 2007
('1687021', 'William A. P. Smith', 'william a. p. smith')
ad6745dd793073f81abd1f3246ba4102046da022
ad9cb522cc257e3c5d7f896fe6a526f6583ce46fReal-Time Recognition of Facial Expressions for Affective
Computing Applications
by
A M. Eng. Project submitted in conformity with the requirements
for the degree of Master of Engineering
Department of Mechanical and Industrial Engineering
University of Toronto
('26301224', 'Christopher Wang', 'christopher wang')
ad08c97a511091e0f59fc6a383615c0cc704f44aTowards the improvement of self-service
systems via emotional virtual agents
Christopher Martin
School of Computing &
Engineering Systems
University of Abertay
Bell Street, Dundee
School of Computing &
Engineering Systems
University of Abertay
Bell Street, Dundee
School of Computing &
Engineering Systems
University of Abertay
Bell Street, Dundee
School of Social & Health
Sciences
University of Abertay
Bell Street, Dundee
Affective computing and emotional agents have been found to have a positive effect on human-
computer interactions. In order to develop an acceptable emotional agent for use in a self-service
interaction, two stages of research were identified and carried out; the first to determine which
facial expressions are present in such an interaction and the second to determine which emotional
agent behaviours are perceived as appropriate during a problematic self-service shopping task. In
the first stage, facial expressions associated with negative affect were found to occur during self-
service shopping interactions, indicating that facial expression detection is suitable for detecting
negative affective states during self-service interactions. In the second stage, user perceptions of
the emotional facial expressions displayed by an emotional agent during a problematic self-service
interaction were gathered. Overall, the expression of disgust was found to be perceived as
inappropriate, while emotionally neutral behaviour was perceived as appropriate; however, gender
differences suggested that females perceived surprise as inappropriate. Results suggest that
agents should change their behaviour and appearance based on user characteristics such as
gender.
Keywords: affective computing, virtual agents, emotions, emotion detection, HCI, computer vision, empathy.
1. INTRODUCTION
This paper describes research which contributes
towards the development of an empathetic system
which will detect and improve a user’s affective
state during a problematic self-service interaction
(SSI) through the use of an affective agent. Self-
Service Technologies (SSTs) are those which allow
a person to obtain goods or services from a retailer
or service provider without the need for another
person to be involved in the transaction. SSTs are
used in many situations including high street shops,
supermarkets and ticket kiosks. The use of SSTs
may provide benefits such as improved customer
service (for example allowing 24 hour a day, 7 days
a week service), reduced labour costs and
improved efficiency (Cho & Fiorito, 2010). Less
than 5% of causes for dissatisfaction with SST
interactions were found to be the fault of the
customer (Meuter et al., 2000; Pujari, 2004),
indicating that there is a need for businesses and
SST manufacturers to improve these interactions in
order to reduce causes for dissatisfaction (Martin et
al., unpublished). The frustration caused by a
negative SSI can have a detrimental effect on a
user’s behavioural intentions towards the retailer,
impacting the likelihood the user will continue doing
business with them in the future and whether they
will recommend them to other potential users (Lin &
Hsieh, 2006; Johnson et al., 2008). By adopting
affective computing practices in SSI design, such
as giving computers the ability to detect and react
intelligently to human emotions and to express their
own simulated emotions, user experiences may be
improved (Klein et al., 1999; Jaksic et al., 2006;
Wang et al., 2009).
Affective agents have been found to reduce
frustration during human-computer interactions
(HCIs) (Klein et al., 1999; Jaksic et al., 2006),
therefore we are investigating their effectiveness at
improving negative affective states in an SST user
during a shopping scenario. We propose a system
which will detect negative affective states in a user
and express appropriate empathetic reactions
using an affective virtual agent.
Two stages of research were identified. The
purpose of stage 1 (reported in Martin et al., in
press) was to investigate whether emotional facial
expressions are present during SST use,
to
determine whether a vision-based emotion detector
would be suitable for this system. The purpose of
stage 2 (reported in Martin et al., unpublished) was
© The Authors. Published by BISL. Proceedings of the BCS HCI 2012 People & Computers XXVI, Birmingham, UK. (Work In Progress)
('11111134', 'Leslie Ball', 'leslie ball')
('2529392', 'Jacqueline Archibald', 'jacqueline archibald')
('33069212', 'Lloyd Carson', 'lloyd carson')
c.martin@abertay.ac.uk
l.ball@abertay.ac.uk
j.archibald @abertay.ac.uk
l.carson@abertay.ac.uk
ad2339c48ad4ffdd6100310dcbb1fb78e72fac98Video Fill In the Blank using LR/RL LSTMs with Spatial-Temporal Attentions
Center for Research in Computer Vision, University of Central Florida, Orlando, FL
('33209161', 'Amir Mazaheri', 'amir mazaheri')
('46335319', 'Dong Zhang', 'dong zhang')
('1745480', 'Mubarak Shah', 'mubarak shah')
amirmazaheri@cs.ucf.edu, dzhang@cs.ucf.edu, shah@crcv.ucf.edu
ad247138e751cefa3bb891c2fe69805da9c293d7American Journal of Networks and Communications
2015; 4(4): 90-94
Published online July 7, 2015 (http://www.sciencepublishinggroup.com/j/ajnc)
doi: 10.11648/j.ajnc.20150404.12
ISSN: 2326-893X (Print); ISSN: 2326-8964 (Online)
A Novel Hybrid Method for Face Recognition Based on 2d
Wavelet and Singular Value Decomposition
Computer Engineering, Faculty of Engineering, Kharazmi University of Tehran, Tehran, Iran
Islamic Azad University, Shahrood, Iran
Email address:
To cite this article:
Vahid Haji Hashemi, Abdorreza Alavi Gharahbagh. A Novel Hybrid Method for Face Recognition Based on 2d Wavelet and Singular Value Decomposition. American Journal of Networks and Communications. Vol. 4, No. 4, 2015, pp. 90-94. doi: 10.11648/j.ajnc.20150404.12
('2653670', 'Vahid Haji Hashemi', 'vahid haji hashemi')
('2153844', 'Abdorreza Alavi Gharahbagh', 'abdorreza alavi gharahbagh')
hajihashemi.vahid@yahoo.com (V. H. Hashemi), R_alavi@iau-shahrood.ac.ir (A. A. Gharahbagh)
adf62dfa00748381ac21634ae97710bb80fc2922ViFaI: A trained video face indexing scheme
1. Introduction
With the increasing prominence of inexpensive
video recording devices (e.g., digital camcorders and
video recording smartphones),
the average user’s
video collection today is increasing rapidly. With this
development, there arises a natural desire to rapidly
access a subset of one’s collection of videos. The solu-
tion to this problem requires an effective video index-
ing scheme. In particular, we must be able to easily
process a video to extract such indexes.
Today, there also exist large sets of labeled (tagged)
face images. One important example is an individual’s
Facebook profile. Such a set of tagged images of
one’s self, family, friends, and colleagues represents
an extremely valuable potential training set.
In this work, we explore how to leverage the afore-
mentioned training set to solve the video indexing
problem.
2. Problem Statement
Use a labeled (tagged) training set of face images
to extract relevant indexes from a collection of videos,
and use these indexes to answer boolean queries of the
form: “videos with ‘Person 1’ OP1 ‘Person 2’ OP2 ...
OP(N-1) ‘Person N’ ”, where ‘Person N’ corresponds
to a training label (tag) and OPN is a boolean operand
such as AND, OR, NOT, XOR, and so on.
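Once the off-line phase has built an index from person tags to the sets of videos each person appears in, queries of this form reduce to set algebra. A minimal sketch (the index contents, the left-to-right evaluation order, and the reading of NOT as set difference are illustrative assumptions, not the authors' implementation):

```python
# Hypothetical index produced by the off-line phase: person tag -> video ids.
index = {
    "Person 1": {"v1", "v2", "v3"},
    "Person 2": {"v2", "v4"},
    "Person 3": {"v3"},
}

# Boolean operands from the problem statement, realized as set operations.
OPS = {
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
    "NOT": lambda a, b: a - b,   # "a AND NOT b", read as set difference
}

def evaluate(query):
    """Evaluate ["Person 1", OP1, "Person 2", OP2, ...] left to right."""
    result = index.get(query[0], set())
    for i in range(1, len(query) - 1, 2):
        op, person = query[i], query[i + 1]
        result = OPS[op](result, index.get(person, set()))
    return result

print(sorted(evaluate(["Person 1", "AND", "Person 2"])))  # ['v2']
print(sorted(evaluate(["Person 1", "NOT", "Person 3"])))  # ['v1', 'v2']
```

A production system would add operator precedence and parentheses, but flat left-to-right evaluation already covers the query shape stated above.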
3. Proposed Scheme
In this section, we outline our proposed scheme to
address the problem we postulate in the previous sec-
tion. We provide further details about the system im-
plementation in Section 4.
At a high level, we subdivide the problem into two
key phases: the first, “off-line,” executed once, and the
second, “on-line,” instantiated upon each query.
For the purposes of this work, we define an index as
follows:
('30006340', 'Nayyar', 'nayyar')
('47384529', 'Audrey Wei', 'audrey wei')
hnayyar@stanford.edu
awei1001@stanford.edu
bbc4b376ebd296fb9848b857527a72c82828fc52Attributes for Improved Attributes
University of Maryland
College Park, MD
('3351637', 'Emily Hand', 'emily hand')emhand@cs.umd.edu
bb489e4de6f9b835d70ab46217f11e32887931a2Everything you wanted to know about Deep Learning for Computer Vision but were
afraid to ask
Moacir A. Ponti, Leonardo S. F. Ribeiro, Tiago S. Nazare
ICMC, University of São Paulo
São Carlos/SP, 13566-590, Brazil
CVSSP University of Surrey
Guildford, GU2 7XH, UK
('2227956', 'Tu Bui', 'tu bui')
('10710438', 'John Collomosse', 'john collomosse')
Email: [ponti, leonardo.sampaio.ribeiro, tiagosn]@usp.br
Email: [t.bui, j.collomosse]@surrey.ac.uk
bba281fe9c309afe4e5cc7d61d7cff1413b29558Social Cognitive and Affective Neuroscience, 2017, 984–992
doi: 10.1093/scan/nsx030
Advance Access Publication Date: 11 April 2017
Original article
An unpleasant emotional state reduces working
memory capacity: electrophysiological evidence
1Laboratorio de Neurofisiologia do Comportamento, Departamento de Fisiologia e Farmacologia, Instituto
Biome´dico, Universidade Federal Fluminense, Niteroi, Brazil, 2MograbiLab, Departamento de Psicologia,
Pontifıcia Universidade Catolica do Rio de Janeiro, Rio de Janeiro, Brazil, and 3Laboratorio de Engenharia
Pulmonar, Programa de Engenharia Biome´dica, COPPE, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
('18129331', 'Jessica S. B. Figueira', 'jessica s. b. figueira')
('2993713', 'Leticia Oliveira', 'leticia oliveira')
('38252417', 'Mirtes G. Pereira', 'mirtes g. pereira')
('18138365', 'Luiza B. Pacheco', 'luiza b. pacheco')
('6891211', 'Isabela Lobo', 'isabela lobo')
('5663717', 'Gabriel C. Motta-Ribeiro', 'gabriel c. motta-ribeiro')
('1837214', 'Isabel A. David', 'isabel a. david')
Fluminense, Rua Hernani Pires de Mello, 101, Niteroi, RJ 24210-130, Brazil. E-mail: isabeldavid@id.uff.br.
bb557f4af797cae9205d5c159f1e2fdfe2d8b096
bb06ef67a49849c169781657be0bb717587990e0Impact of Temporal Subsampling on Accuracy and
Performance in Practical Video Classification
F. Scheidegger∗†, L. Cavigelli∗, M. Schaffner∗, A. C. I. Malossi†, C. Bekas†, L. Benini∗‡
∗ETH Zürich, 8092 Zürich, Switzerland
†IBM Research - Zürich, 8803 Rüschlikon, Switzerland
‡Università di Bologna, Italy
bb22104d2128e323051fb58a6fe1b3d24a9e9a46Analyzing Facial Expression by Fusing Manifolds
Wen-Yan Chang1,2, Chu-Song Chen1,3 and Yi-Ping Hung1,2,3
1 Institute of Information Science, Academia Sinica, Taiwan
2 Dept. of Computer Science and Information Engineering, National Taiwan University
3 Graduate Institute of Networking and Multimedia, National Taiwan University
{wychang, song}@iis.sinica.edu.tw; hung@csie.ntu.edu.tw
Abstract. Feature representation and classification are two major issues in facial
expression analysis. In the past, most methods used either holistic or local repre-
sentation for analysis. In essence, local information mainly focuses on the subtle
variations of expressions and holistic representation stresses on global diversi-
ties. To take the advantages of both, a hybrid representation is suggested in this
paper and manifold learning is applied to characterize global and local informa-
tion discriminatively. Unlike some methods using unsupervised manifold learn-
ing approaches, embedded manifolds of the hybrid representation are learned by
adopting a supervised manifold learning technique. To integrate these manifolds
effectively, a fusion classifier is introduced, which can help to employ suitable
combination weights of facial components to identify an expression. Comprehen-
sive comparisons on facial expression recognition are included to demonstrate the
effectiveness of our algorithm.
1 Introduction
Realizing human emotions plays an important role in human communication. To study
human behavior scientifically and systematically, emotion analysis is an intriguing re-
search issue in many fields. Much attention has been drawn to this topic in computer
vision applications such as human-computer interaction, robot cognition and behavior
analysis. Usually, a facial expression analysis system contains three stages: face acqui-
sition, feature extraction and classification.
For feature extraction, a lot of methods have been proposed. In general, most meth-
ods represent features in either holistic or local ways. Holistic representation uses the
whole face for representation and focuses on the facial variations of global appearance.
In contrast, local representation adopts local facial regions or features and gives atten-
tion to the subtle diversities on a face. Though most recent studies have been directed
towards local representation [7, 8], good research results are still obtained by using the
holistic approach. Hence, it is interesting to exploit both of their benefits to de-
velop a hybrid representation.
In addition to feature representation, we also introduce a method for classification.
Whether using a Bayesian classifier [4, 8], support vector machine (SVM) or neural
networks, molding a strong classifier is the core in the existing facial expression analy-
sis studies. In the approaches that adopt local facial information, weighting these local
regions in a single classifier is a common strategy [8]. However, not all local regions
bbf28f39e5038813afd74cf1bc78d55fcbe630f1Style Aggregated Network for Facial Landmark Detection
University of Technology Sydney, 2 The University of Sydney
('9929684', 'Xuanyi Dong', 'xuanyi dong')
('1685212', 'Yan Yan', 'yan yan')
('3001348', 'Wanli Ouyang', 'wanli ouyang')
('1698559', 'Yi Yang', 'yi yang')
{xuanyi.dong,yan.yan-3}@student.uts.edu.au;
wanli.ouyang@sydney.edu.au; yi.yang@uts.edu.au
bbe1332b4d83986542f5db359aee1fd9b9ba9967
bbe949c06dc4872c7976950b655788555fe513b8Automatic Frequency Band Selection for
Illumination Robust Face Recognition
Institute of Anthropomatics, Karlsruhe Institute of Technology, Germany
('1742325', 'Rainer Stiefelhagen', 'rainer stiefelhagen'){ekenel,rainer.stiefelhagen}@kit.edu
bbcb4920b312da201bf4d2359383fb4ee3b17ed9RESEARCH ARTICLE
Robust Face Recognition via Multi-Scale
Patch-Based Matrix Regression
Institute of Advanced Technology, Nanjing University of Posts and Telecommunications, Nanjing
China, 2 School of Computer Science and Engineering, Nanjing University of Science and Technology
Nanjing, 210094, China, 3 School of Automation, Nanjing University of Posts and Telecommunications
Nanjing, 210023, China, 4 School of Computer Science and Technology, Nanjing University of Posts and
Telecommunications, Nanjing, 210023, China
('3306402', 'Guangwei Gao', 'guangwei gao')
('2700773', 'Jian Yang', 'jian yang')
('1712078', 'Xiaoyuan Jing', 'xiaoyuan jing')
('35919708', 'Pu Huang', 'pu huang')
('3359690', 'Juliang Hua', 'juliang hua')
('1742990', 'Dong Yue', 'dong yue')
* csggao@gmail.com
bb6bf94bffc37ef2970410e74a6b6dc44a7f4febSituation Recognition with Graph Neural Networks
Supplementary Material
Uber Advanced Technologies Group, 5Vector Institute
We present additional analysis and results of our approach in the supplementary material. First, we analyze the verb
prediction performance in Sec. 1. In Sec. 2, we present t-SNE [2] plots to visualize the verb and role embeddings. We present
several examples of the influence of different roles on predicting the verb-frame correctly. This is visualized in Sec. 3 through
propagation matrices similar to Fig. 7 of the main paper. Finally, in Sec. 4 we include several example predictions that our
model makes.
1. Verb Prediction
We present the verb prediction accuracies for our fully-connected model on the development set in Fig. 1. The random
performance is close to 0.2% (504 verbs). About 22% of all verbs are classified correctly over 50% of the time. These
include taxiing, erupting, flossing, microwaving, etc. On the other hand, verbs such as attaching,
making, placing can have very different image representations, and show prediction accuracies of less than 10%.
Our model helps improve the role-noun predictions by sharing information across all roles. Nevertheless, if the verb is
predicted incorrectly, the whole situation is treated as incorrect. Thus, verb prediction performance plays a crucial role.
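The quoted random baseline is just one over the number of verb classes; a quick sanity check in plain Python (not taken from the paper's code):

```python
# Uniform random guessing over the 504 verb classes of the dataset.
num_verbs = 504
random_accuracy_pct = 100.0 / num_verbs
print(f"{random_accuracy_pct:.2f}%")  # prints "0.20%", i.e. "close to 0.2%"
```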
Figure 1. Verb prediction accuracy on the development set. Some verbs such as taxiing typically have a similar image (a plane on the
tarmac), while verbs such as rubbing or twisting can have very different corresponding images.
('8139953', 'Ruiyu Li', 'ruiyu li')
('2103464', 'Makarand Tapaswi', 'makarand tapaswi')
('2246396', 'Renjie Liao', 'renjie liao')
('1729056', 'Jiaya Jia', 'jiaya jia')
('2422559', 'Raquel Urtasun', 'raquel urtasun')
('37895334', 'Sanja Fidler', 'sanja fidler')
('2043324', 'Hong Kong', 'hong kong')
ryli@cse.cuhk.edu.hk, {makarand,rjliao,urtasun,fidler}@cs.toronto.edu, leojia9@gmail.com
bb7f2c5d84797742f1d819ea34d1f4b4f8d7c197TO APPEAR IN TPAMI
From Images to 3D Shape Attributes
('1786435', 'David F. Fouhey', 'david f. fouhey')
('1737809', 'Abhinav Gupta', 'abhinav gupta')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
bbf01aa347982592b3e4c9e4f433e05d30e71305
bbc5f4052674278c96abe7ff9dc2d75071b6e3f3Nonlinear Hierarchical Part-based Regression for Unconstrained Face Alignment
†NEC Laboratories America, Media Analytics
‡Adobe Research
University of North Carolina at Charlotte
Rutgers, The State University of New Jersey
('39960064', 'Xiang Yu', 'xiang yu')
('1753384', 'Shaoting Zhang', 'shaoting zhang')
('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')
xiangyu@nec-labs.com, zlin@adobe.com, szhang16@uncc.edu, dnm@cs.rutgers.edu
bbfe0527e277e0213aafe068113d719b2e62b09cDog Breed Classification Using Part Localization
Columbia University
University of Maryland
('2454675', 'Jiongxin Liu', 'jiongxin liu')
('20615377', 'Angjoo Kanazawa', 'angjoo kanazawa')
bbf1396eb826b3826c5a800975047beabde2f0de
bb451dc2420e1a090c4796c19716f93a9ef867c9International Journal of Computer Applications (0975 – 8887)
Volume 104 – No.5, October 2014
A Review on: Automatic Movie Character Annotation
by Robust Face-Name Graph Matching
Research Scholar
Sinhgad College of
Engineering, korti, Pandharpur,
Solapur University, INDIA
Gadekar P.R.
Assistant Professor
Sinhgad College of
Engineering, korti, Pandharpur,
Solapur University, INDIA
Bandgar Vishal V.
Assistant Professor
College of Engineering (Poly
Pandharpur, Solapur, INDIA
Bhise Avdhut S.
HOD, Department of
Information Technology,
College of Engineering (Poly
Pandharpur, Solapur, INDIA
bbd1eb87c0686fddb838421050007e934b2d74ab
d73d2c9a6cef79052f9236e825058d5d9cdc13212014-ENST-0040
EDITE - ED 130
ParisTech Doctorate
THESIS
to obtain the degree of Doctor awarded by
TELECOM ParisTech
Specialty: "Signal and Images"
presented and publicly defended on 8 July 2014
Cutting the Visual World into Bigger Slices for Improved Video
Concept Detection
(In French: Amélioration de la détection des concepts dans les vidéos par de plus grandes tranches du Monde Visuel)
Thesis advisor: Bernard Mérialdo
Jury
M. Philippe-Henri Gosselin, Professor, INRIA (Reviewer)
M. Georges Quénot, CNRS Research Director, LIG (Reviewer)
M. Georges Linares, Professor, LIA (Examiner)
M. François Brémond, Professor, INRIA (Examiner)
M. Bernard Mérialdo, Professor, EURECOM (Advisor)
TELECOM ParisTech
a school of the Institut Télécom, member of ParisTech
('2135932', 'Usman Farrokh Niaz', 'usman farrokh niaz')
d794ffece3533567d838f1bd7f442afee13148fdHand Detection and Tracking in Videos
for Fine-grained Action Recognition
The University of Electro-Communications, Tokyo
1-5-1 Chofugaoka, Chofu, Tokyo, 182-8585 Japan
('1681659', 'Keiji Yanai', 'keiji yanai')
d78077a7aa8a302d4a6a09fb9737ab489ae169a6
d7593148e4319df7a288180d920f2822eeecea0bLIU, YU, FUNES-MORA, ODOBEZ: DIFFERENTIAL APPROACH FOR GAZE ESTIMATION 1
A Differential Approach for Gaze
Estimation with Calibration
Idiap Research Institute
2 Eyeware Tech SA
Kenneth A. Funes-Mora 2
('1697913', 'Gang Liu', 'gang liu')
('50133842', 'Yu Yu', 'yu yu')
('1719610', 'Jean-Marc Odobez', 'jean-marc odobez')
gang.liu@idiap.ch
yu.yu@idiap.ch
kenneth@eyeware.tech
odobez@idiap.ch
d7312149a6b773d1d97c0c2b847609c07b5255ec
d7fe2a52d0ad915b78330340a8111e0b5a66513aUnpaired Photo-to-Caricature Translation on Faces in
the Wild
aNo. 238 Songling Road, Ocean University of
China, Qingdao, China
('4670300', 'Ziqiang Zheng', 'ziqiang zheng')
('50077564', 'Zhibin Yu', 'zhibin yu')
('2336297', 'Haiyong Zheng', 'haiyong zheng')
('49297407', 'Bing Zheng', 'bing zheng')
d7cbedbee06293e78661335c7dd9059c70143a28MobileFaceNets: Efficient CNNs for Accurate Real-
Time Face Verification on Mobile Devices
School of Computer and Information Technology, Beijing Jiaotong University, Beijing
Research Institute, Watchdata Inc., Beijing, China
China
('39326372', 'Sheng Chen', 'sheng chen')
('1681842', 'Yang Liu', 'yang liu')
('46757550', 'Xiang Gao', 'xiang gao')
('2765914', 'Zhen Han', 'zhen han')
{sheng.chen, yang.liu.yj, xiang.gao}@watchdata.com,
zhan@bjtu.edu.cn
d7d9c1fa77f3a3b3c2eedbeb02e8e7e49c955a2fAutomating Image Analysis by Annotating Landmarks with Deep
Neural Networks
February 3, 2017
Running head: Automatic Annotation of Landmarks
Boston University, Boston, MA
University of North Carolina at Chapel Hill, Chapel Hill, NC
Keywords: automatic landmark localization, annotation, pose estimation, deep neural networks, hawkmoths
Contents
('2025025', 'Mikhail Breslav', 'mikhail breslav')
('1711465', 'Tyson L. Hedrick', 'tyson l. hedrick')
('1749590', 'Stan Sclaroff', 'stan sclaroff')
('1723703', 'Margrit Betke', 'margrit betke')
d708ce7103a992634b1b4e87612815f03ba3ab24FCVID: Fudan-Columbia Video Dataset
Available at: http://bigvid.fudan.edu.cn/FCVID/
1 OVERVIEW
Recognizing visual contents in unconstrained videos
has become a very important problem for many ap-
plications, such as Web video search and recommen-
dation, smart content-aware advertising, robotics, etc.
Existing datasets for video content recognition are
either small or do not have reliable manual labels.
In this work, we construct and release a new Inter-
net video dataset called Fudan-Columbia Video Dataset
(FCVID), containing 91,223 Web videos (total duration
4,232 hours) annotated manually according to 239
categories. We believe that the release of FCVID can
stimulate innovative research on this challenging and
important problem.
2 COLLECTION AND ANNOTATION
The categories in FCVID cover a wide range of topics
like social events (e.g., “tailgate party”), procedural
events (e.g., “making cake”), objects (e.g., “panda”),
scenes (e.g., “beach”), etc. These categories were de-
fined very carefully. Specifically, we conducted user
surveys and used the organization structures on
YouTube and Vimeo as references, and browsed nu-
merous videos to identify categories that satisfy the
following three criteria: (1) utility — high relevance
in supporting practical application needs; (2) cover-
age — a good coverage of the contents that people
record; and (3) feasibility — likely to be automatically
recognized in the next several years, and a high
frequency of occurrence that is sufficient for training
a recognition algorithm.
This definition effort led to a set of over 250 candi-
date categories. For each category, in addition to the
official name used in the public release, we manually
defined another alternative name. Videos were then
downloaded from YouTube searches using the official
and the alternative names as search terms. The pur-
pose of using the alternative names was to expand the
candidate video sets. For each search, we downloaded
1,000 videos, and after removing duplicate videos and
some extremely long ones (longer than 30 minutes),
there were around 1,000–1,500 candidate videos for
each category.
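The collection step described above can be sketched as follows. This is a hypothetical illustration only; `search_youtube`, the record fields, and the constants are assumptions, not part of any released FCVID tooling:

```python
# Sketch of the candidate-video collection step: search with both the
# official and the alternative category name, de-duplicate the results,
# and drop extremely long videos (longer than 30 minutes).

MAX_DURATION_S = 30 * 60      # 30-minute cutoff from the text
RESULTS_PER_QUERY = 1000      # videos downloaded per search term

def collect_candidates(category, search_youtube):
    """Gather candidate videos for one category using both of its names.

    `search_youtube(query, limit)` is a stand-in for a video search call
    returning dicts with at least "id" and "duration" (in seconds).
    """
    candidates = {}
    for query in (category["official_name"], category["alternative_name"]):
        for video in search_youtube(query, limit=RESULTS_PER_QUERY):
            # Keep each video id once, and only if it is short enough.
            if video["id"] not in candidates and video["duration"] <= MAX_DURATION_S:
                candidates[video["id"]] = video
    return list(candidates.values())
```

Using the alternative name as a second query simply widens the candidate pool; the dictionary keyed by video id removes the duplicates the two searches share.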
All the videos were annotated manually to ensure
a high precision of the FCVID labels. In order to min-
imize subjectivity, nearly 20 annotators were involved
in the task, and a master annotator was assigned to
monitor the entire process and double-check all the
found positive videos. Some of the videos are multi-
labeled, and thus filtering the 1,000–1,500 videos for
each category with focus on just the single category
label is not adequate. As checking the existence of all
the 250+ classes for each video is extremely difficult,
we used the following strategy to narrow down the “la-
bel search space” for each video. We first grouped the
categories according to subjective predictions of label
co-occurrences, e.g., “wedding reception” & “wed-
ding ceremony”, “waterfall” & “river”, “hiking” &
“mountain”, and even “dog” & “birthday”. We then
annotated the videos not only based on the target cat-
egory label, but also according to the identified related
labels. This helped produce a fairly complete label
set for FCVID while largely reducing the annotation
workload. After removing the rare categories with
less than 100 videos after annotation, the final FCVID
dataset contains 91,223 videos and 239 categories,
where 183 are events and 56 are objects, scenes, etc.
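As a minimal sketch, the narrowing strategy amounts to checking each candidate video only against its target category plus the categories grouped with it. The group contents below are the examples quoted above, not the actual FCVID grouping:

```python
# Each group collects categories whose labels were predicted to co-occur
# in the same video (illustrative groups from the text above).
CO_OCCUR_GROUPS = [
    {"wedding reception", "wedding ceremony"},
    {"waterfall", "river"},
    {"hiking", "mountain"},
    {"dog", "birthday"},
]

def labels_to_check(target_category):
    """Return the reduced label search space for one candidate video."""
    labels = {target_category}
    for group in CO_OCCUR_GROUPS:
        if target_category in group:
            labels |= group  # annotate against the related labels too
    return labels
```

A video found via the "waterfall" search is thus checked for "waterfall" and "river" only, instead of all 250+ classes.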
Figure 1 shows the number of videos per category.
“Dog” has the largest number of positive videos
(1,136), while “making egg tarts” is the most infre-
quent category containing only 108 samples. The total
duration of FCVID is 4,232 hours with an average
video duration of 167 seconds. Figure 2 further gives
the average video duration of each category.
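As a quick consistency check, the stated average duration follows from the stated totals (plain arithmetic, not from the dataset tools):

```python
# 4,232 hours spread over 91,223 videos gives the quoted ~167 s average.
total_hours = 4232
num_videos = 91223
avg_seconds = total_hours * 3600 / num_videos
print(round(avg_seconds))  # prints 167
```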
The categories are organized using a hierarchy con-
taining 11 high-level groups, as visualized in Figure 3.
3 COMPARISON WITH RELATED DATASETS
We compare FCVID with the following datasets. Most
of them have been widely adopted in the existing
works on video categorization.
KTH and Weizmann: The KTH [1] and the Weiz-
mann [2] datasets are well-known benchmarks for
human action recognition. The former contains 600
videos of 6 human actions performed by 25 people
in four scenarios, and the latter consists of 81 videos
associated with 9 actions performed by 9 actors.
Hollywood Human Action: The Hollywood
dataset [3] contains 8 action classes collected from
32 Hollywood movies with a total of 430 videos.
('1717861', 'Yu-Gang Jiang', 'yu-gang jiang')
('3099139', 'Zuxuan Wu', 'zuxuan wu')
('39811558', 'Jun Wang', 'jun wang')
('1713721', 'Xiangyang Xue', 'xiangyang xue')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
d78734c54f29e4474b4d47334278cfde6efe963aExploring Disentangled Feature Representation Beyond Face Identification
CUHK-SenseTime Joint Lab, The Chinese University of Hong Kong
SenseTime Group Limited, 3Peking University
('1715752', 'Yu Liu', 'yu liu')
('22181490', 'Fangyin Wei', 'fangyin wei')
('49895575', 'Jing Shao', 'jing shao')
('37145669', 'Lu Sheng', 'lu sheng')
('1721677', 'Junjie Yan', 'junjie yan')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
{yuliu,lsheng,xgwang}@ee.cuhk.edu.hk, weifangyin@pku.edu.cn,
{shaojing,yanjunjie}@sensetime.com
d785fcf71cb22f9c33473cba35f075c1f0f06ffcLearning Active Facial Patches for Expression Analysis
Rutgers University, Piscataway, NJ
Nanjing University of Information Science and Technology, Nanjing, 210044, China
University of Texas at Arlington, Arlington, TX
('29803023', 'Lin Zhong', 'lin zhong')
('1734954', 'Qingshan Liu', 'qingshan liu')
('39606160', 'Peng Yang', 'peng yang')
('40107085', 'Bo Liu', 'bo liu')
('1768190', 'Junzhou Huang', 'junzhou huang')
('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')
{linzhong,qsliu,peyang,lb507,dnm}@cs.rutgers.edu, Jzhuang@uta.edu
d79365336115661b0e8dbbcd4b2aa1f504b91af6Variational methods for Conditional Multimodal
Deep Learning
Department of Computer Science and Automation
Indian Institute of Science
('2686270', 'Gaurav Pandey', 'gaurav pandey')
('2440174', 'Ambedkar Dukkipati', 'ambedkar dukkipati')
Email: {gp88, ad}@csa.iisc.ernet.in
d7b6bbb94ac20f5e75893f140ef7e207db7cd483Griffith Research Online
https://research-repository.griffith.edu.au
Face Recognition across Pose: A
Review
Author
Zhang, Paul, Gao, Yongsheng
Published
2009
Journal Title
Pattern Recognition
DOI
https://doi.org/10.1016/j.patcog.2009.04.017
Copyright Statement
Copyright 2009 Elsevier. This is the author-manuscript version of this paper. Reproduced in accordance
with the copyright policy of the publisher. Please refer to the journal's website for access to the
definitive, published version.
Downloaded from
http://hdl.handle.net/10072/30193
d78373de773c2271a10b89466fe1858c3cab677f
d78fbd11f12cbc194e8ede761d292dc2c02d38a2(IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 8, No. 10, 2017
Enhancing Gray Scale Images for Face Detection
under Unstable Lighting Condition
Department of Mathematics and Computer Science,
Faculty of Sciences, PO Box 67 Dschang
University of Dschang, Cameroon
DJIMELI TSAMENE Charly
Department of Mathematics and Computer Science,
Faculty of Sciences, PO Box 67 Dschang,
University of Dschang, Cameroon
d72973a72b5d891a4c2d873daeb1bc274b48cddfA New Supervised Dimensionality Reduction Algorithm Using Linear
Discriminant Analysis and Locality Preserving Projection
School of Information Engineering
Guangdong Medical College
Dongguan, Guangdong, China
School of Electronics and Information
South China University of Technology
Guangzhou, Guangdong, China
('2588058', 'DI ZHANG', 'di zhang')
('20374749', 'YUN ZHAO', 'yun zhao')
('31866339', 'MINGHUI DU', 'minghui du')
haihaiwenqi@163.com, zyun@gdmc.edu.cn
ecmhdu@scut.edu.cn
d700aedcb22a4be374c40d8bee50aef9f85d98efRethinking Spatiotemporal Feature Learning:
Speed-Accuracy Trade-offs in Video Classification
1 Google Research
University of California San Diego
('1817030', 'Saining Xie', 'saining xie')
('40559421', 'Chen Sun', 'chen sun')
('1808244', 'Jonathan Huang', 'jonathan huang')
('1736745', 'Zhuowen Tu', 'zhuowen tu')
('1702318', 'Kevin Murphy', 'kevin murphy')
d7d166aee5369b79ea2d71a6edd73b7599597aaaFast Subspace Clustering Based on the
Kronecker Product
1Beihang University 2Griffith University 3University of York, UK
('38840844', 'Lei Zhou', 'lei zhou')
('3042223', 'Xiao Bai', 'xiao bai')
('6820648', 'Xianglong Liu', 'xianglong liu')
('40582215', 'Jun Zhou', 'jun zhou')
('38987678', 'Hancock Edwin', 'hancock edwin')
d79f9ada35e4410cd255db39d7cc557017f8111aJournal of Eye Movement Research
7(3):3, 1-8
Evaluation of accurate eye corner detection methods for gaze
estimation
Public University of Navarra, Spain
Childrens National Medical Center, USA
Public University of Navarra, Spain
Public University of Navarra, Spain
Accurate detection of iris center and eye corners appears to be a promising
approach for low cost gaze estimation.
In this paper we propose novel eye
inner corner detection methods. Appearance and feature based segmentation
approaches are suggested. All these methods are exhaustively tested on a realistic
dataset containing images of subjects gazing at different points on a screen.
We have demonstrated that a method based on a neural network presents the
best performance even in light changing scenarios.
In addition to this method,
algorithms based on AAM and Harris corner detector present better accuracies
than recent high performance face points tracking methods such as Intraface.
Keywords: eye tracking, low cost, eye inner corner
Introduction
Research on eye detection and tracking has attracted
much attention in the last decades. Since it is one of the
most stable and representative features of the subject,
eye detection is used in a great variety of applications,
such as subject identification, human computer inter-
action as shown in Morimoto and Mimica (2005) and
gesture recognition as described by Tian, Kanade, and
Cohn (2000) and Bailenson et al. (2008).
Human computer interaction based on eye informa-
tion is one of the most challenging research topics in
the recent years. According to the literature, the first
attempts to track the human gaze using cameras be-
gan in 1974 as shown in the work by Merchant, Mor-
rissette, and Porterfield (1974). Since then, and espe-
cially in the last decades, much effort has been devoted
to improving the performance of eye tracking systems.
The availability of high performance eye tracking sys-
tems has provided advances in fields such as usabil-
ity research as described by Ellis, Candrea, Misner,
Craig, and Lankford (1998) Poole and Ball (2005) and
interaction for severely disabled people in works such
as Bolt (1982), Starker and Bolt (1990) and Vertegaal
(1999). Gaze tracking systems can be used to deter-
mine the fixation point of an individual on a computer
screen, which can in turn be used as a pointer to in-
teract with the computer. Thus, severely disabled peo-
ple who cannot communicate with their environment
using alternative interaction tools can perform several
tasks by means of their gaze. Performance limitations,
such as head movement constraints, limit the employ-
ment of the gaze trackers as interaction tools in other
areas. Moreover, the limited market for eye tracking
systems and the specialized hardware they employ, in-
crease their prices. The eye tracking community has
identified new application fields, such as video games
or the automotive industry, as potential markets for the
technology (Zhang, Bulling, & Gellersen, 2013). How-
ever, simpler (i.e., lower cost) hardware is needed to
reach these areas.
Although web cams offer acceptable resolutions for
eye tracking purposes, the optics used provide a wider
field of view in which the whole face appears. By con-
trast, most of the existing high-performance eye track-
ing systems employ infrared illumination. Infrared
light-emitting diodes provide a higher image quality
and produce bright pixels in the image from infrared
light reflections on the cornea, known as glints. Al-
though some works suggest the combination of light
sources and web cams to track the eyes as described in
Sigut and Sidha (2011), the challenge of low-cost sys-
tems is to avoid the use of light sources to keep the sys-
tems as simple as possible; hence, the image quality de-
creases. High-performance eye tracking systems usu-
ally combine glints and pupil information to compute
the gaze position on the screen. Accurate pupil detec-
tion is not feasible in web cam images, and most works
on this topic focus on iris center. In order to improve
accuracy, other elements such as eye corners or head
position are necessary for gaze estimation applications,
apart from the estimation of both irises. Ince and Yang
(2009) consider that the horizontal
and vertical deviation of eye movements through eye-
('2592332', 'Jose Javier Bengoechea', 'jose javier bengoechea')
('2595143', 'Juan J. Cerrolaza', 'juan j. cerrolaza')
('2175923', 'Arantxa Villanueva', 'arantxa villanueva')
('1752979', 'Rafael Cabeza', 'rafael cabeza')
d0e895a272d684a91c1b1b1af29747f92919d823Classification of Mouth Action Units using Local Binary Patterns
The American University in Cairo
Department of Computer Science, AUC, AUC
Avenue, P.O. Box 74 New Cairo 11835, Egypt
The American University in Cairo
Department of Computer Science, AUC, AUC
Avenue, P.O. Box 74 New Cairo 11835, Egypt
('3298267', 'Sarah Adel Bargal', 'sarah adel bargal')
('3337337', 'Amr Goneid', 'amr goneid')
s_bargal@aucegypt.edu
goneid@aucegypt.edu
d082f35534932dfa1b034499fc603f299645862dTAMING WILD FACES: WEB-SCALE, OPEN-UNIVERSE FACE IDENTIFICATION IN
STILL AND VIDEO IMAGERY
by
B.S. University of Central Florida
M.S. University of Central Florida
A dissertation submitted in partial fulfilment of the requirements
for the degree of Doctor of Philosophy
in the Department of Electrical Engineering and Computer Science
in the College of Engineering and Computer Science
at the University of Central Florida
Orlando, Florida
Spring Term
2014
Major Professor: Mubarak Shah
('1873759', 'G. ORTIZ', 'g. ortiz')
d03265ea9200a993af857b473c6bf12a095ca178Multiple deep convolutional neural
networks averaging for face
alignment
Zhouping Yin
Downloaded From: http://electronicimaging.spiedigitallibrary.org/ on 05/28/2015 Terms of Use: http://spiedl.org/terms
('7671296', 'Shaohua Zhang', 'shaohua zhang')
('39584289', 'Hua Yang', 'hua yang')
d0ac9913a3b1784f94446db2f1fb4cf3afda151fExploiting Multi-modal Curriculum in Noisy Web Data for
Large-scale Concept Learning
School of Computer Science, Carnegie Mellon University, PA, USA
School of Mathematics and Statistics, Xi an Jiaotong University, P. R. China
('1915796', 'Junwei Liang', 'junwei liang')
('38782499', 'Lu Jiang', 'lu jiang')
('1803714', 'Deyu Meng', 'deyu meng')
{junweil, lujiang, alex}@cs.cmu.edu, dymeng@mail.xjtu.edu.cn.
d0471d5907d6557cf081edf4c7c2296c3c221a38A Constrained Deep Neural Network for Ordinal Regression
Nanyang Technological University
Rolls-Royce Advanced Technology Centre
50 Nanyang Avenue, Singapore, 639798
6 Seletar Aerospace Rise, Singapore, 797575
('47908585', 'Yanzhu Liu', 'yanzhu liu')
('1799918', 'Chi Keong Goh', 'chi keong goh')
liuy0109@e.ntu.edu.sg, adamskong@ntu.edu.sg
ChiKeong.Goh@Rolls-Royce.com
d0eb3fd1b1750242f3bb39ce9ac27fc8cc7c5af0
d00c335fbb542bc628642c1db36791eae24e02b7Article
Deep Learning-Based Gaze Detection System for
Automobile Drivers Using a NIR Camera Sensor
Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro, 1-gil, Jung-gu
Received: 5 January 2018; Accepted: 1 February 2018; Published: 3 February 2018
('8683310', 'Rizwan Ali Naqvi', 'rizwan ali naqvi')
('15668895', 'Muhammad Arsalan', 'muhammad arsalan')
('3407484', 'Ganbayar Batchuluun', 'ganbayar batchuluun')
('40376380', 'Hyo Sik Yoon', 'hyo sik yoon')
('4634733', 'Kang Ryoung Park', 'kang ryoung park')
Seoul 100-715, Korea; rizwanali@dongguk.edu (R.A.N.); arsal@dongguk.edu (M.A.);
ganabata87@gmail.com (G.B.); yoonhs@dongguk.edu (H.S.Y.)
* Correspondence: parkgr@dongguk.edu; Tel.: +82-10-3111-7022; Fax: +82-2-2277-8735
d06c8e3c266fbae4026d122ec9bd6c911fcdf51dRole for 2D image generated 3D face models in the rehabilitation of facial palsy
Northumbria University, Newcastle Upon-Tyne NE21XE, UK
Published in Healthcare Technology Letters; Received on 4th April 2017; Revised on 7th June 2017; Accepted on 7th June 2017
The outcome for patients diagnosed with facial palsy has been shown to be linked to rehabilitation. Dense 3D morphable models have been
shown within computer vision to create accurate representations of human faces even from single 2D images. This has the potential
to provide feedback to both the patient and the medical expert dealing with the rehabilitation plan. A framework is proposed for the
creation and measuring of patient facial movement, consisting of a hybrid 2D facial landmark fitting technique, which shows better
accuracy in testing than current methods, and 3D model fitting.
1. Introduction: Recent medical studies [1–3] have highlighted
that patients diagnosed and treated with specific types of facial
paralysis such as Bell’s palsy have outcomes that are directly
linked to the rehabilitation provided. While various treatment and
rehabilitation paths exist dependent on the specifics of the facial
palsy diagnosis, the aim is to restore a degree of facial muscle
movement to the patient. Lindsay et al [4] completed a
comprehensive study over 5 years of the rehabilitation process
and outcomes for 303 facial paralysis patients; the key finding
was the need for specialised therapy plans tailored via feedback
for the best patient outcomes, while Banks et al [5] have shown
that quality qualitative feedback to a clinician is required for the
best development of rehabilitation plans.
Tracking and providing qualitative feedback on the progress
of rehabilitation for a patient is an area where the application of
computer vision and machine learning techniques could prove to
be highly beneficial. Computer vision methods can provide the
capability of capturing accurate 3D models of the human face
these in turn can be leveraged to analyse and measure changes in
face shape and levels of motion [6].
Applying 3D face modelling techniques in an automated
framework for
tracking facial palsy rehabilitation progression
has a number of potential benefits. 3D face models generated
from a 2D face image can provide a detailed topography of an
individual human face which can be qualitatively measured for
change over time by a computer system. Potential benefits of
such an automated system include providing the clinician
dealing with a patients rehabilitation to gather regular objective
feedback on the condition and tailor therapy without always
needing to physically see the patient or providing continuity of
care if for instance the clinician changes during the rehabilitation
period. Patients will have a visual evidence in which to see the
progress that has been made. It has been indicated that patients
suffering from facial palsy can also be affected by psychol-
ogical and social problems; the capacity to track rehabilitation pri-
vately, within a comfortable setting like their own home, may
therefore be of benefit.
Some previous studies [7] have looked at the process of aiding
diagnosis through the application of computer vision techniques;
these have been limited to 2D imaging, which measures a sparse
set of landmarks. The hypothesis is that 3D face modelling consist-
ing of thousands of landmarks provides a far richer model of the
face which in turn can present a more accurate measurement
system for facial motion.
In this Letter we propose a framework applicable for accurate
generation of 3D face models of facial palsy patients from 2D
images, applying state-of-the-art methods and a proposed method
of using geometrical features to track rehabilitation, and present
our conclusions.
Healthcare Technology Letters, 2017, Vol. 4, Iss. 4, pp. 145–148
doi: 10.1049/htl.2017.0023
Fig. 1 2D face alignment of 68 landmarks on a facial image which displays
asymmetric movement, like that of a patient suffering from facial palsy
2. Proposed system overview: The accuracy of the facial
representation is a key component of any computer-based system
which aims to measure facial motion. We suggest that the more
complex the depiction of an individual patient's facial topography,
the greater the potential is for the desired level of accuracy.
Developing such a system requires a framework of methods to
build and measure such a model.
As camera systems which perceive depth within an image are not
currently commonplace, or require specialist and expensive hard-
ware, we initially require a method for face detection and 2D face
This is an open access article published by the IET under the
Creative Commons Attribution License (http://creativecommons.
org/licenses/by/3.0/)
('12667800', 'Gary Storey', 'gary storey')
('40618413', 'Richard Jiang', 'richard jiang')
('1690116', 'Ahmed Bouridane', 'ahmed bouridane')
✉ E-mail: gary.storey@northumbria.ac.uk
d074b33afd95074d90360095b6ecd8bc4e5bb6a2December 11, 2007
12:8 WSPC/INSTRUCTION FILE
bauer-2007-ijhr
International Journal of Humanoid Robotics
© World Scientific Publishing Company
Human-Robot Collaboration: A Survey
Institute of Automatic Control Engineering (LSR)
Technische Universität München
80290 Munich
Germany
Received 01.05.2007
Revised 29.09.2007
Accepted Day Month Year
As robots are gradually leaving highly structured factory environments and moving into
human populated environments, they need to possess more complex cognitive abilities.
They do not only have to operate efficiently and safely in natural, populated environ-
ments, but also be able to achieve higher levels of cooperation and communication with
humans. Human-robot collaboration (HRC) is a research field with a wide range of ap-
plications, future scenarios, and potentially a high economic impact. HRC is an interdis-
ciplinary research area comprising classical robotics, cognitive sciences, and psychology.
This article gives a survey of the state of the art of human-robot collaboration. Es-
tablished methods for intention estimation, action planning, joint action, and machine
learning are presented together with existing guidelines to hardware design. This article
is meant to provide the reader with a good overview of technologies and methods for
HRC.
Keywords: Human-robot collaboration; intention estimation; action planning; machine
learning.
1. Introduction
Human-robot Collaboration (HRC) is a wide research field with a high economic
impact. Robots have already started moving out of laboratory and manufacturing
environments into more complex human working environments such as homes,
offices, hospitals and even outer space. HRC is already used in elderly care [1],
space applications [2], and rescue robotics [3]. The design of robot behaviour, appearance,
cognitive, and social skills is highly challenging, and requires interdisciplinary co-
operation between classical robotics, cognitive sciences, and psychology. Humans as
nondeterministic factors make cognitive sciences and artificial intelligence important
research fields in HRC.
This article refers to human-robot collaboration as opposed to human-robot
interaction (HRI), as these two terms hold different meanings [4]. Interaction is a more
general term, including collaboration. Interaction determines action on someone
('1749896', 'Dirk Wollherr', 'dirk wollherr')
('1732126', 'Martin Buss', 'martin buss')
ab@tum.de; dw@tum.de; mb@tum.de
d04d5692461d208dd5f079b98082eda887b62323Subspace learning with frequency regularizer: its application to face recognition
Center for Biometrics and Security Research & National Laboratory of Pattern Recognition,
Institute of Automation, Chinese Academy of Sciences
95 Zhongguancun Donglu, Beijing 100190, China.
('1704114', 'Xiangsheng Huang', 'xiangsheng huang')
('34679741', 'Stan Z. Li', 'stan z. li')
('1718623', 'Zhen Lei', 'zhen lei')
('1716143', 'Dong Yi', 'dong yi')
{zlei,dyi,szli}@cbsr.ia.ac.cn, xiangsheng.huang@ia.ac.cn
d05513c754966801f26e446db174b7f2595805baEverything is in the Face? Represent Faces with
Object Bank
1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences
CAS), Institute of Computing Technology, CAS, Beijing, 100190, China
School of Computer Science, Carnegie Mellon University, PA 15213, USA
University of Chinese Academy of Sciences, Beijing 100049, China
('1731144', 'Xin Liu', 'xin liu')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1688086', 'Shaoxin Li', 'shaoxin li')
('7661726', 'Alexander G. Hauptmann', 'alexander g. hauptmann')
{xin.liu, shiguang.shan, shaoxin.li}@vipl.ict.ac.cn, alex@cs.cmu.edu;
d0509afe9c2c26fe021889f8efae1d85b519452aVisual Psychophysics for Making Face
Recognition Algorithms More Explainable
University of Notre Dame, Notre Dame, IN, 46556, USA
Perceptive Automata, Inc
Harvard University, Cambridge, MA 02138, USA
('3849184', 'Brandon RichardWebster', 'brandon richardwebster')
('40901458', 'So Yon Kwon', 'so yon kwon')
('40896426', 'Christopher Clarizio', 'christopher clarizio')
('2503235', 'Samuel E. Anthony', 'samuel e. anthony')
('2613438', 'Walter J. Scheirer', 'walter j. scheirer')
d03baf17dff5177d07d94f05f5791779adf3cd5f
d0144d76b8b926d22411d388e7a26506519372ebImproving Regression Performance with Distributional Losses ('29905816', 'Ehsan Imani', 'ehsan imani')
d02e27e724f9b9592901ac1f45830341d37140feDA-GAN: Instance-level Image Translation by Deep Attention Generative
Adversarial Networks
The State University of New York at Buffalo
The State University of New York at Buffalo
Microsoft Research
Microsoft Research
('2327045', 'Shuang Ma', 'shuang ma')
('1735257', 'Chang Wen Chen', 'chang wen chen')
('3247966', 'Jianlong Fu', 'jianlong fu')
('1724211', 'Tao Mei', 'tao mei')
shuangma@buffalo.edu
chencw@buffalo.edu
jianf@microsoft.com
tmei@microsoft.com
d02b32b012ffba2baeb80dca78e7857aaeececb0Human Pose Estimation: Extension and Application
Thesis submitted in partial fulfillment
of the requirements for the degree of
Master of Science (By Research)
in
Computer Science and Engineering
by
201002052
Center for Visual Information Technology
International Institute of Information Technology
Hyderabad - 500 032, INDIA
September 2016
('50226534', 'Digvijay Singh', 'digvijay singh')digvijay.singh@research.iiit.ac.in
d0a21f94de312a0ff31657fd103d6b29db823caaFacial Expression Analysis ('1707876', 'Fernando De la Torre', 'fernando de la torre')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
d03e4e938bcbc25aa0feb83d8a0830f9cd3eb3eaFace Recognition with Patterns of Oriented
Edge Magnitudes
1 Vesalis Sarl, Clermont Ferrand, France
2 Gipsa-lab, Grenoble INP, France
('35083213', 'Ngoc-Son Vu', 'ngoc-son vu')
('1788869', 'Alice Caplier', 'alice caplier')
d0d7671c816ed7f37b16be86fa792a1b29ddd79bExploring Semantic Inter-Class Relationships (SIR)
for Zero-Shot Action Recognition
Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
Centre for Quantum Computation and Intelligent Systems, University of Technology Sydney, Sydney, Australia
School of Computer Science, Carnegie Mellon University, Pittsburgh, USA
College of Computer Science, Zhejiang University, Zhejiang, China
('2551285', 'Chuang Gan', 'chuang gan')
('2735055', 'Ming Lin', 'ming lin')
('39033919', 'Yi Yang', 'yi yang')
('1755711', 'Yueting Zhuang', 'yueting zhuang')
('7661726', 'Alexander G. Hauptmann', 'alexander g. hauptmann')
ganchuang1990@gmail.com, linming04@gmail.com,
yiyang@cs.cmu.edu, yzhuang@zju.edu.cn, alex@cs.cmu.edu
d01303062b21cd9ff46d5e3ff78897b8499480deMulti-task Learning by Maximizing Statistical Dependence
University of Bath
University of Bath
University of Bath
('51013428', 'Youssef A. Mejjati', 'youssef a. mejjati')
('1792288', 'Darren Cosker', 'darren cosker')
('1808255', 'Kwang In Kim', 'kwang in kim')
d02c54192dbd0798b43231efe1159d6b4375ad363D Reconstruction and Face Recognition Using Kernel-Based
ICA and Neural Networks
Dept. of Electrical Engineering, National University of Kaohsiung
Dept. of CSIE, Chaoyang University of Technology
Dept. of CSIE, Nankai Institute of Technology
('1734467', 'Cheng-Jian Lin', 'cheng-jian lin')
('1759040', 'Chi-Yung Lee', 'chi-yung lee')
cjlin@nuk.edu.tw, s9527618@cyut.edu.tw, cylee@nkc.edu.tw
d00787e215bd74d32d80a6c115c4789214da5edbFaster and Lighter Online
Sparse Dictionary Learning
Project report
('2714145', 'Jeremias Sulam', 'jeremias sulam')
d0f54b72e3a3fe7c0e65d7d5a3b30affb275f4c5Towards Universal Representation for Unseen Action Recognition
University of California, Merced
Open Lab, School of Computing, Newcastle University, UK
Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE
('1749901', 'Yi Zhu', 'yi zhu')
('50363618', 'Yang Long', 'yang long')
('1735787', 'Yu Guan', 'yu guan')
('40799321', 'Ling Shao', 'ling shao')
be8c517406528edc47c4ec0222e2a603950c2762Harrigan / The new handbook of methods in nonverbal behaviour research 02-harrigan-chap02 Page Proof page 7
17.6.2005
5:45pm
BASIC RESEARCH METHODS AND PROCEDURES
beb3fd2da7f8f3b0c3ebceaa2150a0e65736d1a2RESEARCH PAPER
International Journal of Recent Trends in Engineering Vol 1, No. 1, May 2009,
Adaptive Histogram Equalization and Logarithm
Transform with Rescaled Low Frequency DCT
Coefficients for Illumination Normalization
Department of Computer Science and Engineering
Amity School of Engineering Technology, 580, Bijwasan, New Delhi-110061, India
Affiliated to Guru Gobind Singh Indraprastha University, Delhi, India
('2650871', 'Virendra P. Vishwakarma', 'virendra p. vishwakarma')
('2100294', 'Sujata Pandey', 'sujata pandey')
Email: vpvishwakarma@aset.amity.edu
be86d88ecb4192eaf512f29c461e684eb6c35257Automatic Attribute Discovery and
Characterization from Noisy Web Data
Stony Brook University, Stony Brook NY 11794, USA
Columbia University, New York NY 10027, USA
University of California, Berkeley, Berkeley CA 94720, USA
('1685538', 'Tamara L. Berg', 'tamara l. berg')
('39668247', 'Alexander C. Berg', 'alexander c. berg')
('9676096', 'Jonathan Shih', 'jonathan shih')
tlberg@cs.sunysb.edu,
aberg@cs.columbia.edu,
jmshih@berkeley.edu.
be48b5dcd10ab834cd68d5b2a24187180e2b408fConstrained Low-rank Learning Using Least
Squares Based Regularization
('2420746', 'Ping Li', 'ping li')
('1720236', 'Jun Yu', 'jun yu')
('48958393', 'Meng Wang', 'meng wang')
('1763785', 'Luming Zhang', 'luming zhang')
('1724421', 'Deng Cai', 'deng cai')
('50080046', 'Xuelong Li', 'xuelong li')
beb49072f5ba79ed24750108c593e8982715498eGeneGAN: Learning Object Transfiguration
and Attribute Subspace from Unpaired Data
1 Megvii Inc.
Beijing, China
2 Department of Information Science,
School of Mathematical Sciences,
Peking University
Beijing, China
('35132667', 'Shuchang Zhou', 'shuchang zhou')
('14002400', 'Taihong Xiao', 'taihong xiao')
('1698559', 'Yi Yang', 'yi yang')
('7841666', 'Dieqiao Feng', 'dieqiao feng')
('8159691', 'Qinyao He', 'qinyao he')
('2416953', 'Weiran He', 'weiran he')
shuchang.zhou@gmail.com
xiaotaihong@pku.edu.cn
yangyi@megvii.com
fdq@megvii.com
hqy@megvii.com
hwr@megvii.com
be4a20113bc204019ea79c6557a0bece23da1121DeepCache: Principled Cache for Mobile Deep Vision
We present DeepCache, a principled cache design for deep learning
inference in continuous mobile vision. DeepCache benefits model
execution efficiency by exploiting temporal locality in input video
streams. It addresses a key challenge raised by mobile vision: the
cache must operate under video scene variation, while trading off
among cacheability, overhead, and loss in model accuracy. At the
input of a model, DeepCache discovers video temporal locality by ex-
ploiting the video’s internal structure, for which it borrows proven
heuristics from video compression; into the model, DeepCache prop-
agates regions of reusable results by exploiting the model’s internal
structure. Notably, DeepCache eschews applying video heuristics to
model internals which are not pixels but high-dimensional, difficult-
to-interpret data.
Our implementation of DeepCache works with unmodified deep
learning models, requires no manual developer effort, and is
therefore immediately deployable on off-the-shelf mobile devices.
Our experiments show that DeepCache saves inference execution
time by 18% on average and up to 47%. DeepCache reduces system
energy consumption by 20% on average.
CCS Concepts: • Human-centered computing → Ubiquitous
and mobile computing; • Computing methodologies → Com-
puter vision tasks;
Additional Key Words and Phrases: Deep Learning; Mobile Vision;
Cache
INTRODUCTION
With ubiquitous cameras on mobile and wearable devices,
continuous mobile vision emerges to enable a variety of com-
pelling applications, including cognitive assistance [29], life
style monitoring [61], and street navigation [27]. To support
continuous mobile vision, Convolutional Neural Network
2018. XXXX-XXXX/2018/9-ART $15.00
https://doi.org/10.1145/3241539.3241563
Fig. 1. The overview of DeepCache.
(CNN) is recognized as the state-of-the-art algorithm: a soft-
ware runtime, called deep learning engine, ingests a continu-
ous stream of video images1; for each input frame the engine
executes a CNN model as a cascade of layers, produces in-
termediate results called feature maps, and outputs inference
results. Such CNN executions are known for their high time
and space complexity, stressing resource-constrained mobile
devices. Although CNN execution can be offloaded to the
cloud [2, 34], it becomes increasingly compelling to execute
CNNs on device [27, 44, 52], which ensures fast inference, pre-
serves user privacy, and remains unaffected by poor Internet
connectivity.
To afford costly CNN on resource-constrained mobile/wear-
able devices, we set to exploit a mobile video stream’s tempo-
ral locality, i.e., rich information redundancy among consec-
utive video frames [27, 51, 52]. Accordingly, a deep learning
engine can cache results when it executes CNN over a mo-
bile video, by using input frame contents as cache keys and
inference results as cache values. Such caching is expected
to reduce the engine’s resource demand significantly.
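A minimal sketch of such a frame-keyed cache, assuming a simple mean-absolute-pixel-difference test for reuse (an illustrative stand-in: DeepCache's actual matching borrows block-matching heuristics from video compression and reuses per-region results rather than whole-frame outputs):

```python
import numpy as np

class FrameCache:
    """Cache CNN inference results keyed by frame content.

    Unlike a classic cache, a lookup succeeds on an approximate
    key match: frames within `threshold` mean absolute pixel
    difference of a cached frame reuse the cached result.
    """
    def __init__(self, threshold=10.0, capacity=8):
        self.threshold = threshold
        self.capacity = capacity
        self.entries = []  # list of (frame, result) pairs

    def lookup(self, frame):
        for key, result in self.entries:
            if np.abs(frame - key).mean() < self.threshold:
                return result  # cache hit: reuse inference result
        return None  # cache miss

    def insert(self, frame, result):
        if len(self.entries) >= self.capacity:
            self.entries.pop(0)  # evict the oldest entry
        self.entries.append((frame.copy(), result))

def infer(frame, cache, run_cnn):
    """Run the CNN only when no sufficiently similar frame is cached."""
    cached = cache.lookup(frame)
    if cached is not None:
        return cached
    result = run_cnn(frame)
    cache.insert(frame, result)
    return result
```

With consecutive video frames that differ little, most `infer` calls return a cached result, which is the temporal-locality saving the paper exploits.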
Towards effective caching and result reusing, we face two
major challenges. 1) Reusable results lookup: Classic caches,
e.g., the web browser cache, look up cached values (e.g., web
pages) based on key equivalence (e.g., identical URLs). This
does not apply to a CNN cache: its keys, i.e., mobile video
contents, often undergo moderate scene variation over time.
The variation is caused by environmental changes such as
1We refer to them as a mobile video stream in the remainder of the paper.
, Vol. 1, No. 1, Article . Publication date: September 2018.
('2529558', 'Mengwei Xu', 'mengwei xu')
('46694806', 'Mengze Zhu', 'mengze zhu')
('3180228', 'Yunxin Liu', 'yunxin liu')
('1774176', 'Felix Xiaozhu Lin', 'felix xiaozhu lin')
('8016688', 'Xuanzhe Liu', 'xuanzhe liu')
('8016688', 'Xuanzhe Liu', 'xuanzhe liu')
('2529558', 'Mengwei Xu', 'mengwei xu')
xumengwei@pku.edu.cn; Mengze Zhu, Peking University, MoE, Beijing,
China, zhumz@pku.edu.cn; Yunxin Liu, Microsoft Research, Beijing, China,
yunxin.liu@microsoft.com; Felix Xiaozhu Lin, Purdue ECE, West Lafayette,
Indiana, USA, xzl@purdue.edu; Xuanzhe Liu, Peking University, MoE, Bei-
jing, China, xzl@pku.edu.cn.
becd5fd62f6301226b8e150e1a5ec3180f748ff8Robust and Practical Face Recognition via
Structured Sparsity
1Advanced Digital Sciences Center, Singapore
2 Microsoft Research Asia, Beijing, China
University of Illinois at Urbana-Champaign
('2370507', 'Kui Jia', 'kui jia')
('1926757', 'Tsung-Han Chan', 'tsung-han chan')
('1700297', 'Yi Ma', 'yi ma')
be437b53a376085b01ebd0f4c7c6c9e40a4b1a75ISSN (Online) 2321 – 2004
ISSN (Print) 2321 – 5526
INTERNATIONAL JOURNAL OF INNOVATIVE RESEARCH IN ELECTRICAL, ELECTRONICS, INSTRUMENTATION AND CONTROL ENGINEERING
Vol. 4, Issue 5, May 2016
IJIREEICE
Face Recognition and Retrieval Using Cross
Age Reference Coding
BE, DSCE, Bangalore1
Assistant Professor, DSCE, Bangalore2
('4427719', 'Chandrakala', 'chandrakala')
bebb8a97b2940a4e5f6e9d3caf6d71af21585edaMapping Emotional Status to Facial Expressions
Tsinghua University
Beijing 100084, P. R. China
('3165307', 'Yangzhou Du', 'yangzhou du')
('2693354', 'Xueyin Lin', 'xueyin lin')
dyz99@mails.tsinghua.edu.cn; lxy-dcs@tsinghua.edu.cn
be07f2950771d318a78d2b64de340394f7d6b7173D HMM-based Facial Expression Recognition
using Histogram of Oriented Optical Flow
ARTICLE in SYNTHESIS LECTURES ON ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING · DECEMBER 2015
DOI: 10.14738/tmlai.36.1661
Sheng Kung, Oakland University
Djamel Bouchaffra, Institute of Electrical and Electronics Engineers
be4f7679797777f2bc1fd6aad8af67cce5e5ce87Interestingness Prediction
by Robust Learning to Rank
School of EECS, Queen Mary University of London, UK
School of Mathematical Sciences, Peking University, China
('35782003', 'Yanwei Fu', 'yanwei fu')
('1697755', 'Timothy M. Hospedales', 'timothy m. hospedales')
('1700927', 'Tao Xiang', 'tao xiang')
('2073354', 'Shaogang Gong', 'shaogang gong')
('1746280', 'Yuan Yao', 'yuan yao')
{y.fu,t.hospedales,t.xiang,s.gong}@qmul.ac.uk, yuany@math.pku.edu.cn
beb4546ae95f79235c5f3c0e9cc301b5d6fc9374A Modular Approach to Facial Expression Recognition
Cognitive Artificial Intelligence, Utrecht University, Heidelberglaan 6, 3584 CD, Utrecht
Intelligent Systems Group, Utrecht University, Padualaan 14, 3508 TB, Utrecht
('31822812', 'Michal Sindlar', 'michal sindlar')
('1727399', 'Marco Wiering', 'marco wiering')
sindlar@phil.uu.nl
marco@cs.uu.nl
be28ed1be084385f5d389db25fd7f56cd2d7f7bfExploring Computation-Communication Tradeoffs
in Camera Systems
Paul G. Allen School of Computer Science and Engineering, University of Washington
University of Washington
('19170117', 'Amrita Mazumdar', 'amrita mazumdar')
('47108160', 'Thierry Moreau', 'thierry moreau')
('37270394', 'Meghan Cowan', 'meghan cowan')
('1698528', 'Armin Alaghi', 'armin alaghi')
('1717411', 'Luis Ceze', 'luis ceze')
('1723213', 'Mark Oskin', 'mark oskin')
('46829693', 'Visvesh Sathe', 'visvesh sathe')
{amrita,moreau,cowanmeg}@cs.washington.edu, sungk9@uw.edu, {armin,luisceze,oskin}@cs.washington.edu, sathe@uw.edu
bebea83479a8e1988a7da32584e37bfc463d32d4Discovery of Latent 3D Keypoints via
End-to-end Geometric Reasoning
Google AI
('37016781', 'Supasorn Suwajanakorn', 'supasorn suwajanakorn')
('2704494', 'Jonathan Tompson', 'jonathan tompson')
{supasorn, snavely, tompson, mnorouzi}@google.com
bed06e7ff0b510b4a1762283640b4233de4c18e0Bachelor Project
Czech Technical University in Prague
Faculty of Electrical Engineering
Department of Cybernetics
Face Interpretation Problems on Low
Quality Images
Supervisor: Ing. Jan Čech, Ph.D
May 2018
bec31269632c17206deb90cd74367d1e6586f75fLarge-scale Datasets: Faces with Partial
Occlusions and Pose Variations in the Wild
Wayne State University
Detroit, MI, USA 48120
('2489629', 'Zeyad Hailat', 'zeyad hailat')
('35265528', 'Xuewen Chen', 'xuewen chen')
Email: ∗tarik alafif@wayne.edu, †zmhailat@wayne.edu, ‡melih.aslan@wayne.edu, §xuewen.chen@wayne.edu
be5276e9744c4445fe5b12b785650e8f173f56ffSpatio-temporal VLAD Encoding for
Human Action Recognition in Videos
University of Trento, Italy
University Politehnica of Bucharest, Romania
University of Tokyo, Japan
('3429470', 'Ionut C. Duta', 'ionut c. duta')
('1796198', 'Bogdan Ionescu', 'bogdan ionescu')
('1712839', 'Kiyoharu Aizawa', 'kiyoharu aizawa')
('1703601', 'Nicu Sebe', 'nicu sebe')
{ionutcosmin.duta, niculae.sebe}@unitn.it
bionescu@imag.pub.ro
aizawa@hal.t.u-tokyo.ac.jp
be57d2aaab615ec8bc1dd2dba8bee41a4d038b85Automatic Analysis of Naturalistic Hand-Over-Face Gestures
University of Cambridge
One of the main factors that limit the accuracy of facial analysis systems is hand occlusion. As the face
becomes occluded, facial features are lost, corrupted, or erroneously detected. Hand-over-face occlusions are
considered not only very common but also very challenging to handle. However, there is empirical evidence
that some of these hand-over-face gestures serve as cues for recognition of cognitive mental states. In this
article, we present an analysis of automatic detection and classification of hand-over-face gestures. We detect
hand-over-face occlusions and classify hand-over-face gesture descriptors in videos of natural expressions
using multi-modal fusion of different state-of-the-art spatial and spatio-temporal features. We show
experimentally that we can successfully detect face occlusions with an accuracy of 83%. We also
demonstrate that we can classify gesture descriptors (hand shape, hand action, and facial region
occluded) significantly better than a naïve baseline. Our detailed quantitative analysis sheds light
on the challenges of automatic classification of hand-over-face gestures in natural expressions.
Categories and Subject Descriptors: I.2.10 [Vision and Scene Understanding]: Video Analysis
General Terms: Affective Computing, Body Expressions
Additional Key Words and Phrases: Hand-over-face occlusions, face touches, hand gestures, facial landmarks,
histograms of oriented gradient, space-time interest points
ACM Reference Format:
over-face gestures. ACM Trans. Interact. Intell. Syst. 6, 2, Article 19 (July 2016), 18 pages.
DOI: http://dx.doi.org/10.1145/2946796
1. INTRODUCTION
Over the past few years, there has been an increasing interest in machine under-
standing and recognition of people’s affective and cognitive mental states, especially
based on facial expression analysis. One of the major factors that limits the accuracy
of facial analysis systems is hand occlusion. People often hold their hands near their
faces as a gesture in natural conversation. As many facial analysis systems are based
on geometric or appearance based facial features, such features are lost, corrupted,
or erroneously detected during occlusion. This results in an incorrect analysis of the
person's facial expression. Although face touches are very common, they are under-researched,
mostly because segmenting the hand from the face is very challenging, as face and
hand usually have similar colour and texture. Detection of hand-over-face
The research leading to these results received partial funding from the European Community’s Seventh
Framework Programme (FP7/2007-2013) under Grant No. 289021 (ASC-Inclusion). We also thank Yousef
Jameel and Qualcomm for providing funding as well.
Authors’ address: The Computer Laboratory, 15 JJ Thomson Avenue, Cambridge CB3 0FD, United Kingdom;
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted
without fee provided that copies are not made or distributed for profit or commercial advantage and that
copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for
('2022940', 'Marwa Mahmoud', 'marwa mahmoud')
('39626495', 'Peter Robinson', 'peter robinson')
('2022940', 'Marwa Mahmoud', 'marwa mahmoud')
('39626495', 'Peter Robinson', 'peter robinson')
emails: {Marwa.Mahmoud, Tadas.Baltrusaitis, Peter.Robinson}@cl.cam.ac.uk.
be4f18e25b06f430e2de0cc8fddcac8585b00bebA New Face Recognition Algorithm based on
Dictionary Learning for a Single Training
Sample per Person
Ian Wassell
Computer Laboratory,
University of Cambridge
('1681842', 'Yang Liu', 'yang liu')yl504@cam.ac.uk
ijw24@cam.ac.uk
bef503cdfe38e7940141f70524ee8df4afd4f954
beab10d1bdb0c95b2f880a81a747f6dd17caa9c2DeepDeblur: Fast one-step blurry face images restoration
Tsinghua Unversity
('2766905', 'Lingxiao Wang', 'lingxiao wang')
('2112160', 'Yali Li', 'yali li')
('1678689', 'Shengjin Wang', 'shengjin wang')
wlx16@mails.tsinghua.edu.cn, liyali@ocrserv.ee.tsinghua.edu.cn, wgsgj@tsinghua.edu.cn
b331ca23aed90394c05f06701f90afd550131fe3Zhou et al. EURASIP Journal on Image and Video Processing (2018) 2018:49
https://doi.org/10.1186/s13640-018-0287-5
EURASIP Journal on Image
and Video Processing
R ES EAR CH
Double regularized matrix factorization for
image classification and clustering
Open Access
('39147685', 'Wei Zhou', 'wei zhou')
('7513726', 'Chengdong Wu', 'chengdong wu')
('46583983', 'Jianzhong Wang', 'jianzhong wang')
('9305845', 'Xiaosheng Yu', 'xiaosheng yu')
('50130800', 'Yugen Yi', 'yugen yi')
b3b532e8ea6304446b1623e83b0b9a96968f926cJoint Network based Attention for Action Recognition
1 National Engineering Laboratory for Video Technology, School of EE&CS,
Peking University, Beijing, China
2 Cooperative Medianet Innovation Center, China
3 School of Information and Electronics,
Beijing Institute of Technology, Beijing, China
('38179026', 'Yemin Shi', 'yemin shi')
('1705972', 'Yonghong Tian', 'yonghong tian')
('5765799', 'Yaowei Wang', 'yaowei wang')
('34097174', 'Tiejun Huang', 'tiejun huang')
b37f57edab685dba5c23de00e4fa032a3a6e8841Towards Social Interaction Detection in Egocentric Photo-streams
University of Barcelona and Computer Vision Centre, Barcelona, Spain
Recent advances in wearable camera technology have
led to novel applications in the field of Preventive Medicine.
For some of them, such as cognitive training of elderly people
through digital memories and detection of unhealthy social
trends associated with neuropsychological disorders, social
interactions are of special interest. Our purpose is to address
this problem in the domain of egocentric photo-streams cap-
tured by a low temporal resolution wearable camera (2fpm).
These cameras are suited to collecting visual information
over long periods of time, as required by the aforementioned
applications. The major difficulties to be handled in this
context are the sparsity of observations as well as the unpre-
dictability of camera motion and attention orientation due
to the fact that the camera is worn as part of clothing (see
Fig. 1). Inspired by the theory of F-formation which is a
pattern that people tend to follow when interacting [5], our
proposed approach consists of three steps: multi-faces as-
signment, social signals extraction and interaction detection
of the individuals with the camera wearer (see Fig. 2).
1. Multi-face Assignment
While person detection and tracking in classical videos
have been active research areas for a long time, the problem
of people assignment in low temporal resolution egocen-
tric photo-streams is still unexplored. To address such an
issue, we proposed a novel method for multi-face assignment
in egocentric photo-streams, which we call extended-Bag-of-Tracklets
(eBoT) [2]. This approach consists
of 4 major sequential modules: seed and tracklet gener-
ation, grouping tracklets into eBoT, prototypes extraction
and occlusion treatment. Prior to any computation, first, a
temporal segmentation algorithm [6] is applied to extract
segments characterized by similar visual properties. Later
on, a face detector is applied on all the frames of a seg-
ment to detect visible faces on them [8]. Based on the ratio
between the number of frames with detected faces and the
total number of frames of the segment, we extract segments
containing trackable persons. The next steps are applied on
these extracted segments, hereafter referred to as sequences.
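The segment-filtering step can be sketched as follows; `detect_faces` and the 0.5 threshold are hypothetical stand-ins for the face detector of [8] and the paper's unspecified ratio criterion:

```python
def trackable_segments(segments, detect_faces, min_ratio=0.5):
    """Keep segments whose fraction of frames with a detected
    face reaches `min_ratio`; such segments are assumed to
    contain trackable persons."""
    kept = []
    for segment in segments:  # each segment: list of frames
        hits = sum(1 for frame in segment if detect_faces(frame))
        if hits / len(segment) >= min_ratio:
            kept.append(segment)
    return kept
```

The surviving segments are the "sequences" on which the subsequent tracklet steps operate.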
Figure 1. Example of social interaction (first row) and non-social
interaction (second row) in egocentric photo-streams.
• Seed and tracklet generation: The bounding boxes collected around the face of
each person throughout the sequence are called seeds. For each seed, a set of
correspondences is generated along the sequence by propagating the seed forward
and backward using the deep-matching technique [7], forming a tracklet. To
propagate a seed found in one frame to all the frames of the sequence, the region
most similar to the seed in each frame is selected as the one with the highest
deep-matching score.
• Grouping tracklets into Bag-of-Tracklets (eBoT): Assuming that tracklets
generated by seeds belonging to the same person in a sequence are likely to be
similar to each other, we group them into a set of non-overlapping eBoTs. Since
seeds corresponding to false-positive detections generate unreliable tracklets and
unreliable eBoTs, we define a measure based on the density of the eBoTs to
exclude unreliable ones.
• Prototype extraction: A prototype extracted from an eBoT should best represent
all tracklets in the eBoT and, therefore, best localize a person's face in each
frame. For each frame, the bounding box with the largest intersection with the
rest of the tracklets' boxes in that frame is chosen as the prototype.
• Occlusion treatment: Estimating occluded frames is very helpful, since it allows
us to exclude occluded frames, which do not convey much information, from the
final prototypes. To this end, we define a frame confidence measure to assign a
confidence value
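The tracklet propagation and prototype extraction steps above can be sketched with toy stand-ins (`match_score` substitutes for the deep-matching of [7]; boxes are axis-aligned `(x1, y1, x2, y2)` tuples, both assumptions for illustration):

```python
def propagate_seed(seed, frames, match_score):
    """Propagate a seed region through a sequence: in each frame,
    keep the candidate region scoring highest against the seed."""
    return [max(candidates, key=lambda box: match_score(seed, box))
            for candidates in frames]  # each frame: candidate regions

def intersection_area(a, b):
    """Overlap area of two axis-aligned boxes (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def prototype_box(boxes):
    """Prototype selection: the box with the largest total
    intersection with the other tracklets' boxes in a frame."""
    return max(boxes, key=lambda box: sum(
        intersection_area(box, other) for other in boxes if other is not box))
```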
('2084534', 'Maedeh Aghaei', 'maedeh aghaei')
('2837527', 'Mariella Dimiccoli', 'mariella dimiccoli')
('1724155', 'Petia Radeva', 'petia radeva')
aghaei.maya@gmail.com
b3154d981eca98416074538e091778cbc031ca29Pedestrian Attribute Analysis
Using a Top-View Camera in a Public Space
The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
School of Electrical and Computer Engineering, Cornell University
116 Ward Hall, Ithaca, NY 14853, USA
3 JSPS Postdoctoral Fellow for Research Abroad
('2759239', 'Toshihiko Yamasaki', 'toshihiko yamasaki')
('21152852', 'Tomoaki Matsunami', 'tomoaki matsunami')
{yamasaki,matsunami}@hal.t.u-tokyo.ac.jp
b3cb91a08be4117d6efe57251061b62417867de9T. Swearingen and A. Ross. "A label propagation approach for predicting missing biographic labels in
A Label Propagation Approach for
Predicting Missing Biographic Labels
in Face-Based Biometric Records
('3153117', 'Thomas Swearingen', 'thomas swearingen')
('1698707', 'Arun Ross', 'arun ross')
b340f275518aa5dd2c3663eed951045a5b8b0ab1Visual Inference of Human Emotion and Behaviour
Dept of Computer Science
Queen Mary College, London
Dept of Computer Science
Queen Mary College, London
Dept of Computer Science
Queen Mary College, London
England, UK
England, UK
England, UK
('2073354', 'Shaogang Gong', 'shaogang gong')
('10795229', 'Caifeng Shan', 'caifeng shan')
('1700927', 'Tao Xiang', 'tao xiang')
sgg@dcs.qmul.ac.uk
cfshan@dcs.qmul.ac.uk
txiang@dcs.qmul.ac.uk
b3200539538eca54a85223bf0ec4f3ed132d0493Action Anticipation with RBF Kernelized
Feature Mapping RNN
Hartley [ORCID: 0000-0002-5005-0191]
The Australian National University, Australia
('11519650', 'Yuge Shi', 'yuge shi')
b3b467961ba66264bb73ffe00b1830d7874ae8ceFinding Tiny Faces
Robotics Institute
Carnegie Mellon University
Figure 1: We describe a detector that can find around 800 faces out of the reportedly 1000 present, by making use of novel
characterizations of scale, resolution, and context to find small objects. Detector confidence is given by the colorbar on the
right: can you confidently identify errors?
('2894848', 'Peiyun Hu', 'peiyun hu')
('1770537', 'Deva Ramanan', 'deva ramanan')
{peiyunh,deva}@cs.cmu.edu
b3ba7ab6de023a0d58c741d6abfa3eae67227cafZero-Shot Activity Recognition with Verb Attribute Induction
Paul G. Allen School of Computer Science & Engineering
University of Washington
Seattle, WA 98195, USA
('2545335', 'Rowan Zellers', 'rowan zellers')
('1699545', 'Yejin Choi', 'yejin choi')
{rowanz,yejin}@cs.washington.edu
b375db63742f8a67c2a7d663f23774aedccc84e5Brain-inspired Classroom Occupancy
Monitoring on a Low-Power Mobile Platform
Electronic and Information Engineering, University of Bologna, Italy
†Integrated Systems Laboratory, ETH Zurich, Switzerland
('1721381', 'Francesco Conti', 'francesco conti')
('1785226', 'Antonio Pullini', 'antonio pullini')
('1710649', 'Luca Benini', 'luca benini')
f.conti@unibo.it,{pullinia,lbenini}@iis.ee.ethz.ch
b3330adb131fb4b6ebbfacce56f1aec2a61e0869Emotion recognition using facial images
School of Electrical and Electronics Engineering
Department of Electronics and Communication Engineering
SASTRA University, Thanjavur, Tamil Nadu, India
('9365696', 'Siva sankari', 'siva sankari') ramya.ece.sk@gmail.com, siva.ece.ds@gmail.com, knr@ece.sastra.edu
b3c60b642a1c64699ed069e3740a0edeabf1922cMax-Margin Object Detection ('29250541', 'Davis E. King', 'davis e. king')davis@dlib.net
b3f3d6be11ace907c804c2d916830c85643e468dUniversity of Toulouse
University of Toulouse II Le Mirail
PhD in computer sciences / artificial intelligence
A Logical Framework for
Trust-Related Emotions:
Formal and Behavioral Results
by
Co-supervisors:
Toulouse, September 2010
('1759342', 'Manh Hung NGUYEN', 'manh hung nguyen')
('3107309', 'Jean-François BONNEFON', 'jean-françois bonnefon')
('1733042', 'Dominique LONGIN', 'dominique longin')
b3f7c772acc8bc42291e09f7a2b081024a172564 www.ijmer.com Vol. 3, Issue. 5, Sep - Oct. 2013 pp-3225-3230 ISSN: 2249-6645
International Journal of Modern Engineering Research (IJMER)
A novel approach for performance parameter estimation of face
recognition based on clustering, shape and corner detection
('1904292', 'Prashant Jain', 'prashant jain')
b3c398da38d529b907b0bac7ec586c81b851708fFace Recognition under Varying Lighting Conditions Using Self Quotient
Image
Institute of Automation, Chinese Academy of
Sciences, Beijing, 100080, China,
('29948255', 'Haitao Wang', 'haitao wang')
('1744302', 'Yangsheng Wang', 'yangsheng wang')
Email: {htwang,wys}@nlpr.ia.ac.cn
b32cf547a764a4efa475e9c99a72a5db36eeced6UvA-DARE (Digital Academic Repository)
Mimicry of ingroup and outgroup emotional expressions
Sachisthal, M.S.M.; Sauter, D.A.; Fischer, A.H.
Published in:
Comprehensive Results in Social Psychology
DOI:
10.1080/23743603.2017.1298355
Link to publication
Citation for published version (APA):
Sachisthal, M. S. M., Sauter, D. A., & Fischer, A. H. (2016). Mimicry of ingroup and outgroup emotional
expressions. Comprehensive Results in Social Psychology, 1(1-3), 86-105. DOI:
10.1080/23743603.2017.1298355
General rights
It is not permitted to download or to forward/distribute the text or part of it without the consent of the author(s) and/or copyright holder(s),
other than for strictly personal, individual use, unless the work is under an open content license (like Creative Commons).
Disclaimer/Complaints regulations
If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating
your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask
the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam
The Netherlands. You will be contacted as soon as possible.
Download date: 08 Aug 2018
UvA-DARE is a service provided by the library of the University of Amsterdam (http://dare.uva.nl
b3658514a0729694d86a8b89c875a66cde20480cImproving the Robustness of Subspace Learning
Techniques for Facial Expression Recognition
Aristotle University of Thessaloniki
Box 451, 54124 Thessaloniki, Greece
('2342345', 'Dimitris Bolis', 'dimitris bolis')
('2447585', 'Anastasios Maronidis', 'anastasios maronidis')
('1737071', 'Anastasios Tefas', 'anastasios tefas')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
email: {mpolis, amaronidis, tefas, pitas}@aiia.csd.auth.gr
b3b4a7e29b9186e00d2948a1d706ee1605fe5811Paper
Image Preprocessing
for Illumination Invariant Face
Verification
Institute of Radioelectronics, Warsaw University of Technology, Warsaw, Poland
('3031283', 'Mariusz Leszczyński', 'mariusz leszczyński')
b32631f456397462b3530757f3a73a2ccc362342Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)
3069
b33e8db8ccabdfc49211e46d78d09b14557d4cbaFace Expression Recognition and Analysis:
The State of the Art
College of Computing, Georgia Institute of Technology
('3115428', 'Vinay Bettadapura', 'vinay bettadapura')vinay@gatech.edu
b3afa234996f44852317af382b98f5f557cab25a
df90850f1c153bfab691b985bfe536a5544e438bFACE TRACKING ALGORITHM ROBUST TO POSE,
ILLUMINATION AND FACE EXPRESSION CHANGES: A 3D
PARAMETRIC MODEL APPROACH

via Bramante 65 - 26013, Crema (CR), Italy
Luigi Arnone, Fabrizio Beverina
STMicroelectronics - Advanced System Technology Group
via Olivetti 5 - 20041, Agrate Brianza, Italy
Keywords:
Face tracking, expression changes, FACS, illumination changes.
('3330245', 'Marco Anisetti', 'marco anisetti')
('2061298', 'Valerio Bellandi', 'valerio bellandi')
df8da144a695269e159fb0120bf5355a558f4b02International Journal of Computer Applications (0975 – 8887)
International Conference on Recent Trends in engineering & Technology - 2013(ICRTET'2013)
Face Recognition using PCA and Eigen Face
Approach
ME EXTC [VLSI & Embedded System]
Sinhgad Academy of Engineering
EXTC Department
Pune, India
dfd934ae448a1b8947d404b01303951b79b13801Christopher A. Longmore
University of Plymouth, UK
Bournemouth University, UK
Andrew W. Young
University of York, UK
The importance of internal facial features in learning new
faces
Running head: FACIAL FEATURES IN LEARNING NEW FACES
Address of correspondence:
Chris Longmore
School of Psychology
Faculty of Health and Human Sciences
Plymouth University
Drake Circus
Plymouth
PL4 8AA
Tel: +44 (0)1752 584890
Fax: +44 (0)1752 584808
('39557512', 'Chang Hong Liu', 'chang hong liu')Email: chris.longmore@plymouth.ac.uk
df577a89830be69c1bfb196e925df3055cafc0edShift: A Zero FLOP, Zero Parameter Alternative to Spatial Convolutions
UC Berkeley
('3130257', 'Bichen Wu', 'bichen wu')
('40417702', 'Alvin Wan', 'alvin wan')
('27577617', 'Xiangyu Yue', 'xiangyu yue')
('1755487', 'Sicheng Zhao', 'sicheng zhao')
('30096597', 'Noah Golmant', 'noah golmant')
('3647010', 'Amir Gholaminejad', 'amir gholaminejad')
('30503077', 'Joseph Gonzalez', 'joseph gonzalez')
('1732330', 'Kurt Keutzer', 'kurt keutzer')
{bichen,alvinwan,xyyue,phj,schzhao,noah.golmant,amirgh,jegonzal,keutzer}@berkeley.edu
df0e280cae018cebd5b16ad701ad101265c369faDeep Attributes from Context-Aware Regional Neural Codes
Image Processing Center, Beihang University
2 Intel Labs China
Columbia University
('2780589', 'Jianwei Luo', 'jianwei luo')
('35423937', 'Jianguo Li', 'jianguo li')
('1715001', 'Jun Wang', 'jun wang')
('1791565', 'Zhiguo Jiang', 'zhiguo jiang')
('6060281', 'Yurong Chen', 'yurong chen')
dfabe7ef245ca68185f4fcc96a08602ee1afb3f7
df51dfe55912d30fc2f792561e9e0c2b43179089Face Hallucination using Linear Models of Coupled
Sparse Support
grid and fuse them to suppress the aliasing caused by undersampling [5], [6]. On the other hand, learning-based methods use coupled dictionaries to learn the mapping relations between low- and high-resolution image pairs, in order to synthesize high-resolution images from low-resolution ones [4], [7]. The research community has lately focused on the latter category of super-resolution methods, since they can provide higher-quality images and larger magnification factors.
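The coupled low-/high-resolution mapping described here can be sketched, under strong simplifying assumptions, as a single ridge regression between vectorized patch pairs. The data, dimensions, and variable names below are made up for illustration; the actual methods learn coupled sparse dictionaries rather than one global linear map:

```python
import numpy as np

# Synthetic coupled training pairs: columns are vectorized low-res (L)
# and high-res (H) patches; the true relation is linear plus small noise.
rng = np.random.default_rng(1)
n, d_lo, d_hi = 200, 16, 64
L = rng.standard_normal((d_lo, n))
H = rng.standard_normal((d_hi, d_lo)) @ L + 0.01 * rng.standard_normal((d_hi, n))

# Learn a ridge-regression mapping W such that H ~ W L.
lam = 1e-3
W = H @ L.T @ np.linalg.inv(L @ L.T + lam * np.eye(d_lo))

# "Hallucinate" a high-res patch from an unseen low-res input.
l_new = rng.standard_normal((d_lo, 1))
h_new = W @ l_new
```

Dictionary-based methods replace this single global mapping with mappings learned over coupled sparse codes of the patch pairs, which is what enables the larger magnification factors mentioned above.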
('1805605', 'Reuben A. Farrugia', 'reuben a. farrugia')
('1780587', 'Christine Guillemot', 'christine guillemot')
df2c685aa9c234783ab51c1aa1bf1cb5d71a3dbbSREFI: Synthesis of Realistic Example Face Images
University of Notre Dame, USA
FaceTec, Inc
('40061203', 'Sandipan Banerjee', 'sandipan banerjee')
('3365839', 'John S. Bernhard', 'john s. bernhard')
('2613438', 'Walter J. Scheirer', 'walter j. scheirer')
('1799014', 'Kevin W. Bowyer', 'kevin w. bowyer')
('1704876', 'Patrick J. Flynn', 'patrick j. flynn')
{sbanerj1, wscheire, kwb, flynn}@nd.edu
jsbernhardjr@gmail.com
df054fa8ee6bb7d2a50909939d90ef417c73604cImage Quality-Aware Deep Networks Ensemble for Efficient
Gender Recognition in the Wild
Augmented Vision Lab, Technical University Kaiserslautern, Kaiserslautern, Germany
German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
Keywords:
Gender, Face, Deep Neural Networks, Quality, In the Wild
('2585383', 'Mohamed Selim', 'mohamed selim')
('40810260', 'Suraj Sundararajan', 'suraj sundararajan')
('1771057', 'Alain Pagani', 'alain pagani')
('1807169', 'Didier Stricker', 'didier stricker')
{mohamed.selim, alain.pagani, didier.stricker}@dfki.uni-kl.de, s lakshmin13@informatik.uni-kl.de
df80fed59ffdf751a20af317f265848fe6bfb9c9
Learning Deep Sharable and Structural
Detectors for Face Alignment
('40387982', 'Hao Liu', 'hao liu')
('1697700', 'Jiwen Lu', 'jiwen lu')
('2632601', 'Jianjiang Feng', 'jianjiang feng')
('25060740', 'Jie Zhou', 'jie zhou')
dfd8602820c0e94b624d02f2e10ce6c798193a25STRUCTURED ANALYSIS DICTIONARY LEARNING FOR IMAGE CLASSIFICATION
Department of Electrical and Computer Engineering
North Carolina State University, Raleigh, NC, USA
†Army Research Office, RTP, Raleigh, NC, USA
('49501811', 'Wen Tang', 'wen tang')
('1733181', 'Ashkan Panahi', 'ashkan panahi')
('1769928', 'Hamid Krim', 'hamid krim')
('2622498', 'Liyi Dai', 'liyi dai')
{wtang6, apanahi, ahk}@ncsu.edu, liyi.dai@us.army.mil
dff838ba0567ef0a6c8fbfff9837ea484314efc6Progress Report, MSc. Dissertation: On-line
Random Forest for Face Detection
School of Computer Science
The University of Manchester
May 9, 2014
Contents
1 Introduction
2 Background
3 Research Methods
3.1 What the project involves
3.2 The project plan and evaluation of the plan
4 Progress
4.1 Quality attributes
4.2 Prototypes
4.2.1 PGM Image
4.2.2 Working with Haar-like features and Integral Image
4.2.3 Accessing the Webcam Driver
4.2.4 The On-line Random Forest
4.2.5 The First version of the User Interface
4.3 Open discussion about the On-line Random Forest
5 Next Steps and Conclusions
6 References
dfa80e52b0489bc2585339ad3351626dee1a8395Human Action Forecasting by Learning Task Grammars ('22237490', 'Tengda Han', 'tengda han')
('36541522', 'Jue Wang', 'jue wang')
('2691929', 'Anoop Cherian', 'anoop cherian')
('2377076', 'Stephen Gould', 'stephen gould')
df71a00071d5a949f9c31371c2e5ee8b478e7dc8Using Opportunistic Face Logging
from Smartphone to Infer Mental
Health: Challenges and Future
Directions
Dartmouth College
Dartmouth College
Dartmouth College
Permission to make digital or hard copies of all or part of this work for personal
or classroom use is granted without fee provided that copies are not made or
distributed for profit or commercial advantage and that copies bear this notice
and the full citation on the first page. Copyrights for components of this work
('1698066', 'Rui Wang', 'rui wang')
('1690035', 'Andrew T. Campbell', 'andrew t. campbell')
('2253140', 'Xia Zhou', 'xia zhou')
rui.wang@cs.dartmouth.edu
campbell@cs.dartmouth.edu
xia@cs.dartmouth.edu
df9269657505fcdc1e10cf45bbb8e325678a40f5INTERSPEECH 2016
September 8–12, 2016, San Francisco, USA
Open-Domain Audio-Visual Speech Recognition: A Deep Learning Approach
Carnegie Mellon University
('37467623', 'Yajie Miao', 'yajie miao')
('1740721', 'Florian Metze', 'florian metze')
{ymiao,fmetze}@cs.cmu.edu
dfb6aa168177d4685420fcb184def0aa7db7cddbThe Effect of Lighting Direction/Condition on the Performance
of Face Recognition Algorithms
West Virginia University, Morgantown, WV
University of Miami, Coral Gables, FL
('1722978', 'Gamal Fahmy', 'gamal fahmy')
('4562956', 'Ahmed El-Sherbeeny', 'ahmed el-sherbeeny')
('9449390', 'Mohamed Abdel-Mottaleb', 'mohamed abdel-mottaleb')
('16279046', 'Hany Ammar', 'hany ammar')
df2841a1d2a21a0fc6f14fe53b6124519f3812f9Learning Image Attributes
using the Indian Buffet Process
Department of Computer Science
Brown University
Providence, RI 02912
Department of Computer Science
Brown University
Providence, RI 02912
('2059199', 'Soravit Changpinyo', 'soravit changpinyo')
('1799035', 'Erik B. Sudderth', 'erik b. sudderth')
schangpi@cs.brown.edu
sudderth@cs.brown.edu
dfecaedeaf618041a5498cd3f0942c15302e75c3Noname manuscript No.
(will be inserted by the editor)
A Recursive Framework for Expression Recognition: From
Web Images to Deep Models to Game Dataset
Received: date / Accepted: date
('48625314', 'Wei Li', 'wei li')
df5fe0c195eea34ddc8d80efedb25f1b9034d07dRobust Modified Active Shape Model for Automatic Facial Landmark
Annotation of Frontal Faces
('2363348', 'Keshav Seshadri', 'keshav seshadri')
('1794486', 'Marios Savvides', 'marios savvides')
df2494da8efa44d70c27abf23f73387318cf1ca8RESEARCH ARTICLE
Supervised Filter Learning for Representation
Based Face Recognition
College of Computer Science and Information Technology, Northeast Normal University, Changchun
China, 2 Changchun Institute of Optics, Fine Mechanics and Physics, CAS, Changchun, China, 3 School of
Software, Jiangxi Normal University, Nanchang, China, 4 School of Statistics, Capital University of
Economics and Business, Beijing, China
('2498586', 'Chao Bi', 'chao bi')
('1684635', 'Lei Zhang', 'lei zhang')
('7009658', 'Miao Qi', 'miao qi')
('5858971', 'Caixia Zheng', 'caixia zheng')
('3042163', 'Yugen Yi', 'yugen yi')
('1831935', 'Jianzhong Wang', 'jianzhong wang')
('1751108', 'Baoxue Zhang', 'baoxue zhang')
* wangjz019@nenu.edu.cn (JW); zhangbaoxue@cueb.edu.cn (BZ)
df674dc0fc813c2a6d539e892bfc74f9a761fbc8IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p- ISSN: 2278-8727Volume 10, Issue 6 (May. - Jun. 2013), PP 21-29
www.iosrjournals.org
An Image Mining System for Gender Classification & Age
Prediction Based on Facial Features
Ms. Dhanashri Shirkey, Prof. Dr. S. R. Gupta
M.E. (Scholar), Department of Computer Science & Engineering, PRMIT & R, Badnera
Asst. Prof., Department of Computer Science & Engineering, PRMIT & R, Badnera
dad7b8be074d7ea6c3f970bd18884d496cbb0f91Super-Sparse Regression for Fast Age
Estimation From Faces at Test Time
University of Cagliari
Piazza d’Armi, 09123 Cagliari, Italy
WWW home page: http://prag.diee.unica.it
('2272441', 'Ambra Demontis', 'ambra demontis')
('1684175', 'Battista Biggio', 'battista biggio')
('1716261', 'Giorgio Fumera', 'giorgio fumera')
('1710171', 'Fabio Roli', 'fabio roli')
{ambra.demontis,battista.biggio,fumera,roli}@diee.unica.it
daf05febbe8406a480306683e46eb5676843c424Robust Subspace Segmentation with Block-diagonal Prior
National University of Singapore, Singapore
Key Lab. of Machine Perception, School of EECS, Peking University, China
National University of Singapore, Singapore
('33221685', 'Jiashi Feng', 'jiashi feng')
('33383055', 'Zhouchen Lin', 'zhouchen lin')
('1678675', 'Huan Xu', 'huan xu')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
1{a0066331,eleyans}@nus.edu.sg, 2zlin@pku.edu.cn, 3mpexuh@nus.edu.sg
da4170c862d8ae39861aa193667bfdbdf0ecb363Multi-task CNN Model for Attribute Prediction ('3282196', 'Abrar H. Abdulnabi', 'abrar h. abdulnabi')
('22804340', 'Gang Wang', 'gang wang')
('1697700', 'Jiwen Lu', 'jiwen lu')
('2370507', 'Kui Jia', 'kui jia')
da15344a4c10b91d6ee2e9356a48cb3a0eac6a97
da5bfddcfe703ca60c930e79d6df302920ab9465
dac2103843adc40191e48ee7f35b6d86a02ef019
Unsupervised Celebrity Face Naming in Web Videos
('2172810', 'Lei Pang', 'lei pang')
('1751681', 'Chong-Wah Ngo', 'chong-wah ngo')
dae420b776957e6b8cf5fbbacd7bc0ec226b3e2eRECOGNIZING EMOTIONS IN SPONTANEOUS FACIAL EXPRESSIONS
Institut f¨ur Nachrichtentechnik
Universit¨at Karlsruhe (TH), Germany
('2500636', 'Michael Grimm', 'michael grimm')
('1787004', 'Kristian Kroschel', 'kristian kroschel')
grimm@int.uni-karlsruhe.de
daa02cf195818cbf651ef81941a233727f71591fFace recognition system on Raspberry Pi
Institute of Electronics and Computer Science
14 Dzerbenes Street, Riga, LV 1006, Latvia
('2059963', 'Olegs Nikisins', 'olegs nikisins')
('2337567', 'Rihards Fuksis', 'rihards fuksis')
('3199162', 'Arturs Kadikis', 'arturs kadikis')
('3310787', 'Modris Greitans', 'modris greitans')
daa52dd09b61ee94945655f0dde216cce0ebd505Recognizing Micro-Actions and Reactions from Paired Egocentric Videos
The University of Tokyo
Carnegie Mellon University
The University of Tokyo
Tokyo, Japan
Pittsburgh, PA, USA
Tokyo, Japan
('1899753', 'Ryo Yonetani', 'ryo yonetani')
('37991449', 'Kris M. Kitani', 'kris m. kitani')
('9467266', 'Yoichi Sato', 'yoichi sato')
yonetani@iis.u-tokyo.ac.jp
kkitani@cs.cmu.edu
ysato@iis.u-tokyo.ac.jp
daba8f0717f3f47c272f018d0a466a205eba6395
daefac0610fdeff415c2a3f49b47968d84692e87New Orleans, Louisiana, June 1 - 6, 2018. ©2018 Association for Computational Linguistics
Proceedings of NAACL-HLT 2018, pages 1481–1491
b49affdff167f5d170da18de3efa6fd6a50262a2Author manuscript, published in "Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition, Marseille : France
(2008)"
b4d694961d3cde43ccef7d8fcf1061fe0d8f97f3Rapid Face Recognition Using Hashing
Australian National University, and NICTA
Australian National University, and NICTA
Canberra, Australia
Canberra, Australia
NICTA, and Australian National University
Canberra, Australia
('3177281', 'Qinfeng Shi', 'qinfeng shi')
('1711119', 'Hanxi Li', 'hanxi li')
('1780381', 'Chunhua Shen', 'chunhua shen')
b4ee1b468bf7397caa7396cfee2ab5f5ed6f2807A short review and primer on electromyography
in human computer interaction applications
Helsinki Collegium for Advanced Studies, University of Helsinki, Finland
Helsinki Institute for Information Technology, Aalto University, Finland
School of Business, Aalto University, Finland
Quantitative Employee unit, Finnish Institute of Occupational Health
POBox 40, Helsinki, 00250, Finland
Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of
Helsinki, Finland
('1751008', 'Niklas Ravaja', 'niklas ravaja')
('1713422', 'Jari Torniainen', 'jari torniainen')
benjamin.cowley@ttl.fi,
b446bcd7fb78adfe346cf7a01a38e4f43760f363To appear in ICB 2018
Longitudinal Study of Child Face Recognition
Michigan State University
East Lansing, MI, USA
Malaviya National Institute of Technology
Jaipur, India
Michigan State University
East Lansing, MI, USA
('32623642', 'Debayan Deb', 'debayan deb')
('2117075', 'Neeta Nain', 'neeta nain')
('1739705', 'Anil K. Jain', 'anil k. jain')
debdebay@msu.edu
nnain.cse@mnit.ac.in
jain@cse.msu.edu
b417b90fa0c288bbaab1aceb8ebc7ec1d3f33172Face Aging with Contextual Generative Adversarial Nets
SKLOIS, IIE, CAS
SKLOIS, IIE, CAS
School of Cyber Security, UCAS
SKLOIS, IIE, CAS
University of Trento, Italy
Qihoo 360 AI Institute, Beijing, China
National University of singapore
SKLOIS, IIE, CAS
School of Cyber Security, UCAS
Nanjing University of Science and
Technology
('38110120', 'Si Liu', 'si liu')
('7760591', 'Renda Bao', 'renda bao')
('39711014', 'Yao Sun', 'yao sun')
('1699978', 'Wei Wang', 'wei wang')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('4661961', 'Defa Zhu', 'defa zhu')
('2287686', 'Xiangbo Shu', 'xiangbo shu')
liusi@iie.ac.cn
roger bao@163.com
sunyao@iie.ac.cn
wangwei1990@gmail.com
eleyans@nus.edu.sg
18502408950@163.com
shuxb@njust.edu.cn
b41374f4f31906cf1a73c7adda6c50a78b4eb498This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
Iterative Gaussianization: From ICA to
Random Rotations
('2732577', 'Valero Laparra', 'valero laparra')
('1684246', 'Gustavo Camps-Valls', 'gustavo camps-valls')
('2186866', 'Jesús Malo', 'jesús malo')
b42a97fb47bcd6bfa72e130c08960a77ee96f9abFACIAL EXPRESSION RECOGNITION BASED ON GRAPH-PRESERVING SPARSE
NON-NEGATIVE MATRIX FACTORIZATION
Institute of Information Science
Beijing Jiaotong University
Beijing 100044, P.R. China
Qiuqi Ruan
ACCESS Linnaeus Center
KTH Royal Institute of Technology, Stockholm
School of Electrical Engineering
('3247912', 'Ruicong Zhi', 'ruicong zhi')
('1749334', 'Markus Flierl', 'markus flierl')
{05120370, qqruan}@bjtu.edu.cn
{ruicong, mflierl, bastiaan}@kth.se
b4d209845e1c67870ef50a7c37abaf3770563f3eGHODRATI, GAVVES, SNOEK: VIDEO TIME
Video Time: Properties, Encoders and
Evaluation
Cees G. M. Snoek
QUVA Lab
University of Amsterdam
Netherlands
('3060081', 'Amir Ghodrati', 'amir ghodrati')
('2304222', 'Efstratios Gavves', 'efstratios gavves')
{a.ghodrati,egavves,cgmsnoek}@uva.nl
b4d7ca26deb83cec1922a6964c1193e8dd7270e7
b4ee64022cc3ccd14c7f9d4935c59b16456067d3Unsupervised Cross-Domain Image Generation ('40084473', 'Davis Rempe', 'davis rempe')
('9184695', 'Haotian Zhang', 'haotian zhang')
b40290a694075868e0daef77303f2c4ca1c43269第 40 卷 第 4 期
2014 年 4 月
自 动 化 学 报
ACTA AUTOMATICA SINICA
Vol. 40, No. 4
April, 2014
融合局部与全局信息的头发形状模型
王 楠 1 艾海舟 1
摘 要 头发在人体表观中具有重要作用, 然而, 因为缺少有效的形状模型, 头发分割仍然是一个非常具有挑战性的问题. 本
文提出了一种基于部件的模型, 它对头发形状以及环境变化更加鲁棒. 该模型将局部与全局信息相结合以描述头发的形状. 局
部模型通过一系列算法构建, 包括全局形状词表生成, 词表分类器学习以及参数优化; 而全局模型刻画不同的发型, 采用支持
向量机 (Support vector machine, SVM) 来学习, 它为所有潜在的发型配置部件并确定势函数. 在消费者图片上的实验证明
了本文算法在头发形状多变和复杂环境等条件下的准确性与有效性.
关键词 头发形状建模, 部件模型, 部件配置算法, 支持向量机
引用格式 王楠, 艾海舟. 融合局部与全局信息的头发形状模型. 自动化学报, 2014, 40(4): 615−623
DOI 10.3724/SP.J.1004.2014.00615
Combining Local and Global Information for Hair Shape Modeling
AI Hai-Zhou1
('3666771', 'WANG Nan', 'wang nan')
b4362cd87ad219790800127ddd366cc465606a78Sensors 2015, 15, 26756-26768; doi:10.3390/s151026756
OPEN ACCESS
sensors
ISSN 1424-8220
www.mdpi.com/journal/sensors
Article
A Smartphone-Based Automatic Diagnosis System for Facial
Nerve Palsy
Interdisciplinary Program of Bioengineering, Seoul National University, Seoul 03080, Korea
Head and Neck Surgery, Seoul National University
College of Medicine, Seoul National University
Seoul 03080, Korea
Fax: +82-2-870-3863 (Y.H.K.); +82-2-3676-1175 (K.S.P.).
Academic Editor: Ki H. Chon
Received: 31 July 2015 / Accepted: 19 October 2015 / Published: 21 October 2015
('31812715', 'Hyun Seok Kim', 'hyun seok kim')
('2189639', 'So Young Kim', 'so young kim')
('40219387', 'Young Ho Kim', 'young ho kim')
('1972762', 'Kwang Suk Park', 'kwang suk park')
E-Mail: khs0330kr@bmsil.snu.ac.kr
Boramae Medical Center, Seoul 07061, Korea; E-Mail: sossi81@hanmail.net
* Authors to whom correspondence should be addressed; E-Mails: yhkiment@gmail.com (Y.H.K.);
pks@bmsil.snu.ac.kr (K.S.P.); Tel.: +82-2-870-2442 (Y.H.K.); +82-2-2072-3135 (K.S.P.);
b4f4b0d39fd10baec34d3412d53515f1a4605222Every Picture Tells a Story:
Generating Sentences from Images
1 Computer Science Department
University of Illinois at Urbana-Champaign
2 Computer Vision Group, School of Mathematics
Institute for studies in theoretical Physics and Mathematics(IPM
('2270286', 'Ali Farhadi', 'ali farhadi')
('1888731', 'Mohsen Hejrati', 'mohsen hejrati')
('21160985', 'Mohammad Amin Sadeghi', 'mohammad amin sadeghi')
('35527128', 'Peter Young', 'peter young')
('3125805', 'Cyrus Rashtchian', 'cyrus rashtchian')
('3118681', 'Julia Hockenmaier', 'julia hockenmaier')
{afarhad2,pyoung2,crashtc2,juliahmr,daf}@illinois.edu
{m.a.sadeghi,mhejrati}@gmail.com
b4b0bf0cbe1a2c114adde9fac64900b2f8f6fee4Autonomous Learning Framework Based on Online Hybrid
Classifier for Multi-view Object Detection in Video
aSchool of Electronic Information and Mechanics, China University of Geosciences, Wuhan, Hubei 430074, China
bSchool of Automation, China University of Geosciences, Wuhan, Hubei 430074, China
cHuizhou School Affiliated to Beijing Normal University, Huizhou 516002, China
dNational Key Laboratory of Science and Technology on Multispectral Information Processing, School of Automation, Huazhong
University of Science and Technology, Wuhan, 430074, China
('2588731', 'Dapeng Luo', 'dapeng luo')
b43b6551ecc556557b63edb8b0dc39901ed0343bICA AND GABOR REPRESENTATION FOR FACIAL EXPRESSION RECOGNITION
I. Buciu, C. Kotropoulos, and I. Pitas
Aristotle University of Thessaloniki
GR-54124, Thessaloniki, Box 451, Greece, {nelu,costas,pitas}@zeus.csd.auth.gr
a255a54b8758050ea1632bf5a88a201cd72656e1Nonparametric Facial Feature Localization
J. K. Aggarwal
Computer and Vision Research Center
The University of Texas at Austin
('2622649', 'Birgi Tamersoy', 'birgi tamersoy')
('1713065', 'Changbo Hu', 'changbo hu')
birgi@utexas.edu
changbo.hu@gmail.com
aggarwaljk@mail.utexas.edu
a2b9cee7a3866eb2db53a7d81afda72051fe9732Reconstructing a Fragmented Face from an Attacked
Secure Identification Protocol
Department of Computer Science
University of Texas at Austin
May 6, 2011
('39573884', 'Andy Luong', 'andy luong')
('1794409', 'Kristen Grauman', 'kristen grauman')
aluong@cs.utexas.edu
a285b6edd47f9b8966935878ad4539d270b406d1Sensors 2011, 11, 9573-9588; doi:10.3390/s111009573
OPEN ACCESS
sensors
ISSN 1424-8220
www.mdpi.com/journal/sensors
Article
Facial Expression Recognition Based on Local Binary Patterns
and Kernel Discriminant Isomap
Taizhou University, Taizhou 317000, China
School of Physics and Electronic Engineering, Taizhou University, Taizhou 318000, China
Tel.: +86-576-8513-7178; Fax: +86-576-8513-7178.
Received: 31 August 2011; in revised form: 27 September 2011 / Accepted: 9 October 2011 /
Published: 11 October 2011
('48551029', 'Xiaoming Zhao', 'xiaoming zhao')
('1695589', 'Shiqing Zhang', 'shiqing zhang')
E-Mail: tzczsq@163.com
* Author to whom correspondence should be addressed; E-Mail: tzxyzxm@163.com;
a2bd81be79edfa8dcfde79173b0a895682d62329Multi-Objective Vehicle Routing Problem Applied to
Large Scale Post Office Deliveries
Zenia
aSchool of Technology, University of Campinas
Paschoal Marmo, 1888, Limeira, SP, Brazil
('1788152', 'Luis A. A. Meira', 'luis a. a. meira')
('37279198', 'Paulo S. Martins', 'paulo s. martins')
('7809605', 'Mauro Menzori', 'mauro menzori')
a2359c0f81a7eb032cff1fe45e3b80007facaa2aTowards Structured Analysis of Broadcast Badminton Videos
C.V.Jawahar
CVIT, KCIS, IIIT Hyderabad
('2964097', 'Anurag Ghosh', 'anurag ghosh')
('48039353', 'Suriya Singh', 'suriya singh')
{anurag.ghosh, suriya.singh}@research.iiit.ac.in, jawahar@iiit.ac.in
a2eb90e334575d9b435c01de4f4bf42d2464effcA NEW SPARSE IMAGE REPRESENTATION
ALGORITHM APPLIED TO FACIAL
EXPRESSION RECOGNITION
Ioan Buciu and Ioannis Pitas
Department of Informatics
Aristotle University of Thessaloniki, GR-541 24 Thessaloniki, Greece
Phone: +30-231-099-6361
Fax: +30-231-099-8453
Web: http://poseidon.csd.auth.gr
E-mail: nelu,pitas@zeus.csd.auth.gr
a25106a76af723ba9b09308a7dcf4f76d9283589 Available Online at www.ijcsmc.com
International Journal of Computer Science and Mobile Computing
A Monthly Journal of Computer Science and Information Technology
ISSN 2320–088X
IJCSMC, Vol. 3, Issue. 4, April 2014, pg.139 – 146
RESEARCH ARTICLE
Local Octal Pattern: A Proficient Feature
Extraction for Face Recognition
Computer Science and Engineering, Easwari Engineering College, India
Computer Science and Engineering, Anna University, India
('3263740', 'S Chitrakala', 's chitrakala')1 nithya.jagan90@gamil.com
2 suchitra.s@srmeaswari.ac.in
3 ckgops@gmail.com
a2d9c9ed29bbc2619d5e03320e48b45c15155195
a29a22878e1881d6cbf6acff2d0b209c8d3f778bBenchmarking Still-to-Video Face Recognition
via Partial and Local Linear Discriminant
Analysis on COX-S2V Dataset
Key Lab of Intelligent Information Processing, Institute of Computing Technology
Chinese Academy of Sciences, Beijing 100190, China
University of Chinese Academy of Sciences, Beijing 100049, China
3OMRON Social Solutions Co. Ltd, Kyoto, Japan
College of Information Science and Engineering, Xinjiang University
('7945869', 'Zhiwu Huang', 'zhiwu huang')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1705483', 'Haihong Zhang', 'haihong zhang')
('1710195', 'Shihong Lao', 'shihong lao')
('1710220', 'Xilin Chen', 'xilin chen')
{zhiwu.huang, shiguang.shan}@vipl.ict.ac.cn,
angelazhang@ssb.kusatsu.omron.co.jp, lao@ari.ncl.omron.co.jp,
ghalipk@xju.edu.cn, xilin.chen@vipl.ict.ac.cn
a2429cc2ccbabda891cc5ae340b24ad06fcdbed5Discovering the Signatures of Joint Attention in Child-Caregiver Interaction
Department of Computer Science
Department of Psychology
Stanford University
Department of Psychology
Stanford University
Department of Computer Science
Stanford University
Department of Psychology
Stanford University
('2536223', 'Michael C. Frank', 'michael c. frank')
('7211962', 'Laura Soriano', 'laura soriano')
('3147852', 'Guido Pusiol', 'guido pusiol')
('3216322', 'Li Fei-Fei', 'li fei-fei')
guido@cs.stanford.edu
lsoriano@stanford.edu
feifeili@stanford.edu
mcfrank@stanford.edu
a2b54f4d73bdb80854aa78f0c5aca3d8b56b571d
a27735e4cbb108db4a52ef9033e3a19f4dc0e5faIntention from Motion ('40063519', 'Andrea Zunino', 'andrea zunino')
('3393678', 'Jacopo Cavazza', 'jacopo cavazza')
('34465973', 'Atesh Koul', 'atesh koul')
('37783905', 'Andrea Cavallo', 'andrea cavallo')
('1834966', 'Cristina Becchio', 'cristina becchio')
('1727204', 'Vittorio Murino', 'vittorio murino')
a2bcfba155c990f64ffb44c0a1bb53f994b68a15The Photoface Database
Imperial College London
180 Queen’s Gate, London SW7 2AZ UK.
Machine Vision Lab, Faculty of Environment and Technology, University of the West of England
Faculty of Computing, Information Systems and Mathematics, Kingston University London
Frenchay Campus, Bristol BS16 1QY UK.
Exhibition Road, South Kensington Campus, London SW7 2AZ UK.
River House, 53-57 High Street, Kingston upon Thames, Surrey KT1 1LQ UK.
Imperial College London
Informatics and Telematics Institute, Centre of Research and Technology - Hellas
6th km Xarilaou - Thermi, Thessaloniki 57001 Greece
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1689047', 'Vasileios Argyriou', 'vasileios argyriou')
('2871609', 'Maria Petrou', 'maria petrou')
{s.zafeiriou,maria.petrou}@imperial.ac.uk, vasileios.argyriou@kingston.ac.uk
{mark.hansen,gary.atkinson,melvyn.smith,lyndon.smith}@uwe.ac.uk. ∗
a2fbaa0b849ecc74f34ebb36d1442d63212b29d2 Volume 5, Issue 6, June 2015 ISSN: 2277 128X
International Journal of Advanced Research in
Computer Science and Software Engineering
Research Paper
Available online at: www.ijarcsse.com
An Efficient Approach to Face Recognition of Surgically
Altered Images
Department of computer science and engineering
SUS college of Engineering and Technology
Tangori, District, Mohali, Punjab, India
a50b4d404576695be7cd4194a064f0602806f3c4In Proceedings of BMVC, Edimburgh, UK, September 2006
Efficiently estimating facial expression and
illumination in appearance-based tracking
†ESCET, U. Rey Juan Carlos
C/ Tulip´an, s/n
28933 M´ostoles, Spain
‡Facultad Inform´atica, UPM
Campus de Montegancedo s/n
28660 Boadilla del Monte, Spain
http://www.dia.fi.upm.es/~pcr
('1778998', 'Luis Baumela', 'luis baumela')
a59cdc49185689f3f9efdf7ee261c78f9c180789JOURNAL OF INFORMATION SCIENCE AND ENGINEERING XX, XXX-XXX (2015)
A New Approach for Learning Discriminative Dictionary
for Pattern Classification
THUY THI NGUYEN1, BINH THANH HUYNH2 AND SANG VIET DINH2
1Faculty of Information Technology
Vietnam National University of Agriculture
Trau Quy town, Gialam, Hanoi, Vietnam
2School of Information and Communication Technology
Hanoi University of Science and Technology
No 1, Dai Co Viet Street, Hanoi, Vietnam
Dictionary learning (DL) for sparse coding based classification has been widely researched in pattern recognition in recent years. Most DL approaches have focused on the reconstruction performance and the discriminative capability of the learned dictionary. This paper proposes a new method for learning a discriminative dictionary for sparse representation based classification, called Incoherent Fisher Discrimination Dictionary Learning (IFDDL). IFDDL combines the Fisher Discrimination Dictionary Learning (FDDL) method, which learns a structured dictionary exploiting the class labels and a discrimination criterion, with the Incoherent Dictionary Learning (IDL) method, which learns a dictionary exploiting the mutual incoherence between pairs of atoms. In the combination, instead of considering the incoherence between atoms in a single shared dictionary as in IDL, we propose to enforce the incoherence between pairs of atoms within each sub-dictionary, each of which represents a specific object class. This aims to increase the discrimination capacity between the basic atoms in the sub-dictionaries. The combination allows one to exploit the advantages of both methods and the discrimination capacity of the entire dictionary. Extensive experiments have been conducted on benchmark image data sets for face recognition (the ORL, Extended Yale B, and AR databases) and digit recognition (the USPS database). The experimental results show that our proposed method outperforms most state-of-the-art methods for sparse coding and DL based classification, while maintaining similar complexity.
Keywords: dictionary learning, sparse coding, Fisher criterion, pattern recognition, object classification
1. INTRODUCTION
Sparse representation (or sparse coding) has been widely used in many problems of image processing and computer vision [1, 2], audio processing [3, 4], as well as classification [5-9], and has achieved very impressive results. In this model, an input signal is decomposed into a sparse linear combination of a few atoms from an over-complete dictionary. In general, the goal of sparse representation is to represent input signals by a linear combination of atoms (or words). This is done by minimizing the reconstruction error under a sparsity constraint:
min_{D,X} ||A − DX||_F^2 + λ||X||_1    (1)

Received February 15, 2015; revised June 18, 2015; accepted July 9, 2015.
Communicated by Hsin-Min Wang.
E-mail: myngthuy@gmail.com
E-mail: {binhht; sangdv}@soict.hust.edu.vn
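Objective (1) above is the standard sparse coding problem. As a minimal illustration (a sketch, not the IFDDL algorithm itself, and with hypothetical helper name and parameters), the coding step for a fixed dictionary D can be solved by iterative shrinkage-thresholding (ISTA):

```python
import numpy as np

def ista_sparse_code(A, D, lam=0.05, n_iter=1000):
    """Solve min_X ||A - D X||_F^2 + lam * ||X||_1 for a fixed dictionary D
    via iterative shrinkage-thresholding (ISTA)."""
    # Step size 1/L, where L is the Lipschitz constant of the smooth term's gradient.
    L = 2 * np.linalg.norm(D, 2) ** 2
    X = np.zeros((D.shape[1], A.shape[1]))
    for _ in range(n_iter):
        grad = 2 * D.T @ (D @ X - A)            # gradient of ||A - DX||_F^2
        Z = X - grad / L                        # gradient step
        X = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)  # soft-thresholding
    return X

# Tiny demo: code a signal that is exactly 3-sparse in a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms, as usual in DL
X_true = np.zeros((50, 1))
X_true[[3, 17, 42], 0] = [1.5, -2.0, 1.0]
A = D @ X_true
X_hat = ista_sparse_code(A, D)
```

A full DL method such as FDDL or IFDDL alternates a coding step of this kind with dictionary updates, and augments (1) with the discriminative Fisher and incoherence terms.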
a5e5094a1e052fa44f539b0d62b54ef03c78bf6aDetection without Recognition for Redaction
Rochester Institute of Technology - 83 Lomb Memorial Drive, Rochester, NY USA
2Conduent, Conduent Labs - US, 800 Phillips Rd, MS128, Webster, NY USA, 14580
('3424086', 'Shagan Sah', 'shagan sah')
('40492623', 'Ram Longman', 'ram longman')
('29980978', 'Ameya Shringi', 'ameya shringi')
('1736673', 'Robert Loce', 'robert loce')
('39834006', 'Majid Rabbani', 'majid rabbani')
('32847225', 'Raymond Ptucha', 'raymond ptucha')
Email: sxs4337@rit.edu
a5c8fc1ca4f06a344b53dc81ebc6d87f54896722Learning to see people like people
University of California, San Diego
9500 Gilman Dr, La Jolla, CA 92093
University of California, San Diego
9500 Gilman Dr, La Jolla, CA 92093
Purdue University
610 Purdue Mall, West Lafayette, IN 47907
Garrison Cottrell
University of California, San Diego
9500 Gilman Dr, La Jolla, CA 92093
('9409376', 'Amanda Song', 'amanda song')
('13212680', 'Chad Atalla', 'chad atalla')
('11157727', 'Linjie Li', 'linjie li')
feijuejuanling@gmail.com
li2477@purdue.edu
catalla@ucsd.edu
gary@ucsd.edu
a5ade88747fa5769c9c92ffde9b7196ff085a9ebWhy is Facial Expression Analysis in the Wild
Challenging?
Institute for Anthropomatics
Karlsruhe Institute of Technology, Germany
Hazım Kemal Ekenel
Faculty of Computer and Informatics
Istanbul Technical University, Turkey
Institute for Anthropomatics
Karlsruhe Institute of Technology, Germany
('40303076', 'Tobias Gehrig', 'tobias gehrig')tobias.gehrig@kit.edu
ekenel@itu.edu.tr
a56c1331750bf3ac33ee07004e083310a1e63ddcVol. xx, pp. x
© xxxx Society for Industrial and Applied Mathematics
x–x
Efficient Point-to-Subspace Query in ℓ1 with Application to Robust Object
Instance Recognition
('1699024', 'Ju Sun', 'ju sun')
('2580421', 'Yuqian Zhang', 'yuqian zhang')
('1738310', 'John Wright', 'john wright')
a54e0f2983e0b5af6eaafd4d3467b655a3de52f4Face Recognition Using Convolution Filters and
Neural Networks
Head, Dept. of E&E,PEC
Sec-12, Chandigarh – 160012
Department of CSE & IT, PEC
Sec-12, Chandigarh – 160012
C.P. Singh
Physics Department, CFSL,
Sec-36, Chandigarh - 160036
('1734714', 'V. Rihani', 'v. rihani')
('2927010', 'Amit Bhandari', 'amit bhandari')
vrihani@yahoo.com
amit.bhandari@yahoo.com
cpureisingh@yahoo.com
a5625cfe16d72bd00e987857d68eb4d8fc3ce4fbVFSC: A Very Fast Sparse Clustering to Cluster Faces
from Videos
University of Science, VNU-HCMC, Ho Chi Minh city, Vietnam
('2187730', 'Dinh-Luan Nguyen', 'dinh-luan nguyen')
('1780348', 'Minh-Triet Tran', 'minh-triet tran')
1212223@student.hcmus.edu.vn
tmtriet@fit.hcmus.edu.vn
a5f11c132eaab258a7cea2d681875af09cddba65A spatiotemporal model with visual attention for
video classification
Department of Electrical and Computer Engineering
University of California San Diego, La Jolla, California, USA
This paper proposes a spatiotemporal model in which a CNN and
an RNN are concatenated, as shown in Fig. 1.
('2493180', 'Mo Shan', 'mo shan')
('50365495', 'Nikolay Atanasov', 'nikolay atanasov')
Email: {moshan, natanasov}@eng.ucsd.edu
a546fd229f99d7fe3cf634234e04bae920a2ec33RESEARCH ARTICLE
Fast Fight Detection
1 Department of Systems Engineering and Automation, E.T.S.I. Industriales, Ciudad Real, Castilla-La
Mancha, Spain, Imperial College, London, UK
('5463808', 'Ismael Serrano Gracia', 'ismael serrano gracia')
('8952654', 'Oscar Deniz Suarez', 'oscar deniz suarez')
('8219927', 'Gloria Bueno Garcia', 'gloria bueno garcia')
('1700968', 'Tae-Kyun Kim', 'tae-kyun kim')
* ismael.serrano@uclm.es (ISG); oscar.deniz@uclm.es (ODS); gloria.bueno@uclm.es (GBG)
a538b05ebb01a40323997629e171c91aa28b8e2fRectified Linear Units Improve Restricted Boltzmann Machines
Geoffrey E. Hinton
University of Toronto, Toronto, ON M5S 2G4, Canada
('4989209', 'Vinod Nair', 'vinod nair')vnair@cs.toronto.edu
hinton@cs.toronto.edu
a57ee5a8fb7618004dd1def8e14ef97aadaaeef5Fringe Projection Techniques: Whither we are?
Applied computing and mechanics laboratory, Swiss Federal Institute of Technology, 1015 Lausanne, Switzerland
During recent years, the use of fringe projection techniques
for generating three-dimensional (3D) surface information has
become one of the most active research areas in optical metrol-
ogy.
Its applications range from measuring the 3D shape of
MEMS components to the measurement of flatness of large
panels (2.5 m × 0.45 m). The technique has found various ap-
plications in diverse fields: biomedical applications such as
3D intra-oral dental measurements [1], non-invasive 3D imag-
ing and monitoring of vascular wall deformations [2], human
body shape measurement for shape guided radiotherapy treat-
ment [3, 4], lower back deformation measurement [5], detection
and monitoring of scoliosis [6], inspection of wounds [7, 8]
and skin topography measurement for use in cosmetology [9,
10, 11];
industrial and scientific applications such as char-
acterization of MEMS components [12, 13], vibration analy-
sis [14, 15], refractometry [16], global measurement of free
surface deformations [17, 18], local wall thickness measure-
ment of forced sheet metals [19], corrosion analysis [20, 21],
measurement of surface roughness [22, 23], reverse engineer-
ing [24, 25, 26], quality control of printed circuit board man-
ufacturing [27, 28, 29] and heat-flow visualization [30]; kine-
matics applications such as measuring the shape and position
of a moving object/creature [31, 32] and the study of kinemat-
ical parameters of dragonfly in free flight [33, 34]; biometric
identification applications such as 3D face reconstruction for
the development of robust face recognition systems [35, 36];
cultural heritage and preservation [37, 38, 39] etc.
One of the outstanding features of some of the fringe pro-
jection techniques is their ability to provide high-resolution,
whole-field 3D reconstruction of objects in a non-contact man-
ner at video frame rates. This feature has backed the technique
to pervade new areas of applications such as security systems,
gaming and virtual reality. To gain insights into the series of
contributions that have helped in unfolding the technique to ac-
quire this feature, the reader is referred to the review articles in
this special issue by Song Zhang, and Xianyu Su et al.
A typical fringe projection profilometry system is shown in Fig. 1.
Figure 1: Fringe projection profilometry system
It consists of a projection unit, an image acquisition
unit and a processing/analysis unit. Measurement of shape
through fringe projection techniques involves (1) projecting a
structured pattern (usually a sinusoidal fringe pattern) onto the
object surface, (2) recording the image of the fringe pattern
that is phase modulated by the object height distribution, (3)
calculating the phase modulation by analyzing the image with
one of the fringe analysis techniques (such as the Fourier transform
method, phase stepping and spatial phase detection methods,
most of which generate a wrapped phase distribution), (4) using a
suitable phase unwrapping algorithm to get continuous phase
distribution which is proportional to the object height varia-
tions, and finally (5) calibrating the system for mapping the
unwrapped phase distribution to real world 3-D co-ordinates.
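Steps (3) and (4) can be sketched for the common four-step phase-stepping case, where four fringe images shifted by π/2 yield the wrapped phase via an arctangent, and unwrapping restores the continuous phase. The one-dimensional synthetic example below is purely illustrative (the Gaussian "bump" stands in for an object's height-induced phase modulation; no calibration step is modeled):

```python
import numpy as np

# Synthetic carrier x and object-induced phase modulation (illustrative only).
x = np.linspace(0, 4 * np.pi, 500)
phi_true = 2.0 * np.exp(-((x - 6.0) ** 2) / 4.0)          # "object" phase bump

# Steps (1)-(2): record four fringe images with pi/2 phase steps.
I = [1.0 + 0.5 * np.cos(x + phi_true + k * np.pi / 2) for k in range(4)]

# Step (3): the four-step phase-stepping formula gives the wrapped phase.
phi_wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])        # values in (-pi, pi]

# Step (4): phase unwrapping restores a continuous phase distribution.
phi_unwrapped = np.unwrap(phi_wrapped)

# Removing the carrier x leaves (up to an offset) the object phase.
phi_object = phi_unwrapped - x
print(np.allclose(phi_object - phi_object[0], phi_true - phi_true[0], atol=0.05))  # → True
```

Step (5), calibration, would then map this unwrapped phase to real-world height coordinates.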
Fig. 2 shows the flowchart that depicts different steps involved
in the measurement of height distribution of an object using the
fringe projection technique and the role of each step. A pic-
torial representation of the same with more details is shown in
Fig. 3.
During the last three decades, fringe projection techniques
have developed tremendously due to the contribution of large
number of researchers and the developments can be broadly
categorized as follows: design or structure of the pattern
used for projection [40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
method of generating and projecting the patterns [50, 51, 52,
53, 54, 55, 56, 57, 58, 59, 60, 61, 62], study of errors
caused by the equipment used and proposing possible correc-
tions [63, 64, 65, 66], developing new fringe analysis meth-
ods to extract underlying phase distribution [67, 68, 69, 70,
71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83], improv-
ing existing fringe analysis methods [84, 85, 86, 87, 88, 89,
90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100], phase unwrapping
algorithms [101, 102, 103, 104, 105, 106, 107, 108, 109], cal-
ibration techniques [110, 111, 112, 113, 114, 115, 116, 117,
118, 119, 120, 121, 122, 123], scale of measurement (mi-
Preprint submitted to Optics and Lasers in Engineering
September 1, 2009
('1694155', 'Sai Siva Gorthi', 'sai siva gorthi')
('32741407', 'Pramod Rastogi', 'pramod rastogi')
a5ae7fe2bb268adf0c1cd8e3377f478fca5e4529Exemplar Hidden Markov Models for Classification of Facial Expressions in
Videos
Univ. of California San Diego
Univ. of Canberra, Australian
Univ. of California San Diego
Marian Bartlett
California, USA
National University
Australia
California, USA
('1735697', 'Abhinav Dhall', 'abhinav dhall')
('39707211', 'Karan Sikka', 'karan sikka')
ksikka@ucsd.edu
mbartlett@ucsd.edu
abhinav.dhall@anu.edu
a55efc4a6f273c5895b5e4c5009eabf8e5ed0d6a818
Continuous Head Movement Estimator for
Driver Assistance: Issues, Algorithms,
and On-Road Evaluations
Mohan Manubhai Trivedi, Fellow, IEEE
('1947383', 'Ashish Tawari', 'ashish tawari')
('1841835', 'Sujitha Martin', 'sujitha martin')
a51d5c2f8db48a42446cc4f1718c75ac9303cb7aCross-validating Image Description Datasets and Evaluation Metrics
Department of Computer Science
University of Sheffield, UK
('2635321', 'Josiah Wang', 'josiah wang'){j.k.wang, r.gaizauskas}@sheffield.ac.uk
a52d9e9daf2cb26b31bf2902f78774bd31c0dd88Understanding and Designing Convolutional Networks
for Local Recognition Problems
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2016-97
http://www.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-97.html
May 13, 2016
('34703740', 'Jonathan Long', 'jonathan long')
a51882cfd0706512bf50e12c0a7dd0775285030dCross-Modal Face Matching: Beyond Viewed
Sketches
Beijing University of Posts and Telecommunications, Beijing, China. 2School of
Electronic Engineering and Computer Science Queen Mary University of London
London E1 4NS, United Kingdom
('2961830', 'Shuxin Ouyang', 'shuxin ouyang')
('1705408', 'Yi-Zhe Song', 'yi-zhe song')
('7823169', 'Xueming Li', 'xueming li')
a5c04f2ad6a1f7c50b6aa5b1b71c36af76af06be
a503eb91c0bce3a83bf6f524545888524b29b166
a5a44a32a91474f00a3cda671a802e87c899fbb4Moments in Time Dataset: one million
videos for event understanding
('2526653', 'Mathew Monfort', 'mathew monfort')
('1804424', 'Bolei Zhou', 'bolei zhou')
('3298267', 'Sarah Adel Bargal', 'sarah adel bargal')
('50112310', 'Alex Andonian', 'alex andonian')
('12082007', 'Tom Yan', 'tom yan')
('40544169', 'Kandan Ramakrishnan', 'kandan ramakrishnan')
('33421444', 'Quanfu Fan', 'quanfu fan')
('1856025', 'Carl Vondrick', 'carl vondrick')
('31735139', 'Aude Oliva', 'aude oliva')
a52581a7b48138d7124afc7ccfcf8ec3b48359d0http://www.jos.org.cn
Tel/Fax: +86-10-62562563
ISSN 1000-9825, CODEN RUXUEW
Journal of Software, Vol.17, No.3, March 2006, pp.525−534
DOI: 10.1360/jos170525
© 2006 by Journal of Software. All rights reserved.
Pose and Illumination Invariant Face Recognition Based on 3D Face Reconstruction
CHAI Xiu-Juan1+, SHAN Shi-Guang2, QING Lai-Yun2, CHEN Xi-Lin2, GAO Wen1,2
1(School of Computer Science, Harbin Institute of Technology, Harbin 150001, China)
2(ICT-ISVISION Joint Laboratory for Face Recognition, Institute of Computing Technology, The Chinese Academy of Sciences, Beijing 100080, China)
Pose and Illumination Invariant Face Recognition Based on 3D Face Reconstruction
Harbin Institute of Technology, Harbin 150001, China
ICT-ISVISION Joint R&D Laboratory for Face Recognition, Institute of Computing Technology, The Chinese Academy of Sciences
Beijing 100080, China)
Chai XJ, Shan SG, Qing LY, Chen XL, Gao W. Pose and illumination invariant face recognition based on 3D
face reconstruction. Journal of Software, 2006,17(3):525−534. http://www.jos.org.cn/1000-9825/17/525.htm
('2100752', 'GAO Wen', 'gao wen')E-mail: jos@iscas.ac.cn
+ Corresponding author: Phn: +86-10-58858300 ext 314, Fax: +86-10-58858301, E-mail: xjchai@jdl.ac.cn, http://www.jdl.ac.cn/
bd0265ba7f391dc3df9059da3f487f7ef17144dfData-Driven Sparse Sensor Placement
University of Washington, Seattle, WA 98195, United States
University of Washington, Seattle, WA 98195, United States
University of Washington, Seattle, WA 98195, United States
('37119658', 'Krithika Manohar', 'krithika manohar')
('1824880', 'Bingni W. Brunton', 'bingni w. brunton')
('1937069', 'J. Nathan Kutz', 'j. nathan kutz')
('3083169', 'Steven L. Brunton', 'steven l. brunton')
bd572e9cbec095bcf5700cb7cd73d1cdc2fe02f4Hindawi
Computational Intelligence and Neuroscience
Volume 2018, Article ID 7068349, 13 pages
https://doi.org/10.1155/2018/7068349
Review Article
Deep Learning for Computer Vision: A Brief Review
Technological Educational Institute of Athens, 12210 Athens, Greece
National Technical University of Athens, 15780 Athens, Greece
Received 17 June 2017; Accepted 27 November 2017; Published 1 February 2018
Academic Editor: Diego Andina
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
Over the last few years, deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques
in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some
of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep
Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure,
advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object
detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future
directions in designing deep learning schemes for computer vision problems and the challenges involved therein.
1. Introduction
Deep learning allows computational models composed of multiple
processing layers to learn and represent data with multiple
('3393001', 'Nikolaos Doulamis', 'nikolaos doulamis')
('2594647', 'Athanasios Voulodimos', 'athanasios voulodimos')
('3393144', 'Anastasios Doulamis', 'anastasios doulamis')
('1806369', 'Eftychios Protopapadakis', 'eftychios protopapadakis')
('2594647', 'Athanasios Voulodimos', 'athanasios voulodimos')
Correspondence should be addressed to Athanasios Voulodimos; thanosv@mail.ntua.gr
bd6099429bb7bf248b1fd6a1739e744512660d55Submitted 11/09; Revised 5/10; Published 8/10
Regularized Discriminant Analysis, Ridge Regression and Beyond
College of Computer Science and Technology
Zhejiang University
Hangzhou, Zhejiang 310027, China
Computer Science Division and Department of Statistics
University of California
Berkeley, CA 94720-1776, USA
Editor: Inderjit Dhillon
('1739312', 'Zhihua Zhang', 'zhihua zhang')
('1779165', 'Guang Dai', 'guang dai')
('1682914', 'Congfu Xu', 'congfu xu')
('1694621', 'Michael I. Jordan', 'michael i. jordan')
ZHZHANG@ZJU.EDU.CN
GUANG.GDAI@GMAIL.COM
XUCONGFU@ZJU.EDU.CN
JORDAN@CS.BERKELEY.EDU
bd0e100a91ff179ee5c1d3383c75c85eddc81723Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action
Detection∗
Technical University of Munich, Munich, 2KTH Royal Institute of Technology, Stockholm
Polytechnic University of Catalonia, Barcelona, 4National Taiwan University, Taipei, 5University of
Tokyo, Tokyo, 6National Institute of Informatics, Tokyo
('39393520', 'Mohammadamin Barekatain', 'mohammadamin barekatain')
('19185012', 'Hsueh-Fu Shih', 'hsueh-fu shih')
('47427148', 'Samuel Murray', 'samuel murray')
('1943224', 'Kotaro Nakayama', 'kotaro nakayama')
('47972365', 'Yutaka Matsuo', 'yutaka matsuo')
('2356111', 'Helmut Prendinger', 'helmut prendinger')
m.barekatain@tum.de, miquelmr@kth.se, r03945026@ntu.edu.tw, samuelmu@kth.se,
nakayama@weblab.t.u-tokyo.ac.jp, matsuo@weblab.t.u-tokyo.ac.jp, helmut@nii.ac.jp
bd8f3fef958ebed5576792078f84c43999b1b207BUAA-iCC at ImageCLEF 2015 Scalable
Concept Image Annotation Challenge
Intelligent Recognition and Image Processing Lab, Beihang University, Beijing
100191, P.R.China
http://irip.buaa.edu.cn/
School of Information Technology and Management, University of International
Business and Economics, Beijing 100029, P.R.China
('40013375', 'Yunhong Wang', 'yunhong wang')
('2097309', 'Jiaxin Chen', 'jiaxin chen')
('34288046', 'Ningning Liu', 'ningning liu')
('1712838', 'Li Zhang', 'li zhang')
yhwang@buaa.edu.cn; chenjiaxinX@gmail.com.
ningning.liu@uibe.edu.cn
bd9eb65d9f0df3379ef96e5491533326e9dde315
bd07d1f68486052b7e4429dccecdb8deab1924db
bd0201b32e7eca7818468f2b5cb1fb4374de75b9 International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395 -0056
Volume: 02 Issue: 02 | May-2015 www.irjet.net p-ISSN: 2395-0072
FACIAL EMOTION EXPRESSIONS RECOGNITION WITH BRAIN ACTIVITES
USING KINECT SENSOR V2
Ph.D student Hesham A. ALABBASI, Doctoral School of Automatic Control and Computers,
University POLITEHNICA of Bucharest, Bucharest, Romania
Bucharest, Bucharest, Romania.
Alin Moldoveanu, Faculty of Automatic Control and Computers, University POLITEHNICA of Bucharest
Bucharest, Romania.
Ph.D student Zaid Shhedi, Doctoral School of Automatic Control and Computers, University
POLITEHNICA of Bucharest, Bucharest, Romania.
Key Words: Facial expressions, Facial features, Kinect sensor, Face tracking SDK, Neural network, Brain activities.
Visual Studio 2013 (C++) and Matlab 2015 were used to recognize eight expressions.
---------------------------------------------------------------------***---------------------------------------------------------------------
('3124644', 'Florica Moldoveanu', 'florica moldoveanu')
bd8e2d27987be9e13af2aef378754f89ab20ce10
bd236913cfe07896e171ece9bda62c18b8c8197eDeep Learning with Energy-efficient Binary Gradient Cameras
∗NVIDIA,
Carnegie Mellon University
('39131476', 'Suren Jayasuriya', 'suren jayasuriya')
('39775678', 'Orazio Gallo', 'orazio gallo')
('2931118', 'Jinwei Gu', 'jinwei gu')
('1690538', 'Jan Kautz', 'jan kautz')
bd379f8e08f88729a9214260e05967f4ca66cd65Learning Compositional Visual Concepts with Mutual Consistency
School of Electrical and Computer Engineering, Cornell University, Ithaca NY
Nancy E. and Peter C. Meinig School of Biomedical Engineering, Cornell University, Ithaca NY
3Siemens Corporate Technology, Princeton NJ
Figure 1: We propose ConceptGAN, a framework that can jointly learn, transfer and compose concepts to generate semantically meaningful
images, even in subdomains with no training data (highlighted) while the state-of-the-art methods such as CycleGAN [49] fail to do so.
('3303727', 'Yunye Gong', 'yunye gong')
('1976152', 'Srikrishna Karanam', 'srikrishna karanam')
('3311781', 'Ziyan Wu', 'ziyan wu')
('2692770', 'Kuan-Chuan Peng', 'kuan-chuan peng')
('39497207', 'Jan Ernst', 'jan ernst')
('1767099', 'Peter C. Doerschuk', 'peter c. doerschuk')
{yg326,pd83}@cornell.edu,{first.last}@siemens.com
bd13f50b8997d0733169ceba39b6eb1bda3eb1aaOcclusion Coherence: Detecting and Localizing Occluded Faces
University of California at Irvine, Irvine, CA
('1898210', 'Golnaz Ghiasi', 'golnaz ghiasi')
('3157443', 'Charless C. Fowlkes', 'charless c. fowlkes')
bd21109e40c26af83c353a3271d0cd0b5c4b4adeAttentive Sequence to Sequence Translation for Localizing Clips of Interest
by Natural Language Descriptions
Zhejiang University
University of Technology Sydney
Zhejiang University
University of Technology Sydney
Hikvision Research Institute
('1819984', 'Ke Ning', 'ke ning')
('2948393', 'Linchao Zhu', 'linchao zhu')
('50140409', 'Ming Cai', 'ming cai')
('1698559', 'Yi Yang', 'yi yang')
('2603725', 'Di Xie', 'di xie')
ningke@zju.edu.cn
zhulinchao7@gmail.com
Yi.Yang@uts.edu.au
xiedi@hikvision.com
bd8b7599acf53e3053aa27cfd522764e28474e57Learning Long Term Face Aging Patterns
from Partially Dense Aging Databases
Jinli Suo1,2,3
Graduate University of Chinese Academy of Sciences(CAS), 100190, China
2Key Lab of Intelligent Information Processing of CAS,
Institute of Computing Technology, CAS, Beijing, 100190, China
Lotus Hill Institute for Computer Vision and Information Science, 436000, China
School of Electronic Engineering and Computer Science, Peking University, 100871, China
('1698902', 'Wen Gao', 'wen gao')
('1710220', 'Xilin Chen', 'xilin chen')
('1685914', 'Shiguang Shan', 'shiguang shan')
wgao@pku.edu.cn
jlsuo@jdl.ac.cn
{xlchen,sgshan}@ict.ac.cn
bd8f77b7d3b9d272f7a68defc1412f73e5ac3135SphereFace: Deep Hypersphere Embedding for Face Recognition
Georgia Institute of Technology
Carnegie Mellon University
Sun Yat-Sen University
('36326884', 'Weiyang Liu', 'weiyang liu')
('1751019', 'Zhiding Yu', 'zhiding yu')
('1779453', 'Le Song', 'le song')
wyliu@gatech.edu, {yandongw,yzhiding}@andrew.cmu.edu, lsong@cc.gatech.edu
bd26dabab576adb6af30484183c9c9c8379bf2e0SCUT-FBP: A Benchmark Dataset for
Facial Beauty Perception
School of Electronic and Information Engineering
South China University of Technology, Guangzhou 510640, China
('2361818', 'Duorui Xie', 'duorui xie')
('2521432', 'Lingyu Liang', 'lingyu liang')
('1703322', 'Lianwen Jin', 'lianwen jin')
('1720015', 'Jie Xu', 'jie xu')
('4997446', 'Mengru Li', 'mengru li')
*Email: lianwen.jin@gmail.com
bd78a853df61d03b7133aea58e45cd27d464c3cfA Sparse Representation Approach to Facial
Expression Recognition Based on LBP plus LFDA
Computer science and Engineering Department,
Government College of Engineering, Aurangabad [Autonomous
Station Road, Aurangabad, Maharashtra, India.
bd9c9729475ba7e3b255e24e7478a5acb393c8e9Interpretable Partitioned Embedding for Customized Fashion Outfit
Composition
Zhejiang University, Hangzhou, China
Arizona State University, Phoenix, Arizona
♭Alibaba Group, Hangzhou, China
('7357719', 'Zunlei Feng', 'zunlei feng')
('46218293', 'Zhenyun Yu', 'zhenyun yu')
('7607499', 'Yezhou Yang', 'yezhou yang')
('9633703', 'Yongcheng Jing', 'yongcheng jing')
('46179768', 'Junxiao Jiang', 'junxiao jiang')
('1727111', 'Mingli Song', 'mingli song')
bd2d7c7f0145028e85c102fe52655c2b6c26aeb5Attribute-based People Search: Lessons Learnt from a
Practical Surveillance System
Rogerio Feris
IBM Watson
http://rogerioferis.com
Russel Bobbitt
IBM Watson
Lisa Brown
IBM Watson
IBM Watson
('1767897', 'Sharath Pankanti', 'sharath pankanti')bobbitt@us.ibm.com
lisabr@us.ibm.com
sharat@us.ibm.com
bd9157331104a0708aa4f8ae79b7651a5be797c6SLAC: A Sparsely Labeled Dataset for Action Classification and Localization
Massachusetts Institute of Technology, 2Facebook Applied Machine Learning, 3Dartmouth College
('1683002', 'Hang Zhao', 'hang zhao')
('3305169', 'Zhicheng Yan', 'zhicheng yan')
('1804138', 'Heng Wang', 'heng wang')
('1732879', 'Lorenzo Torresani', 'lorenzo torresani')
('1690178', 'Antonio Torralba', 'antonio torralba')
{hangzhao, torralba}@mit.edu, {zyan3, hengwang, torresani}@fb.com
bdbba95e5abc543981fb557f21e3e6551a563b45International Journal of Computational Intelligence and Applications
Vol. 17, No. 2 (2018) 1850008 (15 pages)
© The Author(s)
DOI: 10.1142/S1469026818500086
Speeding up the Hyperparameter Optimization of Deep
Convolutional Neural Networks
Knowledge Technology, Department of Informatics
Universität Hamburg
Vogt-Kölln-Str. 30, Hamburg 22527, Germany
Received 15 August 2017
Accepted 23 March 2018
Published 18 June 2018
Most learning algorithms require the practitioner to manually set the values of many hyper-
parameters before the learning process can begin. However, with modern algorithms, the
evaluation of a given hyperparameter setting can take a considerable amount of time and the
search space is often very high-dimensional. We suggest using a lower-dimensional represen-
tation of the original data to quickly identify promising areas in the hyperparameter space. This
information can then be used to initialize the optimization algorithm for the original, higher-
dimensional data. We compare this approach with the standard procedure of optimizing the
hyperparameters only on the original input.
We perform experiments with various state-of-the-art hyperparameter optimization algo-
rithms such as random search, the Tree of Parzen Estimators (TPE), sequential model-based
algorithm configuration (SMAC), and a genetic algorithm (GA). Our experiments indicate that
it is possible to speed up the optimization process by using lower-dimensional data repre-
sentations at the beginning, while increasing the dimensionality of the input later in the opti-
mization process. This is independent of the underlying optimization procedure, making the
approach promising for many existing hyperparameter optimization algorithms.
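The two-stage idea can be sketched as follows; the toy evaluate function, its quadratic loss, and the fidelity parameter are stand-ins for training on downsampled versus original data, not the authors' implementation:

```python
import random

def evaluate(config, fidelity):
    """Stand-in for training a network: higher fidelity = original data = less noisy estimate.
    A toy quadratic loss around (lr=0.01, reg=1e-4); in practice this is validation error."""
    lr, reg = config
    return (lr - 0.01) ** 2 + (reg - 1e-4) ** 2 + abs(random.gauss(0, 1e-6)) / fidelity

def random_config():
    # Sample learning rate and regularization strength on a log scale.
    return (10 ** random.uniform(-4, -1), 10 ** random.uniform(-6, -2))

random.seed(0)
# Phase 1: many cheap evaluations on a low-dimensional representation of the data.
candidates = [random_config() for _ in range(50)]
promising = sorted(candidates, key=lambda c: evaluate(c, fidelity=1))[:5]

# Phase 2: few expensive evaluations on the original input, seeded with phase-1 winners.
best = min(promising, key=lambda c: evaluate(c, fidelity=100))
print(best)  # hyperparameters chosen by the coarse-to-fine search
```

The same seeding scheme would apply unchanged to TPE, SMAC, or a GA, since only the initial candidate pool depends on the low-dimensional phase.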
Keywords: Hyperparameter optimization; hyperparameter importance; convolutional neural
networks; genetic algorithm; Bayesian optimization.
1. Introduction
The performance of many contemporary machine learning algorithms depends cru-
cially on the specific initialization of hyperparameters such as the general architec-
ture, the learning rate, regularization parameters, and many others.1,2 Indeed,
This is an Open Access article published by World Scientific Publishing Company. It is distributed under
the terms of the Creative Commons Attribution 4.0 (CC-BY) License. Further distribution of this work is
permitted, provided the original work is properly cited.
('11634287', 'Tobias Hinz', 'tobias hinz')
('2632932', 'Sven Magg', 'sven magg')
('1736513', 'Stefan Wermter', 'stefan wermter')
*hinz@informatik.uni-hamburg.de
†navarro@informatik.uni-hamburg.de
‡magg@informatik.uni-hamburg.de
wermter@informatik.uni-hamburg.de
bd70f832e133fb87bae82dfaa0ae9d1599e52e4bCombining Classifier for Face Identification
HCI Lab., Samsung Advanced Institute of Technology, Yongin, Korea
Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
('1700968', 'Tae-Kyun Kim', 'tae-kyun kim')
('1748684', 'Josef Kittler', 'josef kittler')
taekyun@sait.samsung.co.kr
J.Kittler@surrey.ac.uk
d1dfdc107fa5f2c4820570e369cda10ab1661b87Super SloMo: High Quality Estimation of Multiple Intermediate Frames
for Video Interpolation
Erik Learned-Miller1
1UMass Amherst
2NVIDIA 3UC Merced
('40175280', 'Huaizu Jiang', 'huaizu jiang')
('3232265', 'Deqing Sun', 'deqing sun')
('2745026', 'Varun Jampani', 'varun jampani')
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
('1690538', 'Jan Kautz', 'jan kautz')
{hzjiang,elm}@cs.umass.edu,{deqings,vjampani,jkautz}@nvidia.com, mhyang@ucmerced.edu
d185f4f05c587e23c0119f2cdfac8ea335197ac0
Chapter III
Facial Expression Analysis,
Modeling and Synthesis:
Overcoming the Limitations of
Artificial Intelligence with the Art
of the Soluble
Eindhoven University of Technology, The Netherlands
Ritsumeikan University, Japan
('1728894', 'Christoph Bartneck', 'christoph bartneck')
('1709339', 'Michael J. Lyons', 'michael j. lyons')
d140c5add2cddd4a572f07358d666fe00e8f4fe1Statistically Learned Deformable Eye Models
Imperial College London
('2575567', 'Joan Alabort-i-Medina', 'joan alabort-i-medina')
('37539937', 'Bingqing Qu', 'bingqing qu')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
d1dae2993bdbb2667d1439ff538ac928c0a593dcInternational Journal of Computational Intelligence and Informatics, Vol. 3: No. 1, April - June 2013
Gamma Correction Technique Based Feature Extraction
for Face Recognition System
P Kumar
Electronics and Communication Engineering
K S Rangasamy College of Technology
Electronics and Communication Engineering
K S Rangasamy College of Technology
Tamilnadu, India
Tamilnadu, India
('9316812', 'B Vinothkumar', 'b vinothkumar')Vinoeee58@gmail.com
kumar@ksrct.ac.in
d1f58798db460996501f224fff6cceada08f59f9Transferrable Representations for Visual Recognition
Jeffrey Donahue
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2017-106
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-106.html
May 14, 2017
d115c4a66d765fef596b0b171febca334cea15b5Combining Stacked Denoising Autoencoders and
Random Forests for Face Detection
Swansea University
Singleton Park, Swansea SA2 8PP, United Kingdom
http://csvision.swan.ac.uk
('6248353', 'Jingjing Deng', 'jingjing deng')
('2168049', 'Xianghua Xie', 'xianghua xie')
('13154093', 'Michael Edwards', 'michael edwards')
*x.xie@swansea.ac.uk
d1a43737ca8be02d65684cf64ab2331f66947207IJB–S: IARPA Janus Surveillance Video Benchmark (cid:3)
Kevin O’Connor z
('1718102', 'Nathan D. Kalka', 'nathan d. kalka')
('48889427', 'Stephen Elliott', 'stephen elliott')
('8033275', 'Brianna Maze', 'brianna maze')
('40205896', 'James A. Duncan', 'james a. duncan')
('40577714', 'Julia Bryan', 'julia bryan')
('6680444', 'Anil K. Jain', 'anil k. jain')
d122d66c51606a8157a461b9d7eb8b6af3d819b0Vol-3 Issue-4 2017
IJARIIE-ISSN(O)-2395-4396
AUTOMATED RECOGNITION OF FACIAL
EXPRESSIONS
METs Institute of Engineering
Adgoan,Nashik,Maharashtra.
Adgoan, Nashik, Maharashtra.
d142e74c6a7457e77237cf2a3ded4e20f8894e1aHUMAN EMOTION ESTIMATION FROM
EEG AND FACE USING STATISTICAL
FEATURES AND SVM
1,3Department of Information Technologies,
University of telecommunications and post, Sofia, Bulgaria
2,4Department of Telecommunications,
University of telecommunications and post, Sofia, Bulgaria
('40110188', 'Strahil Sokolov', 'strahil sokolov')
('3050423', 'Yuliyan Velchev', 'yuliyan velchev')
('2283935', 'Svetla Radeva', 'svetla radeva')
('2512835', 'Dimitar Radev', 'dimitar radev')
d1082eff91e8009bf2ce933ac87649c686205195(will be inserted by the editor)
Pruning of Error Correcting Output Codes by
Optimization of Accuracy-Diversity Trade off
S¨ureyya ¨Oz¨o˘g¨ur Aky¨uz · Terry
Windeatt · Raymond Smith
Received: date / Accepted: date
d1959ba4637739dcc6cc6995e10fd41fd6604713Rochester Institute of Technology
RIT Scholar Works
Theses
5-2017
Thesis/Dissertation Collections
Deep Learning for Semantic Video Understanding
Follow this and additional works at: http://scholarworks.rit.edu/theses
Recommended Citation
Kulhare, Sourabh, "Deep Learning for Semantic Video Understanding" (2017). Thesis. Rochester Institute of Technology. Accessed
from
This Thesis is brought to you for free and open access by the Thesis/Dissertation Collections at RIT Scholar Works. It has been accepted for inclusion
('10376365', 'Sourabh Kulhare', 'sourabh kulhare')sk1846@rit.edu
in Theses by an authorized administrator of RIT Scholar Works. For more information, please contact ritscholarworks@rit.edu.
d1881993c446ea693bbf7f7d6e750798bf958900Large-Scale YouTube-8M Video Understanding with Deep Neural Networks
Institute for System Programming
Institute for System Programming
ispras.ru
('34125461', 'Manuk Akopyan', 'manuk akopyan')
('19228325', 'Eshsou Khashba', 'eshsou khashba')
manuk@ispras.ru
d1d6f1d64a04af9c2e1bdd74e72bd3ffac329576Neural Face Editing with Intrinsic Image Disentangling
Stony Brook University 2Adobe Research 3CentraleSupélec, Université Paris-Saclay
('2496409', 'Zhixin Shu', 'zhixin shu')1{zhshu,samaras}@cs.stonybrook.edu
2{yumer,hadap,sunkaval,elishe}@adobe.com
d69df51cff3d6b9b0625acdcbea27cd2bbf4b9c0
d61578468d267c2d50672077918c1cda9b91429bAvailable Online at www.ijcsmc.com
International Journal of Computer Science and Mobile Computing
A Monthly Journal of Computer Science and Information Technology
ISSN 2320–088X
IJCSMC, Vol. 3, Issue. 9, September 2014, pg.314 – 323
RESEARCH ARTICLE
Face Image Retrieval Using Pose Specific
Set Sparse Feature Representation
Viswajyothi College of Engineering and Technology Kerala, India
Viswajyothi College of Engineering and Technology Kerala, India
('3163376', 'Sebastian George', 'sebastian george')afeefengg@gmail.com
d687fa99586a9ad229284229f20a157ba2d41aeaJournal of Intelligent Learning Systems and Applications, 2013, 5, 115-122
http://dx.doi.org/10.4236/jilsa.2013.52013 Published Online May 2013 (http://www.scirp.org/journal/jilsa)
115
Face Recognition Based on Wavelet Packet Coefficients
and Radial Basis Function Neural Networks
Virudhunagar Hindu Nadars Senthikumara Nadar College, Virudhunagar
Computer Applications, Ayya Nadar Janaki Ammal College, Sivakasi, India
Received December 12th, 2012; revised April 19th, 2013; accepted April 26th, 2013
Distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Email: *kathirvalavakumar@yahoo.com, jebaarul07@yahoo.com
d69719b42ee53b666e56ed476629a883c59ddf66Learning Facial Action Units from Web Images with
Scalable Weakly Supervised Clustering
Aleix M. Martinez3
School of Comm. and Info. Engineering, Beijing University of Posts and Telecom
Robotics Institute, Carnegie Mellon University
The Ohio State University
('2393320', 'Kaili Zhao', 'kaili zhao')
d647099e571f9af3a1762f895fd8c99760a3916eExploring Facial Expressions with Compositional Features
Rutgers University
110 Frelinghuysen Road, Piscataway, NJ 08854, USA
('39606160', 'Peng Yang', 'peng yang')
('1734954', 'Qingshan Liu', 'qingshan liu')
('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')
peyang@cs.rutgers.edu, qsliu@cs.rutgers.edu, dnm@cs.rutgers.edu
d69271c7b77bc3a06882884c21aa1b609b3f76ccFaceBoxes: A CPU Real-time Face Detector with High Accuracy
CBSR and NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China
University of Chinese Academy of Sciences, Beijing, China
('3220556', 'Shifeng Zhang', 'shifeng zhang'){shifeng.zhang,xiangyu.zhu,zlei,hailin.shi,xiaobo.wang,szli}@nlpr.ia.ac.cn
d6a9ea9b40a7377c91c705f4c7f206a669a9eea2Visual Representations for Fine-grained
Categorization
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2015-244
http://www.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-244.html
December 17, 2015
('40565777', 'Ning Zhang', 'ning zhang')
d6ca3dc01de060871839d5536e8112b551a7f9ffSleep-deprived Fatigue Pattern Analysis using Large-Scale Selfies from Social Media
Computer Science Department
Computer Science Department
University of Rochester
University of Rochester
Rochester, USA
Rochester, USA
Department of Psychiatry
University of Rochester
Rochester, USA
Computer Science Department
University of Rochester
Rochester, USA
('1901094', 'Xuefeng Peng', 'xuefeng peng')
('33642939', 'Jiebo Luo', 'jiebo luo')
('39226140', 'Catherine Glenn', 'catherine glenn')
('35678395', 'Li-Kai Chi', 'li-kai chi')
('13171221', 'Jingyao Zhan', 'jingyao zhan')
xpeng4@u.rochester.edu
jiebo.luo@rochester.edu
catherine.glenn@rochester.edu
{lchi3, jzhan}@u.rochester.edu
d671a210990f67eba9b2d3dda8c2cb91575b4a7aJournal of Machine Learning Research ()
Submitted ; Published
Social Environment Description from Data Collected with a
Wearable Device
Computer Vision Center
Autonomous University of Barcelona
Barcelona, Spain
Editor: Radeva Petia, Pujol Oriol
('7629833', 'Pierluigi Casale', 'pierluigi casale')pierluigi@cvc.uab.cat
d61e794ec22a4d4882181da17316438b5b24890fDetecting Sensor Level Spoof Attacks Using Joint
Encoding of Temporal and Spatial Features
The Hong Kong Polytechnic University, Hong Kong
('1690410', 'Jun Liu', 'jun liu')
('1684016', 'Ajay Kumar', 'ajay kumar')
d65b82b862cf1dbba3dee6541358f69849004f30Contents lists available at ScienceDirect
journal homepage: www.elsevier.com/locate/cviu
2.5D Elastic graph matching
Imperial College, London, UK
article info
abstract
Article history:
Received 29 November 2009
Accepted 1 December 2010
Available online 17 March 2011
Keywords:
Elastic graph matching
3D face recognition
Multiscale mathematical morphology
Geodesic distances
In this paper, we propose novel elastic graph matching (EGM) algorithms for face recognition assisted by
the availability of 3D facial geometry. More specifically, we conceptually extend the EGM algorithm in
order to exploit the 3D nature of human facial geometry for face recognition/verification. In order to
achieve that, first we extend the matching module of the EGM algorithm in order to capitalize on the
2.5D facial data. Furthermore, we incorporate the 3D geometry into the multiscale analysis used and
build a novel geodesic multiscale morphological pyramid of dilations/erosions in order to fill the graph
jets. We show that the proposed advances significantly enhance the performance of EGM algorithms.
We demonstrate the efficiency of the proposed advances in the face recognition/verification problem
using photometric stereo.
© 2011 Elsevier Inc. All rights reserved.
1. Introduction
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('2871609', 'Maria Petrou', 'maria petrou')
d6102a7ddb19a185019fd2112d2f29d9258f6decProceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)
3721
d6bfa9026a563ca109d088bdb0252ccf33b76bc6Unsupervised Temporal Segmentation of Facial Behaviour
Department of Computer Science and Engineering, IIT Kanpur
('2094658', 'Abhishek Kar', 'abhishek kar')
('2676758', 'Prithwijit Guha', 'prithwijit guha')
{akar,amit}@iitk.ac.in, prithwijit.guha@tcs.com
d67dcaf6e44afd30c5602172c4eec1e484fc7fb7Illumination Normalization for Robust Face Recognition
Using Discrete Wavelet Transform
Mahanakorn University of Technology
51 Cheum-Sampan Rd., Nong Chok, Bangkok, THAILAND 10530
('2337544', 'Amnart Petpon', 'amnart petpon')
('1805935', 'Sanun Srisuk', 'sanun srisuk')
ta tee473@hotmail.com, sanun@mut.ac.th
d6c7092111a8619ed7a6b01b00c5f75949f137bfA Novel Feature Extraction Technique for Facial Expression
Recognition
1 Department of Computer Science, School of Applied Statistics,
National Institute of Development Administration
Bangkok, 10240, Thailand

2 Department of Computer Science, School of Applied Statistics,
National Institute of Development Administration
Bangkok, 10240, Thailand
('7484236', 'Mohammad Shahidul Islam', 'mohammad shahidul islam')
('2291161', 'Surapong Auwatanamongkol', 'surapong auwatanamongkol')
d68dbb71b34dfe98dee0680198a23d3b53056394VIVA Face-off Challenge: Dataset Creation and Balancing Privacy
University of California, San Diego
9500 Gilman Drive, La Jolla, CA 92093
1. Introduction
Vision for intelligent vehicles is a growing area of re-
search [5] for many practical reasons including the rela-
tively inexpensive nature of camera sensing units and even
more the non-contact and non-intrusive manner of obser-
vation. The latter is of critical importance when observing
the driver inside the vehicle cockpit because no sensing unit
should impede the driver’s primary task of driving. One
of the key tasks in observing the driver is to estimate the
driver’s gaze direction. From a vision sensing perspective,
for driver gaze estimation, two of the fundamental building
blocks are face detection and head pose estimation.
Figure 1. A sample of challenging instances due to varying illumination, occlusions and camera perspectives.
In the literature, vision-based systems for face detection and head pose estimation have progressed significantly in the last decade. However, the limits of state-of-the-art systems have not been tested thoroughly on a common pool of challenging data such as the one we propose in this work. Using our database, we want to benchmark existing algorithms to highlight problems and deficiencies in current approaches and, simultaneously, advance the development of future algorithms to tackle this problem. Furthermore, while introducing a new benchmarking database, we also raise awareness of the privacy protection systems [4] necessary to protect the identity of drivers in such databases.
2. In-the-Wild Dataset
In recent years, the literature has introduced a few in-the-wild datasets (e.g. Helen [2] and COFW [1]), but none presents challenges like those of real-world driving scenarios. Therefore, we introduce a never-before-seen challenging database of drivers' faces under varying illumination (e.g. sunny and cloudy), in the presence of typical partially occluding objects (e.g. eyewear and hats) or actions (e.g. hand movements), in blur from head motions, under different camera configurations, and from different drivers. A small sample of these challenging instances is depicted in Figure 1.
Three major efforts have been put forth in creating this challenging database. The first is the data collection itself, done by instrumenting vehicles at UCSD-LISA and having multiple drivers drive the instrumented vehicles year-round. The second is extracting challenging instances from more than a hundred hours of video data. The final effort is the ground-truth annotation (e.g. face position and head pose). A preliminary evaluation of state-of-the-art head pose algorithms on a small validation part of this dataset is shown in Table 1. Here, detection rate is the number of sample images for which an algorithm produced an output, divided by the total number of sample images. It is evident that no algorithm yet reaches both a high detection rate and low head-pose error.
3. Balancing Privacy
In current literature, there is a lack of publicly available
naturalistic driving data largely due to concerns over indi-
vidual privacy. Camera sensors looking at a driver, which
('1841835', 'Sujitha Martin', 'sujitha martin')
('1713989', 'Mohan M. Trivedi', 'mohan m. trivedi')
scmartin@ucsd.edu, mtrivedi@ucsd.edu
d666ce9d783a2d31550a8aa47da45128a67304a7On Relating Visual Elements to City Statistics
University of California, Berkeley
Maneesh Agrawala†
University of California, Berkeley
University of California, Berkeley
(c) Visual Elements for Thefts in San Francisco
(a) Predicted High Theft Location in Oakland
(b) Predicted Low Theft Location in Oakland
(d) Predicted Theft Rate in Oakland
Figure 1: Our system automatically computes a predictor from a set of Google StreetView images of areas where a statistic was observed. In this example, we use a predictor generated from reports of theft in San Francisco to predict the probability of theft occurring in Oakland. Our system can predict high-theft-rate areas (a) and low-theft-rate areas (b) based solely on street-level images of the areas. Visually, the high-theft area exhibits a marked quality of disrepair (bars on the windows, unkempt facades, etc.), a visual cue that the probability of theft is likely higher. Our method automatically computes machine learning models that detect visual elements similar to these cues (c) from San Francisco. To compute predictions, we use the models to detect the presence of these visual elements in an image and combine all of the detections according to an automatically learned set of weights. Our resulting predictions are 63% accurate in this case and can be computed everywhere in Oakland (d), as they rely only on images as input.
('2288243', 'Sean M. Arietta', 'sean m. arietta')
('1752236', 'Ravi Ramamoorthi', 'ravi ramamoorthi')
d6fb606e538763282e3942a5fb45c696ba38aee6
bcee40c25e8819955263b89a433c735f82755a03Biologically inspired vision for human-robot
interaction
M. Saleiro, M. Farrajota, K. Terzi´c, S. Krishna, J.M.F. Rodrigues, and J.M.H.
du Buf
Vision Laboratory, LARSyS, University of the Algarve, 8005-139 Faro, Portugal
{masaleiro, mafarrajota, kterzic, jrodrig, dubuf}@ualg.pt,
saikrishnap2003@gmail.com,
bc6de183cd8b2baeebafeefcf40be88468b04b74Age Group Recognition using Human Facial Images
International Journal of Computer Applications (0975 – 8887)
Volume 126 – No.13, September 2015
Dept. of Electronics and Telecommunication
Government College of Engineering
Aurangabad, Maharashtra, India
('31765215', 'Shailesh S. Kulkarni', 'shailesh s. kulkarni')
bcf19b964e7d1134d00332cf1acf1ee6184aff001922
IEICE TRANS. INF. & SYST., VOL.E100–D, NO.8 AUGUST 2017
LETTER
Trajectory-Set Feature for Action Recognition
SUMMARY We propose a feature for action recognition called
Trajectory-Set (TS), on top of the improved Dense Trajectory (iDT).
The TS feature encodes only trajectories around densely sampled interest points, without any appearance features. Experimental results on the UCF50 action dataset demonstrate that TS is comparable to the state of the art and outperforms iDT, with an accuracy of 95.0% compared to 91.7% for iDT.
key words: action recognition, trajectory, improved Dense Trajectory
the two-stream CNN [2], which uses a single frame and an optical flow stack. In their paper, stacking trajectories was also reported but did not perform well, probably because the sparseness of trajectories does not fit CNN architectures. In contrast, we take a hand-crafted approach that can be fused later with CNN outputs.
1.
Introduction
Action recognition has been well studied in the computer vision literature [1] because it is an important and challenging task. Deep learning approaches have been proposed recently [2]–[4]; however, a hand-crafted feature, the improved Dense Trajectory (iDT) [5], [6], remains comparable in performance. Moreover, the top performances of deep learning approaches are obtained by combining the iDT feature [3], [7], [8].
In this paper, we propose a novel hand-crafted feature for action recognition, called Trajectory-Set (TS), that encodes trajectories in a local region of a video. The contribution of this paper is summarized as follows. First, we propose another hand-crafted feature that can be combined with deep learning approaches; hand-crafted features are complementary to deep learning approaches, yet little effort has been made in this direction since iDT. Second, the proposed TS feature focuses on better handling of motions in the scene. The iDT feature uses trajectories of densely sampled interest points in a simple way, while we explore here a way to extract richer information from trajectories. The proposed TS feature is complementary to appearance information such as HOG and objects in the scene, which can be computed separately and combined afterward in a late-fusion fashion.
There are two related works relevant to ours. One is trajectons [9], which uses a global dictionary of trajectories in a video to cluster representative trajectories as snippets.
Our TS feature is computed locally, not globally, inspired
by the success of local image descriptors [10]. The other is
Manuscript received March 2, 2017.
Manuscript revised April 27, 2017.
Manuscript publicized May 10, 2017.
The authors are with Hiroshima University, Higashihiroshima
shi, 739–8527 Japan.
DOI: 10.1587/transinf.2017EDL8049
2. Dense Trajectory
Here we briefly summarize the improved dense trajectory (iDT) [6], on which we base the proposed method. First,
the image pyramid for a particular frame at time t in a video
is constructed, and interest points are densely sampled at
each level of the pyramid. Next, interest points are tracked
in the following L frames (L = 15 by default). Then, the
iDT is computed by using local features such as HOG (His-
togram of Oriented Gradient) [10], HOF (Histogram of Op-
tical Flow), and MBH (Motion Boundary Histograms) [11]
along the trajectory tube; a stack of patches centered at the
trajectory in the frames.
For example, between two points in time t0 and tL, a trajectory Tt0,tL has points pt0, pt1, . . . , ptL in frames {t0, t1, . . . , tL}. In fact, Tt0,tL is a vector of displacements between frames rather than point coordinates, that is, Tt0,tL = (v0, v1, . . . , vL−1), where vi = pti+1 − pti. Local features such as HOGti are computed with a patch centered at pti in the frame at time ti.
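The displacement encoding just described can be sketched in a few lines (an illustrative reconstruction under our own assumptions, not the authors' code; the NumPy array representation is ours):

```python
import numpy as np

def trajectory_displacements(points):
    """Turn a tracked trajectory of L+1 points (p_0, ..., p_L) into the
    length-L vector of frame-to-frame displacements (v_0, ..., v_{L-1}),
    where v_i = p_{i+1} - p_i."""
    pts = np.asarray(points, dtype=float)  # shape (L+1, 2)
    return pts[1:] - pts[:-1]              # shape (L, 2)

# Example: a point tracked over L = 15 frames (the iDT default),
# drifting 1 px right and 0.5 px down per frame.
track = [(10 + i, 20 + 0.5 * i) for i in range(16)]
v = trajectory_displacements(track)
print(v.shape)  # (15, 2)
```

Encoding displacements rather than absolute coordinates makes the feature invariant to the trajectory's starting position in the frame.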
To improve the performance, the global motion is re-
moved by computing homography, and background trajec-
tories are removed by using a people detector. The Fisher
vector encoding [12] is used to compute an iDT feature of a
video.
3. Proposed Trajectory-Set Feature
We think that extracted trajectories might have information discriminative enough for classifying different actions, even though trajectories carry no appearance information. As shown in Fig. 1, different actions are expected to have different trajectories, regardless of the appearance, texture, or shape of the video frame contents. However, a single trajectory Tt0,tL may be severely affected by inaccurate tracking results and irregular motion in the frame.
We instead propose to aggregate nearby trajectories to
form a Trajectory-Set (TS) feature. First, a frame is divided
into non-overlapping cells of M × M pixels as shown in
Copyright © 2017 The Institute of Electronics, Information and Communication Engineers
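The cell-based aggregation that the letter begins to describe (dividing the frame into non-overlapping M × M cells and grouping nearby trajectories) could be sketched as below. This is a hypothetical illustration, since the text is cut off before the aggregation rule is given; assigning each trajectory to the cell containing its starting point, and the cell size of 32, are our assumptions:

```python
from collections import defaultdict

def group_trajectories_by_cell(trajectories, cell_size):
    """Assign each trajectory (a sequence of (x, y) points) to the
    non-overlapping cell_size x cell_size cell containing its start point."""
    cells = defaultdict(list)
    for traj in trajectories:
        x0, y0 = traj[0]
        cells[(int(x0) // cell_size, int(y0) // cell_size)].append(traj)
    return dict(cells)

# Two short trajectories starting in different 32 x 32 cells.
trajs = [[(5.0, 5.0), (6.0, 5.5)], [(40.0, 8.0), (41.0, 9.0)]]
cells = group_trajectories_by_cell(trajs, cell_size=32)
print(sorted(cells))  # [(0, 0), (1, 0)]
```

Each cell's trajectory set would then be encoded jointly, making the feature robust to the failure of any single track.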
('47916686', 'Kenji Matsui', 'kenji matsui')
('1744862', 'Toru Tamaki', 'toru tamaki')
('1688940', 'Bisser Raytchev', 'bisser raytchev')
('1686272', 'Kazufumi Kaneda', 'kazufumi kaneda')
a) E-mail: tamaki@hiroshima-u.ac.jp
bc9003ad368cb79d8a8ac2ad025718da5ea36bc4Technische Universität München
Image Understanding and Intelligent Autonomous Systems (Bildverstehen und Intelligente Autonome Systeme)
Facial Expression Recognition With A
Three-Dimensional Face Model
Complete reprint of the dissertation approved by the Faculty of Informatics of the Technische Universität München for the award of the academic degree of
Doktor der Naturwissenschaften (Doctor of Natural Sciences)
Chair: Univ.-Prof. Dr. Johann Schlichter
Examiners of the dissertation: 1. Univ.-Prof. Dr. Bernd Radig (ret.), 2. Univ.-Prof. Gudrun J. Klinker, Ph.D.
The dissertation was submitted to the Technische Universität München on 04.07.2011 and accepted by the Faculty of Informatics on 02.12.2011.
('50565622', 'Christoph Mayer', 'christoph mayer')
bc15a2fd09df7046e7e8c7c5b054d7f06c3cefe9Using Deep Autoencoders for Facial Expression
Recognition
COMSATS Institute of Information Technology, Islamabad
Information Technology University (ITU), Punjab, Lahore, Pakistan
National University of Sciences and Technology (NUST), Islamabad, Pakistan
('24040678', 'Siddique Latif', 'siddique latif')
('1734917', 'Junaid Qadir', 'junaid qadir')
engr.ussman@gmail.com, slatif.msee15seecs@seecs.edu.pk, junaid.qadir@itu.edu.pk
bcc346f4a287d96d124e1163e4447bfc47073cd8
bc27434e376db89fe0e6ef2d2fabc100d2575ec6Faceless Person Recognition;
Privacy Implications in Social Media
Max-Planck Institute for Informatics
Person A training samples.
Is this person A ?
Fig. 1: An illustration of one of the scenarios considered: can a vision system
recognise that the person in the right image is the same as the tagged person in
the left images, even when the head is obfuscated?
('2390510', 'Seong Joon Oh', 'seong joon oh')
('1798000', 'Rodrigo Benenson', 'rodrigo benenson')
('1739548', 'Mario Fritz', 'mario fritz')
('1697100', 'Bernt Schiele', 'bernt schiele')
{joon, benenson, mfritz, schiele}@mpi-inf.mpg.de
bcc172a1051be261afacdd5313619881cbe0f676978-1-5090-4117-6/17/$31.00 ©2017 IEEE
2197
ICASSP 2017
bcfeac1e5c31d83f1ed92a0783501244dde5a471
bc12715a1ddf1a540dab06bf3ac4f3a32a26b135An Analysis of the State of the Art in Multiple Object Tracking
Tracking the Trackers:
Technical University Munich, Germany
University of Adelaide, Australia
3Photogrammetry and Remote Sensing, ETH Z¨urich, Switzerland
4TU Darmstadt, Germany
('34761498', 'Anton Milan', 'anton milan')
('1803034', 'Konrad Schindler', 'konrad schindler')
('34493380', 'Stefan Roth', 'stefan roth')
bc910ca355277359130da841a589a36446616262Conditional High-order Boltzmann Machine:
A Supervised Learning Model for Relation Learning
1Center for Research on Intelligent Perception and Computing
National Laboratory of Pattern Recognition
2Center for Excellence in Brain Science and Intelligence Technology
Institute of Automation, Chinese Academy of Sciences
('39937384', 'Yan Huang', 'yan huang')
('40119691', 'Wei Wang', 'wei wang')
('22985667', 'Liang Wang', 'liang wang')
{yhuang, wangwei, wangliang}@nlpr.ia.ac.cn
bc2852fa0a002e683aad3fb0db5523d1190d0ca5
bc866c2ced533252f29cf2111dd71a6d1724bd49Sensors 2014, 14, 19561-19581; doi:10.3390/s141019561
OPEN ACCESS
sensors
ISSN 1424-8220
www.mdpi.com/journal/sensors
Article
A Multi-Modal Face Recognition Method Using Complete Local
Derivative Patterns and Depth Maps
Institute of Microelectronics, Tsinghua University, Beijing 100084, China
Tel.: +86-10-6279-4398.
External Editor: Vittorio M.N. Passaro
Received: 8 August 2014; in revised form: 3 October 2014 / Accepted: 13 October 2014 /
Published: 20 October 2014
('3817476', 'Shouyi Yin', 'shouyi yin')
('34585208', 'Xu Dai', 'xu dai')
('12263637', 'Peng Ouyang', 'peng ouyang')
('1743798', 'Leibo Liu', 'leibo liu')
('1803672', 'Shaojun Wei', 'shaojun wei')
E-Mails: daixu@gmail.com (X.D.); oyangpeng12@163.com (P.O.); liulb@tsinghua.edu.cn (L.L.);
wsj@tsinghua.edu.cn (S.W.)
* Author to whom correspondence should be addressed; E-Mail: yinsy@tsinghua.edu.cn;
bc8e11b8cdf0cfbedde798a53a0318e8d6f67e17Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)
Deep Learning for Fixed Model Reuse∗
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210023, China
Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing, 210023, China
('1708973', 'Yang Yang', 'yang yang')
('1721819', 'De-Chuan Zhan', 'de-chuan zhan')
('3750883', 'Ying Fan', 'ying fan')
('2192443', 'Yuan Jiang', 'yuan jiang')
('1692625', 'Zhi-Hua Zhou', 'zhi-hua zhou')
{yangy, zhandc, fany, jiangy, zhouzh}@lamda.nju.edu.cn
bcb99d5150d792001a7d33031a3bd1b77bea706b
bc811a66855aae130ca78cd0016fd820db1603ecTowards three-dimensional face recognition in the real
To cite this version:
HAL Id: tel-00998798
https://tel.archives-ouvertes.fr/tel-00998798
Submitted on 2 Jun 2014
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
('47144044', 'Li', 'li')
bc98027b331c090448492eb9e0b9721e812fac84Journal of Intelligent Learning Systems and Applications, 2012, 4, 266-273
http://dx.doi.org/10.4236/jilsa.2012.44027 Published Online November 2012 (http://www.SciRP.org/journal/jilsa)
Face Representation Using Combined Method of Gabor
Filters, Wavelet Transformation and DCV and Recognition
Using RBF
VHNSN College, Virudhunagar, ANJA College
Sivakasi, India.
Received April 27th, 2012; revised July 19th, 2012; accepted July 26th, 2012
('39000426', 'Kathirvalavakumar Thangairulappan', 'kathirvalavakumar thangairulappan')
('15392239', 'Jebakumari Beulah Vasanthi Jeyasingh', 'jebakumari beulah vasanthi jeyasingh')
Email: *kathirvalavakumar@yahoo.com, jebaarul07@yahoo.com
bc9af4c2c22a82d2c84ef7c7fcc69073c19b30abMoCoGAN: Decomposing Motion and Content for Video Generation
Snap Research
NVIDIA
('1715440', 'Sergey Tulyakov', 'sergey tulyakov')
('9536217', 'Ming-Yu Liu', 'ming-yu liu')
('50030951', 'Xiaodong Yang', 'xiaodong yang')
('1690538', 'Jan Kautz', 'jan kautz')
stulyakov@snap.com
{mingyul,xiaodongy,jkautz}@nvidia.com
bcac3a870501c5510df80c2a5631f371f2f6f74aCVPR
#1387
CVPR 2013 Submission #1387.
Structured Face Hallucination
Anonymous CVPR submission
Paper ID 1387
ae8d5be3caea59a21221f02ef04d49a86cb80191Published as a conference paper at ICLR 2018
SKIP RNN: LEARNING TO SKIP STATE UPDATES IN
RECURRENT NEURAL NETWORKS
†Barcelona Supercomputing Center, ‡Google Inc,
Universitat Polit`ecnica de Catalunya, Columbia University
('2447185', 'Brendan Jou', 'brendan jou')
('1711068', 'Jordi Torres', 'jordi torres')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
{victor.campos, jordi.torres}@bsc.es, bjou@google.com,
xavier.giro@upc.edu, shih.fu.chang@columbia.edu
aed321909bb87c81121c841b21d31509d6c78f69
ae936628e78db4edb8e66853f59433b8cc83594f
ae0765ebdffffd6e6cc33c7705df33b7e8478627Self-Reinforced Cascaded Regression for Face Alignment
DUT-RU International School of Information Science and Engineering, Dalian University of Technology, Dalian, China
2Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, Dalian, China
School of Mathematical Science, Dalian University of Technology, Dalian, China
('1710408', 'Xin Fan', 'xin fan')
('34469457', 'Risheng Liu', 'risheng liu')
('3453975', 'Kang Huyan', 'kang huyan')
('3013708', 'Yuyao Feng', 'yuyao feng')
('7864960', 'Zhongxuan Luo', 'zhongxuan luo')
{xin.fan, rsliu, zxluo}@dlut.edu.cn, huyankang@hotmail.com yyaofeng@gmail.com
aefc7c708269b874182a5c877fb6dae06da210d4Deep Learning of Invariant Features via Simulated
Fixations in Video
Stanford University, CA
Stanford University, CA
NEC Laboratories America, Inc., Cupertino, CA
('2860351', 'Will Y. Zou', 'will y. zou')
('1682028', 'Shenghuo Zhu', 'shenghuo zhu')
('1701538', 'Andrew Y. Ng', 'andrew y. ng')
('38701713', 'Kai Yu', 'kai yu')
{wzou, ang}@cs.stanford.edu
{zsh, kyu}@sv.nec-labs.com
ae2cf545565c157813798910401e1da5dc8a6199Mahkonen et al. EURASIP Journal on Image and Video
Processing (2018) 2018:61
https://doi.org/10.1186/s13640-018-0303-9
EURASIP Journal on Image
and Video Processing
RESEARCH
Open Access
Cascade of Boolean detector
combinations
('3292563', 'Katariina Mahkonen', 'katariina mahkonen')
aebb9649bc38e878baef082b518fa68f5cda23a5
aeaf5dbb3608922246c7cd8a619541ea9e4a7028Weakly Supervised Facial Action Unit Recognition through Adversarial Training
University of Science and Technology of China, Hefei, Anhui, China
('46217896', 'Guozhu Peng', 'guozhu peng')
('1791319', 'Shangfei Wang', 'shangfei wang')
gzpeng@mail.ustc.edu.cn, sfwang@ustc.edu.cn
ae836e2be4bb784760e43de88a68c97f4f9e44a1Semi-Supervised Dimensionality Reduction∗
1National Laboratory for Novel Software Technology
Nanjing University, Nanjing 210093, China
2Department of Computer Science and Engineering
Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
('51326748', 'Daoqiang Zhang', 'daoqiang zhang')
('46228434', 'Zhi-Hua Zhou', 'zhi-hua zhou')
('1680768', 'Songcan Chen', 'songcan chen')
dqzhang@nuaa.edu.cn
zhouzh@nju.edu.cn
s.chen@nuaa.edu.cn
ae5bb02599244d6d88c4fe466a7fdd80aeb91af4Analysis of Recognition Algorithms using Linear, Generalized Linear, and
Generalized Linear Mixed Models
Dept. of Computer Science
Colorado State University
Fort Colllins, CO 80523
Dept. of Statistics
Colorado State University
Fort Collins, CO 80523
('1757322', 'J. Ross Beveridge', 'j. ross beveridge')
('1750370', 'Geof H. Givens', 'geof h. givens')
ae18ccb35a1a5d7b22f2a5760f706b1c11bf39a9Sensing Highly Non-Rigid Objects with RGBD
Sensors for Robotic Systems
A Dissertation
Presented to
the Graduate School of
Clemson University
In Partial Fulfillment
of the Requirements for the Degree
Doctor of Philosophy
Computer Engineering
by
May 2013
Accepted by:
Dr. Stanley T. Birchfield, Committee Chair
('2181472', 'Bryan Willimon', 'bryan willimon')
('26607413', 'Ian D. Walker', 'ian d. walker')
('1724942', 'Adam W. Hoover', 'adam w. hoover')
('2171076', 'Damon L. Woodard', 'damon l. woodard')
aeeea6eec2f063c006c13be865cec0c350244e5bInduced Disgust, Happiness and Surprise: an Addition to the MMI Facial
Expression Database
Imperial College London / Twente University
Department of Computing / EEMCS
180 Queen’s Gate / Drienerlolaan 5
London / Twente
('1795528', 'Michel F. Valstar', 'michel f. valstar')
('1694605', 'Maja Pantic', 'maja pantic')
Michel.Valstar@imperial.ac.uk, M.Pantic@imperial.ac.uk
ae9257f3be9f815db8d72819332372ac59c1316bP SY CH O L O GIC AL SC I E NC E
Research Article
Deciphering the Enigmatic Face
The Importance of Facial Dynamics in Interpreting Subtle
Facial Expressions
University of Pittsburgh and 2University of British Columbia, Vancouver, British Columbia, Canada
('2059653', 'Zara Ambadar', 'zara ambadar')
ae89b7748d25878c4dc17bdaa39dd63e9d442a0dOn evaluating face tracks in movies
To cite this version:
in movies. IEEE International Conference on Image Processing (ICIP 2013), Sep 2013, Melbourne,
Australia. 2013.
HAL Id: hal-00870059
https://hal.inria.fr/hal-00870059
Submitted on 4 Oct 2013
HAL is a multi-disciplinary open access
archive for the deposit and dissemination of sci-
entific research documents, whether they are pub-
lished or not. The documents may come from
teaching and research institutions in France or
abroad, or from public or private research centers
('2889451', 'Alexey Ozerov', 'alexey ozerov')
('2712091', 'Jean-Ronan Vigouroux', 'jean-ronan vigouroux')
('39255836', 'Louis Chevallier', 'louis chevallier')
('1799777', 'Patrick Pérez', 'patrick pérez')
ae1de0359f4ed53918824271c888b7b36b8a5d41Low-cost Automatic Inpainting for Artifact Suppression in Facial Images
Thomaz4
1Scientific Visualization and Computer Graphics, University of Groningen, Nijenborgh 9, Groningen, The Netherlands
2Department of Computing, National Laboratory of Scientific Computation, Petrópolis, Brazil
3Paraná Federal University, Curitiba, Brazil
4University Center of FEI, São Bernardo do Campo, Brazil
Keywords:
Image inpainting, Face reconstruction, Statistical Decision, Image Quality Index
('1686665', 'Alexandru Telea', 'alexandru telea'){a.sobiecki, a.c.telea}@rug.nl, gilson@lncc.br, neves@ufpr.br, cet@fei.edu.br
ae4390873485c9432899977499c3bf17886fa149FACIAL EXPRESSION RECOGNITION USING
DIGITALISED FACIAL FEATURES BASED ON
ACTIVE SHAPE MODEL
Institute for Arts, Science and Technology
Glyndwr University
Wrexham, United Kingdom
('39048426', 'Nan Sun', 'nan sun')
('11832393', 'Zheng Chen', 'zheng chen')
('1818364', 'Richard Day', 'richard day')
bruce.n.sun@gmail.com1
z.chen@glyndwr.ac.uk2
r.day@glyndwr.ac.uk3
aeff403079022683b233decda556a6aee3225065DeepFace: Face Generation using Deep Learning ('31560532', 'Hardie Cate', 'hardie cate')
('6415321', 'Fahim Dalvi', 'fahim dalvi')
('8815003', 'Zeshan Hussain', 'zeshan hussain')
ccate@stanford.edu
fdalvi@cs.stanford.edu
zeshanmh@stanford.edu
ae753fd46a744725424690d22d0d00fb05e53350
Describing Clothing by Semantic Attributes
Anonymous ECCV submission
Paper ID 727
aea4128ba18689ff1af27b90c111bbd34013f8d5Efficient k-Support Matrix Pursuit
National University of Singapore
School of Software, Sun Yat-sen University, China
School of Information Science and Technology, Sun Yat-sen University, China
School of Computer Science, South China Normal University, China
('2356867', 'Hanjiang Lai', 'hanjiang lai')
('2493641', 'Yan Pan', 'yan pan')
('33224509', 'Canyi Lu', 'canyi lu')
('1704995', 'Yong Tang', 'yong tang')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
{laihanj,canyilu}@gmail.com, panyan5@mail.sysu.edu.cn,
ytang@scnu.edu.cn, eleyans@nus.edu.sg
ae2c71080b0e17dee4e5a019d87585f2987f0508Research Paper: Emotional Face Recognition in Children
With Attention Deficit/Hyperactivity Disorder: Evidence
From Event Related Gamma Oscillation
CrossMark
School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran, Iran
School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
Research Center for Cognitive and Behavioral Sciences, Tehran University of Medical Sciences, Tehran, Iran
Amirkabir University of Technology, Tehran, Iran
Use your device to scan
and read the article online
Citation: Sarraf Razavi, M., Tehranidoost, M., Ghassemi, F., Purabassi, P., & Taymourtash, A. (2017). Emotional Face Rec-
ognition in Children With Attention Deficit/Hyperactivity Disorder: Evidence From Event Related Gamma Oscillation. Basic
and Clinical Neuroscience, 8(5):419-426. https://doi.org/10.18869/NIRP.BCN.8.5.419
Article info:
Received: 03 Feb. 2017
First Revision: 29 Feb. 2017
Accepted: 11 Jul. 2017
Key Words:
Emotional face
recognition, Event-
Related Oscillation
(ERO), Gamma band
activity, Attention Deficit
Hyperactivity Disorder
(ADHD)
A B S T R A C T
Introduction: Children with attention-deficit/hyperactivity disorder (ADHD) show impairments in
emotional relationships, which may stem from problems in emotional processing.
The present study investigated neural correlates of early stages of emotional face processing in
this group compared with typically developing children using the Gamma Band Activity (GBA).
Methods: A total of 19 children diagnosed with ADHD (Combined type) based on DSM-IV
classification were compared with 19 typically developing children matched on age, gender, and
IQ. The participants performed an emotional face recognition task while their brain activities were
recorded using an event-related oscillation procedure.
Results: Compared with the typically developing group, children with ADHD showed a significant
reduction in gamma band activity, which is thought to reflect early perceptual emotion
discrimination, for happy and angry emotions (P<0.05).
Conclusion: The present study supports the notion that individuals with ADHD have
impairments in the early stages of emotion processing, which can lead to misinterpretation of
emotional faces.
1. Introduction
ADHD is a common neurodevelopmental
disorder characterized by inattentiveness
and hyperactivity/impulsivity (American
Psychiatric Association, 2013). Individu-
als with ADHD also show problems in social and emo-
tional functions, including the effective assessment of
the emotional state of others. Accurate assessment of
facial expressions is important for adaptive behavior in social
interactions (Cadesky, Mota, & Schachar, 2000; Corbett
& Glidden, 2000). Based on the evidence, frontotem-
poral-posterior and fronto striatal cerebellar systems
are involved in emotional functions. These regions may
contribute to impairments of emotional recognition in
ADHD (Corbett & Glidden, 2000; Dickstein, Bannon,
Xavier Castellanos, & Milham, 2006; Durston, Van
Belle, & De Zeeuw, 2011).
* Corresponding Author:
Amirkabir University of Technology, Tehran, Iran
Tel:+98 (912) 3260661
Basic and Clinical Neuroscience, September-October 2017, Volume 8, Number 5
('29928144', 'Mahdiyeh Sarraf Razavi', 'mahdiyeh sarraf razavi')
('7171067', 'Mehdi Tehranidoost', 'mehdi tehranidoost')
('34494047', 'Farnaz Ghassemi', 'farnaz ghassemi')
('29839761', 'Parivash Purabassi', 'parivash purabassi')
('29933673', 'Athena Taymourtash', 'athena taymourtash')
E-mail: ghassemi@aut.ac.ir
ae4e2c81c8a8354c93c4b21442c26773352935dd
ae85c822c6aec8b0f67762c625a73a5d08f5060dThis is the author's version of an article that has been published in this journal. Changes were made to this version by the publisher prior to publication.
The final version of record is available at http://dx.doi.org/10.1109/TPAMI.2014.2353624
IEEE TRANSACTION ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. M, NO. N, MONTH YEAR
Retrieving Similar Styles to Parse Clothing
('1721910', 'Kota Yamaguchi', 'kota yamaguchi')
('1772294', 'M. Hadi Kiapour', 'm. hadi kiapour')
('35258350', 'Luis E. Ortiz', 'luis e. ortiz')
('1685538', 'Tamara L. Berg', 'tamara l. berg')
ae5f32e489c4d52e7311b66060c7381d932f4193Appearance-and-Relation Networks for Video Classification
State Key Laboratory for Novel Software Technology, Nanjing University, China
2Computer Vision Laboratory, ETH Zurich, Switzerland
3Google Research
('33345248', 'Limin Wang', 'limin wang')
('47113208', 'Wei Li', 'wei li')
('50135099', 'Wen Li', 'wen li')
('1681236', 'Luc Van Gool', 'luc van gool')
ae71f69f1db840e0aa17f8c814316f0bd0f6fbbfContents lists available at ScienceDirect
journal homepage: www.elsevier.com/locate/comphumbeh
Full length article
That personal profile image might jeopardize your rental opportunity!
On the relative impact of the seller's facial expressions upon buying
behavior on Airbnb™*
a Faculty of Technology, Westerdals Oslo School of Arts, Communication and Technology, Oslo, Norway
b School of Business, Reykjavik University, Reykjavik, Iceland
c Cardiff Business School, Cardiff University, Cardiff, United Kingdom
Article info
Abstract
Article history:
Received 29 November 2016
Received in revised form
2 February 2017
Accepted 9 February 2017
Available online 10 February 2017
Keywords:
Sharing economy
Peer-to-peer
Facial expressions
Evolutionary psychology
Approach and avoidance
Conjoint study
Airbnb is an online marketplace for peer-to-peer accommodation rental services. In contrast to tradi-
tional rental services, personal profile images, i.e. the sellers' facial images, are present along with the
housing on offer. This study aims to investigate the impact of a seller's facial image and their expression
upon buyers' behavior in this context. The impact of facial expressions was investigated together with
other relevant variables (price and customer ratings). Findings from a conjoint study (n = 139) show that
the impact of a seller's facial expression on buying behavior in an online peer-to-peer context is
significant. A negative facial expression or the absence of a facial image (head silhouette) weakens
approach tendencies and evokes avoidance of exploring a specific web page on Airbnb, and
simultaneously decreases the likelihood to rent; the reverse was true for neutral and positive facial
expressions. Negative and positive facial expressions had a stronger impact on likelihood to rent for women than for men.
Further analysis shows that the absence of facial image and an angry facial expression cannot be
compensated for by a low price and top customer ratings related to likelihood to rent. Practitioners
should keep in mind that the presence/absence of facial images and their inherent expressions have a
significant impact in the peer-to-peer accommodation rental services.
© 2017 Elsevier Ltd. All rights reserved.
1. Introduction
The sharing economy, characterized by peer-to-peer trans-
actions, has seen immense growth recently. These marketplaces are
defined by direct transactions between individuals (buyers and
sellers), while the marketplace itself is provided by a third party
(Botsman & Rogers, 2011). According to a recent survey by Penn
Schoen Berland (2016), 22% of American adults have already
offered something to this market, and 42% had used the service to
buy a product or a service. PricewaterhouseCoopers (PwC) (2014),
has predicted that these sharing economy sectors will be worth
* The authors express their thanks to Dr. R. G. Vishnu Menon for assistance with
the conjoint analysis.
* Corresponding author. Westerdals Oslo School of Arts, Communication and
Technology, Faculty of Technology, Christian Kroghs Gate 32, 0186, Oslo, Norway.
http://dx.doi.org/10.1016/j.chb.2017.02.029
0747-5632/© 2017 Elsevier Ltd. All rights reserved.
around $335 billion by 2025. Their research further indicates that
the most important growth sectors are lending and crowd funding,
online staffing, and peer-to-peer accommodation. Participants in
the peer-to-peer market tend to be motivated by new economic,
environmental, and social factors (Bucher, Fieseler, & Lutz, 2016;
Böcker & Meelen, 2016; Schor, 2014) as this marketplace has
some additional attributes compared to more traditional forms of
commerce. The behavior of buyers on the peer-to-peer marketplace
is, however, not well understood.
Airbnb is a peer-to-peer platform that facilitates accommoda-
tion rental services. This marketplace offers intangible experienced
goods (Levitt, 1981, pp. 94-102), which are typically produced and
consumed simultaneously (Grönroos, 1978). The sellers are co-
producers of the service experience. Thus, the quality of renting
an apartment on Airbnb cannot be verified before the buyer has
started using the service. The sellers on Airbnb are, therefore, an
integrated part of the service that is delivered, and are expected to
fulfill the buyer's needs throughout their stay. Consequently,
('2372119', 'Asle Fagerstrøm', 'asle fagerstrøm')
('10665177', 'Sanchit Pawar', 'sanchit pawar')
('3617093', 'Valdimar Sigurdsson', 'valdimar sigurdsson')
('3232722', 'Mirella Yani-De-Soriano', 'mirella yani-de-soriano')
E-mail address: asle.fagerstrom@westerdals.no (A. Fagerstrøm).
d893f75206b122973cdbf2532f506912ccd6fbe0Facial Expressions with Some Mixed
Expressions Recognition Using Neural
Networks
Dr. R. Parthasarathi, V. Lokeswar Reddy, K. Vishnuthej, G. Vishnu Vandan
Department of Information Technology
Pondicherry Engineering College
Puducherry-605014, India
d861c658db2fd03558f44c265c328b53e492383aAutomated Face Extraction and Normalization of 3D Mesh Data
('10423763', 'Jia Wu', 'jia wu')
('1905646', 'Raymond Tse', 'raymond tse')
('1809809', 'Linda G. Shapiro', 'linda g. shapiro')
d84a48f7d242d73b32a9286f9b148f5575acf227Global and Local Consistent Age Generative
Adversarial Networks
Center for Research on Intelligent Perception and Computing, CASIA, Beijing, China
National Laboratory of Pattern Recognition, CASIA, Beijing, China
University of Chinese Academy of Sciences, Beijing, China
('2112221', 'Peipei Li', 'peipei li')
('33079499', 'Yibo Hu', 'yibo hu')
('39763795', 'Qi Li', 'qi li')
('1705643', 'Ran He', 'ran he')
('1757186', 'Zhenan Sun', 'zhenan sun')
Email: {peipei.li, yibo.hu}@cripac.ia.ac.cn, {qli, rhe, znsun}@nlpr.ia.ac.cn
d8f0bda19a345fac81a1d560d7db73f2b4868836UNIVERSITY OF CALIFORNIA
RIVERSIDE
Online Activity Understanding and Labeling in Natural Videos
A Dissertation submitted in partial satisfaction
of the requirements for the degree of
Doctor of Philosophy
in
Computer Science
by
August 2016
Dissertation Committee:
Dr. Amit K. Roy-Chowdhury, Chairperson
Dr. Eamonn Keogh
Dr. Evangelos Christidis
Dr. Christian Shelton
('38514801', 'Mahmudul Hasan', 'mahmudul hasan')
d82b93f848d5442f82154a6011d26df8a9cd00e7NEURAL NETWORK BASED AGE CLASSIFICATION USING
LINEAR WAVELET TRANSFORMS
1Department of Computer Science & Engineering,
Sathyabama University Old Mamallapuram Road, Chennai, India
Electronics Engineering, National Institute of Technical Teachers
Training & Research, Taramani, Chennai, India
E-mail : 1nithyaranjith2002@yahoo.co.in, 2gkvel@rediffmail.com
d8722ffbca906a685abe57f3b7b9c1b542adfa0cUniversity of Twente
Faculty: Electrical Engineering, Mathematics and Computer Science
Department: Computer Science
Group: Human Media Interaction
Facial Expression Analysis for Human
Computer Interaction
Recognizing emotions in an intelligent tutoring system by facial
expression analysis from a video stream
M. Ghijsen
November 2004
Examination committee:
Dr. D.K.J. Heylen
Prof.dr.ir. A Nijholt
Dr.ir. H.J.A. op den Akker
Dr. M. Poel
Ir. R.J. Rienks
d8896861126b7fd5d2ceb6fed8505a6dff83414fIn-Plane Rotational Alignment of Faces by Eye and Eye-Pair Detection
M.F. Karaaba1, O. Surinta1, L.R.B. Schomaker1 and M.A. Wiering1
Institute of Artificial Intelligence and Cognitive Engineering (ALICE), University of Groningen
Nijenborgh 9, Groningen 9747AG, The Netherlands
Keywords:
Eye-pair Detection, Eye Detection, Face Alignment, Face Recognition, Support Vector Machine
{m.f.karaaba, o.surinta, l.r.b.schomaker, m.a.wiering}@rug.nl
d83d2fb5403c823287f5889b44c1971f049a1c93Motiv Emot
DOI 10.1007/s11031-013-9353-6
O R I G I N A L P A P E R
Introducing the sick face
© Springer Science+Business Media New York 2013
('3947094', 'Sherri C. Widen', 'sherri c. widen')
d8b568392970b68794a55c090c4dd2d7f90909d2PDA Face Recognition System
Using Advanced Correlation
Filters
Chee Kiat Ng
2005
Advisor: Prof. Khosla/Reviere
d83ae5926b05894fcda0bc89bdc621e4f21272daversion of the following thesis:
Frugal Forests: Learning a Dynamic and Cost Sensitive
Feature Extraction Policy for Anytime Activity Classification
('1794409', 'Kristen Grauman', 'kristen grauman')
('1728389', 'Peter Stone', 'peter stone')
d86fabd4498c8feaed80ec342d254fb877fb92f5Y. GOUTSU: REGION-OBJECT RELEVANCE-GUIDED VRD
Region-Object Relevance-Guided
Visual Relationship Detection
National Institute of Informatics
Tokyo, Japan
('2897806', 'Yusuke Goutsu', 'yusuke goutsu')
goutsu@nii.ac.jp
d8bf148899f09a0aad18a196ce729384a4464e2bFACIAL EXPRESSION RECOGNITION AND EXPRESSION
INTENSITY ESTIMATION
A dissertation submitted to the
Graduate School—New Brunswick
Rutgers, The State University of New Jersey
in partial fulfillment of the requirements
for the degree of
Doctor of Philosophy
Graduate Program in Computer Science
Written under the direction of
and approved by
New Brunswick, New Jersey
May, 2011
('1683829', 'PENG YANG', 'peng yang')
('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')
d80a3d1f3a438e02a6685e66ee908446766fefa9ZHANG ET AL.: QUANTIFYING FACIAL AGE BY POSTERIOR OF AGE COMPARISONS
Quantifying Facial Age by Posterior of
Age Comparisons
1 SenseTime Group Limited
2 Department of Information Engineering,
The Chinese University of Hong Kong
('6693591', 'Yunxuan Zhang', 'yunxuan zhang')
('46457827', 'Li Liu', 'li liu')
('46651787', 'Cheng Li', 'cheng li')
('1717179', 'Chen Change Loy', 'chen change loy')
zhangyunxuan@sensetime.com
liuli@sensetime.com
chengli@sensetime.com
ccloy@ie.cuhk.edu.hk
d850aff9d10a01ad5f1d8a1b489fbb3998d0d80eUNIVERSITY OF CALIFORNIA
IRVINE
Recognizing and Segmenting Objects in the Presence of Occlusion and Clutter
DISSERTATION
submitted in partial satisfaction of the requirements
for the degree of
DOCTOR OF PHILOSOPHY
in Computer Science
by
Dissertation Committee:
Professor Charless Fowlkes, Chair
Professor Deva Ramanan
Professor Alexander Ihler
2016
('1898210', 'Golnaz Ghiasi', 'golnaz ghiasi')
d89cfed36ce8ffdb2097c2ba2dac3e2b2501100dRobust Face Recognition via Multimodal Deep
Face Representation
('37990555', 'Changxing Ding', 'changxing ding')
('1692693', 'Dacheng Tao', 'dacheng tao')
ab8f9a6bd8f582501c6b41c0e7179546e21c5e91Nonparametric Face Verification Using a Novel
Face Representation
('3326805', 'Hae Jong Seo', 'hae jong seo')
('1718280', 'Peyman Milanfar', 'peyman milanfar')
ab58a7db32683aea9281c188c756ddf969b4cdbdEfficient Solvers for Sparse Subspace Clustering ('50333204', 'Stephen Becker', 'stephen becker')
ab734bac3994b00bf97ce22b9abc881ee8c12918Log-Euclidean Metric Learning on Symmetric Positive Definite Manifold
with Application to Image Set Classification
†Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing, 100190, China
University of Chinese Academy of Sciences, Beijing, 100049, China
§Cooperative Medianet Innovation Center, China
('7945869', 'Zhiwu Huang', 'zhiwu huang')
('3373117', 'Ruiping Wang', 'ruiping wang')
('1685914', 'Shiguang Shan', 'shiguang shan')
('3046528', 'Xianqiu Li', 'xianqiu li')
('1710220', 'Xilin Chen', 'xilin chen')
ZHIWU.HUANG@VIPL.ICT.AC.CN
WANGRUIPING@ICT.AC.CN
SGSHAN@ICT.AC.CN
XIANQIU.LI@VIPL.ICT.AC.CN
XLCHEN@ICT.AC.CN
aba770a7c45e82b2f9de6ea2a12738722566a149Face Recognition in the Scrambled Domain via Salience-Aware
Ensembles of Many Kernels
Jiang, R., Al-Maadeed, S., Bouridane, A., Crookes, D., & Celebi, M. E. (2016). Face Recognition in the
Scrambled Domain via Salience-Aware Ensembles of Many Kernels. IEEE Transactions on Information
Forensics and Security, 11(8), 1807-1817. DOI: 10.1109/TIFS.2016.2555792
Published in:
Document Version:
Peer reviewed version
Queen's University Belfast - Research Portal
Link to publication record in Queen's University Belfast Research Portal
Publisher rights
© 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/
republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists,
or reuse of any copyrighted components of this work in other works.
General rights
Copyright for the publications made accessible via the Queen's University Belfast Research Portal is retained by the author(s) and / or other
copyright owners and it is a condition of accessing these publications that users recognise and abide by the legal requirements associated
with these rights.
Take down policy
The Research Portal is Queen's institutional repository that provides access to Queen's research output. Every effort has been made to
ensure that content in the Research Portal does not infringe any person's rights, or applicable UK laws. If you discover content in the
Research Portal that you believe breaches copyright or violates any law, please contact openaccess@qub.ac.uk.
Download date: 05 Nov. 2018
ab0f9bc35b777eaefff735cb0dd0663f0c34ad31Semi-Supervised Learning of Geospatial Objects
Through Multi-Modal Data Integration
Electrical Engineering and Computer Science
University of California, Merced, CA
('1698559', 'Yi Yang', 'yi yang')
Email: snewsam@ucmerced.edu
abb396490ba8b112f10fbb20a0a8ce69737cd492Robust Face Recognition Using Color
Information
New Jersey Institute of Technology
('2047820', 'Zhiming Liu', 'zhiming liu')
('39664966', 'Chengjun Liu', 'chengjun liu')
Newark, New Jersey 07102, USA. Email: zl9@njit.edu
ab989225a55a2ddcd3b60a99672e78e4373c0df1Sample, Computation vs Storage Tradeoffs for
Classification Using Tensor Subspace Models
('9039699', 'Mohammadhossein Chaghazardi', 'mohammadhossein chaghazardi')
('1980683', 'Shuchin Aeron', 'shuchin aeron')
abac0fa75281c9a0690bf67586280ed145682422Describable Visual Attributes for Face Images
Submitted in partial fulfillment of the
requirements for the degree
of Doctor of Philosophy
in the Graduate School of Arts and Sciences
COLUMBIA UNIVERSITY
2011
('40192613', 'Neeraj Kumar', 'neeraj kumar')
ab6776f500ed1ab23b7789599f3a6153cdac84f7International Journal of Scientific & Engineering Research, Volume 6, Issue 4, April-2015 1212
ISSN 2229-5518
A Survey on Various Facial Expression
Techniques
('2122870', 'Joy Bhattacharya', 'joy bhattacharya')
ab1719f573a6c121d7d7da5053fe5f12de0182e7Combining Visual Recognition
and Computational Linguistics
Linguistic Knowledge for Visual Recognition
and Natural Language Descriptions
of Visual Content
Thesis for obtaining the title of
Doctor of Engineering Science
(Dr.-Ing.)
of the Faculty of Natural Science and Technology I
of Saarland University
by
Saarbrücken
March 2014
('34849128', 'Marcus Rohrbach', 'marcus rohrbach')
ab2b09b65fdc91a711e424524e666fc75aae7a51Multi-modal Biomarkers to Discriminate Cognitive State*
1MIT Lincoln Laboratory, Lexington, Massachusetts, USA
2USARIEM, 3NSRDEC
1. Introduction
Multimodal biomarkers based on behavioral, neurophysiological, and cognitive measurements have
recently gained increasing popularity in the detection of cognitive stress- and neurological-based
disorders. Such conditions significantly and adversely affect human performance and quality
of life for a large fraction of the world's population. Example modalities used in detection of these
conditions include voice, facial expression, physiology, eye tracking, gait, and EEG analysis.
Toward the goal of finding simple, noninvasive means to detect, predict and monitor cognitive
stress and neurological conditions, MIT Lincoln Laboratory is developing biomarkers that satisfy
three criteria. First, we seek biomarkers that reflect core components of cognitive status such as
working memory capacity, processing speed, attention, and arousal. Second, and as importantly, we
seek biomarkers that reflect timing and coordination relations both within components of each
modality and across different modalities. This is based on the hypothesis that neural coordination
across different parts of the brain is essential in cognition (Figure 1). An example of timing and
coordination within a modality is the set of finely timed and synchronized physiological
components of speech production, while an example of coordination across modalities is the timing
and synchrony that occurs across speech and facial expression while speaking. Third, we seek
multimodal biomarkers that contribute in a complementary fashion under various channel and
background conditions. In this chapter, as an illustration of this biomarker approach we focus on
cognitive stress and the particular case of detecting different cognitive load levels. We also briefly
show how similar feature-extraction principles can be applied to a neurological condition through
the example of major depressive disorder (MDD). MDD is one of several neurological disorders
where multi-modal biomarkers based on principles of timing and coordination are important for
detection [11]-[22]. In our cognitive load experiments, we use two easily obtained noninvasive
modalities, voice and face, and show how these two modalities can be fused to produce results on
par with more invasive, “gold-standard” EEG measurements. Vocal and facial biomarkers will also
be used in our MDD case study. In both application areas we focus on timing and coordination
relations within the components of each modality.
* Distribution A: public release. This work is sponsored by the Assistant Secretary of Defense for Research & Engineering under Air Force contract
#FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States
Government.
('1718470', 'Thomas F. Quatieri', 'thomas f. quatieri')
('48628822', 'James R. Williamson', 'james r. williamson')
('2794344', 'Christopher J. Smalt', 'christopher j. smalt')
('38799981', 'Tejash Patel', 'tejash patel')
('2894484', 'Brian S. Helfer', 'brian s. helfer')
('3051832', 'Daryush D. Mehta', 'daryush d. mehta')
('35718569', 'Kristin Heaton', 'kristin heaton')
('47534051', 'Marianna Eddy', 'marianna eddy')
('49739272', 'Joseph Moran', 'joseph moran')
[quatieri,jrw]@ll.mit.edu
ab87dfccb1818bdf0b41d732da1f9335b43b74aeSUBMITTED TO IEEE TRANSACTIONS ON SIGNAL PROCESSING
Structured Dictionary Learning for Classification
('36657778', 'Yuanming Suo', 'yuanming suo')
('31507586', 'Minh Dao', 'minh dao')
('35210356', 'Umamahesh Srinivas', 'umamahesh srinivas')
('3346079', 'Vishal Monga', 'vishal monga')
('1709073', 'Trac D. Tran', 'trac d. tran')
abc1ef570bb2d7ea92cbe69e101eefa9a53e1d72Abductive reasoning in description logic
exploiting spatial concrete domains
for image interpretation
1. LTCI, Télécom ParisTech, Université Paris-Saclay, Paris, France
Université Paris-Dauphine, PSL Research University, CNRS, UMR LAMSADE, 75016 Paris, France
ABSTRACT. Image interpretation aims not only to detect and recognize objects in a scene but
also to provide a semantic description that accounts for contextual information across the
whole scene. The image interpretation problem can be formalized as an abductive reasoning
problem, i.e., as the search for the best explanation using a knowledge base. In this work, we
present a new approach using a tableau method to generate and select possible explanations
of a given image when the knowledge, expressed in a description logic, includes concepts
describing the objects as well as the spatial relations between them. The best explanation is
selected by exploiting concrete domains to evaluate the degree of satisfaction of the spatial
relations between the objects.
('4156317', 'Yifan Yang', 'yifan yang')
('1773774', 'Jamal Atif', 'jamal atif')
('1695917', 'Isabelle Bloch', 'isabelle bloch')
{yifan.yang,isabelle.bloch}@telecom-paristech.fr
jamal.atif@dauphine.fr
abba1bf1348a6f1b70a26aac237338ee66764458Facial Action Unit Detection Using Attention and Relation Learning
Shanghai Jiao Tong University, China
School of Computer Science and Technology, Tianjin University, China
School of Computer Science and Engineering, Nanyang Technological University, Singapore
4 Tencent YouTu, China
School of Computer Science and Software Engineering, East China Normal University, China
('3403352', 'Zhiwen Shao', 'zhiwen shao')
('1771215', 'Zhilei Liu', 'zhilei liu')
('1688642', 'Jianfei Cai', 'jianfei cai')
('10609538', 'Yunsheng Wu', 'yunsheng wu')
('8452947', 'Lizhuang Ma', 'lizhuang ma')
shaozhiwen@sjtu.edu.cn, zhileiliu@tju.edu.cn, asjfcai@ntu.edu.sg
simonwu@tencent.com, ma-lz@cs.sjtu.edu.cn
abdd17e411a7bfe043f280abd4e560a04ab6e992Pose-Robust Face Recognition via Deep Residual Equivariant Mapping
The Chinese University of Hong Kong
2SenseTime Research
('9963152', 'Kaidi Cao', 'kaidi cao')
('46651787', 'Cheng Li', 'cheng li')
{ry017, ccloy, xtang}@ie.cuhk.edu.hk
{caokaidi, chengli}@sensetime.com
ab1dfcd96654af0bf6e805ffa2de0f55a73c025d
abeda55a7be0bbe25a25139fb9a3d823215d7536UNIVERSITAT POLITÈCNICA DE CATALUNYA
Doctoral programme: AUTOMÀTICA, ROBÒTICA I VISIÓ
Doctoral thesis: Understanding Human-Centric Images: From Geometry to Fashion
Edgar Simo Serra
Directors: Francesc Moreno Noguer, Carme Torras
May 2015
ab427f0c7d4b0eb22c045392107509451165b2baLEARNING SCALE RANGES FOR THE EXTRACTION OF REGIONS OF
INTEREST
Western Kentucky University
Department of Mathematics and Computer Science
College Heights Blvd, Bowling Green, KY
('1682467', 'Qi Li', 'qi li')
('2446364', 'Zachary Bessinger', 'zachary bessinger')
ab1900b5d7cf3317d17193e9327d57b97e24d2fc
ab8fb278db4405f7db08fa59404d9dd22d38bc83UNIVERSITÉ DE GENÈVE
Département d'Informatique
FACULTÉ DES SCIENCES
Implicit and Automated Emotional
Tagging of Videos
THÈSE
présenté à la Faculté des sciences de l'Université de Genève
pour obtenir le grade de Docteur ès sciences, mention informatique
par
de
Téhéran (IRAN)
Thèse No 4368
GENÈVE
Repro-Mail - Université de Genève
2011
('1809085', 'Thierry Pun', 'thierry pun')
('2463695', 'Mohammad SOLEYMANI', 'mohammad soleymani')
e5e5f31b81ed6526c26d277056b6ab4909a56c6cRevisit Multinomial Logistic Regression in Deep Learning:
Data Dependent Model Initialization for Image Recognition
University of Illinois at Urbana-Champaign
2Ping An Property&Casualty Insurance Company of China,
3Microsoft
('50563570', 'Bowen Cheng', 'bowen cheng')
('1972288', 'Rong Xiao', 'rong xiao')
('3133575', 'Yandong Guo', 'yandong guo')
('1689532', 'Yuxiao Hu', 'yuxiao hu')
('38504661', 'Jianfeng Wang', 'jianfeng wang')
('48571185', 'Lei Zhang', 'lei zhang')
1bcheng9@illinois.edu
2xiaorong283@pingan.com.cn
3yandong.guo@live.com, yuxiaohu@msn.com, {jianfw, leizhang}@microsoft.com
e5737ffc4e74374b0c799b65afdbf0304ff344cb
e506cdb250eba5e70c5147eb477fbd069714765bHeterogeneous Face Recognition
By
Brendan F. Klare
A Dissertation
Submitted to
Michigan State University
in partial fulfillment of the requirements
for the degree of
Doctor of Philosophy
Computer Science and Engineering
2012
e572c42d8ef2e0fadedbaae77c8dfe05c4933fbfA Visual Historical Record of American High School Yearbooks
A Century of Portraits:
University of California Berkeley
Brown University
University of California Berkeley
('2361255', 'Shiry Ginosar', 'shiry ginosar')
('2660664', 'Kate Rakelly', 'kate rakelly')
('33385802', 'Sarah Sachs', 'sarah sachs')
('2130100', 'Brian Yin', 'brian yin')
('1763086', 'Alexei A. Efros', 'alexei a. efros')
e5823a9d3e5e33e119576a34cb8aed497af20eeaDocFace+: ID Document to Selfie* Matching
('9644181', 'Yichun Shi', 'yichun shi')
('6680444', 'Anil K. Jain', 'anil k. jain')
e5dfd17dbfc9647ccc7323a5d62f65721b318ba9
e510f2412999399149d8635a83eca89c338a99a1Journal of Advanced Computer Science and Technology, 1 (4) (2012) 266-283
© Science Publishing Corporation
www.sciencepubco.com/index.php/JACST
Face Recognition using Block-Based
DCT Feature Extraction
1Department of Electronics and Communication Engineering,
M S Ramaiah Institute of Technology, Bangalore, Karnataka, India
2Department of Electronics and Communication Engineering,
S J B Institute of Technology, Bangalore, Karnataka, India
('2472608', 'K Manikantan', 'k manikantan')
('3389602', 'Vaishnavi Govindarajan', 'vaishnavi govindarajan')
('35084871', 'V V S Sasi Kiran', 'v v s sasi kiran')
('1687245', 'S Ramachandran', 's ramachandran')
E-mail: kmanikantan@msrit.edu
E-mail: vaish.india@gmail.com
E-mail: sasikiran.f4@gmail.com
E-mail: ramachandr@gmail.com
e56c4c41bfa5ec2d86c7c9dd631a9a69cdc05e69Human Activity Recognition Based on Wearable
Sensor Data: A Standardization of the
State-of-the-Art
Smart Surveillance Interest Group, Computer Science Department
Universidade Federal de Minas Gerais, Brazil
('2954974', 'Antonio C. Nazare', 'antonio c. nazare')
('1679142', 'William Robson Schwartz', 'william robson schwartz')
Email: {arturjordao, antonio.nazare, jessicasena, william}@dcc.ufmg.br
e59813940c5c83b1ce63f3f451d03d34d2f68082Faculty of Informatics - Papers (Archive)
Faculty of Engineering and Information Sciences
University of Wollongong
Research Online
2008
A real-time facial expression recognition system for
online games
Publication Details
Zhan, C., Li, W., Ogunbona, P. & Safaei, F. (2008). A real-time facial expression recognition system for online games. International
Journal of Computer Games Technology, 2008 (Article No. 10), 1-7.
Research Online is the open access institutional repository for the
University of Wollongong. For further information contact the UOW
('3283367', 'Ce Zhan', 'ce zhan')
('1685696', 'Wanqing Li', 'wanqing li')
('1719314', 'Philip Ogunbona', 'philip ogunbona')
('1803733', 'Farzad Safaei', 'farzad safaei')
University of Wollongong, czhan@uow.edu.au
University of Wollongong, wanqing@uow.edu.au
University of Wollongong, philipo@uow.edu.au
University of Wollongong, farzad@uow.edu.au
Library: research-pubs@uow.edu.au
e5b301ee349ba8e96ea6c71782295c4f06be6c31The Case for Onloading Continuous High-Datarate Perception to the Phone
University of Washington
Microsoft Research
('1871038', 'Seungyeop Han', 'seungyeop han')
('3041721', 'Matthai Philipose', 'matthai philipose')
e569f4bd41895028c4c009e5b46b935056188e91SIMONYAN et al.: FISHER VECTOR FACES IN THE WILD
Fisher Vector Faces in the Wild
Visual Geometry Group
Department of Engineering Science
University of Oxford
Omkar M. Parkhi
Andrea Vedaldi
Andrew Zisserman
('34838386', 'Karen Simonyan', 'karen simonyan')
karen@robots.ox.ac.uk
omkar@robots.ox.ac.uk
vedaldi@robots.ox.ac.uk
az@robots.ox.ac.uk
e5fbffd3449a2bfe0acb4ec339a19f5b88fff783WILES, KOEPKE, ZISSERMAN: SELF-SUP. FACIAL ATTRIBUTE FROM VIDEO
Self-supervised learning of a facial attribute
embedding from video
Visual Geometry Group
University of Oxford
Oxford, UK
('8792285', 'Olivia Wiles', 'olivia wiles')
('47104886', 'A. Sophia Koepke', 'a. sophia koepke')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
ow@robots.ox.ac.uk
koepke@robots.ox.ac.uk
az@robots.ox.ac.uk
e5342233141a1d3858ed99ccd8ca0fead519f58bISSN: 2277 – 9043
International Journal of Advanced Research in Computer Science and Electronics Engineering (IJARCSEE)
Volume 2, Issue 2, February 2013
Finger print and Palm print based Multibiometric
Authentication System with GUI Interface
PG Scholar, Dr.Pauls Engineering College, Villupuram District, Tamilnadu, India
Dr.Pauls Engineering College, Villupuram District, Tamilnadu, India
e52be9a083e621d9ed29c8e9914451a6a327ff59UvA-DARE (Digital Academic Repository)
Communication and Automatic Interpretation of Affect from Facial Expressions
Salah, A.A.; Sebe, N.; Gevers, T.
Published in:
Affective computing and interaction: psychological, cognitive, and neuroscientific perspectives
Link to publication
Citation for published version (APA):
Salah, A. A., Sebe, N., & Gevers, T. (2010). Communication and Automatic Interpretation of Affect from Facial
Expressions. In D. Gökçay, & G. Yildirim (Eds.), Affective computing and interaction: psychological, cognitive,
and neuroscientific perspectives (pp. 157-183). Hershey, PA: Information Science Reference.
e5d53a335515107452a30b330352cad216f88fc3Generalized Loss-Sensitive Adversarial Learning
with Manifold Margins
Laboratory for MAchine Perception and LEarning (MAPLE)
http://maple.cs.ucf.edu/
University of Central Florida, Orlando FL 32816, USA
('46232436', 'Marzieh Edraki', 'marzieh edraki')
('2272096', 'Guo-Jun Qi', 'guo-jun qi')
m.edraki@knights.ucf.edu, guojun.qi@ucf.edu
e5799fd239531644ad9270f49a3961d7540ce358KINSHIP CLASSIFICATION BY MODELING FACIAL FEATURE HEREDITY
Cornell University
Eastman Kodak Company
('2666471', 'Ruogu Fang', 'ruogu fang')
('39460815', 'Andrew C. Gallagher', 'andrew c. gallagher')
('1746230', 'Tsuhan Chen', 'tsuhan chen')
e5eb7fa8c9a812d402facfe8e4672670541ed108Performance of PCA Based Semi-supervised
Learning in Face Recognition Using MPEG-7
Edge Histogram Descriptor
Department of Computer Science and Engineering
Bangladesh University of Engineering and Technology (BUET)
Dhaka-1000, Bangladesh
('3034202', 'Sheikh Motahar Naim', 'sheikh motahar naim')
('9248625', 'Abdullah Al Farooq', 'abdullah al farooq')
('1990532', 'Md. Monirul Islam', 'md. monirul islam')
Email: {shafin buet, naim sbh2007, saurav00001}@yahoo.com, mmislam@cse.buet.ac.bd
e22adcd2a6a7544f017ec875ce8f89d5c59e09c8Published in Proc. of IEEE 9th International Conference on Biometrics: Theory, Applications and Systems (BTAS), (Los
Angeles, CA), October 2018.
Gender Privacy: An Ensemble of Semi Adversarial Networks for Confounding
Arbitrary Gender Classifiers
Computer Science and Engineering, Michigan State University, East Lansing, USA
University of Wisconsin Madison, USA
('5456235', 'Vahid Mirjalili', 'vahid mirjalili')
('2562040', 'Sebastian Raschka', 'sebastian raschka')
('1698707', 'Arun Ross', 'arun ross')
mirjalil@cse.msu.edu
mail@sebastianraschka.com
rossarun@cse.msu.edu
e27c92255d7ccd1860b5fb71c5b1277c1648ed1e
e200c3f2849d56e08056484f3b6183aa43c0f13a
e2d265f606cd25f1fd72e5ee8b8f4c5127b764dfReal-Time End-to-End Action Detection
with Two-Stream Networks
School of Engineering, University of Guelph
Vector Institute for Artificial Intelligence
Canadian Institute for Advanced Research
('35933395', 'Alaaeldin El-Nouby', 'alaaeldin el-nouby')
('3861110', 'Graham W. Taylor', 'graham w. taylor')
{aelnouby,gwtaylor}@uoguelph.ca
e293a31260cf20996d12d14b8f29a9d4d99c4642Published as a conference paper at ICLR 2017
LR-GAN: LAYERED RECURSIVE GENERATIVE AD-
VERSARIAL NETWORKS FOR IMAGE GENERATION
Virginia Tech
Blacksburg, VA
Facebook AI Research
Menlo Park, CA
Georgia Institute of Technology
Atlanta, GA
('2404941', 'Jianwei Yang', 'jianwei yang')
('39248118', 'Anitha Kannan', 'anitha kannan')
('1746610', 'Dhruv Batra', 'dhruv batra')
jw2yang@vt.edu
akannan@fb.com
{dbatra, parikh}@gatech.edu
e20e2db743e8db1ff61279f4fda32bf8cf381f8eDeep Cross Polarimetric Thermal-to-visible Face Recognition
West Virginia University
('6779960', 'Seyed Mehdi Iranmanesh', 'seyed mehdi iranmanesh')
('35477977', 'Ali Dabouei', 'ali dabouei')
('2700951', 'Hadi Kazemi', 'hadi kazemi')
('8147588', 'Nasser M. Nasrabadi', 'nasser m. nasrabadi')
{seiranmanesh, ad0046, hakazemi}@mix.wvu.edu, {nasser.nasrabadi}@mail.wvu.edu
f437b3884a9e5fab66740ca2a6f1f3a5724385eaHuman Identification Technical Challenges
DARPA
3701 N. Fairfax Dr
Arlington, VA 22203
('32028519', 'P. Jonathon Phillips', 'p. jonathon phillips')jphillips@darpa.mil
f412d9d7bc7534e7daafa43f8f5eab811e7e4148Durham Research Online
Deposited in DRO:
16 December 2014
Version of attached file:
Accepted Version
Peer-review status of attached file:
Peer-reviewed
Citation for published item:
Kirk, H. E. and Hocking, D. R. and Riby, D. M. and Cornish, K. M. (2013) 'Linking social behaviour and
anxiety to attention to emotional faces in Williams syndrome.', Research in developmental disabilities., 34
(12). pp. 4608-4616.
Further information on publisher's website:
http://dx.doi.org/10.1016/j.ridd.2013.09.042
Publisher's copyright statement:
NOTICE: this is the author's version of a work that was accepted for publication in Research in Developmental
Disabilities. Changes resulting from the publishing process, such as peer review, editing, corrections, structural
formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made
to this work since it was submitted for publication. A definitive version was subsequently published in Research in
Developmental Disabilities, 34, 12, December 2013, 10.1016/j.ridd.2013.09.042.
f43eeb578e0ca48abfd43397bbd15825f94302e4Optical Computer Recognition of Facial Expressions
Associated with Stress Induced by Performance
Demands
DINGES DF, RIDER RL, DORRIAN J, MCGLINCHEY EL, ROGERS NL, CIZMAN Z, GOLDENSTEIN SK, VOGLER C, VENKATARAMAN S, METAXAS DN. Optical computer recognition of facial expressions associated with stress induced by performance demands. Aviat Space Environ Med 2005; 76(6, Suppl.):B172–82.
Application of computer vision to track changes in human facial expressions during long-duration spaceflight may be a useful way to unobtrusively detect the presence of stress during critical operations. To develop such an approach, we applied optical computer recognition (OCR) algorithms for detecting facial changes during performance while people experienced both low- and high-stressor performance demands. Workload and social feedback were used to vary performance stress in 60 healthy adults (29 men, 31 women; mean age 30 yr). High-stressor scenarios involved more difficult performance tasks, negative social feedback, and greater time pressure relative to low workload scenarios. Stress reactions were tracked using self-report ratings, salivary cortisol, and heart rate. Subjects also completed personality, mood, and alexithymia questionnaires. To bootstrap development of the OCR algorithm, we had a human observer, blind to stressor condition, identify the expressive elements of the face of people undergoing high- vs. low-stressor performance. Different sets of videos of subjects' faces during performance conditions were used for OCR algorithm training. Subjective ratings of stress, task difficulty, effort required, frustration, and negative mood were significantly increased during high-stressor performance bouts relative to low-stressor bouts (all p < 0.01). The OCR algorithm was refined to provide robust 3-d tracking of facial expressions during head movement. Movements of eyebrows and asymmetries in the mouth were extracted. These parameters are being used in a Hidden Markov model to identify high- and low-stressor conditions. Preliminary results suggest that an OCR algorithm using mouth and eyebrow regions has the potential to discriminate high- from low-stressor performance bouts in 75–88% of subjects. The validity of the workload paradigm to induce differential levels of stress in facial expressions was established. The paradigm also provided the basic stress-related facial expressions required to establish a prototypical OCR algorithm to detect such changes. Efforts are underway to further improve the OCR algorithm by adding facial touching and automating application of the deformable masks and OCR algorithms to video footage of the moving faces as a prelude to blind validation of the automated approach.
Keywords: optical computer recognition, computer vision, workload, performance, stress, human face, cortisol, heart rate, astronauts, Markov models.
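The abstract above feeds extracted eyebrow and mouth movement parameters into a Hidden Markov model to separate high- from low-stressor bouts. A minimal sketch of that final classification step, assuming facial motion has already been quantized into discrete symbols (0 = neutral, 1 = brow movement, 2 = mouth asymmetry); all HMM parameters here are invented for illustration, not taken from the study:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (scaled forward algorithm). pi: (S,) initial state probabilities,
    A: (S,S) state transitions, B: (S,O) emission probabilities."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate states, weight by emission
        s = alpha.sum()                 # rescale to avoid numerical underflow
        logp += np.log(s)
        alpha = alpha / s
    return logp

# Two illustrative 2-state models over 3 facial-motion symbols:
# the "high-stressor" model emits brow/mouth activity more often.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
B_low  = np.array([[0.8, 0.1, 0.1],
                   [0.6, 0.2, 0.2]])
B_high = np.array([[0.3, 0.4, 0.3],
                   [0.2, 0.3, 0.5]])

def classify(obs):
    """Label a feature sequence by whichever model explains it better."""
    ll_low = forward_loglik(obs, pi, A, B_low)
    ll_high = forward_loglik(obs, pi, A, B_high)
    return "high-stressor" if ll_high > ll_low else "low-stressor"

print(classify([0, 0, 1, 0, 0, 0, 2, 0]))    # mostly neutral frames
print(classify([1, 2, 2, 1, 0, 2, 1, 2]))    # frequent brow/mouth activity
```

In the study itself, the model parameters would instead be estimated from the labeled high- and low-stressor training videos.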
ASTRONAUTS ARE required to perform mission-critical tasks at a high level of functional capability throughout spaceflight. While they can be trained to cope with, and/or adapt to some stressors of spaceflight, stressful reactions can and have occurred during long-duration missions, especially when operational performance demands become elevated when unexpected and/or underestimated operational requirements occurred while crews were already experiencing work-related stressors (13,28,42,43,52,57,66). In some of these instances, stressed flight crews have withdrawn from voice communications with ground controllers (7,66), or when pressed to continue performing, made errors that could have jeopardized the mission (13,28). Consequently, there is a need to identify when during operational demands astronauts are experiencing behavioral stress associated with performance demands. This is especially important as mission durations increase in length and ultimately involve flight to other locations in the solar system.
Facial Expressions of Stress
Measurement of human emotional expressions via the face, including negative affect and distress, dates back to Darwin (14), but in recent years has been undergoing extensive scientific study (46). Although cultural differences can intensify facial expression of emotions (53), there is considerable scientific evidence that select emotions are communicated in distinct facial displays across cultures, age, and gender (45). Because many techniques for monitoring stress reactions are impractical, unreliable, or obtrusive in spaceflight, we seek to develop a novel, objective, unobtrusive computer vision system to continuously track facial expressions during performance demands, to detect when
From the Unit for Experimental Psychiatry, Department of Psychiatry, University of Pennsylvania School of Medicine, Philadelphia, PA (D. F. Dinges, R. L. Rider, J. Dorrian, E. L. McGlinchey, N. L. Rogers, Z. Cizman); and the Center for Computational Biomedicine, Imaging and Modeling, Rutgers University, New Brunswick, NJ (S. K. Goldstein, C. Vogler, S. Venkataraman, D. N. Metaxas).
Address reprint requests to: David F. Dinges, Ph.D., Professor and Director, Unit for Experimental Psychiatry, Department of Psychiatry, University of Pennsylvania School of Medicine, 1013 Blockley Hall, 423 Guardian Drive, Philadelphia, PA 19104-6021; dinges@mail.med.upenn.edu.
Reprint & Copyright © by Aerospace Medical Association, Alexandria, VA.
B172
('5515440', 'Jillian Dorrian', 'jillian dorrian')
('4940404', 'Ziga Cizman', 'ziga cizman')
('2467082', 'Christian Vogler', 'christian vogler')
('2898034', 'Sundara Venkataraman', 'sundara venkataraman')
f442a2f2749f921849e22f37e0480ac04a3c3fecCritical Features for Face Recognition in Humans and Machines
Naphtali Abudarham, Lior Shkiller, Galit Yovel
School of Psychological Sciences, Sagol School of Neuroscience
Tel Aviv University, Tel Aviv, Israel
Correspondence regarding this manuscript should be addressed to: Galit Yovel, gality@post.tau.ac.il
f4f9697f2519f1fe725ee7e3788119ed217dca34Selfie-Presentation in Everyday Life: A Large-scale
Characterization of Selfie Contexts on Instagram
Georgia Institute of Technology
North Ave NW
Atlanta, GA 30332
('10799246', 'Julia Deeb-Swihart', 'julia deeb-swihart')
('39723397', 'Christopher Polack', 'christopher polack')
('1809407', 'Eric Gilbert', 'eric gilbert')
{jdeeb3, cfpolack,gilbert,irfan}@gatech.edu
f4f6fc473effb063b7a29aa221c65f64a791d7f4
Bo Sun, Liandong Li, Guoyan Zhou, Jun He, "Facial expression recognition in the wild based on multimodal texture features," J. Electron. Imaging 25(6), 061407 (2016), doi:10.1117/1.JEI.25.6.061407.
f4c01fc79c7ead67899f6fe7b79dd1ad249f71b0
f4373f5631329f77d85182ec2df6730cbd4686a9Soft Computing manuscript No.
(will be inserted by the editor)
Recognizing Gender from Human Facial Regions using
Genetic Algorithm
Received: date / Accepted: date
('24069279', 'Avirup Bhattacharyya', 'avirup bhattacharyya')
('40813600', 'Partha Pratim Roy', 'partha pratim roy')
('32614479', 'Samarjit Kar', 'samarjit kar')
f4210309f29d4bbfea9642ecadfb6cf9581ccec7An Agreement and Sparseness-based Learning Instance Selection
and its Application to Subjective Speech Phenomena
1 Machine Intelligence & Signal Processing Group, MMK, Technische Universität München, Germany
Imperial College London, United Kingdom
('30512170', 'Zixing Zhang', 'zixing zhang')
('1751126', 'Florian Eyben', 'florian eyben')
('39629517', 'Jun Deng', 'jun deng')
zixing.zhang@tum.de
f47404424270f6a20ba1ba8c2211adfba032f405International Journal of Emerging Technology and Advanced Engineering
Website: www.ijetae.com (ISSN 2250-2459, Volume 2, Issue 5, May 2012)
Identification of Face Age range Group using Neural
Network
('7530203', 'Sneha Thakur', 'sneha thakur') 1sne_thakur@yahoo.co.in
2ligendra@rediffmail.com
f4d30896c5f808a622824a2d740b3130be50258eDS++: A Flexible, Scalable and Provably Tight Relaxation for Matching Problems
Weizmann Institute of Science
('3046344', 'Nadav Dym', 'nadav dym')
('3416939', 'Haggai Maron', 'haggai maron')
('3232072', 'Yaron Lipman', 'yaron lipman')
f4ebbeb77249d1136c355f5bae30f02961b9a359Human Computation for Attribute and Attribute Value Acquisition
School of Computer Science
Carnegie Melon University
('2987829', 'Edith Law', 'edith law')
('1717452', 'Burr Settles', 'burr settles')
('2681926', 'Aaron Snook', 'aaron snook')
('2762792', 'Harshit Surana', 'harshit surana')
('3328108', 'Luis von Ahn', 'luis von ahn')
('39182987', 'Tom Mitchell', 'tom mitchell')
edith@cmu.edu
f4aed1314b2d38fd8f1b9d2bc154295bbd45f523Subspace Clustering using Ensembles of
K-Subspaces
Department of Electrical and Computer Engineering
University of Michigan, Ann Arbor
('1782134', 'John Lipor', 'john lipor')
('5250186', 'David Hong', 'david hong')
('2358258', 'Dejiao Zhang', 'dejiao zhang')
('1682385', 'Laura Balzano', 'laura balzano')
{lipor,dahong,dejiao,girasole}@umich.edu
f42dca4a4426e5873a981712102aa961be34539aNext-Flow: Hybrid Multi-Tasking with Next-Frame Prediction to Boost
Optical-Flow Estimation in the Wild
University of Freiburg
Germany
('31656404', 'Nima Sedaghat', 'nima sedaghat')nima@cs.uni-freiburg.de
f3ca2c43e8773b7062a8606286529c5bc9b3ce25Deep Clustering via Joint Convolutional Autoencoder Embedding and Relative
Entropy Minimization
Electrical and Computer Engineering, University of Pittsburgh, USA
Computer Science and Engineering, University of Texas at Arlington, USA
School of Electronic Engineering, Xidian University, China
School of Information Technologies, University of Sydney, Australia
('2331771', 'Kamran Ghasedi Dizaji', 'kamran ghasedi dizaji')
('10797930', 'Amirhossein Herandi', 'amirhossein herandi')
('1748032', 'Heng Huang', 'heng huang')
kamran.ghasedi@gmail.com, amirhossein.herandi@uta.edu, chdeng@mail.xidian.edu.cn
tom.cai@sydney.edu.au, heng.huang@pitt.edu
f3fcaae2ea3e998395a1443c87544f203890ae15
f3015be0f9dbc1a55b6f3dc388d97bb566ff94feA Study on the Effective Approach
to Illumination-Invariant Face Recognition
Based on a Single Image
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 518055, China
2 Shenzhen Key Laboratory for Visual Computing and Analytics, Shenzhen, 518055, China
('31361063', 'Jiapei Zhang', 'jiapei zhang')
('2002129', 'Xiaohua Xie', 'xiaohua xie')
{jp.zhang,xiaohua.xie}@siat.ac.cn,
sysuxiexh@gmail.com
f3d9e347eadcf0d21cb0e92710bc906b22f2b3e7NosePose: a competitive, landmark-free
methodology for head pose estimation in the wild
IMAGO Research Group - Universidade Federal do Paraná
('37435823', 'Antonio C. P. Nascimento', 'antonio c. p. nascimento')
('1800955', 'Olga R. P. Bellon', 'olga r. p. bellon')
{flavio,antonio.paes,olga,luciano}@ufpr.br
f3a59d85b7458394e3c043d8277aa1ffe3cdac91Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource
Constraints
Chinese University of Hong Kong
Indiana University
Chinese University of Hong Kong
('1807925', 'Di Tang', 'di tang')
('47119002', 'XiaoFeng Wang', 'xiaofeng wang')
('3297454', 'Kehuan Zhang', 'kehuan zhang')
td016@ie.cuhk.edu.hk
xw7@indiana.edu
khzhang@ie.cuhk.edu.hk
f3f77b803b375f0c63971b59d0906cb700ea24edAdvances in Electrical and Computer Engineering Volume 9, Number 3, 2009
Feature Extraction for Facial Expression
Recognition based on Hybrid Face Regions
Seyed M. LAJEVARDI, Zahir M. HUSSAIN
RMIT University, Australia
seyed.lajevardi @ rmit.edu.au
f355e54ca94a2d8bbc598e06e414a876eb62ef99
f3df296de36b7c114451865778e211350d153727Spatio-Temporal Facial Expression Recognition Using Convolutional
Neural Networks and Conditional Random Fields
University of Denver, Denver, CO
('3093835', 'Mohammad H. Mahoor', 'mohammad h. mahoor')behzad.hasani@du.edu, and mmahoor@du.edu
f3ea181507db292b762aa798da30bc307be95344Covariance Pooling for Facial Expression Recognition
†Computer Vision Lab, ETH Zurich, Switzerland
‡VISICS, KU Leuven, Belgium
('32610154', 'Dinesh Acharya', 'dinesh acharya')
('7945869', 'Zhiwu Huang', 'zhiwu huang')
('35268081', 'Danda Pani Paudel', 'danda pani paudel')
('1681236', 'Luc Van Gool', 'luc van gool')
{acharyad, zhiwu.huang, paudel, vangool}@vision.ee.ethz.ch
f3fed71cc4fc49b02067b71c2df80e83084b2a82Published as a conference paper at ICLR 2018
LEARNING SPARSE LATENT REPRESENTATIONS WITH
THE DEEP COPULA INFORMATION BOTTLENECK
University of Basel, Switzerland
('30069186', 'Aleksander Wieczorek', 'aleksander wieczorek')
('30537851', 'Mario Wieser', 'mario wieser')
('2620254', 'Damian Murezzan', 'damian murezzan')
('39891341', 'Volker Roth', 'volker roth')
{firstname.lastname}@unibas.ch
f3cf10c84c4665a0b28734f5233d423a65ef1f23Title
Temporal Exemplar-based Bayesian Networks for facial
expression recognition
Author(s)
Shang, L; Chan, KP
Citation
Proceedings - 7Th International Conference On Machine
Learning And Applications, Icmla 2008, 2008, p. 16-22
Issued Date
2008
URL
http://hdl.handle.net/10722/61208
Rights
This work is licensed under a Creative Commons Attribution-
NonCommercial-NoDerivatives 4.0 International License.;
International Conference on Machine Learning and Applications
Proceedings. Copyright © IEEE.; ©2008 IEEE. Personal use of
this material is permitted. However, permission to
reprint/republish this material for advertising or promotional
purposes or for creating new collective works for resale or
redistribution to servers or lists, or to reuse any copyrighted
component of this work in other works must be obtained from
the IEEE.
f35a493afa78a671b9d2392c69642dcc3dd2cdc2Automatic Attribute Discovery with Neural
Activations
University of North Carolina at Chapel Hill, USA
2 NTT Media Intelligence Laboratories, Japan
Tohoku University, Japan
('3302783', 'Sirion Vittayakorn', 'sirion vittayakorn')
('1706592', 'Takayuki Umeda', 'takayuki umeda')
('2023568', 'Kazuhiko Murasaki', 'kazuhiko murasaki')
('1745497', 'Kyoko Sudo', 'kyoko sudo')
('1718872', 'Takayuki Okatani', 'takayuki okatani')
('1721910', 'Kota Yamaguchi', 'kota yamaguchi')
f3b7938de5f178e25a3cf477107c76286c0ad691JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, MARCH 2017
Object Detection with Deep Learning: A Review
('33698309', 'Zhong-Qiu Zhao', 'zhong-qiu zhao')
('36659418', 'Peng Zheng', 'peng zheng')
('51132438', 'Shou-tao Xu', 'shou-tao xu')
('1748808', 'Xindong Wu', 'xindong wu')
ebedc841a2c1b3a9ab7357de833101648281ff0e
eb526174fa071345ff7b1fad1fad240cd943a6d7Deeply Vulnerable – A Study of the Robustness of Face Recognition to
Presentation Attacks
('1990628', 'Amir Mohammadi', 'amir mohammadi')
('1952348', 'Sushil Bhattacharjee', 'sushil bhattacharjee')
eb100638ed73b82e1cce8475bb8e180cb22a09a2Temporal Action Detection with Structured Segment Networks
The Chinese University of Hong Kong
2Computer Vision Laboratory, ETH Zurich, Switzerland
('47827548', 'Yue Zhao', 'yue zhao')
('3331521', 'Yuanjun Xiong', 'yuanjun xiong')
('33345248', 'Limin Wang', 'limin wang')
('2765994', 'Zhirong Wu', 'zhirong wu')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
('1807606', 'Dahua Lin', 'dahua lin')
eb6ee56e085ebf473da990d032a4249437a3e462Age/Gender Classification with Whole-Component
Convolutional Neural Networks (WC-CNN)
University of Southern California, Los Angeles, CA 90089, USA
('39004239', 'Chun-Ting Huang', 'chun-ting huang')
('7022231', 'Yueru Chen', 'yueru chen')
('35521292', 'Ruiyuan Lin', 'ruiyuan lin')
('9363144', 'C.-C. Jay Kuo', 'c.-c. jay kuo')
E-mail: {chuntinh, yueruche, ruiyuanl}@usc.edu, cckuo@sipi.usc.edu
eb8519cec0d7a781923f68fdca0891713cb81163Temporal Non-Volume Preserving Approach to Facial Age-Progression and
Age-Invariant Face Recognition
Computer Science and Software Engineering, Concordia University, Montréal, Québec, Canada
2 CyLab Biometrics Center and the Department of Electrical and Computer Engineering,
Carnegie Mellon University, Pittsburgh, PA, USA
('1876581', 'Chi Nhan Duong', 'chi nhan duong')
('2687827', 'Kha Gia Quach', 'kha gia quach')
('1769788', 'Khoa Luu', 'khoa luu')
('6131978', 'T. Hoang Ngan Le', 't. hoang ngan le')
('1794486', 'Marios Savvides', 'marios savvides')
{chinhand, kquach, kluu, thihoanl}@andrew.cmu.edu, msavvid@ri.cmu.edu
ebb1c29145d31c4afa3c9be7f023155832776cd3CASME II: An Improved Spontaneous Micro-Expression
Database and the Baseline Evaluation
State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China, 2 University of Chinese Academy of Sciences
Beijing, China, 3 Center for Machine Vision Research, Department of Computer Science and Engineering, University of Oulu, Oulu, Finland, 4 TNList, Department of
Computer Science and Technology, Tsinghua University, Beijing, China
('9185305', 'Wen-Jing Yan', 'wen-jing yan')
('39522870', 'Xiaobai Li', 'xiaobai li')
('2819642', 'Su-Jing Wang', 'su-jing wang')
('1757287', 'Guoying Zhao', 'guoying zhao')
('1715826', 'Yong-Jin Liu', 'yong-jin liu')
('1838009', 'Yu-Hsin Chen', 'yu-hsin chen')
('1684007', 'Xiaolan Fu', 'xiaolan fu')
eb566490cd1aa9338831de8161c6659984e923fdFrom Lifestyle Vlogs to Everyday Interactions
EECS Department, UC Berkeley
('1786435', 'David F. Fouhey', 'david f. fouhey')
('1763086', 'Alexei A. Efros', 'alexei a. efros')
('1689212', 'Jitendra Malik', 'jitendra malik')
eb9312458f84a366e98bd0a2265747aaed40b1a61-4244-1437-7/07/$20.00 ©2007 IEEE
IV - 473
ICIP 2007
eb716dd3dbd0f04e6d89f1703b9975cad62ffb09Copyright
by
2012
('1883898', 'Yong Jae Lee', 'yong jae lee')
eb4d2ec77fae67141f6cf74b3ed773997c2c0cf6Int. J. Information Technology and Management, Vol. 11, Nos. 1/2, 2012
35
A new soft biometric approach for keystroke
dynamics based on gender recognition
GREYC Research Lab
ENSICAEN – Université de Caen Basse Normandie – CNRS,
14000 Caen, France
Fax: +33-231538110
*Corresponding author
('2615638', 'Romain Giot', 'romain giot')
('1793765', 'Christophe Rosenberger', 'christophe rosenberger')
E-mail: romain.giot@ensicaen.fr
E-mail: christophe.rosenberger@ensicaen.fr
ebb7cc67df6d90f1c88817b20e7a3baad5dc29b9Journal of Computational Mathematics
Vol.xx, No.x, 200x, 1–25.
http://www.global-sci.org/jcm
doi:??
Fast algorithms for Higher-order Singular Value Decomposition
from incomplete data*
University of Alabama, Tuscaloosa, AL
('40507939', 'Yangyang Xu', 'yangyang xu')Email: yangyang.xu@ua.edu
ebabd1f7bc0274fec88a3dabaf115d3e226f198fDriver drowsiness detection system based on feature
representation learning using various deep networks
School of Electrical Engineering, KAIST,
Guseong-dong, Yuseong-gu, Dajeon, Rep. of Korea
('1989730', 'Sanghyuk Park', 'sanghyuk park')
('1773194', 'Fei Pan', 'fei pan')
('3315036', 'Sunghun Kang', 'sunghun kang')
{shine0624, feipan, sunghun.kang, cd yoo}@kaist.ac.kr
eb70c38a350d13ea6b54dc9ebae0b64171d813c9On Graph-Structured Discrete
Labelling Problems in Computer
Vision: Learning, Inference and
Applications
Submitted in partial fulfillment of the requirements for
the degree of
Doctor of Philosophy
in
Electrical and Computer Engineering
M.S., Electrical and Computer Engineering, Carnegie Mellon University
B.Tech., Electronics Engineering, Institute of Technology, Banaras Hindu University
Carnegie Mellon University
August, 2010
('1746610', 'Dhruv Batra', 'dhruv batra')
ebb9d53668205c5797045ba130df18842e3eadef
eb027969f9310e0ae941e2adee2d42cdf07d938cVGGFace2: A dataset for recognising faces across pose and age
Visual Geometry Group, University of Oxford
('46632720', 'Qiong Cao', 'qiong cao')
('46980108', 'Li Shen', 'li shen')
('10096695', 'Weidi Xie', 'weidi xie')
('3188342', 'Omkar M. Parkhi', 'omkar m. parkhi')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
{qiong,lishen,weidi,omkar,az}@robots.ox.ac.uk
eb48a58b873295d719827e746d51b110f5716d6cFace Alignment Using K-cluster Regression Forests
With Weighted Splitting
('2393538', 'Marek Kowalski', 'marek kowalski')
('1930272', 'Jacek Naruniec', 'jacek naruniec')
eb7b387a3a006609b89ca5ed0e6b3a1d5ecb5e5aFacial Expression Recognition using Neural
Network
National Cheng Kung University
Tainan, Taiwan, R.O.C.
('1751725', 'Shen-Chuan Tai', 'shen-chuan tai')
('2142418', 'Yu-Yi Liao', 'yu-yi liao')
('1925097', 'Chien-Shiang Hong', 'chien-shiang hong')
sctai@mail.ncku.edu.tw hhf93d@lily.ee.ncku.edu.tw zgz@lily.ee.ncku.edu.tw
lyy94d@lily.ee.ncku.edu.tw hcs95d@dcmc.ee.ncku.edu.tw
ebd5df2b4105ba04cef4ca334fcb9bfd6ea0430cFast Localization of Facial Landmark Points
University of Zagreb, Faculty of Electrical Engineering and Computing, Unska 3, 10000 Zagreb, Croatia
Linköping University, SE-581 83 Linköping, Sweden
March 28, 2014
('3013350', 'Miroslav Frljak', 'miroslav frljak')
('1767736', 'Robert Forchheimer', 'robert forchheimer')
ebf204e0a3e137b6c24e271b0d55fa49a6c52b41Master of Science Thesis in Electrical Engineering
Linköping University
Visual Tracking Using
Deep Motion Features
('8161428', 'Susanna Gladh', 'susanna gladh')
c71f36c9376d444075de15b1102b4974481be84d3D Morphable Models: Data
Pre-Processing, Statistical Analysis and
Fitting
Submitted for the degree of Doctor of Philosophy
Department of Computer Science
The University of York
June, 2011
('37519514', 'Ankur Patel', 'ankur patel')
c7c53d75f6e963b403057d8ba5952e4974a779adPurdue University
Purdue e-Pubs
Open Access Theses
8-2016
Theses and Dissertations
Aging effects in automated face recognition
Purdue University
Follow this and additional works at: http://docs.lib.purdue.edu/open_access_theses
Recommended Citation
Agamez, Miguel Cedeno, "Aging effects in automated face recognition" (2016). Open Access Theses. 930.
http://docs.lib.purdue.edu/open_access_theses/930
additional information.
This document has been made available through Purdue e-Pubs, a service of the Purdue University Libraries. Please contact epubs@purdue.edu for
c79cf7f61441195404472102114bcf079a72138aPose-Invariant 2D Face Recognition by Matching
Using Graphical Models
Submitted for the Degree of
Doctor of Philosophy
from the
University of Surrey
Center for Vision, Speech and Signal Processing
Faculty of Engineering and Physical Sciences
University of Surrey
Guildford, Surrey GU2 7XH, U.K.
September 2010
('1690611', 'Shervin Rahimzadeh Arashloo', 'shervin rahimzadeh arashloo')
c73dd452c20460f40becb1fd8146239c88347d87Manifold Constrained Low-Rank Decomposition
1State Key Laboratory of Satellite Navigation System and Equipment Technology, Shijiazhuang, China
Center for Research in Computer Vision (CRCV), University of Central Florida (UCF)
School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
4 Istituto Italiano di Tecnologia, Genova, Italy
('9497155', 'Chen Chen', 'chen chen')
('1740430', 'Baochang Zhang', 'baochang zhang')
('1714730', 'Alessio Del Bue', 'alessio del bue')
('1727204', 'Vittorio Murino', 'vittorio murino')
chenchen870713@gmail.com, alessio.delbue@iit.it, bczhang@buaa.edu.cn, vittorio.murino@iit.it ∗
c7e4c7be0d37013de07b6d829a3bf73e1b95ad4eThe International Journal of Multimedia & Its Applications (IJMA) Vol.5, No.5, October 2013
DYNEMO: A VIDEO DATABASE OF NATURAL FACIAL
EXPRESSIONS OF EMOTIONS
1LIP, Univ. Grenoble Alpes, BP 47 - 38040 Grenoble Cedex 9, France
2LIG, Univ. Grenoble Alpes, BP 53 - 38041 Grenoble Cedex 9, France
('3209946', 'Anna Tcherkassof', 'anna tcherkassof')
('20944713', 'Damien Dupré', 'damien dupré')
('2357225', 'Brigitte Meillon', 'brigitte meillon')
('2872246', 'Nadine Mandran', 'nadine mandran')
('1870899', 'Michel Dubois', 'michel dubois')
('1828394', 'Jean-Michel Adam', 'jean-michel adam')
c72e6992f44ce75a40f44be4365dc4f264735cfbStory Understanding in Video
Advertisements
Department of Computer Science
University of Pittsburgh
Pennsylvania, United States
('9085797', 'Keren Ye', 'keren ye')
('51150048', 'Kyle Buettner', 'kyle buettner')
('1770205', 'Adriana Kovashka', 'adriana kovashka')
yekeren@cs.pitt.edu
buettnerk@pitt.edu
kovashka@cs.pitt.edu
c74aba9a096379b3dbe1ff95e7af5db45c0fd680Neuro-Fuzzy Analysis of Facial Action Units
and Expressions
Digital Signal Processing Lab, Department of Computer Engineering
Sharif University of Technology
Tehran, Iran, Tel: +98 21 6616 4632
('1736464', 'Mahmoud Khademi', 'mahmoud khademi')
('2936650', 'Mohammad Taghi Manzuri', 'mohammad taghi manzuri')
('1702826', 'Mohammad Hadi Kiapour', 'mohammad hadi kiapour')
khademi@ce.sharif.edu, manzuri@sharif.edu, kiapour@ee.sharif.edu
c7de0c85432ad17a284b5b97c4f36c23f506d9d1INTERSPEECH 2011
RANSAC-based Training Data Selection for Speaker State Recognition
Multimedia, Vision and Graphics Laboratory, Koc University, Istanbul, Turkey
Bahçeşehir University, Istanbul, Turkey
Özyeğin University, Istanbul, Turkey
('1777185', 'Elif Bozkurt', 'elif bozkurt')
('1749677', 'Engin Erzin', 'engin erzin')
ebozkurt, eerzin@ku.edu.tr, cigdem.eroglu@bahcesehir.edu.tr, tanju.erdem@ozyegin.edu.tr
c7c5f0fe1fcaf3787c7f78f7dc62f3497dcfdf3cTHE IMPACT OF PRODUCT PHOTO ON ONLINE CONSUMER
PURCHASE INTENTION: AN IMAGE-PROCESSING ENABLED
EMPIRICAL STUDY
('39306563', 'Xin Li', 'xin li')
('2762720', 'Mengyue Wang', 'mengyue wang')
('39016300', 'Yubo Chen', 'yubo chen')
Xin.Li.PhD@gmail.com
Kong, menwang-c@my.cityu.edu.hk
chenyubo@sem.tsinghua.edu.cn
c7f752eea91bf5495a4f6e6a67f14800ec246d08EXPLORING THE TRANSFER
LEARNING ASPECT OF DEEP
NEURAL NETWORKS IN FACIAL
INFORMATION PROCESSING
A DISSERTATION SUBMITTED TO THE UNIVERSITY OF MANCHESTER
FOR THE DEGREE OF MASTER OF SCIENCE
IN THE FACULTY OF ENGINEERING AND PHYSICAL SCIENCES
2015
By
Crefeda Faviola Rodrigues
School of Computer Science
c71217b2b111a51a31cf1107c71d250348d1ff68One Network to Solve Them All — Solving Linear Inverse Problems
using Deep Projection Models
Carnegie Mellon University, Pittsburgh, PA
('2088535', 'Chun-Liang Li', 'chun-liang li')
('1783087', 'B. V. K. Vijaya Kumar', 'b. v. k. vijaya kumar')
('1745861', 'Aswin C. Sankaranarayanan', 'aswin c. sankaranarayanan')
c758b9c82b603904ba8806e6193c5fefa57e9613Heterogeneous Face Recognition with CNNs
INRIA Grenoble, Laboratoire Jean Kuntzmann
('2143851', 'Shreyas Saxena', 'shreyas saxena')
('34602236', 'Jakob Verbeek', 'jakob verbeek')
{firstname.lastname}@inria.fr
c7c03324833ba262eeaada0349afa1b5990c1ea7A Wearable Face Recognition System on Google
Glass for Assisting Social Interactions
Institute for Infocomm Research, Singapore
('1709001', 'Bappaditya Mandal', 'bappaditya mandal')
('35718875', 'Liyuan Li', 'liyuan li')
('1694051', 'Cheston Tan', 'cheston tan')
Email address: bmandal@i2r.a-star.edu.sg (∗Contact author: Bappaditya Mandal);
{scchia, lyli, vijay, cheston-tan, joohwee}@i2r.a-star.edu.sg
c76f64e87f88475069f7707616ad9df1719a6099T-RECS: Training for Rate-Invariant
Embeddings by Controlling Speed for Action
Recognition
University of Michigan
('31646172', 'Madan Ravi Ganesh', 'madan ravi ganesh')
('24337238', 'Eric Hofesmann', 'eric hofesmann')
('40893359', 'Byungsu Min', 'byungsu min')
('40893002', 'Nadha Gafoor', 'nadha gafoor')
('3587688', 'Jason J. Corso', 'jason j. corso')
c7f0c0636d27a1d45b8fcef37e545b902195d937Towards Around-Device Interaction using Corneal Imaging
Coburg University
('49770541', 'Daniel Schneider', 'daniel schneider')
('2708269', 'Jens Grubert', 'jens grubert')
daniel.schneider@hs-coburg.de
jg@jensgrubert.de
c7c8d150ece08b12e3abdb6224000c07a6ce7d47DeMeshNet: Blind Face Inpainting for Deep MeshFace Verification
National Laboratory of Pattern Recognition, CASIA
Center for Research on Intelligent Perception and Computing, CASIA
('50202300', 'Shu Zhang', 'shu zhang'){shu.zhang,rhe,tnt}@nlpr.ia.ac.cn
c78fdd080df01fff400a32fb4cc932621926021fRobust Automatic Facial Expression Detection
Method
Institute for Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, Wuhan, China
('33024921', 'Yan Ouyang', 'yan ouyang')
('1707161', 'Nong Sang', 'nong sang')
Email:oyy_01@163.com
Email: nsang@hust.edu.cn
c74b1643a108939c6ba42ae4de55cb05b2191be5NON-NEGATIVE MATRIX FACTORIZATION FOR FACE
ILLUMINATION ANALYSIS
CVSSP, University of Surrey
Guildford, Surrey
UK GU2 7XH
('38746097', 'Xuan Zou', 'xuan zou')
('39685698', 'Wenwu Wang', 'wenwu wang')
('1748684', 'Josef Kittler', 'josef kittler')
xuan.zou@surrey.ac.uk
w.wang@surrey.ac.uk
j.kittler@surrey.ac.uk
c75e6ce54caf17b2780b4b53f8d29086b391e839ExpNet: Landmark-Free, Deep, 3D Facial Expressions
Institute for Robotics and Intelligent Systems, USC, CA, USA
Information Sciences Institute, USC, CA, USA
The Open University of Israel, Israel
('1752756', 'Feng-Ju Chang', 'feng-ju chang')
('46634688', 'Anh Tuan Tran', 'anh tuan tran')
('1756099', 'Tal Hassner', 'tal hassner')
('11269472', 'Iacopo Masi', 'iacopo masi')
{fengjuch,anhttran,iacopoma,nevatia,medioni}@usc.edu, hassner@openu.ac.il
c0723e0e154a33faa6ff959d084aebf07770ffafInterpolation Between Eigenspaces Using
Rotation in Multiple Dimensions
Graduate School of Information Science, Nagoya University, Japan
Japan Society for the Promotion of Science
Japan
('1685524', 'Tomokazu Takahashi', 'tomokazu takahashi')
('2833316', 'Lina', 'lina')
('1679187', 'Ichiro Ide', 'ichiro ide')
('1680642', 'Yoshito Mekada', 'yoshito mekada')
('1725612', 'Hiroshi Murase', 'hiroshi murase')
ttakahashi@murase.m.is.nagoya-u.ac.jp
c03f48e211ac81c3867c0e787bea3192fcfe323eINTERSPEECH 2016
September 8–12, 2016, San Francisco, USA
Mahalanobis Metric Scoring Learned from Weighted Pairwise Constraints in
I-vector Speaker Recognition System
School of Computer Information Engineering, Jiangxi Normal University, Nanchang, China
('3308432', 'Zhenchun Lei', 'zhenchun lei')
('2947033', 'Yanhong Wan', 'yanhong wan')
('1853437', 'Jian Luo', 'jian luo')
('2956877', 'Yingen Yang', 'yingen yang')
zhenchun.lei@hotmail.com, wyanhhappy@126.com,
luo.jian@hotmail.com, ygyang@jxnu.edu.cn
c038beaa228aeec174e5bd52460f0de75e9cccbeTemporal Segment Networks for Action
Recognition in Videos
('33345248', 'Limin Wang', 'limin wang')
('3331521', 'Yuanjun Xiong', 'yuanjun xiong')
('48708388', 'Zhe Wang', 'zhe wang')
('40612284', 'Yu Qiao', 'yu qiao')
('1807606', 'Dahua Lin', 'dahua lin')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
('1681236', 'Luc Van Gool', 'luc van gool')
c043f8924717a3023a869777d4c9bee33e607fb5Emotion Separation Is Completed Early and It Depends
on Visual Field Presentation
Lab for Human Brain Dynamics, RIKEN Brain Science Institute, Wakoshi, Saitama, Japan, 2 Lab for Human Brain Dynamics, AAI Scientific Cultural Services Ltd., Nicosia
Cyprus
('2259342', 'Lichan Liu', 'lichan liu')
('2348276', 'Andreas A. Ioannides', 'andreas a. ioannides')
c05a7c72e679745deab9c9d7d481f7b5b9b36bddNPS-CS-11-005
NAVAL
POSTGRADUATE
SCHOOL
MONTEREY, CALIFORNIA
by
BIOMETRIC CHALLENGES FOR FUTURE DEPLOYMENTS:
A STUDY OF THE IMPACT OF GEOGRAPHY, CLIMATE, CULTURE,
AND SOCIAL CONDITIONS ON THE EFFECTIVE
COLLECTION OF BIOMETRICS
April 2011
Approved for public release; distribution is unlimited
('3337733', 'Paul C. Clark', 'paul c. clark')
c03e01717b2d93f04cce9b5fd2dcfd1143bcc180Locality-constrained Active Appearance Model
1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences
CAS), Institute of Computing Technology, CAS, Beijing 100190, China
University of Chinese Academy of Sciences, Beijing 100049, China
('1874505', 'Xiaowei Zhao', 'xiaowei zhao')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1695600', 'Xiujuan Chai', 'xiujuan chai')
('1710220', 'Xilin Chen', 'xilin chen')
mathzxw2002@gmail.com,{sgshan,chaixiujuan,xlchen}@ict.ac.cn
c0ff7dc0d575658bf402719c12b676a34271dfcdA New Incremental Optimal Feature Extraction
Method for On-line Applications
K. N. Toosi University of
Technology, Tehran, Iran
('2784763', 'Youness Aliyari Ghassabeh', 'youness aliyari ghassabeh')
('2060085', 'Hamid Abrishami Moghaddam', 'hamid abrishami moghaddam')
y_aliyari@ee.kntu.ac.ir, moghadam@saba.kntu.ac.ir
c02847a04a99a5a6e784ab580907278ee3c12653Fine Grained Video Classification for
Endangered Bird Species Protection
Non-Thesis MS Final Report
1. Introduction
1.1 Background
This project is about detecting eagles in videos. Eagles have been an endangered species on the brink of
extinction since the 1980s. With the ban of harmful pesticides, the number of eagles keeps increasing.
However, recent studies of golden eagles' activities in the vicinity of wind turbines have identified
turbine blade collisions as a major cause of eagle mortality. [1]
This project is part of a larger research effort to build an eagle detection and deterrent system
on wind turbines, toward reducing eagle mortality. [2] The critical component of this study is a
computer vision system for eagle detection in videos. The key requirements are that the system should
work in real time and detect eagles at a far distance from the camera (i.e., in low resolution).
There are three bird species in my dataset: falcon, eagle, and seagull. The choice of only these
three species reflects the real-world situation: wind turbines are usually installed near coasts and on
mountain hills, where falcons and seagulls form the majority. My model will therefore single out the
minority eagles from the other bird species during the migration season, so that the deterrent system
can protect them.
1.2 Brief Approach
Our approach is a unified deep-learning architecture for eagle detection. Given videos,
our goal is to detect eagles at a far distance from the camera, using both appearance and bird
motion cues, so as to meet the recall and precision rates set by the user. Detecting eagles is a
challenging task for the following reasons. First, an eagle flies fast and high in the sky, which means
that we need a wide-angle lens to capture its movement. However, a wide-angle camera produces
low-resolution, low-quality video, so the detailed appearance of the bird is
compromised. Second, current neural networks typically take low-resolution images as input, because
a higher-resolution image would require larger filters and deeper networks, which are in turn hard to
train [3]. So it is not clear whether the low resolution will pose a challenge for the fine-grained
classification task. Last but not least, there is no large training database like PASCAL, MNIST
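The user-set recall and precision targets mentioned above can be checked with a small, self-contained sketch (not from the report; the frame labels and species names are invented for illustration, with "eagle" as the positive class):

```python
# Hypothetical evaluation sketch: per-frame species predictions scored
# against ground truth, treating "eagle" as the positive class.

def precision_recall(y_true, y_pred, positive="eagle"):
    """Return (precision, recall) for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

truth = ["eagle", "falcon", "seagull", "eagle", "eagle"]
preds = ["eagle", "eagle", "seagull", "falcon", "eagle"]
p, r = precision_recall(truth, preds)
# tp = 2 (frames 0 and 4), fp = 1 (frame 1), fn = 1 (frame 3),
# so both precision and recall are 2/3 here.
```

A detector would be accepted once both values meet the thresholds chosen by the user.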
('2355840', 'Chenyu Wang', 'chenyu wang')
c0c8d720658374cc1ffd6116554a615e846c74b5JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015
Modeling Multimodal Clues in a Hybrid Deep
Learning Framework for Video Classification
('1717861', 'Yu-Gang Jiang', 'yu-gang jiang')
('3099139', 'Zuxuan Wu', 'zuxuan wu')
('8053308', 'Jinhui Tang', 'jinhui tang')
('3233021', 'Zechao Li', 'zechao li')
('1713721', 'Xiangyang Xue', 'xiangyang xue')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
c035c193eed5d72c7f187f0bc880a17d217dada0Local Gradient Gabor Pattern (LGGP) with Applications in
Face Recognition, Cross-spectral Matching and Soft
Biometrics
West Virginia University
Michigan State University
Morgantown, WV, USA
East Lansing, MI, USA
('1751335', 'Cunjian Chen', 'cunjian chen')
('1698707', 'Arun Ross', 'arun ross')
c0cdaeccff78f49f4604a6d263dc6eb1bb8707d5Int'l Conf. IP, Comp. Vision, and Pattern Recognition | IPCV'16 |
263
MLP Neural Network Based Approach for
Facial Expression Analysis
Kent State University, Kent, Ohio, USA
2 Department of Robotic Engineering, AU-TNB, Tehran, Iran
c00f402b9cfc3f8dd2c74d6b3552acbd1f358301LEARNING DEEP REPRESENTATION FROM COARSE TO FINE FOR FACE ALIGNMENT
Shanghai Jiao Tong University, China
('3403352', 'Zhiwen Shao', 'zhiwen shao')
('7406856', 'Shouhong Ding', 'shouhong ding')
('3450479', 'Yiru Zhao', 'yiru zhao')
('3451401', 'Qinchuan Zhang', 'qinchuan zhang')
('8452947', 'Lizhuang Ma', 'lizhuang ma')
{shaozhiwen, feiben, yiru.zhao, qinchuan.zhang}@sjtu.edu.cn, ma-lz@cs.sjtu.edu.cn
c089c7d8d1413b54f59fc410d88e215902e51638TVParser: An Automatic TV Video Parsing Method
National Lab of Pattern Recognition, Institute of Automation
Chinese Academy of Sciences, Beijing, China, 100190
China-Singapore Institute of Digital Media, Singapore
('1690954', 'Chao Liang', 'chao liang')
('1688633', 'Changsheng Xu', 'changsheng xu')
('1709439', 'Jian Cheng', 'jian cheng')
('1694235', 'Hanqing Lu', 'hanqing lu')
fcliang,csxu,jcheng,luhqg@nlpr.ia.ac.cn
c0ee89dc2dad76147780f96294de9e421348c1f4Efficiently detecting outlying behavior in
video-game players
Interdisciplinary Program in Visual Information Processing, Korea University, Seoul, Korea
School of Games, Hongik University, Seoul, Korea
Korea University
Seoul, Korea
4 AI Lab, NCSOFT, Seongnam, Korea
('7652095', 'Young Bin Kim', 'young bin kim')
('40267433', 'Shin Jin Kang', 'shin jin kang')
('4972813', 'Sang Hyeok Lee', 'sang hyeok lee')
('5702793', 'Jang Young Jung', 'jang young jung')
('3000093', 'Hyeong Ryeol Kam', 'hyeong ryeol kam')
('2013790', 'Jung Lee', 'jung lee')
('2467280', 'Young Sun Kim', 'young sun kim')
('3103240', 'Joonsoo Lee', 'joonsoo lee')
('22232963', 'Chang Hun Kim', 'chang hun kim')
c0ca6b992cbe46ea3003f4e9b48f4ef57e5fb774A Two-Layer Representation For Large-Scale Action Recognition
Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University
2Shanghai Key Lab of Digital Media Processing and Transmission, 3Microsoft Research Asia
University of California, San Diego
('1701941', 'Jun Zhu', 'jun zhu')
('2450889', 'Baoyuan Wang', 'baoyuan wang')
('1795291', 'Xiaokang Yang', 'xiaokang yang')
('38790729', 'Wenjun Zhang', 'wenjun zhang')
('1736745', 'Zhuowen Tu', 'zhuowen tu')
{zhujun.sjtu,zhuowen.tu}@gmail.com, baoyuanw@microsoft.com, {xkyang,zhangwenjun}@sjtu.edu.cn
c00df53bd46f78ae925c5768d46080159d4ef87dLearning Bag-of-Features Pooling for Deep Convolutional Neural Networks
Aristotle University of Thessaloniki
Thessaloniki, Greece
('3200630', 'Nikolaos Passalis', 'nikolaos passalis')
('1737071', 'Anastasios Tefas', 'anastasios tefas')
passalis@csd.auth.gr, tefas@aiia.csd.auth.gr
c0d5c3aab87d6e8dd3241db1d931470c15b9e39d
c05441dd1bc418fb912a6fafa84c0659a6850bf0Received on 16th July 2014
Revised on 11th September 2014
Accepted on 23rd September 2014
doi: 10.1049/iet-cvi.2014.0200
www.ietdl.org
ISSN 1751-9632
Face recognition under varying illumination based on
adaptive homomorphic eight local directional patterns
Utah State University, Logan, UT 84322-4205, USA
('2147212', 'Mohammad Reza Faraji', 'mohammad reza faraji')
('1725739', 'Xiaojun Qi', 'xiaojun qi')
E-mail: Mohammadreza.Faraji@aggiemail.usu.edu
eee8a37a12506ff5df72c402ccc3d59216321346Uredniki:
dr. Tomaž Erjavec
Odsek za tehnologije znanja
Institut »Jožef Stefan«, Ljubljana
dr. Jerneja Žganec Gros
Alpineon d.o.o, Ljubljana
Založnik: Institut »Jožef Stefan«, Ljubljana
Tisk: Birografika BORI d.o.o.
Priprava zbornika: Mitja Lasič
Oblikovanje naslovnice: dr. Damjan Demšar
Tiskano iz predloga avtorjev
Naklada: 50
Ljubljana, oktober 2008
Konferenco IS 2008 sofinancirata
Ministrstvo za visoko šolstvo, znanost in tehnologijo
Institut »Jožef Stefan«
ISSN 1581-9973
CIP – Cataloguing in Publication record
National and University Library, Ljubljana
004.934(082)
81'25:004.6(082)
004.8(063)
Proceedings of the Sixth Language Technologies Conference, October
16th–17th, 2008: proceedings of the 11th International
Multiconference Information Society – IS 2008, volume C / edited
by Tomaž Erjavec, Jerneja Žganec Gros. – Ljubljana (ISSN 1581-9973)
ISBN 978-961-264-006-4
241520896
ee6b503ab512a293e3088fdd7a1c893a77902acbAutomatic Name-Face Alignment to Enable Cross-Media News Retrieval
*School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing,
The University of North Carolina at Charlotte, USA
Fudan University, Shanghai, China
('7550713', 'Yuejie Zhang', 'yuejie zhang')
('1721131', 'Wei Wu', 'wei wu')
('1678662', 'Yang Li', 'yang li')
('1751513', 'Cheng Jin', 'cheng jin')
('1713721', 'Xiangyang Xue', 'xiangyang xue')
('2344620', 'Jianping Fan', 'jianping fan')
*{yjzhang, 10210240122, 11210240052, jc, xyxue}@fudan.edu.cn, +jfan@uncc.edu
ee18e29a2b998eddb7f6663bb07891bfc72622481119
Local Linear Discriminant Analysis Framework
Using Sample Neighbors
('38162192', 'David Zhang', 'david zhang')
eeb6d084f9906c53ec8da8c34583105ab5ab828412
Generation of Facial Expression Map using
Supervised and Unsupervised Learning
Akita Prefectural University
Akita University
Japan
1. Introduction
Recently, studies of human face recognition have been conducted vigorously (Fasel &
Luettin, 2003; Yang et al., 2002; Pantic & Rothkrantz, 2000a; Zhao et al., 2000; Hasegawa et
al., 1997; Akamatsu, 1997). Such studies are aimed at the implementation of an intelligent
man-machine interface. Especially, studies of facial expression recognition for human-
machine emotional communication are attracting attention (Fasel & Luettin, 2003; Pantic &
Rothkrantz, 2000a; Tian et al., 2001; Pantic & Rothkrantz, 2000b; Lyons et al., 1999; Lyons et
al., 1998; Zhang et al., 1998).
The shape (static diversity) and motion (dynamic diversity) of facial components such as the
eyebrows, eyes, nose, and mouth manifest expressions. From the perspective of static diversity,
because facial configurations differ among people, it is presumed that the facial expression pattern
appearing on a face when an expression is manifested includes person-specific features. Likewise, from
the viewpoint of dynamic diversity, because the dynamic change of facial expression originates in a
person-specific facial expression pattern, it is presumed that the displacement vector of facial
components has person-specific features. These properties of the human face reveal the
following tasks.
The first task is to generalize a facial expression recognition model. Numerous conventional
approaches have attempted generalization of a facial expression recognition model. They
use the distance of motion of feature points set on a face and the motion vectors of facial
muscle movements in its arbitrary regions as feature values. Typically, such methods assign
that information to so-called Action Units (AUs) of a Facial Action Coding System (FACS)
(Ekman & Friesen, 1978). In fact, AUs are described qualitatively. Therefore, no objective
criteria pertain to the setting positions of feature points and regions. They all depend on a
particular researcher’s experience. However, features representing facial expressions are
presumed to differ among subjects. Accordingly, a huge effort is necessary to link
quantitative features with qualitative AUs for each subject and to derive universal features
therefrom. It is also suspected that a generalized facial expression recognition model that is
applicable to all subjects would disregard person-specific features of facial expressions that are
borne originally by each subject. For all the reasons described above, it is an important task to
establish a method to extract person-specific features using a common approach to every
subject, and to build a facial expression recognition model that incorporates these features.
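The displacement-vector idea described above can be illustrated with a minimal sketch (not from the chapter; the landmark names and pixel coordinates are invented) that compares feature-point positions between a neutral face and an expressive face:

```python
# Hypothetical sketch: per-landmark displacement between a neutral frame
# and an expressive frame. Each landmark is an (x, y) pixel coordinate;
# the result is a person-specific (dx, dy) vector per landmark.

def displacement_vectors(neutral, expressive):
    """Return {landmark: (dx, dy)} between two landmark dictionaries."""
    return {name: (expressive[name][0] - neutral[name][0],
                   expressive[name][1] - neutral[name][1])
            for name in neutral}

neutral = {"left_brow": (30, 40), "mouth_corner": (50, 80)}
smile   = {"left_brow": (30, 38), "mouth_corner": (54, 76)}
vecs = displacement_vectors(neutral, smile)
# With image y growing downward, the brow here moves up by 2 px
# and the mouth corner moves outward by 4 px and up by 4 px.
```

Collections of such vectors, gathered per subject, are the kind of quantitative features that conventional approaches then map onto the qualitative Action Units.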
Source: Machine Learning, Book edited by: Abdelhamid Mellouk and Abdennacer Chebira,
ISBN 978-3-902613-56-1, pp. 450, February 2009, I-Tech, Vienna, Austria
www.intechopen.com
('1932760', 'Masaki Ishii', 'masaki ishii')
('2052920', 'Kazuhito Sato', 'kazuhito sato')
('1738333', 'Hirokazu Madokoro', 'hirokazu madokoro')
('21063785', 'Makoto Nishida', 'makoto nishida')
ee815f60dc4a090fa9fcfba0135f4707af21420dEAC-Net: A Region-based Deep Enhancing and Cropping Approach for
Facial Action Unit Detection
Grove School of Engineering, CUNY City College, NY, USA
2 Department of Computer Science, CUNY Graduate Center, NY, USA
Engineering and Applied Science, SUNY Binghamton University, NY, USA
('48625314', 'Wei Li', 'wei li')
eed7920682789a9afd0de4efd726cd9a706940c8Computers to Help with Conversations:
Affective Framework to Enhance Human Nonverbal Skills
by
Mohammed Ehsan Hoque
B.S., Pennsylvania State University
M.S., University of Memphis
Submitted to the Program in Media Arts and Sciences,
School of Architecture and Planning,
In partial fulfilment of the requirements for the degree of
DOCTOR OF PHILOSOPHY
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
September 2013
Massachusetts Institute of Technology 2013. All rights reserved
Author
Certified by
Accepted by
Program in Media Arts and Sciences
August 15, 2013
Rosalind W. Picard
Professor of Media Arts and Sciences
Program in Media Arts and Sciences, MIT
Thesis supervisor
Pattie Maes
Associate Academic Head
Program in Media Arts and Sciences, MIT
ee7093e91466b81d13f4d6933bcee48e4ee63a16Discovering Person Identity via
Large-Scale Observations
Interactive and Digital Media Institute, National University of Singapore, SG
School of Computing, National University of Singapore, SG
('3026404', 'Yongkang Wong', 'yongkang wong')
('1986874', 'Lekha Chaisorn', 'lekha chaisorn')
('1744045', 'Mohan S. Kankanhalli', 'mohan s. kankanhalli')
ee461d060da58d6053d2f4988b54eff8655ecede
eefb8768f60c17d76fe156b55b8a00555eb40f4dSubspace Scores for Feature Selection in Computer Vision ('2032038', 'Cameron Musco', 'cameron musco')
('2767340', 'Christopher Musco', 'christopher musco')
cnmusco@mit.edu
cpmusco@mit.edu
ee463f1f72a7e007bae274d2d42cd2e5d817e751Automatically Extracting Qualia Relations for the Rich Event Ontology
University of Colorado Boulder, 2U.S. Army Research Lab
('51203051', 'Ghazaleh Kazeminejad', 'ghazaleh kazeminejad')
('3202888', 'Claire Bonial', 'claire bonial')
('1783500', 'Susan Windisch Brown', 'susan windisch brown')
('1728285', 'Martha Palmer', 'martha palmer')
{ghazaleh.kazeminejad, susan.brown, martha.palmer}@colorado.edu
claire.n.bonial.civ@mail.mil
eed1dd2a5959647896e73d129272cb7c3a2e145c
ee92d36d72075048a7c8b2af5cc1720c7bace6ddFACE RECOGNITION USING MIXTURES OF PRINCIPAL COMPONENTS
Video and Display Processing
Philips Research USA
Briarcliff Manor, NY 10510
('1727257', 'Deepak S. Turaga', 'deepak s. turaga')
('1746230', 'Tsuhan Chen', 'tsuhan chen')
deepak.turaga@philips.com
ee418372b0038bd3b8ae82bd1518d5c01a33a7ecCSE 255 Winter 2015 Assignment 1: Eye Detection using Histogram
of Oriented Gradients and Adaboost Classifier
Electrical and Computer Engineering Department
University of California, San Diego
('2812409', 'Kevan Yuen', 'kevan yuen')kcyuen@eng.ucsd.edu
eee06d68497be8bf3a8aba4fde42a13aa090b301CR-GAN: Learning Complete Representations for Multi-view Generation
Rutgers University
University of North Carolina at Charlotte
('6812347', 'Yu Tian', 'yu tian')
('4340744', 'Xi Peng', 'xi peng')
('33860220', 'Long Zhao', 'long zhao')
('1753384', 'Shaoting Zhang', 'shaoting zhang')
('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')
{yt219, px13, lz311, dnm}@cs.rutgers.edu, szhang16@uncc.edu
eee2d2ac461f46734c8e674ae14ed87bbc8d45c6Generalized Rank Pooling for Activity Recognition
1Australian Centre for Robotic Vision, 2Data61/CSIRO
The Australian National University, Canberra, Australia
('2691929', 'Anoop Cherian', 'anoop cherian')
('1688071', 'Basura Fernando', 'basura fernando')
('23911916', 'Mehrtash Harandi', 'mehrtash harandi')
('49384847', 'Stephen Gould', 'stephen gould')
firstname.lastname@{anu.edu.au, data61.csiro.au}
eed93d2e16b55142b3260d268c9e72099c53d5bcICFVR 2017: 3rd International Competition on Finger Vein Recognition
Chittagong University of Engineering and Technology
∗ These authors contributed equally to this work
Peking University
2Shenzhen Maidi Technology Co., LTD.
3TigerIT
('46867002', 'Yi Zhang', 'yi zhang')
('2560109', 'Houjun Huang', 'houjun huang')
('38728899', 'Haifeng Zhang', 'haifeng zhang')
('3142600', 'Liao Ni', 'liao ni')
('47210488', 'Wei Xu', 'wei xu')
('1694788', 'Nasir Uddin Ahmed', 'nasir uddin ahmed')
('9336364', 'Md. Shakil Ahmed', 'md. shakil ahmed')
('9372198', 'Yilun Jin', 'yilun jin')
('23100665', 'Yingjie Chen', 'yingjie chen')
('35273470', 'Jingxuan Wen', 'jingxuan wen')
('39201759', 'Wenxin Li', 'wenxin li')
eedfb384a5e42511013b33104f4cd3149432bd9eMultimodal Probabilistic Person
Tracking and Identification
in Smart Spaces
Dissertation approved by the Faculty of Informatics
of the Universität Fridericiana zu Karlsruhe (TH)
for the academic degree of
Doctor of Engineering (Doktor der Ingenieurwissenschaften)
by
from Karlsruhe
Date of the oral examination: 20.11.2009
First reviewer: Prof. Dr. A. Waibel
Second reviewer: Prof. Dr. R. Stiefelhagen
('1701229', 'Keni Bernardin', 'keni bernardin')
c94b3a05f6f41d015d524169972ae8fd52871b67The Fastest Deformable Part Model for Object Detection
Center for Biometrics and Security Research & National Laboratory of Pattern Recognition
Institute of Automation, Chinese Academy of Sciences, China
('1721677', 'Junjie Yan', 'junjie yan')
('1718623', 'Zhen Lei', 'zhen lei')
('39774417', 'Longyin Wen', 'longyin wen')
('34679741', 'Stan Z. Li', 'stan z. li')
{jjyan,zlei,lywen,szli}@nlpr.ia.ac.cn
c9424d64b12a4abe0af201e7b641409e182bababArticle
Which, When, and How: Hierarchical Clustering with
Human–Machine Cooperation
Academic Editor: Tom Burr
Received: 3 November 2016; Accepted: 14 December 2016; Published: 21 December 2016
('1751849', 'Huanyang Zheng', 'huanyang zheng')
('1703691', 'Jie Wu', 'jie wu')
Computer and Information Sciences, Temple University, PA 19121, USA; jiewu@temple.edu
* Correspondence: huanyang.zheng@temple.edu; Tel.: +1-215-204-8450
c91103e6612fa7e664ccbc3ed1b0b5deac865b02Automatic facial expression recognition using
statistical-like moments
Integrated Research Center, Universit`a Campus Bio-Medico di Roma
Via Alvaro del Portillo, 00128 Roma, Italy
('1679260', 'Giulio Iannello', 'giulio iannello')
('1720099', 'Paolo Soda', 'paolo soda')
{r.dambrosio, g.iannello, p.soda}@unicampus.it
c903af0d69edacf8d1bff3bfd85b9470f6c4c243
c97a5f2241cc6cd99ef0c4527ea507a50841f60bPerson Search in Videos with One Portrait
Through Visual and Temporal Links
CUHK-SenseTime Joint Lab, The Chinese University of Hong Kong
Tsinghua University
3 SenseTime Research
('39360892', 'Qingqiu Huang', 'qingqiu huang')
('40584026', 'Wentao Liu', 'wentao liu')
('1807606', 'Dahua Lin', 'dahua lin')
{hq016,dhlin}@ie.cuhk.edu.hk
liuwtwinter@gmail.com
c95cd36779fcbe45e3831ffcd3314e19c85defc5FACE RECOGNITION USING MULTI-MODAL LOW-RANK DICTIONARY LEARNING
University of Alberta, Edmonton, Canada
('1807674', 'Homa Foroughi', 'homa foroughi')
('2627414', 'Moein Shakeri', 'moein shakeri')
('1772846', 'Nilanjan Ray', 'nilanjan ray')
('1734058', 'Hong Zhang', 'hong zhang')
c9e955cb9709f16faeb0c840f4dae92eb875450aProposal of Novel Histogram Features
for Face Detection
Harbin Institute of Technology, School of Computer Science and Technology
P.O.Box 1071, Harbin, Heilongjiang 150001, China
Heilongjiang University, College of Computer Science and Technology, China
('2607285', 'Haijing Wang', 'haijing wang')
('40426020', 'Peihua Li', 'peihua li')
('1821107', 'Tianwen Zhang', 'tianwen zhang')
ninhaijing@yahoo.com
peihualj@hotmail.com
c92bb26238f6e30196b0c4a737d8847e61cfb7d4BEYOND CONTEXT: EXPLORING SEMANTIC SIMILARITY FOR TINY FACE DETECTION
School of Computer Science, Northwestern Polytechnical University, P.R.China
Global Big Data Technologies Centre (GBDTC), University of Technology Sydney, Australia
School of Data and Computer Science, Sun Yat-sen University, P.R.China
('24336288', 'Yue Xi', 'yue xi')
('3104013', 'Jiangbin Zheng', 'jiangbin zheng')
('1714410', 'Wenjing Jia', 'wenjing jia')
('3031842', 'Hanhui Li', 'hanhui li')
c9bbd7828437e70cc3e6863b278aa56a7d545150Unconstrained Fashion Landmark Detection via
Hierarchical Recurrent Transformer Networks
The Chinese University of Hong Kong
2SenseTime Group Limited
('1979911', 'Sijie Yan', 'sijie yan')
('3243969', 'Ziwei Liu', 'ziwei liu')
('47571885', 'Ping Luo', 'ping luo')
('1725421', 'Shi Qiu', 'shi qiu')
{ys016,lz013,pluo,xtang}@ie.cuhk.edu.hk,sqiu@sensetime.com,xgwang@ee.cuhk.edu.hk
c9f588d295437009994ddaabb64fd4e4c499b294Predicting Professions through
Probabilistic Model under Social Context
Northeastern University
Boston, MA, 02115
('2025056', 'Ming Shao', 'ming shao')
('2897748', 'Liangyue Li', 'liangyue li')
('1708679', 'Yun Fu', 'yun fu')
mingshao@ccs.neu.edu, {liangyue, yunfu}@ece.neu.edu
c92da368a6a886211dc759fe7b1b777a64d8b682International Journal of Science and Advanced Technology (ISSN 2221-8386) Volume 1 No 2 April 2011
http://www.ijsat.com
Face Recognition System based on
Face Pose Estimation and
Frontal Face Pose Synthesis
Department of Electrical Engineering
National Chiao-Tung University
Hsinchu, Taiwan, R.O.C
Department of Electrical Engineering
National Chiao-Tung University
Hsinchu, Taiwan, R.O.C
('4525043', 'Kuo-Yu Chiu', 'kuo-yu chiu')
('1707677', 'Sheng-Fuu Lin', 'sheng-fuu lin')
Alvin_cgr@hotmail.com
c98983592777952d1751103b4d397d3ace00852dFace Synthesis from Facial Identity Features
Google Research
University of Massachusetts Amherst
CSAIL, MIT and Google Research
('39578349', 'Forrester Cole', 'forrester cole')
('8707513', 'Aaron Sarna', 'aaron sarna')
('2636941', 'David Belanger', 'david belanger')
('1707347', 'Dilip Krishnan', 'dilip krishnan')
('2138834', 'Inbar Mosseri', 'inbar mosseri')
('1768236', 'William T. Freeman', 'william t. freeman')
fcole@google.com
sarna@google.com
belanger@cs.umass.edu
dilipkay@google.com
inbarm@google.com
wfreeman@google.com
c9367ed83156d4d682cefc59301b67f5460013e0Geometry-Contrastive GAN for Facial Expression
Transfer
Institute of Software, Chinese Academy of Sciences
('35790820', 'Fengchun Qiao', 'fengchun qiao')
('35996065', 'Zirui Jiao', 'zirui jiao')
('3238696', 'Zhihao Li', 'zhihao li')
('1804472', 'Hui Chen', 'hui chen')
('7643981', 'Hongan Wang', 'hongan wang')
fc1e37fb16006b62848def92a51434fc74a2431aDRAFT
A Comprehensive Analysis of Deep Regression
('2793152', 'Pablo Mesejo', 'pablo mesejo')
('1780201', 'Xavier Alameda-Pineda', 'xavier alameda-pineda')
('1794229', 'Radu Horaud', 'radu horaud')
fc5bdb98ff97581d7c1e5eb2d24d3f10714aa192Initialization Strategies of Spatio-Temporal
Convolutional Neural Networks
University of Toronto
('2711409', 'Elman Mansimov', 'elman mansimov')
('2897313', 'Nitish Srivastava', 'nitish srivastava')
('1776908', 'Ruslan Salakhutdinov', 'ruslan salakhutdinov')
fc20149dfdff5fdf020647b57e8a09c06e11434bSubmitted 8/06; Revised 1/07; Published 5/07
Local Discriminant Wavelet Packet Coordinates for Face Recognition
Center for Computer Vision and Department of Mathematics
Sun Yat-Sen (Zhongshan) University
Guangzhou, 510275 China
Department of Electric Engineering
City University of Hong Kong
83 Tat Chee Avenue
Kowloon, Hong Kong, China
Editor: Donald Geman
('5692650', 'Chao-Chun Liu', 'chao-chun liu')
('1726138', 'Dao-Qing Dai', 'dao-qing dai')
('1718530', 'Hong Yan', 'hong yan')
STSDDQ@MAIL.SYSU.EDU.CN
H.YAN@CITYU.EDU.HK
fc516a492cf09aaf1d319c8ff112c77cfb55a0e5
fc0f5859a111fb17e6dcf6ba63dd7b751721ca61Design of an Automatic
Facial Expression Detector
An essay presented for the degree
of
M.Math
Applied Mathematics
University of Waterloo
2018/01/26
('2662893', 'Jian Liang', 'jian liang')
fcbec158e6a4ace3d4311b26195482b8388f0ee9Face Recognition from Still Images and Videos
Center for Automation Research (CfAR) and
Department of Electrical and Computer Engineering
University of Maryland, College Park, MD
I. INTRODUCTION
In most situations, identifying people by their faces is an effortless task for humans. Is this true for computers?
This very question defines the field of automatic face recognition [7], [31], [62], one of the most active research
areas in computer vision, pattern recognition, and image understanding.
Over the past decade, the problem of face recognition has attracted substantial attention from various disciplines
and has witnessed a skyrocketing growth of the literature. Below, we mainly emphasize some key perspectives of
the face recognition problem.
A. Biometric perspective
Face is a biometric. As a consequence, face recognition finds wide applications in authentication, security, and
so on. One recent application is the US-VISIT system by the Department of Homeland Security (DHS), collecting
foreign passengers’ fingerprints and face images.
Biometric signatures of a person characterize physiological or behavioral characteristics. Physiological
biometrics are innate or naturally occurring, while behavioral biometrics arise from mannerisms or traits that are
learned or acquired. Table I lists commonly used biometrics. Biometric technologies provide the foundation for an
extensive array of highly secure identification and personal verification solutions. Compared to conventional
identification and verification methods based on personal identification numbers (PINs) or passwords, biometric
technologies offer many advantages. First, biometrics are individualized traits, while passwords may be used or
stolen by someone other than the authorized user. Biometrics are also very convenient, since there is nothing to
carry or remember. In addition, biometric technologies are becoming more accurate and less expensive.
Among all the biometrics listed in Table I, the face is unique in that it is the only biometric belonging
to both physiological and behavioral categories. While the physiological part of the face has been widely exploited
Partially supported by NSF ITR Grant 03-25119. Zhou is now with Integrated Data Systems Department, Siemens Corporate Research,
November 5, 2004
DRAFT
('1682187', 'Shaohua Kevin Zhou', 'shaohua kevin zhou')
('9215658', 'Rama Chellappa', 'rama chellappa')
Email: {shaohua, rama}@cfar.umd.edu
Princeton, NJ 08540. His current email address is kzhou@scr.siemens.com.
fcd3d69b418d56ae6800a421c8b89ef363418665Effects of Aging over Facial Feature Analysis and Face
Recognition
Boğaziçi Un. Electronics Eng. Dept. March 2010
('3398552', 'Bilgin Esme', 'bilgin esme')
fcd77f3ca6b40aad6edbd1dab9681d201f85f365©Copyright 2014 ('3299424', 'Miro Enev', 'miro enev')
fcf91995dc4d9b0cee84bda5b5b0ce5b757740acProceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)
Asymmetric Discrete Graph Hashing
University of Florida, Gainesville, FL, 32611, USA
('2766473', 'Xiaoshuang Shi', 'xiaoshuang shi')
('2082604', 'Fuyong Xing', 'fuyong xing')
('46321210', 'Kaidi Xu', 'kaidi xu')
('2599018', 'Manish Sapkota', 'manish sapkota')
('49576071', 'Lin Yang', 'lin yang')
xsshi2015@ufl.edu
fc798314994bf94d1cde8d615ba4d5e61b6268b6Face Recognition: face in video, age invariance,
and facial marks
By
A DISSERTATION
Submitted to
Michigan State University
in partial fulfillment of the requirements
for the degree of
DOCTOR OF PHILOSOPHY
Computer Science
2009
('2222919', 'Unsang Park', 'unsang park')
fc23a386c2189f221b25dbd0bb34fcd26ccf60faA Discriminative Latent Model of Object
Classes and Attributes
School of Computing Science, Simon Fraser University, Canada
('40457160', 'Yang Wang', 'yang wang')
('10771328', 'Greg Mori', 'greg mori')
{ywang12,mori}@cs.sfu.ca
fc68c5a3ab80d2d31e6fd4865a7ff2b4ab66ca9fThis is a preprint of the paper presented at the 11th International Conference Beyond Databases, Architectures and
Structures (BDAS 2015), May 26-29 2015 in Ustroń, Poland and published in the Communications in Computer and
Information Science Volume 521, 2015, pp 585-597. DOI: 10.1007/978-3-319-18422-7_52
Evaluation Criteria for Affect-Annotated Databases
Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, Poland
('2414357', 'Agnieszka Landowska', 'agnieszka landowska')
('3271448', 'Mariusz Szwoch', 'mariusz szwoch')
('3175073', 'Wioleta Szwoch', 'wioleta szwoch')
szwoch@eti.pg.gda.pl
fc2bad3544c7c8dc7cd182f54888baf99ed75e53Efficient Retrieval for Large Scale Metric
Learning
Institute for Computer Graphics and Vision
Graz University of Technology, Austria
('1791182', 'Peter M. Roth', 'peter m. roth')
('3628150', 'Horst Bischof', 'horst bischof')
{koestinger,pmroth,bischof}@icg.tugraz.at
fcf8bb1bf2b7e3f71fb337ca3fcf3d9cf18daa46MANUSCRIPT SUBMITTED TO IEEE TRANS. PATTERN ANAL. MACH. INTELL., JULY 2010
Feature Selection via Sparse Approximation for
Face Recognition
('1944073', 'Yixiong Liang', 'yixiong liang')
('31685288', 'Lei Wang', 'lei wang')
('2090968', 'Yao Xiang', 'yao xiang')
('6609276', 'Beiji Zou', 'beiji zou')
fcbf808bdf140442cddf0710defb2766c2d25c30IJCV manuscript No.
(will be inserted by the editor)
Unsupervised Semantic Action Discovery from Video
Collections
Received: date / Accepted: date
('3114252', 'Ozan Sener', 'ozan sener')
('1681995', 'Ashutosh Saxena', 'ashutosh saxena')
fdff2da5bdca66e0ab5874ef58ac2205fb088ed7Continuous Supervised Descent Method for
Facial Landmark Localisation
1Universitat Oberta de Catalunya, 156 Rambla del Poblenou, Barcelona, Spain
2Universitat de Barcelona, 585 Gran Via de les Corts Catalanes, Barcelona, Spain
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
4Computer Vision Center, O Building, UAB Campus, Bellaterra, Spain
University of Pittsburgh, Pittsburgh, PA, USA
('3305641', 'Marc Oliu', 'marc oliu')
('1733113', 'Takeo Kanade', 'takeo kanade')
('7855312', 'Sergio Escalera', 'sergio escalera')
fdfd57d4721174eba288e501c0c120ad076cdca8An Analysis of Action Recognition Datasets for
Language and Vision Tasks
Institute for Language, Cognition and Computation
School of Informatics, University of Edinburgh
10 Crichton Street, Edinburgh EH8 9AB
('2921001', 'Spandana Gella', 'spandana gella')
('48716849', 'Frank Keller', 'frank keller')
S.Gella@sms.ed.ac.uk, keller@inf.ed.ac.uk
fd4ac1da699885f71970588f84316589b7d8317bJOURNAL OF LATEX CLASS FILES, VOL. 6, NO. 1, JANUARY 2007
Supervised Descent Method
for Solving Nonlinear Least Squares
Problems in Computer Vision
('3182065', 'Xuehan Xiong', 'xuehan xiong')
('1707876', 'Fernando De la Torre', 'fernando de la torre')
fd33df02f970055d74fbe69b05d1a7a1b9b2219bSingle Shot Temporal Action Detection
Shanghai Jiao Tong University, China. 2Columbia University, USA
Cooperative Medianet Innovation Center (CMIC), Shanghai Jiao Tong University, China
('6873935', 'Tianwei Lin', 'tianwei lin')
('1758267', 'Xu Zhao', 'xu zhao')
('2195345', 'Zheng Shou', 'zheng shou')
{wzmsltw,zhaoxu}@sjtu.edu.cn,zs2262@columbia.edu
fdf533eeb1306ba418b09210387833bdf27bb756
fdda5852f2cffc871fd40b0cb1aa14cea54cd7e3Im2Flow: Motion Hallucination from Static Images for Action Recognition
UT Austin
UT Austin
UT Austin
('3387849', 'Ruohan Gao', 'ruohan gao')
('50398746', 'Bo Xiong', 'bo xiong')
('1794409', 'Kristen Grauman', 'kristen grauman')
rhgao@cs.utexas.edu
bxiong@cs.utexas.edu
grauman@cs.utexas.edu
fdfaf46910012c7cdf72bba12e802a318b5bef5aComputerized Face Recognition in Renaissance
Portrait Art
('18640672', 'Ramya Srinivasan', 'ramya srinivasan')
('3007257', 'Conrad Rudolph', 'conrad rudolph')
('1688416', 'Amit Roy-Chowdhury', 'amit roy-chowdhury')
fd15e397629e0241642329fc8ee0b8cd6c6ac807Semi-Supervised Clustering with Neural Networks
IIIT-Delhi, India
('2200208', 'Ankita Shukla', 'ankita shukla')
('39866663', 'Gullal Singh Cheema', 'gullal singh cheema')
('34817359', 'Saket Anand', 'saket anand')
{ankitas, gullal1408, anands}@iiitd.ac.in
fde41dc4ec6ac6474194b99e05b43dd6a6c4f06fMulti-Expert Gender Classification on Age Group by Integrating Deep Neural
Networks
Yonsei University
50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea.
('51430701', 'Jun Beom Kho', 'jun beom kho')kojb87@hanmail.net
fd9feb21b3d1fab470ff82e3f03efce6a0e67a1fUniversity of Twente
Department of Services, Cybersecurity and Safety
Master Thesis
Deep Verification Learning
Author:
F.H.J. Hillerström
Committee:
Prof. Dr. Ir. R.N.J. Veldhuis
Dr. Ir. L.J. Spreeuwers
Dr. Ir. D. Hiemstra
December 5, 2016
fdca08416bdadda91ae977db7d503e8610dd744f
ICT-2009.7.1
KSERA Project
2010-248085
Deliverable D3.1
Human Robot Interaction
18 October 2010
Public Document
The KSERA project (http://www.ksera-project.eu) has received funding from the European Commission under the 7th Framework Programme (FP7) for Research and Technological Development under grant agreement n°2010-248085.
fd53be2e0a9f33080a9db4b5a5e416e24ae8e198Apparent Age Estimation Using Ensemble of Deep Learning Models
Refik Can Mallı∗
Mehmet Aygün∗
Hazım Kemal Ekenel
Istanbul Technical University
Istanbul, Turkey
{mallir,aygunme,ekenel}@itu.edu.tr
fd71ae9599e8a51d8a61e31e6faaaf4a23a17d81Action Detection from a Robot-Car Perspective
Università degli Studi Federico II
Naples, Italy
Oxford Brookes University
Oxford, UK
('39078800', 'Valentina Fontana', 'valentina fontana')
('51149466', 'Manuele Di Maio', 'manuele di maio')
('51152717', 'Stephen Akrigg', 'stephen akrigg')
('1931660', 'Gurkirt Singh', 'gurkirt singh')
('49348905', 'Suman Saha', 'suman saha')
('1754181', 'Fabio Cuzzolin', 'fabio cuzzolin')
vale.fontana@studenti.unina.it, man.dimaio@gmail.com
15057204@brookes.ac.uk, gurkirt.singh-2015@brookes.ac.uk,
suman.saha-2014@brookes.ac.uk, fabio.cuzzolin@brookes.ac.uk
fd96432675911a702b8a4ce857b7c8619498bf9fImproved Face Detection and Alignment using Cascade
Deep Convolutional Network
†Beijing Key Laboratory of Intelligent Information Technology, School of
Computer Science, Beijing Institute of Technology, Beijing 100081, P.R.China
China Mobile Research Institute, Xuanwu Men West Street, Beijing
('22244104', 'Weilin Cong', 'weilin cong')
('2901725', 'Sanyuan Zhao', 'sanyuan zhao')
('1698061', 'Hui Tian', 'hui tian')
('34926055', 'Jianbing Shen', 'jianbing shen')
fd10b0c771a2620c0db294cfb82b80d65f73900dIdentifying The Most Informative Features Using A Structurally Interacting Elastic Net
Central University of Finance and Economics, Beijing, China
Xiamen University, Xiamen, Fujian, China
University of York, York, UK
('2290930', 'Lixin Cui', 'lixin cui')
('1749518', 'Lu Bai', 'lu bai')
('47295137', 'Zhihong Zhang', 'zhihong zhang')
('49416727', 'Yue Wang', 'yue wang')
('1679753', 'Edwin R. Hancock', 'edwin r. hancock')
fd7b6c77b46420c27725757553fcd1fb24ea29a8MEXSVMs: Mid-level Features for Scalable Action Recognition
Dartmouth College
6211 Sudikoff Lab, Hanover, NH 03755
Dartmouth Computer Science Technical Report TR2013-726
('1687325', 'Du Tran', 'du tran')
('1732879', 'Lorenzo Torresani', 'lorenzo torresani')
{dutran,lorenzo}@cs.dartmouth.edu
fdb33141005ca1b208a725796732ab10a9c37d75Int.J.Appl. Math. Comput.Sci.,2016,Vol. 26,No. 2,451–465
DOI: 10.1515/amcs-2016-0032
A CONNECTIONIST COMPUTATIONAL METHOD FOR FACE RECOGNITION
, JOSÉ A. GIRONA-SELVA a
aDepartment of Computer Technology
University of Alicante, 03690, San Vicente del Raspeig, Alicante, Spain
In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced.
First, faces are detected by using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial
graph are extracted automatically by adjusting a grid of points to the result of an edge detector. After that, the position of
the nodes, their relation with their neighbors and their Gabor jets are calculated in order to obtain the feature vector defining
each face. A self-organizing map (SOM) framework is then presented, in which the calculation of the winning neuron and the recognition process are performed using a similarity function that takes into account both the geometric and texture information of the facial graph. The set of experiments carried out for our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.
Keywords: pattern recognition, face recognition, neural networks, self-organizing maps.
1. Introduction
In recent years, there has been intensive research carried out to develop complex security systems involving biometric features. Automated biometric systems are being widely used in many applications such as surveillance, digital libraries, forensic work, law enforcement, human-computer intelligent interaction, and banking, among others. For applications requiring high levels of security, biometrics can be integrated with other authentication means such as smart cards and passwords. In relation to this, face recognition is an emerging research area and, in the next few years, it is expected to be used extensively in automatic human recognition systems in many of the applications mentioned before.
One of the most popular methods for face recognition
is elastic bunch graph matching (EBGM), proposed by
Wiskott et al. (1997). This method is an evolution of the
so-called dynamic link architecture (DLA) (Kotropoulos
and Pitas, 1997). The main idea in elastic graph matching
is to represent a face starting from a set of reference or
fiducial points known as landmarks. These fiducial points
have a spatial coherence, as they are connected using a
graph structure. Therefore, EBGM represents faces as
facial graphs with nodes at those facial landmarks (such
as eyes, the tip of the nose, etc.). Considering these nodes,
geometric information can be extracted, and both distance
and angle metrics can be defined accordingly.
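The facial-graph comparison described above combines texture information (Gabor jets at each node) with geometric information (node positions). A minimal sketch of such a combined similarity is shown below; the weighting scheme and function names are illustrative assumptions, not the paper's exact formulation.

```python
import math

def jet_similarity(j1, j2):
    """Cosine similarity between two Gabor jet magnitude vectors."""
    num = sum(a * b for a, b in zip(j1, j2))
    den = math.sqrt(sum(a * a for a in j1)) * math.sqrt(sum(b * b for b in j2))
    return num / den if den else 0.0

def graph_similarity(nodes1, nodes2, alpha=0.5):
    """Combine texture and geometric similarity of two facial graphs.

    Each node is (x, y, jet); both graphs must share the same fiducial-point
    ordering. alpha weights texture against geometry (hypothetical choice).
    """
    tex = sum(jet_similarity(a[2], b[2]) for a, b in zip(nodes1, nodes2)) / len(nodes1)
    geo = sum(math.dist(a[:2], b[:2]) for a, b in zip(nodes1, nodes2)) / len(nodes1)
    geo_sim = 1.0 / (1.0 + geo)  # map mean distance into (0, 1]
    return alpha * tex + (1 - alpha) * geo_sim
```

Identical graphs score 1.0, and the score decays as node positions drift or jet responses diverge.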
This algorithm takes into account that facial images
have many nonlinear features (variations in lighting,
pose and expression) that are not generally considered
in linear analysis methods, such as linear discriminant
analysis (LDA) or principal component analysis (PCA)
(Shin and Park, 2011). Moreover, it is particularly robust
when out-of-plane rotations appear. However, the main
drawback of this method is that it requires an accurate
location of the fiducial points.
Artificial neural networks (ANNs) are one of the
most often used paradigms to address problems in
artificial intelligence (Bańka et al., 2014; Kayarvizhy
et al., 2014; Tran et al., 2014; Kumar and Kumar,
2015). Among the different approaches to ANNs, the self-organizing map (SOM) has special features for association
and pattern classification (Kohonen, 2001), and it is one of
the most popular neural network models. This technique
is suitable in situations where there is an inaccuracy or a
lack of formalization of the problem to be solved. In these
cases, there is no precise mathematical formulation of
the relationship between the input patterns (Azorín-López
et al., 2014).
The SOM makes use of an unsupervised learning
('2274078', 'Francisco A. Pujol', 'francisco a. pujol')e-mail: {fpujol,hmora}@dtic.ua.es,jags20@alu.ua.es
fdbacf2ff0fc21e021c830cdcff7d347f2fddd8eORIGINAL RESEARCH
published: 17 August 2018
doi: 10.3389/fnhum.2018.00327
Recognizing Frustration of Drivers
From Face Video Recordings and
Brain Activation Measurements With
Functional Near-Infrared
Spectroscopy
Institute of Transportation Systems, German Aerospace Center (DLR), Braunschweig
Germany, University of Oldenburg, Oldenburg, Germany
Experiencing frustration while driving can harm cognitive processing, result in aggressive
behavior and hence negatively influence driving performance and traffic safety. Being
able to automatically detect frustration would allow adaptive driver assistance and
automation systems to adequately react to a driver’s frustration and mitigate potential
negative consequences. To identify reliable and valid indicators of driver’s frustration,
we conducted two driving simulator experiments. In the first experiment, we aimed to
reveal facial expressions that indicate frustration in continuous video recordings of the
driver’s face taken while driving highly realistic simulator scenarios in which frustrated
or non-frustrated emotional states were experienced. An automated analysis of facial
expressions combined with multivariate logistic regression classification revealed that
frustrated time intervals can be discriminated from non-frustrated ones with accuracy
of 62.0% (mean over 30 participants). A further analysis of the facial expressions
revealed that frustrated drivers tend to activate muscles in the mouth region (chin
raiser, lip pucker, lip pressor). In the second experiment, we measured cortical activation
with almost whole-head functional near-infrared spectroscopy (fNIRS) while participants
experienced frustrating and non-frustrating driving simulator scenarios. Multivariate
logistic regression applied to the fNIRS measurements allowed us to discriminate
between frustrated and non-frustrated driving intervals with higher accuracy of 78.1%
(mean over 12 participants). Frustrated driving intervals were indicated by increased
activation in the inferior frontal, putative premotor and occipito-temporal cortices.
Our results show that facial and cortical markers of
frustration can be informative
for time-resolved driver state identification in complex realistic driving situations. The
markers derived here can potentially be used as an input for future adaptive driver
assistance and automation systems that detect driver frustration and adaptively react
to mitigate it.
Keywords: frustration, driver state recognition, facial expressions, functional near-infrared spectroscopy, adaptive
automation
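The classification step described in the abstract feeds facial action-unit activations into a multivariate logistic regression that separates frustrated from non-frustrated intervals. A minimal self-contained sketch of that kind of classifier is below; the feature names, training scheme, and data are hypothetical stand-ins for the study's actual pipeline.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=1000):
    """Batch gradient descent for logistic regression.

    X: rows of per-interval action-unit intensities (e.g. chin raiser,
    lip pucker, lip pressor); y: 1 = frustrated, 0 = non-frustrated.
    """
    d, n = len(X[0]), len(X)
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Threshold the logit at 0, i.e. probability 0.5."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0
```

On separable toy data the learned weights recover the labels; in practice the paper reports about 62% accuracy on facial features and 78% on fNIRS features.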
Edited by:
Guido P. H. Band,
Leiden University, Netherlands
Reviewed by:
Paola Pinti,
University College London
United Kingdom
Edmund Wascher,
Leibniz-Institut für Arbeitsforschung
an der TU Dortmund (IfADo),
Germany
*Correspondence:
Received: 17 April 2018
Accepted: 25 July 2018
Published: 17 August 2018
Citation:
Ihme K, Unni A, Zhang M, Rieger JW
and Jipp M (2018) Recognizing
Frustration of Drivers From Face
Video Recordings and Brain
Activation Measurements With
Functional Near-Infrared
Spectroscopy.
Front. Hum. Neurosci. 12:327.
doi: 10.3389/fnhum.2018.00327
Frontiers in Human Neuroscience | www.frontiersin.org
August 2018 | Volume 12 | Article 327
('2873465', 'Klas Ihme', 'klas ihme')
('34722642', 'Anirudh Unni', 'anirudh unni')
('48984951', 'Meng Zhang', 'meng zhang')
('2743311', 'Jochem W. Rieger', 'jochem w. rieger')
('50093361', 'Meike Jipp', 'meike jipp')
('2873465', 'Klas Ihme', 'klas ihme')
klas.ihme@dlr.de
fd892e912149e3f5ddd82499e16f9ea0f0063fa3GazeDirector: Fully Articulated Eye Gaze Redirection in Video
University of Cambridge, UK 2Carnegie Mellon University, USA
Max Planck Institute for Informatics, Germany
4Microsoft
('34399452', 'Erroll Wood', 'erroll wood')
('49933077', 'Louis-Philippe Morency', 'louis-philippe morency')
fde0180735699ea31f6c001c71eae507848b190fInternational Journal of Computer Applications (0975 – 8887)
Volume 76– No.3, August 2013
Face Detection and Sex Identification from Color Images
using AdaBoost with SVM based Component Classifier
Lecturer, Department of EEE
University of Information
Technology and Sciences
(UITS)
Dhaka, Bangladesh
B.Sc. in EEE
International University of
Business Agriculture and
Technology (IUBAT)
Dhaka-1230, Bangladesh
Lecturer, Department of EEE
International University of
Business Agriculture and
Technology (IUBAT)
Dhaka-1230, Bangladesh
('1804849', 'Tonmoy Das', 'tonmoy das')
('2832495', 'Md. Hafizur Rahman', 'md. hafizur rahman')
fdf8e293a7618f560e76bd83e3c40a0788104547Interspecies Knowledge Transfer for Facial Keypoint Detection
University of California, Davis
Zhejiang University
University of California, Davis
('35157022', 'Maheen Rashid', 'maheen rashid')
('10734287', 'Xiuye Gu', 'xiuye gu')
('1883898', 'Yong Jae Lee', 'yong jae lee')
mhnrashid@ucdavis.edu
gxy0922@zju.edu.cn
yongjaelee@ucdavis.edu
fd615118fb290a8e3883e1f75390de8a6c68bfdeJoint Face Alignment with Non-Parametric
Shape Models
University of Wisconsin Madison
http://www.cs.wisc.edu/~lizhang/projects/joint-align/
('1893050', 'Brandon M. Smith', 'brandon m. smith')
('40396555', 'Li Zhang', 'li zhang')
fdaf65b314faee97220162980e76dbc8f32db9d6Accepted Manuscript
Face recognition using both visible light image and near-infrared image and a deep
network
PII:
DOI:
Reference:
S2468-2322(17)30014-8
10.1016/j.trit.2017.03.001
TRIT 41
To appear in:
CAAI Transactions on Intelligence Technology
Received Date: 30 January 2017
Accepted Date: 28 March 2017
Please cite this article as: K. Guo, S. Wu, Y. Xu, Face recognition using both visible light image and
near-infrared image and a deep network, CAAI Transactions on Intelligence Technology (2017), doi:
10.1016/j.trit.2017.03.001.
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to
our customers we are providing this early version of the manuscript. The manuscript will undergo
copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please
note that during the production process errors may be discovered which could affect the content, and all
legal disclaimers that apply to the journal pertain.
('48477652', 'Kai Guo', 'kai guo')
('40200363', 'Shuai Wu', 'shuai wu')
f22d6d59e413ee255e5e0f2104f1e03be1a6722eLattice Long Short-Term Memory for Human Action Recognition
The Hong Kong University of Science and Technology
Stanford University
South China University of Technology
('41191188', 'Lin Sun', 'lin sun')
('2370507', 'Kui Jia', 'kui jia')
('1794604', 'Kevin Chen', 'kevin chen')
('2131088', 'Bertram E. Shi', 'bertram e. shi')
('1702137', 'Silvio Savarese', 'silvio savarese')
f24e379e942e134d41c4acec444ecf02b9d0d3a9International Scholarly Research Network
ISRN Machine Vision
Volume 2012, Article ID 505974, 7 pages
doi:10.5402/2012/505974
Research Article
Analysis of Facial Images across Age Progression by Humans
Temple University, Philadelphia, PA 19122, USA
Temple University, Philadelphia, PA 19122, USA
West Virginia University, Morgantown, WV 26506, USA
Received 25 July 2011; Accepted 25 August 2011
Academic Editors: O. Ghita and R.-H. Park
The appearance of human faces can undergo large variations over the aging process. Analysis of facial images taken over age progression has recently attracted increasing attention in the computer-vision community. Human abilities for such analysis are, however, less studied. In this paper, we conduct a thorough study of human ability on two tasks, face verification and age estimation, for facial images taken at different ages. Detailed and rigorous experimental analysis is provided, which helps in understanding the roles of different factors including age group, age gap, race, and gender. In addition, our study also leads to an interesting observation: for age estimation, photos from adults are more challenging than those from young people. We expect the study to provide a reference for machine-based solutions.
1. Introduction
Human faces are important in revealing personal characteristics and understanding visual data. Facial analysis has been studied over several decades in the computer vision
community [1, 2]. Analysis of facial images across age progression has recently attracted increasing research attention [3]
because of its important real-life applications. For example,
facial appearance predictors for missing people and automatic ID photo update systems play important roles in simulating the face aging of human beings. Age estimation can
also be applied to age-restricted vending machine [4]. Most
recent studies (see Section 2) of age-related facial image
analysis mainly focus on three tasks: face verification, age
estimation, and age effect simulation. In comparison, it
remains unclear how humans perform on these tasks.
In this paper, we study human ability on face verification and age estimation for face photos taken across age progression. Such studies are important in that they not only provide a reference for future machine-based solutions, but also provide insight into how different factors (e.g., age gaps, gender, etc.) affect facial analysis algorithms.
previous works on human performance for face recognition
and age estimation; however, most of them are either
focusing on non-age-related issues such as lighting [5] or
limited by the scale of image datasets (e.g., [6]). Taking
advantage of the recent available MORPH dataset [7], which
to the best of our knowledge is the largest publicly available
face aging dataset, we are able to conduct thorough human
studies on facial analysis tasks.
For face verification, the task is to let a human subject decide whether two photos come from the same person (at different ages). In addition to reporting our human subjects' general performance, we also analyze the effects of different factors, including age group, age gap, race, and gender, and we compare human performance with a previously reported baseline algorithm. For age estimation, similarly, we report and analyze human performance for general cases as well as for the different factors. Compared to a previous study on the FGNet database [8], our study implies that age estimation is harder for photos from adults than for those from young people.
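The per-factor analysis described above amounts to grouping each trial's human answer against the ground truth by a factor such as age gap or gender and computing accuracy per group. A minimal sketch follows; the field names and data are illustrative, not taken from the study's protocol.

```python
from collections import defaultdict

def accuracy_by_factor(trials, factor):
    """Aggregate verification trials into per-factor accuracies.

    trials: dicts with the subject's 'answer', the ground-truth 'same'
    label, and one or more factor fields (hypothetical field names).
    """
    hit = defaultdict(int)
    tot = defaultdict(int)
    for t in trials:
        key = t[factor]
        tot[key] += 1
        hit[key] += int(t["answer"] == t["same"])
    return {k: hit[k] / tot[k] for k in tot}
```

The same helper works for any factor field, so one pass per factor yields the breakdowns by age group, age gap, race, and gender.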
The rest of the paper is organized as follows. Section 2
shows the related works on different databases. Section 3
describes the details of human experiments of face-recog-
nition and age-estimation problems. Then, in Section 4,
('38129124', 'Jingting Zeng', 'jingting zeng')
('1805398', 'Haibin Ling', 'haibin ling')
('1686678', 'Longin Jan Latecki', 'longin jan latecki')
('1822413', 'Guodong Guo', 'guodong guo')
('38129124', 'Jingting Zeng', 'jingting zeng')
Correspondence should be addressed to Haibin Ling, hbling@temple.edu
f2b13946d42a50fa36a2c6d20d28de2234aba3b4Adaptive Facial Expression Recognition Using Inter-modal
Top-down Context
Ravi Kiran Sarvadevabhatla
Honda Research Institute USA
425 National Ave, Suite 100
Mountain View 94043, USA
Neural Prosthetics Lab
Department of Electrical and
Computer Engineering
McGill University
Montreal H3A 2A7, Canada
Neural Prosthetics Lab
Department of Electrical and
Computer Engineering
McGill University
Montreal H3A 2A7, Canada
Honda Research Institute USA
425 National Ave, Suite 100
Mountain View 94043, USA
('1708927', 'Mitchel Benovoy', 'mitchel benovoy')
('2003327', 'Sam Musallam', 'sam musallam')
('1692465', 'Victor Ng-Thow-Hing', 'victor ng-thow-hing')
RSarvadevabhatla@hra.com
benovoym@mcgill.ca
sam.musallam@mcgill.ca
vngthowhing@hra.com
f2c30594d917ea915028668bc2a481371a72a14dScene Understanding Using Internet Photo Collections
A dissertation submitted in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
University of Washington
2010
Program Authorized to Offer Degree: Computer Science and Engineering
('35577716', 'Ian Simon', 'ian simon')
f2ad9b43bac8c2bae9dea694f6a4e44c760e63daA Study on Illumination Invariant Face Recognition Methods
Based on Multiple Eigenspaces
1National Laboratory for Novel Software Technology
Nanjing University, Nanjing 210093, P.R.China
2Department of Computer Science
North Dakota State University, Fargo, ND58105, USA
('7878359', 'Wu-Jun Li', 'wu-jun li')
('2697799', 'Chong-Jun Wang', 'chong-jun wang')
('1737124', 'Bin Luo', 'bin luo')
Email: {liwujun, chjwang}@ai.nju.edu.cn
Email: Dianxiang.xu@ndsu.nodak.edu
f2e9494d0dca9fb6b274107032781d435a508de6
f2c568fe945e5743635c13fe5535af157b1903d1
f2a7f9bd040aa8ea87672d38606a84c31163e171Human Action Recognition without Human
National Institute of Advanced Industrial Science and Technology (AIST
Tsukuba, Ibaraki, Japan
('1713046', 'Yun He', 'yun he')
('3393640', 'Soma Shirakabe', 'soma shirakabe')
('1732705', 'Yutaka Satoh', 'yutaka satoh')
('1730200', 'Hirokatsu Kataoka', 'hirokatsu kataoka')
{yun.he, shirakabe-s, yu.satou, hirokatsu.kataoka}@aist.go.jp
f257300b2b4141aab73f93c146bf94846aef5fa1Eigen Evolution Pooling for Human Action Recognition
Stony Brook University, Stony Brook, NY 11794, USA
('2295608', 'Yang Wang', 'yang wang')
('49701507', 'Vinh Tran', 'vinh tran')
('2356016', 'Minh Hoai', 'minh hoai')
{wang33, tquangvinh, minhhoai}@cs.stonybrook.edu
f20e0eefd007bc310d2a753ba526d33a8aba812cLee et al.: RGB-D FACE RECOGNITION WITH A DEEP LEARNING APPROACH
Accurate and robust face recognition from
RGB-D images with a deep learning
approach
Yuancheng Lee
http://cv.cs.nthu.edu.tw/php/people/profile.php?uid=150
http://cv.cs.nthu.edu.tw/php/people/profile.php?uid=153
Ching-Wei Tseng
http://cv.cs.nthu.edu.tw/php/people/profile.php?uid=156
Computer Vision Lab,
Department of
Computer Science,
National Tsing Hua
University
Hsinchu, Taiwan
http://www.cs.nthu.edu.tw/~lai/
('7557765', 'Jiancong Chen', 'jiancong chen')
('1696527', 'Shang-Hong Lai', 'shang-hong lai')
f26097a1a479fb6f32b27a93f8f32609cfe30fdc
f231046d5f5d87e2ca5fae88f41e8d74964e8f4f
f28b7d62208fdaaa658716403106a2b0b527e763Clustering-driven Deep Embedding with Pairwise Constraints
JACOB GOLDBERGER, Bar-Ilan University
Fig. 1. Employing deep embeddings for clustering 3D shapes. Above, we use PCA to visualize the output embedding of point clouds of chairs. We also highlight
(in unique colors) a few random clusters and display a few representative chairs from these clusters.
Recently, there has been increasing interest in leveraging the competence of neural networks to analyze data. In particular, new clustering methods that employ deep embeddings have been presented. In this paper, we
depart from centroid-based models and suggest a new framework, called
Clustering-driven deep embedding with PAirwise Constraints (CPAC), for
non-parametric clustering using a neural network. We present a clustering-
driven embedding based on a Siamese network that encourages pairs of data
points to output similar representations in the latent space. Our pair-based
model allows augmenting the information with labeled pairs to constitute a
semi-supervised framework. Our approach is based on analyzing the losses
associated with each pair to refine the set of constraints. We show that clus-
tering performance increases when using this scheme, even with a limited
amount of user queries. We demonstrate how our architecture is adapted
for various types of data and present the first deep framework to cluster 3D
shapes.
INTRODUCTION
Autoencoders provide means to analyze data without supervision.
Autoencoders based on deep neural networks include non-linear
neurons which significantly strengthen the power of the analysis.
The key idea is that the encoders project the data into an embedding
latent space, where the L2 proximity among the projected elements
better expresses their similarity. To further enhance the data prox-
imity in the embedding space, the encoder can be encouraged to
form tight clusters in the embedding space. Xie et al. [2016] have
presented an unsupervised embedding driven by a centroid-based
clustering. They have shown that their deep embedding leads to
better clustering of the data. More advanced clustering-driven em-
bedding techniques have been recently presented [Dizaji et al. 2017;
Yang et al. 2016]. These techniques are all centroid-based and para-
metric, in the sense that the number of clusters is known a-priori.
In this paper, we present a clustering-driven embedding technique
that allows semi-supervision. The idea is to depart from centroid-
based methods and use pairwise constraints to drive the clustering.
Most, or all the constraints, can be learned with no supervision,
while possibly a small portion of the data is supervised. More specifi-
cally, we adopt robust continuous clustering (RCC) [Shah and Koltun
2017] as a driving mechanism to encourage a tight clustering of the
embedded data.
The idea is to extract pairwise constraints using a mutual k-
nearest neighbors analysis, and use these pairs as must-link con-
straints. With no supervision, the set of constraints is imperfect
and contains false positive pairs on one hand. Our technique allows
removing false positive pairs and strengthening true positive pairs
actively by a user. We present an approach that analyzes the losses
associated with the pairs to form a set of false positive candidates.
See Figure 2(b)-(c) for a visualization of the distribution of the data
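The mutual k-nearest-neighbors construction mentioned above can be sketched in a few lines: a pair (i, j) becomes a must-link constraint only when each point appears among the other's k nearest neighbors. This is a simplified sketch of that seeding step, not the paper's full pipeline.

```python
import math

def mutual_knn_pairs(points, k):
    """Return must-link pairs (i, j), i < j, where i is among j's k nearest
    neighbors and vice versa (brute-force Euclidean distances)."""
    n = len(points)
    neigh = []
    for i in range(n):
        order = sorted(range(n), key=lambda j: math.dist(points[i], points[j]))
        neigh.append(set(order[1:k + 1]))  # skip self at index 0
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if j in neigh[i] and i in neigh[j]]
```

The mutuality requirement is what keeps the constraint set conservative: a one-sided neighbor relation (common near cluster boundaries) does not produce a must-link pair.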
('40901326', 'Sharon Fogel', 'sharon fogel')
('1793313', 'Hadar Averbuch-Elor', 'hadar averbuch-elor')
('1701009', 'Daniel Cohen-Or', 'daniel cohen-or')
f214bcc6ecc3309e2efefdc21062441328ff6081
f5149fb6b455a73734f1252a96a9ce5caa95ae02Low-Rank-Sparse Subspace Representation for Robust Regression
Harbin Institute of Technology
Harbin Institute of Technology;Shenzhen University
Harbin, China
Harbin, China;Shenzhen, China
The University of Sydney
Harbin Institute of Technology
Sydney, Australia
Harbin, China
('1747644', 'Yongqiang Zhang', 'yongqiang zhang')
('1887263', 'Daming Shi', 'daming shi')
('1750488', 'Junbin Gao', 'junbin gao')
('2862899', 'Dansong Cheng', 'dansong cheng')
seekever@foxmail.com
d.m.shi@hotmail.com
junbin.gao@sydney.edu.au
cdsinhit@hit.edu.cn
f58d584c4ac93b4e7620ef6e5a8f20c6f6da295eProceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)
Feature Selection Guided Auto-Encoder
1Department of Electrical & Computer Engineering,
College of Computer and Information Science
Northeastern University, Boston, MA, USA
('47673521', 'Shuyang Wang', 'shuyang wang')
('2788685', 'Zhengming Ding', 'zhengming ding')
('1708679', 'Yun Fu', 'yun fu')
{shuyangwang, allanding, yunfu}@ece.neu.edu
f5eb0cf9c57716618fab8e24e841f9536057a28aRethinking Feature Distribution for Loss Functions in Image Classification
Tsinghua University, Beijing, China
University of Illinois at Urbana-Champaign, Illinois, USA
('47718901', 'Weitao Wan', 'weitao wan')
('1752427', 'Jiansheng Chen', 'jiansheng chen')
('8802368', 'Yuanyi Zhong', 'yuanyi zhong')
('2641581', 'Tianpeng Li', 'tianpeng li')
wwt16@mails.tsinghua.edu.cn
yuanyiz2@illinois.edu
ltp16@mails.tsinghua.edu.cn
jschenthu@mail.tsinghua.edu.cn
f571fe3f753765cf695b75b1bd8bed37524a52d2Submodular Attribute Selection for Action
Recognition in Video
Jingjing Zheng
UMIACS, University of Maryland
College Park, MD, USA
Noah’s Ark Lab
Huawei Technologies
UMIACS, University of Maryland
National Institute of Standards and Technology
College Park, MD, USA
Gaithersburg, MD, USA
('34145947', 'Zhuolin Jiang', 'zhuolin jiang')
('9215658', 'Rama Chellappa', 'rama chellappa')
('32028519', 'P. Jonathon Phillips', 'p. jonathon phillips')
zjngjng@umiacs.umd.edu
zhuolin.jiang@huawei.com
rama@umiacs.umd.edu
jonathon.phillips@nist.gov
f5fae7810a33ed67852ad6a3e0144cb278b24b41Multilingual Gender Classification with Multi-view
Deep Learning
Notebook for PAN at CLEF 2018
Jožef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia
2 Jožef Stefan International Postgraduate School, Jamova 39, 1000 Ljubljana, Slovenia
USHER Institute, University of Edinburgh, United Kingdom
('22684661', 'Matej Martinc', 'matej martinc')
('40235216', 'Senja Pollak', 'senja pollak')
{matej.martinc,blaz.skrlj,senja.pollak}@ijs.si
f5af4e9086b0c3aee942cb93ece5820bdc9c9748ENHANCING PERSON ANNOTATION
FOR PERSONAL PHOTO MANAGEMENT
USING CONTENT AND CONTEXT
BASED TECHNOLOGIES
By
THESIS DIRECTED BY: PROF. NOEL E. O’CONNOR
A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY
September 2008
SCHOOL OF ELECTRONIC ENGINEERING
DUBLIN CITY UNIVERSITY
('2668569', 'Saman H. Cooray', 'saman h. cooray')
f5770dd225501ff3764f9023f19a76fad28127d4Real Time Online Facial Expression Transfer
with Single Video Camera
f5aee1529b98136194ef80961ba1a6de646645feLarge-Scale Learning of
Discriminative Image Representations
D.Phil Thesis
Robotics Research Group
Department of Engineering Science
University of Oxford
Supervisors:
Professor Andrew Zisserman
Doctor Antonio Criminisi
Mansfield College
Trinity Term, 2013
('34838386', 'Karen Simonyan', 'karen simonyan')
f52efc206432a0cb860155c6d92c7bab962757deMUGSHOT DATABASE ACQUISITION IN VIDEO SURVEILLANCE NETWORKS USING
INCREMENTAL AUTO-CLUSTERING QUALITY MEASURES
Computer Science Department
University of Kentucky
Lexington, KY, 40508
('3237043', 'Quanren Xiong', 'quanren xiong')
f519723238701849f1160d5a9cedebd31017da89Impact of multi-focused images on recognition of soft biometric traits
aEURECOM, Campus SophiaTech, 450 Route des Chappes, CS 50193 - 06904 Biot Sophia Antipolis cedex, FRANCE
('24362694', 'V. Chiesa', 'v. chiesa')
f5eb411217f729ad7ae84bfd4aeb3dedb850206aTackling Low Resolution for Better Scene Understanding
Thesis submitted in partial fulfillment
of the requirements for the degree of
MS in Computer Science and Engineering
By Research
by
201202172
International Institute of Information Technology
Hyderabad - 500 032, INDIA
July 2018
('41033644', 'Harish Krishna', 'harish krishna')harishkrishna.v@research.iiit.ac.in
f558af209dd4c48e4b2f551b01065a6435c3ef33International Journal of Emerging Technology in Computer Science & Electronics (IJETCSE)
ISSN: 0976-1353 Volume 23 Issue 1 –JUNE 2016.
AN ENHANCED ATTRIBUTE
RERANKING DESIGN FOR WEB IMAGE
SEARCH
#Student,Cse, CIET, Lam,Guntur, India
* Assistant Professort,Cse, CIET, Lam,Guntur , India
('4384318', 'G K Kishore Babu', 'g k kishore babu')
e378ce25579f3676ca50c8f6454e92a886b9e4d7Robust Video Super-Resolution with Learned Temporal Dynamics
University of Illinois at Urbana-Champaign 2Adobe Research
Facebook 4Texas A&M University 5IBM Research
('1771885', 'Ding Liu', 'ding liu')
('2969311', 'Zhangyang Wang', 'zhangyang wang')
e393a038d520a073b9835df7a3ff104ad610c552Automatic temporal segment
detection via bilateral long short-
term memory recurrent neural
networks
detection via bilateral long short-term memory recurrent neural networks,” J.
Electron. Imaging 26(2), 020501 (2017), doi: 10.1117/1.JEI.26.2.020501.
('49447269', 'Bo Sun', 'bo sun')
('7886608', 'Siming Cao', 'siming cao')
('49264106', 'Jun He', 'jun he')
('8834504', 'Lejun Yu', 'lejun yu')
('2089565', 'Liandong Li', 'liandong li')
e35b09879a7df814b2be14d9102c4508e4db458bOptimal Sensor Placement and
Enhanced Sparsity for Classification
University of Washington, Seattle, WA 98195, United States
University of Washington, Seattle, WA 98195, United States
Institute for Disease Modeling, Intellectual Ventures Laboratory, Bellevue, WA 98004, United States
('1824880', 'Bingni W. Brunton', 'bingni w. brunton')
('3083169', 'Steven L. Brunton', 'steven l. brunton')
('2424683', 'Joshua L. Proctor', 'joshua l. proctor')
('1937069', 'J. Nathan Kutz', 'j. nathan kutz')
e3b324101157daede3b4d16bdc9c2388e849c7d4Robust Real-Time 3D Face Tracking from RGBD Videos under Extreme Pose,
Depth, and Expression Variations
Hai X. Pham
Rutgers University, USA
('1736042', 'Vladimir Pavlovic', 'vladimir pavlovic'){hxp1,vladimir}@cs.rutgers.edu
e3657ab4129a7570230ff25ae7fbaccb4ba9950c
e315959d6e806c8fbfc91f072c322fb26ce0862bAn Efficient Face Recognition System Based on Sub-Window
International Journal of Soft Computing and Engineering (IJSCE)
ISSN: 2231-2307, Volume-1, Issue-6, January 2012
Extraction Algorithm
('1696227', 'Manish Gupta', 'manish gupta')
('36776003', 'Govind sharma', 'govind sharma')
e3c011d08d04c934197b2a4804c90be55e21d572How to Train Triplet Networks with 100K Identities?
Orion Star
Beijing, China
Orion Star
Beijing, China
Orion Star
Beijing, China
('1747751', 'Chong Wang', 'chong wang')
('46447079', 'Xue Zhang', 'xue zhang')
('26403761', 'Xipeng Lan', 'xipeng lan')
chongwang.nlpr@gmail.com
yuannixue@126.com
xipeng.lan@gmail.com
e39a0834122e08ba28e7b411db896d0fdbbad9ba1368
Maximum Likelihood Estimation of Depth Maps
Using Photometric Stereo
('2964822', 'Adam P. Harrison', 'adam p. harrison')
('39367958', 'Dileepan Joseph', 'dileepan joseph')
e3bb83684817c7815f5005561a85c23942b1f46bFace Verification using Correlation Filters
Electrical and Computer Eng. Dept,
Electrical and Computer Eng. Dept,
Electrical and Computer Eng. Dept,
Carnegie Mellon University
Pittsburgh, PA 15213, U.S.A.
Carnegie Mellon University
Pittsburgh, PA 15213, U.S.A.
Carnegie Mellon University
Pittsburgh, PA 15213, U.S.A.
('1794486', 'Marios Savvides', 'marios savvides')
('36754879', 'Vijaya Kumar', 'vijaya kumar')
('34607721', 'Pradeep Khosla', 'pradeep khosla')
msavvid@ri.cmu.edu
kumar@ece.cmu.edu
pkk@ece.cmu.edu
e30dc2abac4ecc48aa51863858f6f60c7afdf82aFacial Signs and Psycho-physical Status Estimation for Well-being
Assessment
F. Chiarugi, G. Iatraki, E. Christinaki, D. Manousos, G. Giannakakis, M. Pediaditis,
A. Pampouchidou, K. Marias and M. Tsiknakis
Computational Medicine Laboratory, Institute of Computer Science, Foundation for Research and Technology - Hellas
70013 Vasilika Vouton, Heraklion, Crete, Greece
Keywords:
Facial Expression, Stress, Anxiety, Feature Selection, Well-being Evaluation, FACS, FAPS, Classification.
{chiarugi, giatraki, echrist, mandim, ggian, mped, pampouch, kmarias, tsiknaki}@ics.forth.gr
e3e2c106ccbd668fb9fca851498c662add257036Appearance, Context and Co-occurrence Ensembles for
Identity Recognition in Personal Photo Collections
University of Colorado at Colorado Springs
T.E.Boult1
2AT&T Labs-Research, Middletown, NJ
('27469806', 'Archana Sapkota', 'archana sapkota')
('33692583', 'Raghuraman Gopalan', 'raghuraman gopalan')
('2900213', 'Eric Zavesky', 'eric zavesky')
1 {asapkota,tboult}@vast.uccs.edu
2{raghuram,ezavesky}@research.att.com
e379e73e11868abb1728c3acdc77e2c51673eb0dIn S.Li and A.Jain, (ed). Handbook of Face Recognition. Springer-Verlag, 2005
Face Databases
The Robotics Inistitute, Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh, PA 15213
Because of its nonrigidity and complex three-dimensional (3D) structure, the appearance of a face is affected by a large
number of factors including identity, face pose, illumination, facial expression, age, occlusion, and facial hair. The develop-
ment of algorithms robust to these variations requires databases of sufficient size that include carefully controlled variations
of these factors. Furthermore, common databases are necessary to comparatively evaluate algorithms. Collecting a high
quality database is a resource-intensive task: but the availability of public face databases is important for the advancement of
the field. In this chapter we review 27 publicly available databases for face recognition, face detection, and facial expression
analysis.
1 Databases for Face Recognition
Face recognition continues to be one of the most popular research areas of computer vision and machine learning. Along
with the development of face recognition algorithms, a comparatively large number of face databases have been collected.
However, many of these databases are tailored to the specific needs of the algorithm under development. In this section
we review publicly available databases that are of demonstrated use to others in the community. At the beginning of each
subsection a table summarizing the key features of the database is provided, including (where available) the number of
subjects, recording conditions, image resolution, and total number of images. Table 1 gives an overview of the recording
conditions for all databases discussed in this section. Owing to space constraints not all databases are discussed at the same
level of detail. Abbreviated descriptions of a number of mostly older databases are included in Section 1.13. The scope of
this section is limited to databases containing full face imagery. Note, however, that there are databases of subface images
available, such as the recently released CASIA Iris database [23].
1.1 AR Database
No. of subjects
116
Conditions
Facial expressions
Illumination
Occlusion
Time
Image Resolution
No. of Images
768 × 576
3288
http://rvl1.ecn.purdue.edu/~aleix/aleix_face_DB.html
The AR database was collected at the Computer Vision Center in Barcelona, Spain in 1998 [25]. It contains images of
116 individuals (63 men and 53 women). The imaging and recording conditions (camera parameters, illumination setting,
camera distance) were carefully controlled and constantly recalibrated to ensure that settings are identical across subjects.
The resulting RGB color images are 768 × 576 pixels in size. The subjects were recorded twice at a 2–week interval. During
each session 13 conditions with varying facial expressions, illumination and occlusion were captured. Figure 1 shows an
example for each condition. So far, more than 200 research groups have accessed the database.
('33731953', 'Ralph Gross', 'ralph gross')Email: {rgross}@cs.cmu.edu
e39a66a6d1c5e753f8e6c33cd5d335f9bc9c07faUniversity of Massachusetts - Amherst
Dissertations
5-1-2012
Dissertations and Theses
Weakly Supervised Learning for Unconstrained
Face Processing
Follow this and additional works at: http://scholarworks.umass.edu/open_access_dissertations
Recommended Citation
Huang, Gary B., "Weakly Supervised Learning for Unconstrained Face Processing" (2012). Dissertations. Paper 559.
('3219900', 'Gary B. Huang', 'gary b. huang')ScholarWorks@UMass Amherst
University of Massachusetts - Amherst, garybhuang@gmail.com
This Open Access Dissertation is brought to you for free and open access by the Dissertations and Theses at ScholarWorks@UMass Amherst. It has
been accepted for inclusion in Dissertations by an authorized administrator of ScholarWorks@UMass Amherst. For more information, please contact
scholarworks@library.umass.edu.
e3a6e9ddbbfc4c5160082338d46808cea839848aVision-Based Classification of Developmental Disorders
Using Eye-Movements
Stanford University, USA
Stanford University, USA
Stanford University, USA
Stanford University, USA
Stanford University, USA
('3147852', 'Guido Pusiol', 'guido pusiol')
('1811529', 'Andre Esteva', 'andre esteva')
('3472674', 'Arnold Milstein', 'arnold milstein')
('3216322', 'Li Fei-Fei', 'li fei-fei')
e3c8e49ffa7beceffca3f7f276c27ae6d29b35dbFamilies in the Wild (FIW): Large-Scale Kinship Image
Database and Benchmarks
Northeastern University, Boston, USA
College of Computer and Information Science, Northeastern University, Boston, USA
('4056993', 'Joseph P. Robinson', 'joseph p. robinson')
('49248003', 'Ming Shao', 'ming shao')
('47096713', 'Yue Wu', 'yue wu')
('1708679', 'Yun Fu', 'yun fu')
{jrobins1, mingshao, yuewu, yunfu}@ece.neu.edu
e38371b69be4f341baa95bc854584e99b67c6d3aDYAN: A Dynamical Atoms-Based Network
For Video Prediction
Electrical and Computer Engineering, Northeastern University, Boston, MA
http://robustsystems.coe.neu.edu
('40366599', 'WenQian Liu', 'wenqian liu')
('1785252', 'Abhishek Sharma', 'abhishek sharma')
('30929906', 'Octavia Camps', 'octavia camps')
('1687866', 'Mario Sznaier', 'mario sznaier')
liu.wenqi,sharma.abhis@husky.neu.edu, camps,msznaier@northeastern.edu
e3917d6935586b90baae18d938295e5b089b5c62152
Face Localization and Authentication
Using Color and Depth Images
('1807962', 'Filareti Tsalakanidou', 'filareti tsalakanidou')
('1744180', 'Sotiris Malassiotis', 'sotiris malassiotis')
('1721460', 'Michael G. Strintzis', 'michael g. strintzis')
e328d19027297ac796aae2470e438fe0bd334449Automatic Micro-expression Recognition from
Long Video using a Single Spotted Apex
1 Faculty of Computer Science & Information Technology,
University of Malaya, Kuala Lumpur, Malaysia
2 Faculty of Computing & Informatics,
Multimedia University, Cyberjaya, Malaysia
3 Faculty of Engineering,
Multimedia University, Cyberjaya, Malaysia
('39888137', 'Sze-Teng Liong', 'sze-teng liong')
('2339975', 'John See', 'john see')
('1713159', 'KokSheik Wong', 'koksheik wong')
szeteng1206@hotmail.com,koksheik@um.edu.my
johnsee@mmu.edu.my
raphael@mmu.edu.my
e3144f39f473e238374dd4005c8b83e19764ae9eNext-Flow: Hybrid Multi-Tasking with Next-Frame Prediction to Boost
Optical-Flow Estimation in the Wild
University of Freiburg
Germany
('31656404', 'Nima Sedaghat', 'nima sedaghat')nima@cs.uni-freiburg.de
e3a6e5a573619a97bd6662b652ea7d088ec0b352Compare and Contrast: Learning Prominent Visual Differences
The University of Texas at Austin
('50357985', 'Steven Chen', 'steven chen')
('1794409', 'Kristen Grauman', 'kristen grauman')
cfeb26245b57dd10de8f187506d4ed5ce1e2b7ddCapsNet comparative performance evaluation for image
classification
University of Waterloo, ON, Canada
('30421594', 'Rinat Mukhometzianov', 'rinat mukhometzianov')
('36957611', 'Juan Carrillo', 'juan carrillo')
cffebdf88e406c27b892857d1520cb2d7ccda573LEARNING FROM LARGE-SCALE VISUAL DATA
FOR ROBOTS
A Dissertation
Presented to the Faculty of the Graduate School
of Cornell University
in Partial Fulfillment of the Requirements for the Degree of
Doctor of Philosophy
by
Ozan Şener
August 2016
cfa572cd6ba8dfc2ee8ac3cc7be19b3abff1a8a2
cfffae38fe34e29d47e6deccfd259788176dc213TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. X, NO. X, DECEMBER 2012
Matrix Completion for Weakly-supervised
Multi-label Image Classification
('1707876', 'Fernando De la Torre', 'fernando de la torre')
('2884203', 'Alexandre Bernardino', 'alexandre bernardino')
cfd4004054399f3a5f536df71f9b9987f060f434IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. ??, NO. ??, ?? 20??
Person Recognition in Personal Photo Collections
('2390510', 'Seong Joon Oh', 'seong joon oh')
('1798000', 'Rodrigo Benenson', 'rodrigo benenson')
('1739548', 'Mario Fritz', 'mario fritz')
('1697100', 'Bernt Schiele', 'bernt schiele')
cfd933f71f4a69625390819b7645598867900eabINTERNATIONAL JOURNAL OF TECHNOLOGY ENHANCEMENTS AND EMERGING ENGINEERING RESEARCH, VOL 3, ISSUE 03 55
ISSN 2347-4289
Person Authentication Using Face And Palm Vein:
A Survey Of Recognition And Fusion Techniques
College of Engineering, Pune, India
Image Processing & Machine Vision Section, Electronics & Instrumentation Services Division, BARC
('38561481', 'Dhanashree Vaidya', 'dhanashree vaidya')
('2623250', 'Madhuri A. Joshi', 'madhuri a. joshi')
Email: preethimedu@gmail.com, dvaidya33@gmail.com, hod.extc@coep.ac.in, maj.extc@coep.ac.in, skar@barc.gov.in
cfb8bc66502fb5f941ecdb22aec1fdbfdb73adce
cf875336d5a196ce0981e2e2ae9602580f3f6243What Does It Mean for a Computer to "Have" Emotions?
Rosalind W. Picard
There is a lot of talk about giving machines emotions, some of
it fluff. Recently at a large technical meeting, a researcher stood up
and talked of how a Barney stuffed animal (the purple dinosaur for
kids) "has emotions." He did not define what he meant by this, but
after repeating it several times, it became apparent that children
attributed emotions to Barney, and that Barney had deliberately
expressive behaviors that would encourage the kids to think Barney
had emotions. But kids have attributed emotions to dolls and
stuffed animals for as long as we know; and most of my technical
colleagues would agree that such toys have never had and still do
not have emotions. What is different now that prompts a researcher
to make such a claim? Is the computational plush an example of a
computer that really does have emotions?
If not Barney, then what would be an example of a computa-
tional system that has emotions? I am not a philosopher, and this
paper will not be a discussion of the meaning of this question in
any philosophical sense. However, as an engineer I am interested
in what capabilities I would require a machine to have before I
would say that it "has emotions," if that is even possible.
Theorists still grapple with the problem of defining emotion,
after many decades of discussion, and no clean definition looks
likely to emerge. Even without a precise definition, one can still
begin to say concrete things about certain components of emotion,
at least based on what is known about human and animal emo-
tions. Of course, much is still unknown about human emotions, so
we are nowhere near being able to model them, much less duplicate
all their functions in machines. Also, all scientific findings are
subject to revision; history has certainly taught us humility, that
what scientists believed to be true at one point has often been
changed at a later date.
I wish to begin by mentioning four motivations for giving
machines certain emotional abilities (and there are more). One goal
is to build robots and synthetic characters that can emulate living
humans and animals, for example, to build a humanoid robot.
cfd8c66e71e98410f564babeb1c5fd6f77182c55Comparative Study of Coarse Head Pose Estimation
IBM T.J. Watson Research Center
Hawthorne, NY 10532
('34609371', 'Lisa M. Brown', 'lisa m. brown')
('40383812', 'Ying-Li Tian', 'ying-li tian')
{lisabr,yltian}@us.ibm.com
cf54a133c89f730adc5ea12c3ac646971120781c
cfbb2d32586b58f5681e459afd236380acd86e28Improving Alignment of Faces for Recognition
Christopher J. Pal
D´epartement de g´enie informatique et g´enie logiciel
´Ecole Polytechnique de Montr´eal,
D´epartement de g´enie informatique et g´enie logiciel
´Ecole Polytechnique de Montr´eal,
Qu´ebec, Canada
Qu´ebec, Canada
('2811524', 'Md. Kamrul Hasan', 'md. kamrul hasan')md-kamrul.hasan@polymtl.ca
christopher.pal@polymtl.ca
cfa92e17809e8d20ebc73b4e531a1b106d02b38cAdvances in Data Analysis and Classification manuscript No.
(will be inserted by the editor)
Parametric Classification with Soft Labels using the
Evidential EM Algorithm
Linear Discriminant Analysis vs. Logistic Regression
Received: date / Accepted: date
('1772306', 'Benjamin Quost', 'benjamin quost')
('2259794', 'Shoumei Li', 'shoumei li')
cf5c9b521c958b84bb63bea9d5cbb522845e4ba7Towards Arbitrary-View Face Alignment by Recommendation Trees∗
The Chinese University of Hong Kong
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
2SenseTime Group
('2226254', 'Shizhan Zhu', 'shizhan zhu')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
zs014@ie.cuhk.edu.hk, chengli@sensetime.com, ccloy@ie.cuhk.edu.hk, xtang@ie.cuhk.edu.hk
cf5a0115d3f4dcf95bea4d549ec2b6bdd7c69150Detection of emotions from video in non-controlled
environment
To cite this version:
Processing. Université Claude Bernard - Lyon I, 2013. English.
HAL Id: tel-01166539
https://tel.archives-ouvertes.fr/tel-01166539v2
Submitted on 23 Jun 2015
HAL is a multi-disciplinary open access archive for the deposit
and dissemination of scientific research documents, whether they
are published or not. The documents may come from teaching and
research institutions in France or abroad, or from public or private
research centers.
('1943666', 'Rizwan Ahmed Khan', 'rizwan ahmed khan')
cfdc632adcb799dba14af6a8339ca761725abf0aProbabilistic Formulations of Regression with Mixed
Guidance
('38688704', 'Aubrey Gress', 'aubrey gress')
('38673135', 'Ian Davidson', 'ian davidson')
adgress@ucdavis.edu, davidson@cs.ucdavis.edu
cfa931e6728a825caada65624ea22b840077f023Deformable Generator Network: Unsupervised Disentanglement of
Appearance and Geometry
College of Automation, Harbin Engineering University, Heilongjiang, China
University of California, Los Angeles, California, USA
('7306249', 'Xianglei Xing', 'xianglei xing')
('9659905', 'Ruiqi Gao', 'ruiqi gao')
('50495880', 'Tian Han', 'tian han')
('3133970', 'Song-Chun Zhu', 'song-chun zhu')
('39092098', 'Ying Nian Wu', 'ying nian wu')
cfc30ce53bfc204b8764ebb764a029a8d0ad01f4Regularizing Deep Neural Networks by Noise:
Its Interpretation and Optimization
Dept. of Computer Science and Engineering, POSTECH, Korea
('2018393', 'Hyeonwoo Noh', 'hyeonwoo noh')
('2205770', 'Tackgeun You', 'tackgeun you')
('8511875', 'Jonghwan Mun', 'jonghwan mun')
('40030651', 'Bohyung Han', 'bohyung han')
{shgusdngogo,tackgeun.you,choco1916,bhhan}@postech.ac.kr
cff911786b5ac884bb71788c5bc6acf6bf569effMulti-task Learning of Cascaded CNN for
Facial Attribute Classification
School of Information Science and Engineering, Xiamen University, Xiamen 361005, China
School of Computer and Information Engineering, Xiamen University of Technology, Xiamen 361024, China
('41034942', 'Ni Zhuang', 'ni zhuang')
('40461734', 'Yan Yan', 'yan yan')
('47336404', 'Si Chen', 'si chen')
('37414077', 'Hanzi Wang', 'hanzi wang')
Email: ni.zhuang@foxmail.com, {yanyan, hanzi.wang}@xmu.edu.cn, chensi@xmut.edu.cn
cf09e2cb82961128302b99a34bff91ec7d198c7cOFFICE ENTRANCE CONTROL WITH FACE RECOGNITION
Dept. of Computer Science and Information Engineering,
National Taiwan University, Taiwan
Dept. of Computer Science and Information Engineering,
National Taiwan University, Taiwan
('1721106', 'Yun-Che Tsai', 'yun-che tsai')
('1703041', 'Chiou-Shann Fuh', 'chiou-shann fuh')
E-mail: jpm9ie8c@gmail.com
E-mail: fuh@csie.ntu.edu.tw
cfc4aa456d9da1a6fabd7c6ca199332f03e35b29University of Amsterdam and Renmin University at TRECVID
Searching Video, Detecting Events and Describing Video
University of Amsterdam
Zhejiang University
Amsterdam, The Netherlands
Hangzhou, China
Renmin University of China
Beijing, China
('46741353', 'Cees G. M. Snoek', 'cees g. m. snoek')
('40240283', 'Jianfeng Dong', 'jianfeng dong')
('9931285', 'Xirong Li', 'xirong li')
('48631563', 'Xiaoxu Wang', 'xiaoxu wang')
('24332496', 'Qijie Wei', 'qijie wei')
('2896042', 'Weiyu Lan', 'weiyu lan')
('2304222', 'Efstratios Gavves', 'efstratios gavves')
('13142264', 'Noureldien Hussein', 'noureldien hussein')
('1769315', 'Dennis C. Koelma', 'dennis c. koelma')
('1705182', 'Arnold W. M. Smeulders', 'arnold w. m. smeulders')
cf805d478aeb53520c0ab4fcdc9307d093c21e52Finding Tiny Faces in the Wild with Generative Adversarial Network
Mingli Ding2
Visual Computing Center, King Abdullah University of Science and Technology (KAUST)
School of Electrical Engineering and Automation, Harbin Institute of Technology (HIT)
Institute of Software, Chinese Academy of Sciences (CAS)
Figure 1. The detection results of tiny faces in the wild. (a) is the original low-resolution blurry face, (b) is the result of
re-sizing directly by a bi-linear kernel, (c) is the generated image by the super-resolution method, and our result (d) is learned
by the super-resolution (×4 upscaling) and refinement network simultaneously. Best viewed in color and zoomed in.
('2860057', 'Yancheng Bai', 'yancheng bai')
('48378890', 'Yongqiang Zhang', 'yongqiang zhang')
('2931652', 'Bernard Ghanem', 'bernard ghanem')
baiyancheng20@gmail.com
{zhangyongqiang, dingml}@hit.edu.cn
bernard.ghanem@kaust.edu.sa
cfdc4d0f8e1b4b9ced35317d12b4229f2e3311abQuaero at TRECVID 2010: Semantic Indexing
1UJF-Grenoble 1 / UPMF-Grenoble 2 / Grenoble INP / CNRS, LIG UMR 5217, Grenoble, F-38041, France
Karlsruhe Institute of Technology, P.O. Box 3640, 76021 Karlsruhe, Germany
('2357942', 'Bahjat Safadi', 'bahjat safadi')
('1921500', 'Yubing Tong', 'yubing tong')
('1981024', 'Franck Thollard', 'franck thollard')
('40303076', 'Tobias Gehrig', 'tobias gehrig')
('3025777', 'Hazim Kemal Ekenel', 'hazim kemal ekenel')
cf86616b5a35d5ee777585196736dfafbb9853b5
Learning Multiscale Active Facial Patches for
Expression Analysis
('29803023', 'Lin Zhong', 'lin zhong')
('1734954', 'Qingshan Liu', 'qingshan liu')
('39606160', 'Peng Yang', 'peng yang')
('1768190', 'Junzhou Huang', 'junzhou huang')
('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')
cacd51221c592012bf2d9e4894178c1c1fa307ca
ISSN: 2277-3754
ISO 9001:2008 Certified
International Journal of Engineering and Innovative Technology (IJEIT)
Volume 4, Issue 11, May 2015
Face and Expression Recognition Techniques: A
Review

Advanced Communication & Signal Processing Laboratory, Department of Electronics & Communication
engineering, Government College of Engineering Kannur, Kerala, India
('35135054', 'A. Ranjith Ram', 'a. ranjith ram')
ca0363d29e790f80f924cedaf93cb42308365b3dFacial Expression Recognition in Image Sequences
using Geometric Deformation Features and Support
Vector Machines
Aristotle University of Thessaloniki
Department of Informatics
Box 451
54124 Thessaloniki, Greece
('1754270', 'Irene Kotsia', 'irene kotsia')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
email: {ekotsia,pitas}@aiia.csd.auth.gr
cad52d74c1a21043f851ae14c924ac689e197d1fFrom Ego to Nos-vision:
Detecting Social Relationships in First-Person Views
Universit`a degli Studi di Modena e Reggio Emilia
Via Vignolese 905, 41125 Modena - Italy
('2452552', 'Stefano Alletto', 'stefano alletto')
('2275344', 'Giuseppe Serra', 'giuseppe serra')
('2175529', 'Simone Calderara', 'simone calderara')
('2059900', 'Francesco Solera', 'francesco solera')
('1741922', 'Rita Cucchiara', 'rita cucchiara')
{name.surname}@unimore.it
cac8bb0e393474b9fb3b810c61efdbc2e2c25c29
ca54d0a128b96b150baef392bf7e498793a6371fImprove Pedestrian Attribute Classification by
Weighted Interactions from Other Attributes
Center for Biometrics and Security Research & National Laboratory of Pattern Recognition
Institute of Automation, Chinese Academy of Sciences
('1739258', 'Jianqing Zhu', 'jianqing zhu')
('40397682', 'Shengcai Liao', 'shengcai liao')
('1718623', 'Zhen Lei', 'zhen lei')
('34679741', 'Stan Z. Li', 'stan z. li')
jianqingzhu@foxmail.com, {scliao, zlei, szli}@cbsr.ia.ac.cn
cad24ba99c7b6834faf6f5be820dd65f1a755b29Understanding hand-object
manipulation by modeling the
contextual relationship between actions,
grasp types and object attributes
Journal Title
XX(X):1–14
© The Author(s) 2016
Reprints and permission:
sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/ToBeAssigned
www.sagepub.com/
('3172280', 'Minjie Cai', 'minjie cai')
('37991449', 'Kris M. Kitani', 'kris m. kitani')
('9467266', 'Yoichi Sato', 'yoichi sato')
cadba72aa3e95d6dcf0acac828401ddda7ed8924THÈSE PRÉSENTÉE À LA FACULTÉ DES SCIENCES
POUR L’OBTENTION DU GRADE DE DOCTEUR ÈS SCIENCES
Algorithms and VLSI Architectures
for Low-Power Mobile Face Verification
par
Acceptée sur proposition du jury:
Prof. F. Pellandini, directeur de thèse
PD Dr. M. Ansorge, co-directeur de thèse
Prof. P.-A. Farine, rapporteur
Dr. C. Piguet, rapporteur
Soutenue le 2 juin 2005
INSTITUT DE MICROTECHNIQUE
UNIVERSITÉ DE NEUCHÂTEL
2006
('1844418', 'Jean-Luc Nagel', 'jean-luc nagel')
ca37eda56b9ee53610c66951ee7ca66a35d0a846Semantic Concept Discovery for Large-Scale Zero-Shot Event Detection
Centre for Quantum Computation and Intelligent Systems, University of Technology Sydney
Language Technologies Institute, Carnegie Mellon University
Carnegie Mellon University
('1729163', 'Xiaojun Chang', 'xiaojun chang')
('39033919', 'Yi Yang', 'yi yang')
('7661726', 'Alexander G. Hauptmann', 'alexander g. hauptmann')
('1752601', 'Eric P. Xing', 'eric p. xing')
{cxj273, yee.i.yang}@gmail.com, {alex, epxing, yaoliang}@cs.cmu.edu
ca606186715e84d270fc9052af8500fe23befbdaUsing Subclass Discriminant Analysis, Fuzzy Integral and Symlet Decomposition for
Face Recognition
Department of Electrical Engineering,
Iran Univ. of Science and Technology,
Narmak, Tehran, Iran
Department of Electrical Engineering,
Iran Univ. of Science and Technology,
Department of Electrical Engineering,
Iran Univ. of Science and Technology,
Narmak, Tehran, Iran
Narmak, Tehran, Iran
('9267982', 'Seyed Mohammad Seyedzade', 'seyed mohammad seyedzade')
('2532375', 'Sattar Mirzakuchaki', 'sattar mirzakuchaki')
('2535533', 'Amir Tahmasbi', 'amir tahmasbi')
Email: sm.seyedzade@ieee.org
Email: m_kuchaki@iust.ac.ir
Email: a.tahmasbi@ieee.org
e48fb3ee27eef1e503d7ba07df8eb1524c47f4a6Illumination invariant face recognition and impostor rejection
using different MINACE filter algorithms
Carnegie Mellon University, Pittsburgh, PA
('8142777', 'Rohit Patnaik', 'rohit patnaik')
('34925745', 'David Casasent', 'david casasent')
e4bf70e818e507b54f7d94856fecc42cc9e0f73dIJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
FACE RECOGNITION UNDER VARYING BLUR IN AN
UNCONSTRAINED ENVIRONMENT
M.Tech, Information Technology, Madras Institute of Technology, TamilNadu, India
Information Technology, Madras Institute of Technology, TamilNadu, India, email
anubhapearl@gmail.com
hemalatha.ch@gmail.com
e4bc529ced68fae154e125c72af5381b1185f34ePERCEPTUAL GOAL SPECIFICATIONS FOR REINFORCEMENT LEARNING
A Thesis Proposal
Presented to
The Academic Faculty
by
In Partial Fulfillment
of the Requirements for the Degree
Doctor of Philosophy in the
School of Interactive Computing
Georgia Institute of Technology
November 2017
('12313871', 'Ashley D. Edwards', 'ashley d. edwards')
e465f596d73f3d2523dbf8334d29eb93a35f6da0
e4aeaf1af68a40907fda752559e45dc7afc2de67
e4c3d5d43cb62ac5b57d74d55925bdf76205e306
e42998bbebddeeb4b2bedf5da23fa5c4efc976faGeneric Active Appearance Models Revisited
Imperial College London, United Kingdom
School of Computer Science, University of Lincoln, United Kingdom
Faculty of Electrical Engineering, Mathematics and Computer Science, University
of Twente, The Netherlands
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
('2575567', 'Joan Alabort-i-Medina', 'joan alabort-i-medina')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1694605', 'Maja Pantic', 'maja pantic')
{gt204, ja310, s.zafeiriou, m.pantic}@imperial.ac.uk
e4a1b46b5c639d433d21b34b788df8d81b518729JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015
Side Information for Face Completion: a Robust
PCA Approach
('4091869', 'Niannan Xue', 'niannan xue')
('3234063', 'Jiankang Deng', 'jiankang deng')
('1902288', 'Shiyang Cheng', 'shiyang cheng')
('1780393', 'Yannis Panagakis', 'yannis panagakis')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
e4c81c56966a763e021938be392718686ba9135eChapter from the book Visual Cortex - Current Status and Perspectives. Downloaded from: http://www.intechopen.com/books/visual-cortex-current-status-and-perspectives
e4e95b8bca585a15f13ef1ab4f48a884cd6ecfccFace Recognition with Independent Component Based
Super-resolution
aFaculty of Engineering and Natural Sciences, Sabanci Univ., Istanbul, Turkiye, 34956
bSchool of Elec. and Comp. Eng. , Georgia Inst. of Tech., Atlanta, GA, USA, 30332-0250
('1844879', 'Osman Gokhan Sezer', 'osman gokhan sezer')
('3975060', 'Yucel Altunbasak', 'yucel altunbasak')
('31849282', 'Aytul Ercil', 'aytul ercil')
e4df83b7424842ff5864c10fa55d38eae1c45facHindawi Publishing Corporation
Discrete Dynamics in Nature and Society
Volume 2009, Article ID 916382, 8 pages
doi:10.1155/2009/916382
Research Article
Locally Linear Discriminate Embedding for
Face Recognition
Faculty of Information Science and Technology, Multimedia University, 75450 Melaka, Malaysia
Received 21 January 2009; Accepted 12 October 2009
Recommended by B. Sagar
A novel method based on local nonlinear mapping is presented in this research. The method
is called Locally Linear Discriminate Embedding (LLDE). LLDE preserves the local linear structure
of a high-dimensional space and obtains a compact data representation as accurately as possible
in the low-dimensional embedding space before recognition. For computational simplicity and fast
processing, a Radial Basis Function (RBF) classifier is integrated with the LLDE. The RBF classifier
is applied to the low-dimensional embedding with reference to the variance of the data. To
validate the proposed method, the CMU-PIE database has been used, and the experiments conducted in
this research revealed the efficiency of the proposed method in face recognition as compared to
linear and non-linear approaches.
the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
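The classification stage described in the abstract, an RBF-style classifier applied to the low-dimensional embedding, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the kernel-weighted voting rule and the fixed `gamma` width (standing in for the variance-derived width the abstract mentions) are assumptions.

```python
import math

def rbf_classify(embeddings, labels, query, gamma=1.0):
    """Kernel-weighted vote over labelled low-dimensional embeddings:
    each training point adds exp(-gamma * ||query - x||^2) to its
    class score, and the best-scoring class wins."""
    scores = {}
    for x, y in zip(embeddings, labels):
        d2 = sum((q - xi) ** 2 for q, xi in zip(query, x))
        scores[y] = scores.get(y, 0.0) + math.exp(-gamma * d2)
    return max(scores, key=scores.get)
```

With a well-separated embedding, nearby training points dominate the vote, so the classifier behaves like a smoothed nearest-neighbour rule.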
1. Introduction
Linear subspace analysis has been extensively applied to face recognition. A successful face
recognition methodology is largely dependent on the particular choice of features used by
the classifier. Linear methods are easy to understand and are very simple to implement, but
the linearity assumption does not hold in many real-world scenarios. Face appearance lies in
a high-dimensional nonlinear manifold. A disadvantage of the linear techniques is that they
fail to capture the characteristics of the nonlinear appearance manifold. This is due to the
fact that the linear methods extract features only from the input space without considering
the nonlinear information between the components of the input data. However, a globally
nonlinear mapping can often be approximated using a linear mapping in a local region. This
has motivated the design of the nonlinear mapping methods in this study.
The history of the nonlinear mapping is long; it can be traced back to Sammon’s
mapping in 1969 [1]. Over time, different techniques have been proposed such as the
projection pursuit [2], the projection pursuit regression [3], self-organizing maps or SOM
('2008201', 'Eimad E. Abusham', 'eimad e. abusham')
('32191265', 'E. K. Wong', 'e. k. wong')
Correspondence should be addressed to Eimad E. Abusham, eimad.eldin@mmu.edu.my
e4e3faa47bb567491eaeaebb2213bf0e1db989e1Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)
Empirical Risk Minimization for Metric
Learning Using Privileged Information
School of Computer and Information, Hefei University of Technology, China
Centre for Quantum Computation and Intelligent Systems, FEIT, University of Technology Sydney, Australia
('2028727', 'Xun Yang', 'xun yang')
('15970836', 'Meng Wang', 'meng wang')
('1763785', 'Luming Zhang', 'luming zhang')
('1692693', 'Dacheng Tao', 'dacheng tao')
{hfutyangxun, eric.mengwang, zglumg}@gmail.com;
dacheng.tao@uts.edu.au;
e43ea078749d1f9b8254e0c3df4c51ba2f4eebd5Facial Expression Recognition Based on Constrained
Local Models and Support Vector Machines
('1901962', 'Nikolay Neshov', 'nikolay neshov')
('34945173', 'Ivo Draganov', 'ivo draganov')
('1750280', 'Agata Manolova', 'agata manolova')
e476cbcb7c1de73a7bcaeab5d0d59b8b3c4c1cbf
e4c2f8e4aace8cb851cb74478a63d9111ca550aeDISTRIBUTED ONE-CLASS LEARNING
cid:63)Queen Mary University of London, Imperial College London
('9920557', 'Ali Shahin Shamsabadi', 'ali shahin shamsabadi')
('1763096', 'Hamed Haddadi', 'hamed haddadi')
('1713138', 'Andrea Cavallaro', 'andrea cavallaro')
e475e857b2f5574eb626e7e01be47b416deff268Facial Emotion Recognition Using Nonparametric
Weighted Feature Extraction and Fuzzy Classifier
('2121174', 'Maryam Imani', 'maryam imani')
('1801348', 'Gholam Ali Montazer', 'gholam ali montazer')
e4391993f5270bdbc621b8d01702f626fba36fc2Author manuscript, published in "18th Scandinavian Conference on Image Analysis (2013)"
DOI : 10.1007/978-3-642-38886-6_31
e43045a061421bd79713020bc36d2cf4653c044dA New Representation of Skeleton Sequences for 3D Action Recognition
The University of Western Australia
Murdoch University
('2796959', 'Qiuhong Ke', 'qiuhong ke')
('1698675', 'Mohammed Bennamoun', 'mohammed bennamoun')
('1782428', 'Senjian An', 'senjian an')
qiuhong.ke@research.uwa.edu.au
{mohammed.bennamoun,senjian.an,farid.boussaid}@uwa.edu.au
f.sohel@murdoch.edu.au
e4d8ba577cabcb67b4e9e1260573aea708574886AN INTELLIGENT VIDEO-LECTURE-BASED RECOMMENDATION
SYSTEM FOR DISTANCE EDUCATION
Gaspare Giuliano Elias Bruno
Doctoral thesis presented to the Graduate Program
in Systems Engineering and Computer Science, COPPE,
of the Federal University of Rio de Janeiro, in partial
fulfilment of the requirements for the degree of Doctor
of Systems Engineering and Computer Science.
Advisors: Edmundo Albuquerque de Souza e Silva
Rosa Maria Meri Leão
Rio de Janeiro
January 2016
e475deadd1e284428b5e6efd8fe0e6a5b83b9dcdAccepted in Pattern Recognition Letters
Pattern Recognition Letters
journal homepage: www.elsevier.com
Are you eligible? Predicting adulthood from face images via class specific mean
autoencoder
IIIT-Delhi, New Delhi, 110020, India
Article history:
Received 15 March 2017
('2220719', 'Maneet Singh', 'maneet singh')
('1925017', 'Shruti Nagpal', 'shruti nagpal')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
('39129417', 'Richa Singh', 'richa singh')
e4abc40f79f86dbc06f5af1df314c67681dedc51Head Detection with Depth Images in the Wild
Department of Engineering ”Enzo Ferrari”
University of Modena and Reggio Emilia, Italy
Keywords:
Head Detection, Head Localization, Depth Maps, Convolutional Neural Network
('6125279', 'Diego Ballotta', 'diego ballotta')
('12010968', 'Guido Borghi', 'guido borghi')
('1723285', 'Roberto Vezzani', 'roberto vezzani')
('1741922', 'Rita Cucchiara', 'rita cucchiara')
{name.surname}@unimore.it
e4d0e87d0bd6ead4ccd39fc5b6c62287560bac5bImplicit Video Multi-Emotion Tagging by Exploiting Multi-Expression
Relations
('1771215', 'Zhilei Liu', 'zhilei liu')
('1791319', 'Shangfei Wang', 'shangfei wang')
('3558606', 'Zhaoyu Wang', 'zhaoyu wang')
('1726583', 'Qiang Ji', 'qiang ji')
e48e94959c4ce799fc61f3f4aa8a209c00be8d7fHindawi Publishing Corporation
The Scientific World Journal
Volume 2013, Article ID 135614, 6 pages
http://dx.doi.org/10.1155/2013/135614
Research Article
Design of an Efficient Real-Time Algorithm Using Reduced
Feature Dimension for Recognition of Speed Limit Signs
Sogang University, Seoul 121-742, Republic of Korea
2 Samsung Techwin R&D Center, Security Solution Division, 701 Sampyeong-dong, Bundang-gu, Seongnam-si,
Gyeonggi 463-400, Republic of Korea
Received 28 August 2013; Accepted 1 October 2013
Academic Editors: P. Daponte, M. Nappi, and N. Nishchal
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We propose a real-time algorithm for recognition of speed limit signs from a moving vehicle. Linear Discriminant Analysis (LDA)
required for classification is performed by using Discrete Cosine Transform (DCT) coefficients. To reduce feature dimension in
LDA, DCT coefficients are selected by a devised discriminant function derived from information obtained by training. Binarization
and thinning are performed on a Region of Interest (ROI) obtained by preprocessing a detected ROI prior to DCT for further
reduction of computation time in DCT. This process is performed on a sequence of image frames to increase the hit rate of
recognition. Experimental results show that arithmetic operations are reduced by about 60%, while hit rates reach about 100%
compared to previous works.
1. Introduction
Driver safety is the main concern of advanced vehicle
systems, which have become practical thanks to developments
in autonomous driving, automatic control, and imaging
technology. An advanced vehicle system gives the driver
safety-related information by sensing the surroundings
automatically [1]. Speed limit sign recognition is regarded
as helpful for the safety of drivers using advanced vehicle
systems. The system needs to recognize a speed limit sign in
the distance quickly and accurately in order to warn the
driver in time, since the vehicle is moving fast. However,
existing algorithms perform recognition using many features
extracted from the captured image, requiring a large amount
of arithmetic operations for classification [2].
Several classification algorithms have been proposed,
which include Neural Networks [2, 3], Support Vector
Machine (SVM) [2], and Linear Discriminant Analysis
(LDA) [2, 4]. Among these, SVM has a relatively high recognition
rate, while LDA is used in many classification applications due to
its low computational complexity. However, LDA's complexity needs
to be reduced further for real-time applications, which can be
achieved by reducing the number of inputs to LDA.
This paper proposes an efficient real-time algorithm for
recognition of speed limit signs using a reduced feature
dimension. Part of the Discrete Cosine Transform (DCT)
coefficients are used as inputs to LDA instead of features
extracted from the image, and the coefficients are selected
by a devised discriminant function. To further reduce DCT
computation time, binarization and thinning are applied to
the detected Region of Interest (ROI). An image of a speed
limit sign in the distance obtained from the camera has low
resolution and yields a poor recognition rate. To resolve
this problem, this paper proposes a recognition system that
uses classification results over a sequence of frames, which
enhances the hit rate by accumulating single-frame
recognition probabilities.
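The two key steps of this introduction, selecting discriminative DCT coefficients from training statistics and fusing per-frame classifications over a sequence, might look roughly like the outline below. Both rules are assumptions made for illustration: the Fisher-ratio score stands in for the paper's devised discriminant function, log-probability accumulation stands in for its multi-frame scheme, and `train_per_class` (a mapping from class label to training DCT vectors) is a hypothetical structure.

```python
import math

def select_coeffs(train_per_class, k):
    """Rank DCT coefficients by a Fisher-like score (between-class
    variance over within-class variance) and keep the top k."""
    n_coeff = len(next(iter(train_per_class.values()))[0])
    scores = []
    for j in range(n_coeff):
        class_means, within = [], 0.0
        for vecs in train_per_class.values():
            vals = [v[j] for v in vecs]
            m = sum(vals) / len(vals)
            class_means.append(m)
            within += sum((x - m) ** 2 for x in vals)
        gm = sum(class_means) / len(class_means)
        between = sum((m - gm) ** 2 for m in class_means)
        scores.append(between / (within + 1e-9))
    return sorted(range(n_coeff), key=lambda j: -scores[j])[:k]

def accumulate_frames(frame_probs):
    """Fuse per-frame class probabilities over an image sequence
    (naive log-probability accumulation) to raise the hit rate."""
    logp = {c: 0.0 for c in frame_probs[0]}
    for probs in frame_probs:
        for c in logp:
            logp[c] += math.log(probs[c] + 1e-9)
    return max(logp, key=logp.get)
```

Only the selected coefficients would then be fed to LDA, which is where the claimed reduction in arithmetic operations comes from.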
2. Background
In this section, LDA is briefly described, which is popularly
employed for classification. LDA is a classical statistical
('2012225', 'Hanmin Cho', 'hanmin cho')
('5984008', 'Seungwha Han', 'seungwha han')
('6348959', 'Sun-Young Hwang', 'sun-young hwang')
('2012225', 'Hanmin Cho', 'hanmin cho')
Correspondence should be addressed to Sun-Young Hwang; hwang@sogang.ac.kr
e496d6be415038de1636bbe8202cac9c1cea9dbeFacial Expression Recognition in Older Adults using
Deep Machine Learning
National Research Council of Italy, Institute for Microelectronics and Microsystems, Lecce
Italy
('2886068', 'Andrea Caroppo', 'andrea caroppo')
('1796761', 'Alessandro Leone', 'alessandro leone')
('1737181', 'Pietro Siciliano', 'pietro siciliano')
{andrea.caroppo,alessandro.leone,pietro.siciliano}@le.imm.cnr.it
e43cc682453cf3874785584fca813665878adaa7www.ijecs.in
International Journal Of Engineering And Computer Science ISSN:2319-7242
Volume 3 Issue 10 October, 2014 Page No.8830-8834
Face Recognition using Local Derivative Pattern Face
Descriptor
Department of Electronics and Telecommunication
Datta Meghe College of Engineering
Airoli, Navi Mumbai, India 1,2
Mob: 99206746061
Mob: 99870353142
pranitachavan42@gmail.com 1
djpethe@gmail.com 2
fec6648b4154fc7e0892c74f98898f0b51036dfeA Generic Face Processing
Framework: Technologies,
Analyses and Applications
A Thesis Submitted in Partial Fulfilment
of the Requirements for the Degree of
Master of Philosophy
in
Computer Science and Engineering
Supervised by
© The Chinese University of Hong Kong
July 2003
The Chinese University of Hong Kong holds the copyright of this thesis. Any
person(s) intending to use a part or whole of the materials in the thesis in
a proposed publication must seek copyright release from the Dean of the
Graduate School.
('1681775', 'Michael R. Lyu', 'michael r. lyu')
fea0a5ed1bc83dd1b545a5d75db2e37a69489ac9Enhancing Recommender Systems for TV by Face Recognition
iMinds - Ghent University, Technologiepark 15, B-9052 Ghent, Belgium
Keywords:
Recommender System, Face Recognition, Face Detection, TV, Emotion Detection.
('1738833', 'Toon De Pessemier', 'toon de pessemier')
('3441798', 'Damien Verlee', 'damien verlee')
('1698239', 'Luc Martens', 'luc martens')
{toon.depessemier, luc.martens}@intec.ugent.be
fe9c460d5ca625402aa4d6dd308d15a40e1010faNeural Architecture for Temporal Emotion
Classification
Universität Ulm, Neuroinformatik, Germany
('1681327', 'Roland Schweiger', 'roland schweiger')
('2331203', 'Pierre Bayerl', 'pierre bayerl')
('1706025', 'Heiko Neumann', 'heiko neumann')
{roland.schweiger,pierre.bayerl,heiko.neumann}@informatik.uni-ulm.de
fe7e3cc1f3412bbbf37d277eeb3b17b8b21d71d5IOSR Journal of VLSI and Signal Processing (IOSR-JVSP)
Volume 6, Issue 2, Ver. I (Mar. -Apr. 2016), PP 47-53
e-ISSN: 2319 – 4200, p-ISSN No. : 2319 – 4197
www.iosrjournals.org
Performance Evaluation of Gabor Wavelet Features for Face
Representation and Recognition
Bapuji Institute of Engineering and Technology Davanagere, Karnataka, India
University B.D.T.College of Engineering, Visvesvaraya
Technological University, Davanagere, Karnataka, India
('2038371', 'M. E. Ashalatha', 'm. e. ashalatha')
('3283067', 'Mallikarjun S. Holi', 'mallikarjun s. holi')
fe464b2b54154d231671750053861f5fd14454f5Multi Joint Action in CoTeSys
- Setup and Challenges -
Technical report CoTeSys-TR-10-01
D. Brščić, F. Rohrmüller, O. Kourakos, S. Sosnowski, D. Althoff, M. Lawitzky,
M. Eggers, C. Mayer, T. Kruse, A. Kirsch, M. Beetz and B. Radig 2
T. Lorenz and A. Schubö 4
P. Basili and S. Glasauer 5
W. Maier and E. Steinbach 7
1 Institute of Automatic Control Engineering, Department of Electrical Engineering
and Information Technology, Technische Universität München, Arcisstraße 21, 80333 München
2 Intelligent Autonomous Systems, Department of Informatics,
Technische Universität München, Boltzmannstraße 3, 85748 Garching bei München
3 Institute for Human-Machine Communication, Department of Electrical Engineering
and Information Technology, Technische Universität München, Arcisstraße 21, 80333 München
4 Experimental Psychology Unit, Department of Psychology,
Ludwig-Maximilians-Universität München, Leopoldstraße 13, 80802 München
5 Center for Sensorimotor Research, Clinical Neurosciences and Department of Neurology,
Ludwig-Maximilians-Universität München, Marchionistraße 23, 81377 München
6 Robotics and Embedded Systems, Department of Informatics,
Technische Universität München, Boltzmannstraße 3, 85748 Garching bei München
7 Institute for Media Technology, Department of Electrical Engineering
and Information Technology, Technische Universität München, Arcisstraße 21, 80333 München
{drazen, rohrm, omirosk, sosnowski, dalthoff, lawitzky, moertl, rambow, vicky,
('46953125', 'X. Zang', 'x. zang')
('47824592', 'W. Wang', 'w. wang')
('48172476', 'A. Bannat', 'a. bannat')
('30849638', 'G. Panin', 'g. panin')
medina, xueliang zang, wangwei, dirk, kuehnlen, hirche, buss}@lsr.ei.tum.de
{eggers, mayerc, kruset, kirsch, beetz, radig}@in.tum.de
{blume, bannat, rehrl, wallhoff}@tum.de
{lorenz, schuboe}@psy.lmu.de
{p.basili,s.glasauer}@lrz.uni-muenchen.de
{lenz,roeder,panin,knoll}@in.tum.de
{werner.maier, eckehard.steinbach}@tum.de
fe7c0bafbd9a28087e0169259816fca46db1a837
fe5df5fe0e4745d224636a9ae196649176028990University of Massachusetts - Amherst
Dissertations
9-1-2010
Dissertations and Theses
Using Context to Enhance the Understanding of
Face Images
Follow this and additional works at: http://scholarworks.umass.edu/open_access_dissertations
Recommended Citation
Jain, Vidit, "Using Context to Enhance the Understanding of Face Images" (2010). Dissertations. Paper 287.
('2246870', 'Vidit Jain', 'vidit jain')
ScholarWorks@UMass Amherst
University of Massachusetts - Amherst, vidit.jain@gmail.com
fe961cbe4be0a35becd2d722f9f364ec3c26bd34Computer-based Tracking, Analysis, and Visualization of Linguistically
Significant Nonmanual Events in American Sign Language (ASL)
Boston University / **Rutgers University / ***Gallaudet University
Boston University, Linguistics Program, 621 Commonwealth Avenue, Boston, MA
Rutgers University, Computer and Information Sciences, 110 Frelinghuysen Road, Piscataway, NJ
Gallaudet University, Technology Access Program, 800 Florida Ave NE, Washington, DC
('1732359', 'Carol Neidle', 'carol neidle')
('38079056', 'Jingjing Liu', 'jingjing liu')
('39132952', 'Bo Liu', 'bo liu')
('4340744', 'Xi Peng', 'xi peng')
('2467082', 'Christian Vogler', 'christian vogler')
('1711560', 'Dimitris Metaxas', 'dimitris metaxas')
E-mail: carol@bu.edu, jl1322@cs.rutgers.edu, lb507@cs.rutgers.edu, px13@cs.rutgers.edu,
christian.vogler@gallaudet.edu, dnm@ cs.rutgers.edu
feb6e267923868bff6e2108603d00fdfd65251caFebruary 1, 2013 15:16 WSPC/INSTRUCTION FILE
S0218213012500297
International Journal on Artificial Intelligence Tools
Vol. 22, No. 1 (2013) 1250029 (30 pages)
© World Scientific Publishing Company
DOI: 10.1142/S0218213012500297
UNSUPERVISED DISCOVERY OF VISUAL FACE CATEGORIES
Institute of Systems Engineering, Southeast University, Nanjing, China
University of Nevada, Reno, USA
College of Computer and Information Sciences
King Saud University, Riyadh 11543, Saudi Arabia
College of Computer and Information Sciences
King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
GHULAM MUHAMMAD
College of Computer and Information Sciences
King Saud University, Riyadh 11543, Saudi Arabia
Received 30 January 2012
Accepted 10 May 2012
Published
Human faces can be arranged into different face categories using information from common visual
cues such as gender, ethnicity, and age. It has been demonstrated that using face categorization as a
precursor step to face recognition improves recognition rates and leads to more graceful errors.1
Although face categorization using common visual cues yields meaningful face categories,
developing accurate and robust gender, ethnicity, and age categorizers is a challenging issue.
Moreover, it limits the overall number of possible face categories and, in practice, yields unbalanced
face categories which can compromise recognition performance. This paper investigates ways to
automatically discover a categorization of human faces from a collection of unlabeled face images
without relying on predefined visual cues. Specifically, given a set of face images from a group of
known individuals (i.e., a gallery set), our goal is to robustly partition the gallery set
into face categories. The objective is to be able to assign novel images of the same individuals
(i.e., a query set) to the correct face category with high accuracy and robustness. To address the issue
of face category discovery, we represent faces using local features and apply unsupervised learning
(i.e., clustering). To categorize faces in novel images, we employ nearest-neighbor algorithms
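The two-stage scheme this abstract describes, unsupervised clustering of the gallery followed by nearest-neighbor assignment of query images, can be sketched as follows. This is an illustrative stand-in rather than the paper's method: plain Lloyd's k-means replaces whatever clustering the authors use, and the deterministic first-k initialisation is an assumption made for the sketch.

```python
def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means, deterministically initialised with the
    first k points (an assumption for this sketch)."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centroids[c]))
            groups[i].append(p)
        for i, g in enumerate(groups):
            if g:  # keep the old centroid if a cluster empties out
                centroids[i] = [sum(col) / len(g) for col in zip(*g)]
    return centroids

def assign_category(query, centroids):
    """Assign a novel query image to the nearest discovered category."""
    return min(range(len(centroids)), key=lambda c: dist2(query, centroids[c]))
```

The discovered categories need no predefined cues such as gender or age; they emerge purely from the distances between the gallery features.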
('2884262', 'Shicai Yang', 'shicai yang')
('1808451', 'George Bebis', 'george bebis')
('2363759', 'Muhammad Hussain', 'muhammad hussain')
('39344692', 'Anwar M. Mirza', 'anwar m. mirza')
shicai.yang@gmail.com
bebis@cse.unr.edu
mhussain@ksu.edu.sa
ghulam@ksu.edu.sa
anwar.m.mirza@gmail.com
fe48f0e43dbdeeaf4a03b3837e27f6705783e576
fea83550a21f4b41057b031ac338170bacda8805Learning a Metric Embedding
for Face Recognition
using the Multibatch Method
Orcam Ltd., Jerusalem, Israel
('46273386', 'Oren Tadmor', 'oren tadmor')
('1743988', 'Yonatan Wexler', 'yonatan wexler')
('31601132', 'Tal Rosenwein', 'tal rosenwein')
('2554670', 'Shai Shalev-Shwartz', 'shai shalev-shwartz')
('3140335', 'Amnon Shashua', 'amnon shashua')
firstname.lastname@orcam.com
feeb0fd0e254f38b38fe5c1022e84aa43d63f7ccEURECOM
Multimedia Communications Department
and
Mobile Communications Department
2229, route des Crêtes
B.P. 193
06904 Sophia-Antipolis
FRANCE
Research Report RR-11-255
Search Pruning with Soft Biometric Systems:
Efficiency-Reliability Tradeoff
June 1st, 2011
Last update June 1st, 2011
1EURECOM’s research is partially supported by its industrial members: BMW Group, Cisco,
Monaco Telecom, Orange, SAP, SFR, Sharp, STEricsson, Swisscom, Symantec, Thales.
('3299530', 'Antitza Dantcheva', 'antitza dantcheva')
('15758502', 'Arun Singh', 'arun singh')
('1688531', 'Petros Elia', 'petros elia')
('1709849', 'Jean-Luc Dugelay', 'jean-luc dugelay')
fe108803ee97badfa2a4abb80f27fa86afd9aad9
fe0c51fd41cb2d5afa1bc1900bbbadb38a0de139Rahman et al. EURASIP Journal on Image and Video Processing (2015) 2015:35
DOI 10.1186/s13640-015-0090-5
RESEARCH
Open Access
Bayesian face recognition using 2D
Gaussian-Hermite moments
('47081388', 'S. M. Mahbubur Rahman', 's. m. mahbubur rahman')
('2021126', 'Tamanna Howlader', 'tamanna howlader')
c8db8764f9d8f5d44e739bbcb663fbfc0a40fb3dModeling for part-based visual object
detection based on local features
Dissertation approved by the Faculty of Electrical Engineering and
Information Technology of RWTH Aachen University for the academic
degree of Doctor of Engineering, submitted by
Diplom-Ingenieur
from Neuss
Reviewers:
Univ.-Prof. Dr.-Ing. Jens-Rainer Ohm
Univ.-Prof. Dr.-Ing. Til Aach
Date of the oral examination: 28 September 2011
This dissertation is available online on the web pages of the
university library.
('2447988', 'Mark Asbach', 'mark asbach')
c86e6ed734d3aa967deae00df003557b6e937d3dGenerative Adversarial Networks with
Decoder-Encoder Output Noise
conditional distribution of their neighbors. In [32], Portilla and
Simoncelli proposed a parametric texture model based on joint
statistics, which uses steerable pyramid decomposition to
decompose the texture of images. An example-based super-resolution algorithm [11]
was proposed in 2002, which uses a Markov network to model
the spatial relationship between the pixels of an image. A
scene completion algorithm [16] was proposed in 2007, which
applied a semantic scene match technique. These traditional
algorithms can be applied to particular image generation tasks,
such as texture synthesis and super-resolution. Their common
characteristic is that they predict images pixel by pixel
rather than generating an image as a whole; in essence, they
interpolate from the existing parts of an image. The question,
then, is: given a set of images, can we generate entirely new
images that follow the same distribution as the given ones?
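To make the contrast concrete, the toy sketch below fits an explicit per-dimension Gaussian to a training set and then draws entirely new vectors from it, generating each sample as a whole rather than interpolating from existing pixels. This is a deliberately crude stand-in for a learned generative model such as a GAN, not the paper's method; real image distributions are far from factorised Gaussians.

```python
import random

def fit_gaussian(samples):
    """Per-dimension mean and standard deviation of the training set,
    a crude explicit stand-in for a learned data distribution p(x)."""
    dim = len(samples[0])
    means, stds = [], []
    for j in range(dim):
        vals = [s[j] for s in samples]
        m = sum(vals) / len(vals)
        var = sum((x - m) ** 2 for x in vals) / len(vals)
        means.append(m)
        stds.append(var ** 0.5)
    return means, stds

def sample(means, stds, rng):
    """Draw a brand-new vector from the fitted distribution: the
    sample is generated as a whole, not copied or interpolated
    from existing images."""
    return [rng.gauss(m, s) for m, s in zip(means, stds)]
```

A GAN replaces the explicit density with a generator network trained adversarially, but the goal is the same: new samples that follow the data distribution.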
('2421012', 'Guoqiang Zhong', 'guoqiang zhong')
('46874300', 'Wei Gao', 'wei gao')
('3142351', 'Yongbin Liu', 'yongbin liu')
('47796538', 'Youzhao Yang', 'youzhao yang')
c8a4b4fe5ff2ace9ab9171a9a24064b5a91207a3LOCATING FACIAL LANDMARKS WITH BINARY MAP CROSS-CORRELATIONS
Jérémie Nicolle
Kévin Bailly
Univ. Pierre & Marie Curie, ISIR - CNRS UMR 7222, F-75005, Paris - France
('3074790', 'Vincent Rapp', 'vincent rapp')
('1680828', 'Mohamed Chetouani', 'mohamed chetouani')
{nicolle, bailly, rapp, chetouani}@isir.upmc.fr
c87f7ee391d6000aef2eadb49f03fc237f4d11701
A real-time and unsupervised face Re-Identification system for Human-Robot
Interaction
Intelligent Behaviour Understanding Group, Imperial College London, London, UK
ABSTRACT
In the context of Human-Robot Interaction (HRI), face Re-Identification (face Re-ID) aims to verify whether certain detected faces have already been
observed by the robot. The ability to distinguish between different users is crucial for social robots, as it enables the robot to tailor its interaction
strategy to users' individual preferences. Face recognition research has achieved great success so far, yet little attention has been paid to realistic
applications of face Re-ID in social robots. In this paper, we present an effective and unsupervised face Re-ID system which simultaneously
re-identifies multiple faces for HRI. This Re-ID system employs Deep Convolutional Neural Networks to extract features, and an online clustering
algorithm to determine each face's ID. Its performance is evaluated on two datasets: the TERESA video dataset collected by the TERESA robot, and
the YouTube Face Dataset (YTF Dataset). We demonstrate that the optimised combination of techniques achieves an overall accuracy of 93.55% on
the TERESA dataset and 90.41% on the YTF dataset. We have implemented the proposed method as a software module in the HCI^2 Framework [1]
for further integration into the TERESA robot [2], and it achieves real-time performance at 10~26 frames per second.
Keywords: Real-Time Face Re-Identification, Open Set Re-ID, Multiple Re-ID, Human-Robot Interaction, CNN Descriptors, Online Clustering
1. Introduction
The face recognition problem is one of the oldest topics in
Computer Vision [3]. Recently, the interest in this problem has
been revamped, mostly due to the observation that standard face
recognition approaches do not perform well in real-time
scenarios where faces can be rotated, occluded, and under
unconstrained illumination. Face recognition tasks are generally
classified into two categories:
1. Face Verification. Given two face images, the task of face
verification is to determine if these two faces belong to the same
person.
2. Face Identification. This refers to the process of finding the
identity of an unknown face image given a database of known
faces.
However, there are certain situations where a third type of
face recognition is needed: face re-identification (face Re-ID). In
the context of Human-Robot Interaction (HRI), the goal of face
Re-ID is to determine if certain faces have been seen by the robot
before, and if so, to determine their identity.
Generally, a real-time and unsupervised face re-identification
system is required to achieve effective interactions between
humans and robots. In the realistic scenarios of HRI, the face re-
identification task is confronted with the following challenges:
a. The system needs to be able to build and update the run-
time user gallery on the fly as there is usually no prior
knowledge about the interaction targets in advance.
b. The system should achieve high processing speed in
order for the robot to maintain real-time interaction with
the users.
c. The method should be robust against high intra-class
variance caused by illumination changes, partial
occlusion, pose variation, and/or the display of facial
expressions.
d. The system should achieve high recognition accuracy on
low-quality images resulting from motion blur (when the
robot and/or the user is moving), out-of-focus blur,
and/or over-/under-exposure.
Recently, deep-learning approaches, especially Convolutional
Neural Networks (CNNs), have achieved great success in solving
face recognition problems [4]–[8]. Compared to classic
approaches, deep-learning-based methods are characterised by
their powerful feature extraction abilities. However, as existing
works mostly focused on traditional face identification problems,
the potential applications of deep-learning-based methods in
solving face Re-ID problems are yet to be explored.
In this paper, we present a real-time unsupervised face
re-identification system that can work effectively in an
unconstrained environment.
CNN [7] as the feature extractor and try to improve its
performance and processing speed in HRI context by utilising a
variety of pre-processing techniques. In the Re-Identification step,
we then use an online clustering algorithm to build and update a
run-time face gallery and to output the probe faces’ ID.
Experiments show that our system can achieve a Re-ID accuracy
of 93.55% and 90.41% on the TERESA video dataset and the
YTF Dataset respectively and is able to achieve a real-time
processing speed of 10~26 FPS.
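The feature-extraction-plus-online-clustering loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: descriptors are assumed to come from some external CNN, the cosine-similarity threshold (0.8 here) is a hypothetical parameter, and the running-mean centroid update is just one simple choice of gallery update rule.

```python
import math

def cosine(a, b):
    """Cosine similarity between two face descriptors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-12)

class OnlineFaceGallery:
    """Threshold-based online clustering over face descriptors:
    a probe joins the most similar existing identity when the
    similarity clears `threshold`, otherwise it founds a new one."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.centroids = []  # one running-mean descriptor per identity
        self.counts = []

    def assign(self, descriptor):
        best_id, best_sim = -1, -1.0
        for i, c in enumerate(self.centroids):
            s = cosine(descriptor, c)
            if s > best_sim:
                best_id, best_sim = i, s
        if best_sim >= self.threshold:
            # update the running mean of the matched identity
            n = self.counts[best_id]
            self.centroids[best_id] = [
                (c * n + d) / (n + 1)
                for c, d in zip(self.centroids[best_id], descriptor)
            ]
            self.counts[best_id] = n + 1
            return best_id
        self.centroids.append(list(descriptor))
        self.counts.append(1)
        return len(self.centroids) - 1
```

Because the gallery is built on the fly, no prior knowledge of the interaction targets is required, matching challenge (a) above.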
2. Related Works
Various methods [9]–[15] have been developed to solve the
person Re-ID problem in surveillance context. However, most of
them [9]–[13] are unsuitable for HRI applications as these
approaches often rely on soft biometrics (i.e. clothing’s colours
and textures) that are unavailable to the robot (which usually only
sees the user’s face). Due to the unavailability of such soft
biometrics, it is difficult to apply person re-identification
('2563750', 'Yujiang Wang', 'yujiang wang')
('49927631', 'Jie Shen', 'jie shen')
('2403354', 'Stavros Petridis', 'stavros petridis')
('1694605', 'Maja Pantic', 'maja pantic')
c866a2afc871910e3282fd9498dce4ab20f6a332Surveillance Face Recognition Challenge
('5314735', 'Zhiyi Cheng', 'zhiyi cheng')
c8ca6a2dc41516c16ea0747e9b3b7b1db788dbdd1 Department of Computer Science
Rutgers University
New Jersey, USA
2 Department of Computer Science
The University of Texas at Arlington
Texas, USA
PENG, XI: TRACK FACIAL POINTS IN UNCONSTRAINED VIDEOS
Track Facial Points in Unconstrained Videos
('4340744', 'Xi Peng', 'xi peng')
('40420376', 'Qiong Hu', 'qiong hu')
('1768190', 'Junzhou Huang', 'junzhou huang')
('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')
xipeng.cs@rutgers.edu
qionghu.cs@rutgers.edu
jzhuang@uta.edu
dnm@cs.rutgers.edu
c8292aa152a962763185e12fd7391a1d6df60d07Camera Distance from Face Images
University of California, San Diego
9500 Gilman Drive, La Jolla, CA, USA
('25234832', 'Arturo Flores', 'arturo flores'){aflores,echristiansen,kriegman,sjb}@cs.ucsd.edu
c82c147c4f13e79ad49ef7456473d86881428b89
c84233f854bbed17c22ba0df6048cbb1dd4d3248Exploring Locally Rigid Discriminative
Patches for Learning Relative Attributes
http://researchweb.iiit.ac.in/~yashaswi.verma/
http://www.iiit.ac.in/~jawahar/
CVIT
IIIT-Hyderabad, India
http://cvit.iiit.ac.in
('1694502', 'C. V. Jawahar', 'c. v. jawahar')
('2169614', 'Yashaswi Verma', 'yashaswi verma')
('1694502', 'C. V. Jawahar', 'c. v. jawahar')
c829be73584966e3162f7ccae72d9284a2ebf358shuttleNet: A biologically-inspired RNN with loop connection and parameter
sharing
1 National Engineering Laboratory for Video Technology, School of EE&CS,
Peking University, Beijing, China
2 Cooperative Medianet Innovation Center, China
3 School of Information and Electronics,
Beijing Institute of Technology, Beijing, China
('38179026', 'Yemin Shi', 'yemin shi')
('1705972', 'Yonghong Tian', 'yonghong tian')
('5765799', 'Yaowei Wang', 'yaowei wang')
('34097174', 'Tiejun Huang', 'tiejun huang')
c87d5036d3a374c66ec4f5870df47df7176ce8b9ORIGINAL RESEARCH
published: 12 July 2018
doi: 10.3389/fpsyg.2018.01190
Temporal Dynamics of Natural Static
Emotional Facial Expressions
Decoding: A Study Using Event- and
Eye Fixation-Related Potentials
GIPSA-lab, Institute of Engineering, Université Grenoble Alpes, Centre National de la Recherche Scientifique, Grenoble INP
Grenoble, France, 2 Department of Conception and Control of Aeronautical and Spatial Vehicles, Institut Supérieur de
l’Aéronautique et de l’Espace, Université Fédérale de Toulouse, Toulouse, France, 3 Laboratoire InterUniversitaire de
Psychologie – Personnalité, Cognition, Changement Social, Université Grenoble Alpes, Université Savoie Mont Blanc,
Grenoble, France, 4 Exploration Fonctionnelle du Système Nerveux, Pôle Psychiatrie, Neurologie et Rééducation
Neurologique, CHU Grenoble Alpes, Grenoble, France, 5 Université Grenoble Alpes, Inserm, CHU Grenoble Alpes, Grenoble
Institut des Neurosciences, Grenoble, France
This study aims at examining the precise temporal dynamics of the emotional facial
decoding as it unfolds in the brain, according to the emotions displayed. To characterize
this processing as it occurs in ecological settings, we focused on unconstrained visual
explorations of natural emotional faces (i.e., free eye movements). The General Linear
Model (GLM; Smith and Kutas, 2015a,b; Kristensen et al., 2017a) enables such a
depiction. It allows deconvolving adjacent overlapping responses of the eye fixation-
related potentials (EFRPs) elicited by the subsequent fixations and the event-related
potentials (ERPs) elicited at stimulus onset. Nineteen participants were shown
spontaneous static facial expressions of emotions (Neutral, Disgust, Surprise, and
Happiness) from the DynEmo database (Tcherkassof et al., 2013). Behavioral results
on participants’ eye movements show that the usual diagnostic features in emotional
decoding (eyes for negative facial displays and mouth for positive ones) are consistent
with the literature. The impact of emotional category on both the ERPs and the EFRPs
elicited by the free exploration of the emotional faces is observed upon the temporal
dynamics of the emotional facial expression processing. Regarding the ERP at stimulus
onset, there is a significant emotion-dependent modulation of the P2–P3 complex
and LPP components’ amplitude at the left frontal site for the ERPs computed by
averaging. Yet, the GLM reveals the impact of subsequent fixations on the ERPs time-
locked on stimulus onset. Results are also in line with the valence hypothesis. The
observed differences between the two estimation methods (Average vs. GLM) suggest
the predominance of the right hemisphere at the stimulus onset and the implication
of the left hemisphere in the processing of the information encoded by subsequent
fixations. Concerning the first EFRP, the Lambda response and the P2 component are
modulated by the emotion of surprise compared to the neutral emotion, suggesting
Edited by:
Eva G. Krumhuber,
University College London
United Kingdom
Reviewed by:
Marie Arsalidou,
National Research University Higher
School of Economics, Russia
Jaana Simola,
University of Helsinki, Finland
*Correspondence:
Specialty section:
This article was submitted to
Emotion Science,
a section of the journal
Frontiers in Psychology
Received: 07 March 2018
Accepted: 20 June 2018
Published: 12 July 2018
Citation:
Guérin-Dugué A, Roy RN,
Kristensen E, Rivet B, Vercueil L and
Tcherkassof A (2018) Temporal
Dynamics of Natural Static Emotional
Facial Expressions Decoding: A Study
Using Event- and Eye Fixation-Related
Potentials. Front. Psychol. 9:1190.
doi: 10.3389/fpsyg.2018.01190
Frontiers in Psychology | www.frontiersin.org
July 2018 | Volume 9 | Article 1190
('7200702', 'Anne Guérin-Dugué', 'anne guérin-dugué')
('20903548', 'Raphaëlle N. Roy', 'raphaëlle n. roy')
('33987947', 'Emmanuelle Kristensen', 'emmanuelle kristensen')
('48223466', 'Bertrand Rivet', 'bertrand rivet')
('2544058', 'Laurent Vercueil', 'laurent vercueil')
('3209946', 'Anna Tcherkassof', 'anna tcherkassof')
('7200702', 'Anne Guérin-Dugué', 'anne guérin-dugué')
anne.guerin@gipsa-lab.grenoble-inp.fr
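The GLM deconvolution described in the abstract above, which separates overlapping responses time-locked to stimulus onset and to subsequent fixations, reduces to ordinary least squares over a lagged design matrix. A toy numpy sketch; the event times and kernel length are made up for illustration:

```python
import numpy as np

def build_design(onsets_by_type, n_samples, kernel_len):
    """Design matrix with one column per (event type, lag): entry (t, k*L + l)
    is 1 when an event of type k occurred at sample t - l."""
    X = np.zeros((n_samples, len(onsets_by_type) * kernel_len))
    for k, onsets in enumerate(onsets_by_type):
        for t0 in onsets:
            for l in range(kernel_len):
                if t0 + l < n_samples:
                    X[t0 + l, k * kernel_len + l] = 1.0
    return X

def deconvolve(y, onsets_by_type, kernel_len):
    """Least-squares (GLM) estimate of each event type's response kernel,
    disentangling temporally overlapping responses."""
    X = build_design(onsets_by_type, len(y), kernel_len)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta.reshape(len(onsets_by_type), kernel_len)
```

With non-overlapping averaging the kernels would contaminate each other; the joint least-squares fit recovers each one as long as the design matrix has full column rank.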
c8e84cdff569dd09f8d31e9f9ba3218dee65e961Dictionaries for Image and Video-based Face Recognition
Center for Automation Research, UMIACS, University of Maryland, College Park, MD 20742, USA
National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
In recent years, sparse representation and dictionary learning-based methods have emerged as
powerful tools for efficiently processing data in non-traditional ways. A particular area of promise
for these theories is face recognition.
In this paper, we review the role of sparse representation
and dictionary learning for efficient face identification and verification. Recent face recognition
algorithms from still images, videos, and ambiguously labeled imagery are reviewed. In particular,
discriminative dictionary learning algorithms as well as methods based on weakly supervised learning
and domain adaptation are summarized. Some of the compelling challenges and issues that confront
research in face recognition using sparse representations and dictionary learning are outlined.
OCIS codes: (150.0150) Machine vision; (100.5010) Pattern recognition; (150.1135) Algorithms;
(100.0100) Image processing.
I. INTRODUCTION
Face recognition is a challenging problem that has been
actively researched for over two decades [59]. Current
systems work very well when the test image is captured
under controlled conditions [35]. However, their perfor-
mance degrades significantly when the test image con-
tains variations that are not present in the training im-
ages. Some of these variations include illumination, pose,
expression, cosmetics, and aging.
It has been observed that since human faces have sim-
ilar overall configuration, face images can be described
by a relatively low dimensional subspace. As a result,
holistic dimensionality reduction subspace methods such
as Principal Component Analysis (PCA) [51], Linear
Discriminant Analysis (LDA) [3], [17] and Independent
Component Analysis (ICA) [2] have been proposed for
the task of face recognition. These approaches can be
classified into either generative or discriminative meth-
ods. An advantage of using generative approaches is their
reduced sensitivity to noise [59], [55].
In recent years, generative and discriminative ap-
proaches based on sparse representations have been gain-
ing a lot of traction in biometrics recognition [32].
In sparse representation, given a signal and a redundant dic-
tionary, the goal is to represent this signal as a sparse lin-
ear combination of elements (also known as atoms) from
this dictionary. Finding a sparse representation entails
solving a convex optimization problem. Using sparse rep-
resentation, one can extract semantic information from
the signal. For instance, one can sparsely represent a test
sample in an overcomplete dictionary whose elements are
the training samples themselves, provided that sufficient
training samples are available from each class [55]. An in-
teresting property of sparse representations is that they
are robust to noise and occlusion. For instance, good
performance under partial occlusion, missing data and
variations in background has been demonstrated in many
sparsity-based methods [55], [38]. The ability of sparse
representations to extract meaningful information is due
in part to the fact that face images belonging to the same
person lie on a low-dimensional manifold.
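The representation-by-training-samples idea sketched in this paragraph (as in [55]) can be illustrated with a minimal sparse-representation classifier. The greedy orthogonal matching pursuit solver and the tiny synthetic dictionary below are stand-ins for the ℓ1 solvers and face data used in practice:

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: a k-sparse code of y over dictionary D."""
    residual, support = y.astype(float).copy(), []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        scores = np.abs(D.T @ residual)
        scores[support] = -1.0          # never re-select an atom
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_classify(D, labels, y, k=2):
    """Sparse-representation classification: code y over the dictionary of
    training samples, then pick the class whose atoms reconstruct y best."""
    x = omp(D, y, k)
    best_class, best_res = None, np.inf
    for c in set(labels):
        xc = np.array([xi if l == c else 0.0 for xi, l in zip(x, labels)])
        res = np.linalg.norm(y - D @ xc)
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```

Here each dictionary column is one training sample and `labels` gives its class; classification by per-class residual is the mechanism the paragraph describes, though real face pipelines solve an ℓ1 problem rather than this greedy sketch.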
In order to successfully apply sparse representation to
face recognition problems, one needs to correctly choose
an appropriate dictionary. Rather than using a pre-
determined dictionary, e.g. wavelets, one can train an
overcomplete data-driven dictionary. An appropriately
trained data-driven dictionary can simultaneously span
the subspace of all faces and support optimal discrimi-
nation of the classes. These dictionaries tend to provide
better classification accuracy than a predetermined dic-
tionary [31].
Data-driven dictionaries can produce state-of-the-art
results in various face recognition tasks. However, when
the target data has a different distribution than the
source data, the learned sparse representation may not
be optimal. As a result, one needs to adapt these learned
representations from one domain to the other. The prob-
lem of transferring a representation or classifier from one
domain to the other is known as domain adaptation or
domain transfer learning [22], [42].
In this paper, we summarize some of the recent ad-
vances in still- and video-based face recognition using
sparse representation and dictionary learning. Discrimi-
native dictionary learning algorithms as well as methods
based on weakly supervised learning and domain adapta-
tion are summarized. These examples show that sparsity
and dictionary learning are powerful tools for face recog-
nition. Understanding how well these algorithms work
can greatly improve our insights into some of the most
compelling challenges in still- and video-based face recog-
nition.
A. Organization of the paper
This paper is organized as follows. In Section II, we
briefly review the idea behind sparse representation and
dictionary learning. Section III presents some recent
('1751078', 'Yi-Chen Chen', 'yi-chen chen')
('9215658', 'Rama Chellappa', 'rama chellappa')
∗ Corresponding author: pvishalm@umiacs.umd.edu
c8829013bbfb19ccb731bd54c1a885c245b6c7d7Flexible Template and Model Matching Using Image Intensity
University College London
Department of Computer Science
Gower Street, London, United Kingdom
('31557997', 'Bernard F. Buxton', 'bernard f. buxton')
('1797883', 'Vasileios Zografos', 'vasileios zografos')
{B.Buxton, V.Zografos}@cs.ucl.ac.uk
c81ee278d27423fd16c1a114dcae486687ee27ffSearch Based Face Annotation Using Weakly
Labeled Facial Images
Savitribai Phule Pune University
D.Y.Patil Institute of Engineering and Technology, Pimpri, Pune
Mahatma Phulenagar, 120/2 Mahaganpati soc, Chinchwad, Pune-19, MH, India
D.Y.Patil Institute of Engineering and Technology, Pimpri, Pune-18, Savitribai Phule Pune University
DYPIET, Pimpri, Pune-18, MH, India
('15731441', 'Shital Shinde', 'shital shinde')
('3392505', 'Archana Chaugule', 'archana chaugule')
c83a05de1b4b20f7cd7cd872863ba2e66ada4d3fBREUER, KIMMEL: A DEEP LEARNING PERSPECTIVE ON FACIAL EXPRESSIONS
A Deep Learning Perspective on the Origin
of Facial Expressions
Department of Computer Science
Technion - Israel Institute of Technology
Technion City, Haifa, Israel
Figure 1: Demonstration of the filter visualization process.
('50484701', 'Ran Breuer', 'ran breuer')
('1692832', 'Ron Kimmel', 'ron kimmel')
rbreuer@cs.technion.ac.il
ron@cs.technion.ac.il
c88ce5ef33d5e544224ab50162d9883ff6429aa3Face Match for Family Reunification:
Real-world Face Image Retrieval
U.S. National Library of Medicine, 8600 Rockville Pike, Bethesda, MD 20894, USA
Central Washington University, 400 E. University Way, Ellensburg, WA 98926, USA
('1744255', 'Eugene Borovikov', 'eugene borovikov')
('34928283', 'Michael Gill', 'michael gill')
('35029039', 'Szilárd Vajda', 'szilárd vajda')
(FaceMatch@NIH.gov)
(Szilard.Vajda@cwu.edu)
c822bd0a005efe4ec1fea74de534900a9aa6fb93Face Recognition Committee Machines:
Dynamic Vs. Static Structures
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Shatin, Hong Kong
('2899702', 'Ho-Man Tang', 'ho-man tang')
('1681775', 'Michael R. Lyu', 'michael r. lyu')
('1706259', 'Irwin King', 'irwin king')
fhmtang, lyu, kingg@cse.cuhk.edu.hk
c88c21eb9a8e08b66c981db35f6556f4974d27a8Attribute Learning
Using Joint Human and Machine Computation
Edith Law
April 2011
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
Thesis Committee:
Luis von Ahn (co-Chair)
Tom Mitchell (co-Chair)
Jaime Carbonell
Eric Horvitz, Microsoft Research
Rob Miller, MIT
Submitted in partial fulfillment of the requirements
for the degree of Doctor of Philosophy.
Copyright c(cid:13) 2011 Edith Law
c8adbe00b5661ab9b3726d01c6842c0d72c8d997Deep Architectures for Face Attributes
Computer Vision and Machine Learning Group, Flickr, Yahoo,
('3469274', 'Tobi Baumgartner', 'tobi baumgartner')
('31922487', 'Jack Culpepper', 'jack culpepper')
{tobi, jackcul}@yahoo-inc.com
fb4545782d9df65d484009558e1824538030bbb1
fbf196d83a41d57dfe577b3a54b1b7fa06666e3bExtreme Learning Machine for Large-Scale
Action Recognition
Boğaziçi University, Turkey
('1764521', 'Albert Ali Salah', 'albert ali salah')
fb2cc3501fc89f92f5ee130d66e69854f8a9ddd1Learning Discriminative Features via Label Consistent Neural Network
†Raytheon BBN Technologies, Cambridge, MA, 02138
University of Maryland, College Park, MD
('34145947', 'Zhuolin Jiang', 'zhuolin jiang')
('1691470', 'Yaming Wang', 'yaming wang')
('2502892', 'Viktor Rozgic', 'viktor rozgic')
{zjiang,wandrews,vrozgic}@bbn.com, {wym,lsd}@umiacs.umd.edu
fbb6ee4f736519f7231830a8e337b263e91f06feIllumination Robust Facial Feature Detection via
Decoupled Illumination and Texture Features
University of Waterloo, Waterloo ON N2L3G1, Canada
WWW home page: http://vip.uwaterloo.ca/ (cid:63)
('2797326', 'Brendan Chwyl', 'brendan chwyl')
('1685952', 'Alexander Wong', 'alexander wong')
('1720258', 'David A. Clausi', 'david a. clausi')
{bchwyl,a28wong,dclausi}@uwaterloo.ca,
fb87045600da73b07f0757f345a937b1c8097463JIA, YANG, ZHU, KUANG, NIU, CHAN: RCCR FOR LARGE POSE
Reflective Regression of 2D-3D Face Shape
Across Large Pose
The University of Hong Kong
National University of Defense
Technology
3 Tencent Inc.
4 Sensetime Inc.
('34760532', 'Xuhui Jia', 'xuhui jia')
('2966679', 'Heng Yang', 'heng yang')
('35130187', 'Xiaolong Zhu', 'xiaolong zhu')
('1874900', 'Zhanghui Kuang', 'zhanghui kuang')
('1939702', 'Yifeng Niu', 'yifeng niu')
('40392393', 'Kwok-Ping Chan', 'kwok-ping chan')
xhjia@cs.hku.hk
yanghengnudt@gmail.com
lucienzhu@gmail.com
kuangzhanghui@sensetime.com
niuyifeng@nudt.edu.cn
kpchan@cs.hku.hk
fb85867c989b9ee6b7899134136f81d6372526a9Learning to Align Images using Weak Geometric Supervision
Georgia Institute of Technology
2 Microsoft Research
('1703391', 'Jing Dong', 'jing dong')
('3288815', 'Byron Boots', 'byron boots')
('2038264', 'Frank Dellaert', 'frank dellaert')
('1757937', 'Sudipta N. Sinha', 'sudipta n. sinha')
fb5280b80edcf088f9dd1da769463d48e7b08390
fb54d3c37dc82891ff9dc7dd8caf31de00c40d6aBeauty and the Burst:
Remote Identification of Encrypted Video Streams
Tel Aviv University, Cornell Tech
Cornell Tech
Tel Aviv University, Columbia University
('39347554', 'Roei Schuster', 'roei schuster')
('1723945', 'Vitaly Shmatikov', 'vitaly shmatikov')
('2337345', 'Eran Tromer', 'eran tromer')
rs864@cornell.edu
shmat@cs.cornell.edu
tromer@cs.tau.ac.il
fba464cb8e3eff455fe80e8fb6d3547768efba2f
International Journal of Engineering and Applied Sciences (IJEAS)
ISSN: 2394-3661, Volume-3, Issue-2, February 2016
Survey Paper on Emotion Recognition
('40502287', 'Prachi Shukla', 'prachi shukla')
('2229305', 'Sandeep Patil', 'sandeep patil')
fbb2f81fc00ee0f257d4aa79bbef8cad5000ac59Reading Hidden Emotions: Spontaneous
Micro-expression Spotting and Recognition
('50079101', 'Xiaobai Li', 'xiaobai li')
('1836646', 'Xiaopeng Hong', 'xiaopeng hong')
('39056318', 'Antti Moilanen', 'antti moilanen')
('47932625', 'Xiaohua Huang', 'xiaohua huang')
('1757287', 'Guoying Zhao', 'guoying zhao')
fb084b1fe52017b3898c871514cffcc2bdb40b73RESEARCH ARTICLE
Illumination Normalization of Face Image
Based on Illuminant Direction Estimation and
Improved Retinex
School of Electronic and Information Engineering, Beihang University, Beijing, 100191, China
Polytechnic University of Milan, Milan, 20156, Italy, 3 Applied Electronics
University POLITEHNICA Timisoara, Timisoara, 300223, Romania
('1699804', 'Jizheng Yi', 'jizheng yi')
('1724834', 'Xia Mao', 'xia mao')
('35153304', 'Lijiang Chen', 'lijiang chen')
('3399189', 'Yuli Xue', 'yuli xue')
('1734732', 'Alberto Rovetta', 'alberto rovetta')
('1860887', 'Catalin-Daniel Caleanu', 'catalin-daniel caleanu')
* clj@ee.buaa.edu.cn
fb9ad920809669c1b1455cc26dbd900d8e719e613D Gaze Estimation from Remote RGB-D Sensors
THÈSE NO 6680 (2015)
PRÉSENTÉE LE 9 OCTOBRE 2015
À LA FACULTÉ DES SCIENCES ET TECHNIQUES DE L'INGÉNIEUR
LABORATOIRE DE L'IDIAP
PROGRAMME DOCTORAL EN GÉNIE ÉLECTRIQUE
ÉCOLE POLYTECHNIQUE FÉDÉRALE DE LAUSANNE
POUR L'OBTENTION DU GRADE DE DOCTEUR ÈS SCIENCES
PAR
acceptée sur proposition du jury:
Prof. K. Aminian, président du jury
Dr J.-M. Odobez, directeur de thèse
Prof. L.-Ph. Morency, rapporteur
Prof. D. Witzner Hansen, rapporteur
Dr R. Boulic, rapporteur
Suisse
2015
('9206411', 'Kenneth Alberto Funes Mora', 'kenneth alberto funes mora')
ed28e8367fcb7df7e51963add9e2d85b46e2d5d6International J. of Engg. Research & Indu. Appls. (IJERIA).
ISSN 0974-1518, Vol.9, No. III (December 2016), pp.23-42
A NOVEL APPROACH OF FACE RECOGNITION USING
CONVOLUTIONAL NEURAL NETWORKS WITH AUTO
ENCODER
1 Research Scholar, Dept. of Electronics & Communication Engineering,
Rayalaseema University Kurnool, Andhra Pradesh
2 Research Supervisor, Professor, Dept. of Electronics & Communication Engineering,
Madanapalle Institute of Technology and Science, Madanapalle, Andhra Pradesh
('7006226', 'S. A. K JILANI', 's. a. k jilani')
ed0cf5f577f5030ac68ab62fee1cf065349484ccRevisiting Data Normalization for
Appearance-Based Gaze Estimation
Max Planck Institute for Informatics
Saarland Informatics Campus,
Graduate School of Information
Science and Technology, Osaka
Max Planck Institute for Informatics
Saarland Informatics Campus,
Germany
University, Japan
Germany
('2520795', 'Xucong Zhang', 'xucong zhang')
('1751242', 'Yusuke Sugano', 'yusuke sugano')
('3194727', 'Andreas Bulling', 'andreas bulling')
xczhang@mpi-inf.mpg.de
sugano@ist.osaka-u.ac.jp
bulling@mpi-inf.mpg.de
edde81b2bdd61bd757b71a7b3839b6fef81f4be4SHIH, MALLYA, SINGH, HOIEM: MULTI-PROPOSAL PART LOCALIZATION
Part Localization using Multi-Proposal
Consensus for Fine-Grained Categorization
University of Illinois
Urbana-Champaign
IL, US
('2525469', 'Kevin J. Shih', 'kevin j. shih')
('36508529', 'Arun Mallya', 'arun mallya')
('37415643', 'Saurabh Singh', 'saurabh singh')
('2433269', 'Derek Hoiem', 'derek hoiem')
kjshih2@illinois.edu
amallya2@illinois.edu
ss1@illinois.edu
dhoiem@illinois.edu
edf98a925bb24e39a6e6094b0db839e780a77b08Simplex Representation for Subspace Clustering
The Hong Kong Polytechnic University, Hong Kong SAR, China
School of Mathematics and Statistics, Xi an Jiaotong University, Xi an, China
Spectral clustering based methods have achieved leading performance on subspace clustering problem. State-of-the-art subspace
clustering methods follow a three-stage framework: compute a coefficient matrix from the data by solving an optimization problem;
construct an affinity matrix from the coefficient matrix; and obtain the final segmentation by applying spectral clustering to the
affinity matrix. To construct a feasible affinity matrix, these methods mostly employ operations such as exponentiation, absolute symmetrization, or squaring. However, all these operations will force the negative entries (which cannot be explicitly avoided) to be positive, distorting the correlations inherent in the data. In this paper, we introduce the simplex representation (SR) to remedy this problem of representation-based subspace
clustering. We propose an SR based least square regression (SRLSR) model to construct a physically more meaningful affinity matrix
by integrating the nonnegative property of graph into the representation coefficient computation while maintaining the discrimination
of original data. The SRLSR model is reformulated as a linear equality-constrained problem, which is solved efficiently under the
alternating direction method of multipliers framework. Experiments on benchmark datasets demonstrate that the proposed SRLSR
algorithm is very efficient and outperforms state-of-the-art subspace clustering methods on accuracy.
Index Terms—Subspace clustering, simplex representation, spectral clustering.
I. INTRODUCTION
High-dimensional data are commonly observed in various computer vision and image processing problems. Contrary to their high-dimensional appearance, the latent structure of those data usually lies in a union of
low-dimensional subspaces [1]. Recovering the latent low-
dimensional subspaces from the high-dimensional observation
can not only reduce the computational cost and memory requirements of subsequent algorithms, but also reduce the noise in the data. In many machine learning and computer vision tasks, we need to find the clusters
of high-dimensional data such that each cluster can be fitted
by a subspace, which is referred to as the subspace clustering
(SC) problem [1].
SC has been extensively studied in the past decades [2]–
[33]. Most of existing SC methods can be categorized into
four categories: iterative based methods [2], [3], algebraic
based methods [4]–[6], statistical based methods [7]–[10], and
spectral clustering based methods [14]–[33]. Among these four
categories, spectral clustering based methods have become the
mainstream due to their theoretical guarantees and promising
performance on real-world applications such as motion seg-
mentation [16] and face clustering [18]. The spectral clustering
based methods usually follow a three-step framework: Step
1) obtain a coefficient matrix of the data points by solving
an optimization problem, which usually incorporates sparse
or low rank regularizations due to their good mathematical
properties; Step 2) construct an affinity matrix from the
coefficient matrix by employing exponentiation [14], absolute symmetrization [15], [16], [20], [23]–[31], and squaring
operations [17]–[19], [32], [33], etc.; Step 3) apply spectral
analysis techniques [34] to the affinity matrix and obtain the
final clusters of the data points.
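The three-step framework can be sketched end-to-end in a few lines. Here a ridge-regularized least-squares self-expression model (Step 1), absolute symmetrization (Step 2), and a two-cluster spectral cut via the Fiedler vector (Step 3) are simple stand-ins for the many variants cited above:

```python
import numpy as np

def lsr_coefficients(X, lam=0.1):
    """Step 1: ridge-regularized self-expression, C = (X^T X + lam*I)^{-1} X^T X,
    where each column of X is one data point."""
    G = X.T @ X
    C = np.linalg.solve(G + lam * np.eye(G.shape[0]), G)
    np.fill_diagonal(C, 0.0)   # a point should not represent itself
    return C

def affinity(C):
    """Step 2: absolute symmetrization, A = |C| + |C|^T."""
    return np.abs(C) + np.abs(C).T

def two_way_spectral(A):
    """Step 3 (two clusters): split by the sign of the Fiedler vector of the
    symmetric normalized Laplacian."""
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - Dinv @ A @ Dinv
    _, vecs = np.linalg.eigh(L)   # eigh sorts eigenvalues ascending
    return (vecs[:, 1] > 0).astype(int)
```

For more than two clusters, Step 3 would instead run k-means on the first k Laplacian eigenvectors.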
Most spectral clustering based methods [14]–[33] obtain
the expected coefficient matrix under the self-expressiveness
property [15], [16], which states that each data point in a union
of multiple subspaces can be linearly represented by the other
data points in the same subspace. However, in some real-world
applications, the data points lie in a union of multiple affine
subspaces rather than linear subspaces [16]. A trivial solution
is to ignore the affine structure of the data points and directly
perform clustering as in the subspaces of linear structures.
A non-negligible drawback of this solution is the increasing
dimension of the intersection of two subspaces, which can
make the subspaces indistinguishable from each other [16]. To
cluster data points lying in affine subspaces instead of linear
subspaces, the affine constraint is introduced [15], [16], in
which each data point can be written as an affine combination
of other points with the sum of coefficients being one.
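The affine constraint just described, coefficients that sum to one, turns the self-expression step into an equality-constrained least-squares problem that is solvable in closed form via its KKT system. A small numpy sketch; the tiny ridge term is an assumption added for numerical stability:

```python
import numpy as np

def affine_coefficients(X, y, ridge=1e-8):
    """Coefficients minimizing ||y - Xc||^2 subject to sum(c) = 1 (an affine
    combination of the columns of X), via the KKT linear system."""
    n = X.shape[1]
    G = X.T @ X + ridge * np.eye(n)
    ones = np.ones((n, 1))
    # KKT system: [G, 1; 1^T, 0] [c; mu] = [X^T y; 1]
    K = np.block([[G, ones], [ones.T, np.zeros((1, 1))]])
    rhs = np.concatenate([X.T @ y, [1.0]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]
```

The simplex representation discussed in this paper additionally requires the coefficients to be non-negative, which no longer has a closed form and is solved iteratively (e.g. under the ADMM framework mentioned in the abstract).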
Despite their high clustering accuracy, most of spectral
clustering based methods [14]–[33] suffer from three major
drawbacks. First, under the affine constraint, the coefficient vector is not flexible enough to handle real-world applications. Second, negative coefficients cannot be fully avoided since the existing methods do not explicitly consider a non-negative constraint in Step 1. However, in real-world applications, it is physically problematic to reconstruct a data point by allowing the others to “cancel each other out” with complex additions and subtractions [35]. Thus, most of these methods are limited by being stranded at this physical bottleneck. Third, the exponentiation, absolute symmetrization, and squaring operations in Step 2 will force the negative coefficients to be positive, distorting the correlations among the data points.
To solve the three drawbacks mentioned above, we intro-
duce the Simplex Representation (SR) for spectral clustering
based SC. Specifically, the SR is introduced from two in-
terdependent aspects. First, to broaden its adaptivity to real
scenarios, we extend the affine constraint to the scaled affine
constraint, in which the coefficient vector in the optimization
('47882783', 'Jun Xu', 'jun xu')
('1803714', 'Deyu Meng', 'deyu meng')
('48571185', 'Lei Zhang', 'lei zhang')
ed08ac6da6f8ead590b390b1d14e8a9b97370794
ISSN(Online): 2320-9801

ISSN (Print): 2320-9798
International Journal of Innovative Research in Computer
and Communication Engineering
(An ISO 3297: 2007 Certified Organization)
Vol. 3, Issue 9, September 2015
An Efficient Approach for 3D Face
Recognition Using ANN Based Classifiers
Shri Shivaji College, Parbhani, M.S, India
Arts, Commerce and Science College, Gangakhed, M.S, India
Dnyanopasak College Parbhani, M.S, India
('34443070', 'Vaibhav M. Pathak', 'vaibhav m. pathak')
ed9d11e995baeec17c5d2847ec1a8d5449254525Efficient Gender Classification Using a Deep LDA-Pruned Net
McGill University
845 Sherbrooke Street W, Montreal, QC H3A 0G4, Canada
('48087399', 'Qing Tian', 'qing tian')
('1699104', 'Tal Arbel', 'tal arbel')
('1713608', 'James J. Clark', 'james j. clark')
{qtian,arbel,clark}@cim.mcgill.ca
edef98d2b021464576d8d28690d29f5431fd5828Pixel-Level Alignment of Facial Images
for High Accuracy Recognition
Using Ensemble of Patches
('1782221', 'Hoda Mohammadzade', 'hoda mohammadzade')
('35809715', 'Amirhossein Sayyafan', 'amirhossein sayyafan')
('24033665', 'Benyamin Ghojogh', 'benyamin ghojogh')
ed04e161c953d345bcf5b910991d7566f7c486f7Combining facial expression analysis and synthesis on a
Mirror my emotions!
robot
('2185308', 'Stefan Sosnowski', 'stefan sosnowski')
('39124596', 'Christoph Mayer', 'christoph mayer')
('1699132', 'Bernd Radig', 'bernd radig')
ed07856461da6c7afa4f1782b5b607b45eebe9f63D Morphable Models as Spatial Transformer Networks
University of York, UK
Centre for Vision, Speech and Signal Processing, University of Surrey, UK
('39180407', 'Anil Bas', 'anil bas')
('39976184', 'Patrik Huber', 'patrik huber')
('1687021', 'William A. P. Smith', 'william a. p. smith')
('46649582', 'Muhammad Awais', 'muhammad awais')
('1748684', 'Josef Kittler', 'josef kittler')
{ab1792,william.smith}@york.ac.uk, {p.huber,m.a.rana,j.kittler}@surrey.ac.uk
ed1886e233c8ecef7f414811a61a83e44c8bbf50Deep Alignment Network: A convolutional neural network for robust face
alignment
Warsaw University of Technology
('2393538', 'Marek Kowalski', 'marek kowalski')
('1930272', 'Jacek Naruniec', 'jacek naruniec')
('1760267', 'Tomasz Trzcinski', 'tomasz trzcinski')
m.kowalski@ire.pw.edu.pl, j.naruniec@ire.pw.edu.pl, t.trzcinski@ii.pw.edu.pl
edd7504be47ebc28b0d608502ca78c0aea6a65a2Recurrent Residual Learning for Action
Recognition
University of Bonn, Germany
('3434584', 'Ahsan Iqbal', 'ahsan iqbal')
('32774629', 'Alexander Richard', 'alexander richard')
('2946643', 'Juergen Gall', 'juergen gall')
{iqbalm,richard,kuehne,gall}@iai.uni-bonn.de
ed388878151a3b841f95a62c42382e634d4ab82eDenseImage Network: Video Spatial-Temporal Evolution
Encoding and Understanding
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
University of Chinese Academy of Sciences, Beijing, China
('3162023', 'Xiaokai Chen', 'xiaokai chen')
('2027479', 'Ke Gao', 'ke gao')
{chenxiaokai,kegao}@ict.ac.cn
edbb8cce0b813d3291cae4088914ad3199736aa0Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence
Efficient Subspace Segmentation via Quadratic Programming
College of Computer Science and Technology, Zhejiang University, China
National University of Singapore, Singapore
School of Information Systems, Singapore Management University, Singapore
('35019367', 'Shusen Wang', 'shusen wang')
('2026127', 'Tiansheng Yao', 'tiansheng yao')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('38203359', 'Jialie Shen', 'jialie shen')
wssatzju@gmail.com, eleyuanx@nus.edu.sg, tsyaoo@gmail.com, eleyans@nus.edu.sg, jlshen@smu.edu.sg
edff76149ec44f6849d73f019ef9bded534a38c2Privacy-Preserving Visual Learning Using
Doubly Permuted Homomorphic Encryption
The University of Tokyo
Tokyo, Japan
Michigan State University
East Lansing, MI, USA
The University of Tokyo
Tokyo, Japan
Carnegie Mellon University
Pittsburgh, PA, USA
('1899753', 'Ryo Yonetani', 'ryo yonetani')
('2232940', 'Vishnu Naresh Boddeti', 'vishnu naresh boddeti')
('9467266', 'Yoichi Sato', 'yoichi sato')
('37991449', 'Kris M. Kitani', 'kris m. kitani')
yonetani@iis.u-tokyo.ac.jp
vishnu@msu.edu
kkitani@cs.cmu.edu
ysato@iis.u-tokyo.ac.jp
ed96f2eb1771f384df2349879970065a87975ca7Adversarial Attacks on Face Detectors using Neural
Net based Constrained Optimization
Department of Electrical and
Computer Engineering
University of Toronto
Department of Electrical and
Computer Engineering
University of Toronto
('26418299', 'Avishek Joey Bose', 'avishek joey bose')
('3241876', 'Parham Aarabi', 'parham aarabi')
Email: joey.bose@mail.utoronto.ca
Email: parham@ecf.utoronto.ca
c178a86f4c120eca3850a4915134fff44cbccb48
c1d2d12ade031d57f8d6a0333cbe8a772d752e01Journal of Math-for-Industry, Vol.2(2010B-5), pp.147–156
Convex optimization techniques for the efficient recovery of a sparsely
corrupted low-rank matrix
Received on August 10, 2010 / Revised on August 31, 2010
('2372029', 'Silvia Gandy', 'silvia gandy')
('1685085', 'Isao Yamada', 'isao yamada')
c180f22a9af4a2f47a917fd8f15121412f2d0901Facial Expression Recognition by ICA with
Selective Prior
Department of Information Processing, School of Information Science,
Japan Advanced Institute of Science and Technology, Ishikawa-ken 923-1211, Japan
('1753878', 'Fan Chen', 'fan chen')
('1791753', 'Kazunori Kotani', 'kazunori kotani')
{chen-fan, ikko}@jaist.ac.jp
c146aa6d56233ce700032f1cb1797007785576013D Morphable Models as Spatial Transformer Networks
University of York, UK
Centre for Vision, Speech and Signal Processing, University of Surrey, UK
('39180407', 'Anil Bas', 'anil bas')
('39976184', 'Patrik Huber', 'patrik huber')
('1687021', 'William A. P. Smith', 'william a. p. smith')
('9170545', 'Muhammad Awais', 'muhammad awais')
('1748684', 'Josef Kittler', 'josef kittler')
{ab1792,william.smith}@york.ac.uk, {p.huber,m.a.rana,j.kittler}@surrey.ac.uk
c1f07ec629be1c6fe562af0e34b04c54e238dcd1A Novel Facial Feature Localization Method Using Probabilistic-like Output*
Microsoft Research Asia
Other methods utilize the face structure information and heuristically search for the facial features within the facial regions [12]. Though this method is fast in localizing feature points, it might be sensitive to noise, such as eyeglasses, and thus fail in localization.
To address these problems, we proposed a learning-based facial feature localization method under a probabilistic-like framework. We modified an object detection method [12] so that it generates a unified probabilistic-like output for each point, and then proposed an algorithm to locate the facial features using this probabilistic-like output. Because this method is learning-based, it is robust to pose, illumination, expression, and appearance variations. The localization speed of the proposed method is extremely fast: it takes only about 10 ms on a computer with a P4 1.3 GHz CPU to locate five feature points, and the accuracy is comparable with hand-labeled results.
This paper is organized as follows. Section 2 first
describes the algorithm to calculate probabilistic-like output,
and then presents the proposed localization approach based
on the probabilistic-like output. Experiments will be given
at Section 3. Section 4 gives the conclusion remarks and
discusses future works.
2. FACIAL FEATURE POINT LOCALIZATION
The framework of the proposed method is illustrated in
Figure 1.
Figure 1.Feature Point Localization Framework
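As a minimal illustration of the final localization step (hypothetical code, not the paper's implementation): given a per-pixel probabilistic-like score map for each of the five feature points, the simplest decision rule picks the highest-scoring pixel in each map.

```python
import numpy as np

def locate_features(score_maps):
    """Pick the best (x, y) location from each probabilistic-like map.

    score_maps: list of 2-D arrays, one per facial feature point,
    where higher values indicate a more likely feature location.
    """
    points = []
    for m in score_maps:
        r, c = np.unravel_index(np.argmax(m), m.shape)
        points.append((int(c), int(r)))  # (x, y) convention
    return points

# toy example: one 3x3 map whose peak is at row 1, column 2
demo = np.zeros((3, 3))
demo[1, 2] = 0.9
print(locate_features([demo]))  # [(2, 1)]
```

In practice the paper's algorithm exploits the probabilistic-like outputs more carefully (e.g. combining them with face structure constraints); the argmax rule above only shows the data flow.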
ECE dept, University of Miami
1251 Memorial Drive, EB406
Coral Gables, Florida, 33124, U.S.
('1684635', 'Lei Zhang', 'lei zhang')
('9310930', 'Long', 'long')
('8392859', 'Mingjing Li', 'mingjing li')
('38188346', 'Hongjiang Zhang', 'hongjiang zhang')
('1679242', 'Longbin Chen', 'longbin chen')
{leizhang, mjli,hjzhang}@microsoft.com
longzhu@msrchina.research.microsoft.com
l.chen6@umiami.edu
c1cc2a2a1ab66f6c9c6fabe28be45d1440a57c3dDual-Agent GANs for Photorealistic and Identity
Preserving Profile Face Synthesis
National University of Singapore
3 Panasonic R&D Center Singapore
National University of Defense Technology
Franklin. W. Olin College of Engineering
Qihoo 360 AI Institute
('46509484', 'Jian Zhao', 'jian zhao')
('33419682', 'Lin Xiong', 'lin xiong')
('2757639', 'Jianshu Li', 'jianshu li')
('40345914', 'Fang Zhao', 'fang zhao')
('2513111', 'Zhecan Wang', 'zhecan wang')
('2668358', 'Sugiri Pranata', 'sugiri pranata')
('3493398', 'Shengmei Shen', 'shengmei shen')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('33221685', 'Jiashi Feng', 'jiashi feng')
{zhaojian90, jianshu}@u.nus.edu
{lin.xiong, karlekar.jayashree, sugiri.pranata, shengmei.shen}@sg.panasonic.com
zhecan.wang@students.olin.edu
{elezhf, eleyans, elefjia}@u.nus.edu
c10a15e52c85654db9c9343ae1dd892a2ac4a279Int J Comput Vis (2012) 100:134–153
DOI 10.1007/s11263-011-0494-3
Learning the Relative Importance of Objects from Tagged Images
for Retrieval and Cross-Modal Search
Received: 16 December 2010 / Accepted: 23 August 2011 / Published online: 18 October 2011
© Springer Science+Business Media, LLC 2011
('35788904', 'Sung Ju Hwang', 'sung ju hwang')
c1fc70e0952f6a7587b84bf3366d2e57fc572fd7
c1dfabe36a4db26bf378417985a6aacb0f769735Journal of Computer Vision and Image Processing, NWPJ-201109-50
1
Describing Visual Scene through EigenMaps
('2630005', 'Shizhi Chen', 'shizhi chen')
('35484757', 'YingLi Tian', 'yingli tian')
c1482491f553726a8349337351692627a04d5dbe
c1ff88493721af1940df0d00bcfeefaa14f1711fCVPR
#1369
CVPR 2010 Submission #1369. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.
Subspace Regression: Predicting a Subspace from one Sample
Anonymous CVPR submission
Paper ID 1369
c11eb653746afa8148dc9153780a4584ea529d28Global and Local Consistent Wavelet-domain Age
Synthesis
('2112221', 'Peipei Li', 'peipei li')
('49995036', 'Yibo Hu', 'yibo hu')
('1705643', 'Ran He', 'ran he')
('1757186', 'Zhenan Sun', 'zhenan sun')
c1ebbdb47cb6a0ed49c4d1cf39d7565060e6a7eeRobust Facial Landmark Localization Based on
('19254504', 'Yiyun Pan', 'yiyun pan')
('7934466', 'Junwei Zhou', 'junwei zhou')
('46636537', 'Yongsheng Gao', 'yongsheng gao')
('2065968', 'Shengwu Xiong', 'shengwu xiong')
c17a332e59f03b77921942d487b4b102b1ee73b6Learning an appearance-based gaze estimator
from one million synthesised images
Tadas Baltrušaitis2
('34399452', 'Erroll Wood', 'erroll wood')
('1767184', 'Louis-Philippe Morency', 'louis-philippe morency')
('39626495', 'Peter Robinson', 'peter robinson')
('3194727', 'Andreas Bulling', 'andreas bulling')
1University of Cambridge, United Kingdom {erroll.wood,peter.robinson}@cam.ac.uk
2Carnegie Mellon University, United States {tbaltrus,morency}@cs.cmu.edu
3Max Planck Institute for Informatics, Germany bulling@mpi-inf.mpg.de
c1e76c6b643b287f621135ee0c27a9c481a99054
c10b0a6ba98aa95d740a0d60e150ffd77c7895adHANSELMANN, YAN, NEY: DEEP FISHER FACES
Deep Fisher Faces
Human Language Technology and
Pattern Recognition Group
RWTH Aachen University
Aachen, Germany
('1804963', 'Harald Hanselmann', 'harald hanselmann')
('35362682', 'Shen Yan', 'shen yan')
('1685956', 'Hermann Ney', 'hermann ney')
hanselmann@cs.rwth-aachen.de
shen.yan@rwth-aachen.de
ney@cs.rwth-aachen.de
c1298120e9ab0d3764512cbd38b47cd3ff69327bDisguised Faces in the Wild
IIIT-Delhi, India
IBM TJ Watson Research Center, USA
Rama Chellappa
University of Maryland, College Park, USA
('2573268', 'Vineet Kushwaha', 'vineet kushwaha')
('2220719', 'Maneet Singh', 'maneet singh')
('50631607', 'Richa Singh', 'richa singh')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
('47733712', 'Nalini Ratha', 'nalini ratha')
{maneets, rsingh, mayank}@iiitd.ac.in
ratha@us.ibm.com
rama@umiacs.umd.ed
c1dd69df9dfbd7b526cc89a5749f7f7fabc1e290Unconstrained face identification with multi-scale block-based
correlation
Gaston, J., Ming, J., & Crookes, D. (2016). Unconstrained face identification with multi-scale block-based
correlation. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal
Processing (pp. 1477-1481). [978-1-5090-4117-6/17] Institute of Electrical and Electronics Engineers (IEEE).
Published in:
Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing
Document Version:
Peer reviewed version
Queen's University Belfast - Research Portal
Link to publication record in Queen's University Belfast Research Portal
Publisher rights
© 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future
media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
General rights
Copyright for the publications made accessible via the Queen's University Belfast Research Portal is retained by the author(s) and / or other
copyright owners and it is a condition of accessing these publications that users recognise and abide by the legal requirements associated
with these rights.
Take down policy
The Research Portal is Queen's institutional repository that provides access to Queen's research output. Every effort has been made to
ensure that content in the Research Portal does not infringe any person's rights, or applicable UK laws. If you
discover content in the Research Portal that you believe breaches copyright or violates any law, please contact
openaccess@qub.ac.uk.
Download date: 29 Nov 2017
c68ec931585847b37cde9f910f40b2091a662e83(IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 9, No. 6, 2018
A Comparative Evaluation of Dotted Raster-
Stereography and Feature-Based Techniques for
Automated Face Recognition
S. Talha Ahsan
Department of Computer Science
Department of Electrical Engineering
Usman Institute of Technology
Usman Institute of Technology
Karachi, Pakistan
Karachi, Pakistan
Department of Computer Science
Usman Institute of Technology
Karachi, Pakistan
('49508503', 'Muhammad Wasim', 'muhammad wasim')
('3251091', 'Lubaid Ahmed', 'lubaid ahmed')
('33238128', 'Syed Faisal Ali', 'syed faisal ali')
c696c9bbe27434cb6279223a79b17535cd6e88c8International Journal of Information Technology Vol.11 No.9 2005
Facial Expression Recognition with Pyramid Gabor Features
and Complete Kernel Fisher Linear Discriminant Analysis*
1 School of Electronic and Information Engineering, South China
University of Technology, Guangzhou, 510640, P.R.China
Motorola China Research Center, Shanghai, 210000, P.R.China
('30193721', 'Duan-Duan Yang', 'duan-duan yang')
('2949795', 'Lian-Wen Jin', 'lian-wen jin')
('9215052', 'Jun-Xun Yin', 'jun-xun yin')
('1751744', 'Li-Xin Zhen', 'li-xin zhen')
('34824270', 'Jian-Cheng Huang', 'jian-cheng huang')
{ddyang, eelwjin,eejxyin}@scut.edu.cn
{Li-Xin.Zhen, Jian-Cheng.Huang}@motorola.com
c65e4ffa2c07a37b0bb7781ca4ec2ed7542f18e3Recurrent Neural Networks for Facial Action Unit
Recognition from Image Sequences
School of Computer Science
University of Witwatersrand
Private Bag 3, Wits 2050, South Africa
Department of Computer Science
University of the Western Cape
Bellville, South Africa
Middle East Technical University
Northern Cyprus Campus
Güzelyurt, Mersin10, Turkey
('1903882', 'H Nyongesa', 'h nyongesa')
Hima.vadapalli@wits.ac.za
hnyongesa@uwc.ac.za
Omlin@metu.edu.tr
c614450c9b1d89d5fda23a54dbf6a27a4b821ac0Vol.60: e17160480, January-December 2017
http://dx.doi.org/10.1590/1678-4324-2017160480
ISSN 1678-4324 Online Edition
1
Engineering,Technology and Techniques
BRAZILIAN ARCHIVES OF
BIOLOGY AND TECHNOLOGY
A N I N T E R N A T I O N A L J O U R N A L
Face Image Retrieval of Efficient Sparse Code words and
Multiple Attribute in Binning Image
Srm Easwari Engineering College, Ramapuram, Bharathi Salai, Chennai, Tamil Nadu, India
c6096986b4d6c374ab2d20031e026b581e7bf7e9A Framework for Using Context to
Understand Images of People
Submitted in partial fulfillment of the
requirements for the
degree of Doctor of Philosophy
Department of Electrical and Computer Engineering
Carnegie Mellon University
Pittsburgh, PA 15213
May 2009
Thesis Committee:
Tsuhan Chen, Chair
('39460815', 'Andrew C. Gallagher', 'andrew c. gallagher')
('1763086', 'Alexei A. Efros', 'alexei a. efros')
('1709305', 'Martial Hebert', 'martial hebert')
('33642939', 'Jiebo Luo', 'jiebo luo')
('1794486', 'Marios Savvides', 'marios savvides')
('39460815', 'Andrew C. Gallagher', 'andrew c. gallagher')
c6608fdd919f2bc4f8d7412bab287527dcbcf505Unsupervised Alignment of Natural
Language with Video
by
Submitted in Partial Fulfillment
of the
Requirements for the Degree
Doctor of Philosophy
Supervised by
Professor Daniel Gildea
Department of Computer Science
Arts, Sciences and Engineering
Edmund A. Hajim School of Engineering and Applied Sciences
University of Rochester
Rochester, New York
2015
('2296971', 'Iftekhar Naim', 'iftekhar naim')
c6f3399edb73cfba1248aec964630c8d54a9c534A Comparison of CNN-based Face and Head Detectors for
Real-Time Video Surveillance Applications
1 École de technologie supérieure, Université du Québec, Montreal, Canada
2 Genetec Inc., Montreal, Canada
('38993564', 'Le Thanh Nguyen-Meidine', 'le thanh nguyen-meidine')
('1697195', 'Eric Granger', 'eric granger')
('40185782', 'Madhu Kiran', 'madhu kiran')
('38755219', 'Louis-Antoine Blais-Morin', 'louis-antoine blais-morin')
lethanh@livia.etsmtl.ca, eric.granger@etsmtl.ca, mkiran@livia.etsmtl.ca
lablaismorin@genetec.com
c62c910264658709e9bf0e769e011e7944c45c90Recent Progress of Face Image Synthesis
National Laboratory of Pattern Recognition, CASIA
Center for Research on Intelligent Perception and Computing, CASIA
Center for Excellence in Brain Science and Intelligence Technology, CAS
University of Chinese Academy of Sciences, Beijing, 100049, China
('9702077', 'Zhihe Lu', 'zhihe lu')
('7719475', 'Zhihang Li', 'zhihang li')
('1680853', 'Jie Cao', 'jie cao')
('1705643', 'Ran He', 'ran he')
('1757186', 'Zhenan Sun', 'zhenan sun')
{luzhihe2016, lizhihang2016, caojie2016}@ia.ac.cn, {rhe, znsun}@nlpr.ia.ac.cn
c678920facffd35853c9d185904f4aebcd2d8b49Learning to Anonymize Faces for
Privacy Preserving Action Detection
1 EgoVid Inc., South Korea
University of California, Davis
('10805888', 'Zhongzheng Ren', 'zhongzheng ren')
('1883898', 'Yong Jae Lee', 'yong jae lee')
('1766489', 'Michael S. Ryoo', 'michael s. ryoo')
{zzren,yongjaelee}@ucdavis.edu, mryoo@egovid.com
c660500b49f097e3af67bb14667de30d67db88e3www.elsevier.com/locate/cviu
Facial asymmetry quantification for
expression invariant human identification
and Sinjini Mitrac
a The Robotics Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA
University of Pittsburgh, Pittsburgh, PA 15260, USA
Carnegie Mellon University, Pittsburgh, PA 15213, USA
Received 15 February 2002; accepted 24 March 2003
('1689241', 'Yanxi Liu', 'yanxi liu')
('2185899', 'Karen L. Schmidt', 'karen l. schmidt')
c6241e6fc94192df2380d178c4c96cf071e7a3acAction Recognition with Trajectory-Pooled Deep-Convolutional Descriptors
The Chinese University of Hong Kong
Shenzhen key lab of Comp. Vis. and Pat. Rec., Shenzhen Institutes of Advanced Technology, CAS, China
('33345248', 'Limin Wang', 'limin wang')
('33427555', 'Yu Qiao', 'yu qiao')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
07wanglimin@gmail.com, yu.qiao@siat.ac.cn, xtang@ie.cuhk.edu.hk
c6ffa09c4a6cacbbd3c41c8ae7a728b0de6e10b6This article appeared in a journal published by Elsevier. The attached
copy is furnished to the author for internal non-commercial research
and education use, including for instruction at the authors institution
and sharing with colleagues.
Other uses, including reproduction and distribution, or selling or
licensing copies, or posting to personal, institutional or third party
websites are prohibited.
In most cases authors are permitted to post their version of the
article (e.g. in Word or Tex form) to their personal website or
institutional repository. Authors requiring further information
regarding Elsevier’s archiving and manuscript policies are
encouraged to visit:
http://www.elsevier.com/copyright
c6526dd3060d63a6c90e8b7ff340383c4e0e0dd8OPEN
Received: 22 December 2015
Accepted: 04 April 2016
Published: 21 April 2016
Anxiety promotes memory for
mood-congruent faces but does not
alter loss aversion
Pathological anxiety is associated with disrupted cognitive processing, including working memory and
decision-making. In healthy individuals, experimentally-induced state anxiety or high trait anxiety
often results in the deployment of adaptive harm-avoidant behaviours. However, how these processes
affect cognition is largely unknown. To investigate this question, we implemented a translational
within-subjects anxiety induction, threat of shock, in healthy participants reporting a wide range of
trait anxiety scores. Participants completed a gambling task, embedded within an emotional working
memory task, with some blocks under unpredictable threat and others safe from shock. Relative to the
safe condition, threat of shock improved recall of threat-congruent (fearful) face location, especially in
highly trait anxious participants. This suggests that threat boosts working memory for mood-congruent
stimuli in vulnerable individuals, mirroring memory biases in clinical anxiety. By contrast, Bayesian
analysis indicated that gambling decisions were better explained by models that did not include threat
or trait anxiety, suggesting that: (i) higher-level executive functions are robust to these anxiety
manipulations; and (ii) decreased risk-taking may be specific to pathological anxiety. These findings
provide insight into the complex interactions between trait anxiety, acute state anxiety and cognition,
and may help understand the cognitive mechanisms underlying adaptive anxiety.
Anxiety disorders constitute a major global health burden1, and are characterized by negative emotional process-
ing biases, as well as disrupted working memory and decision-making2,3. On the other hand, anxiety can also be
an adaptive response to stress, stimulating individuals to engage in harm-avoidant behaviours. Influential the-
ories of pathological anxiety propose that clinical anxiety emerges through dysregulation of adaptive anxiety4,5.
Therefore, in order to understand how this dysregulation emerges in pathological anxiety, it is crucial to first
understand the cognitive features associated with adaptive or ‘non-pathological’ anxiety, in other words anxiety
levels that can vary within and between individuals but do not result in the development of clinical symptoms
associated with anxiety disorders.
Several methods exist to induce anxiety in healthy individuals, including threat of shock (ToS), the Trier
social stressor test (TSST), and the cold pressor test (CPT). During the ToS paradigm, subjects typically perform
a cognitive task while either at risk of or safe from rare, but unpleasant, electric shocks. Compared to the other
methodologies, ToS has the advantage of allowing for within-subjects, within-sessions, designs (for a review
on its effects on cognition, see Robinson et al.2), and ensures the task is performed while being anxious, rather
than after being relieved from the stressor. In addition, ToS paradigms have good translational analogues6, are
well-validated7, and are thus considered a reliable model for examining adaptive anxiety in healthy individuals.
Because the engagement of adaptive anxiety processes may vary with individuals’ vulnerability to developing
pathological anxiety8–10, we were also interested in examining how the effects of state anxiety induced by threat
of shock interact with dispositional or trait anxiety, as reflected in self-report questionnaire scores such as the
State-Trait Anxiety Inventory11 (STAI). High levels of self-reported trait anxiety are indeed considered a strong
vulnerability factor in the development of pathological anxiety4,12.
The extent to which induced state anxiety (elicited by the laboratory procedures discussed above) and
trait anxiety interact to alter cognition has rarely been studied10. In particular, does induced anxiety have a
Institute of Cognitive Neuroscience, University College London, London WC1N 3AR, UK. 2Affective Brain
Lab, University College London, London WC1H 0AP, UK. 3Clinical
Psychopharmacology Unit, Educational and Health Psychology, University College
London, WC1E 7HB. *These authors contributed equally to this work. †These authors jointly supervised this work.
('4177273', 'Chandni Hindocha', 'chandni hindocha')
Correspondence and requests for materials should be addressed to C.J.C. (email: caroline.charpentier.11@ucl.ac.uk)
c62c07de196e95eaaf614fb150a4fa4ce49588b4Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18)
1078
c65a394118d34beda5dd01ae0df163c3db88fcebIn press : Proceedings of the 30th European Conference On Information Retrieval
Glasgow, March-April 2008
Finding the Best Picture:
Cross-Media Retrieval of Content
Katholieke Universiteit Leuven
Celestijnenlaan 200A, B-3001 Heverlee, Belgium
http://www.cs.kuleuven.be/~liir/
('1797588', 'Koen Deschacht', 'koen deschacht')
('1802161', 'Marie-Francine Moens', 'marie-francine moens')
{Koen.Deschacht,Marie-Francine.Moens}@cs.kuleuven.be
ec90d333588421764dff55658a73bbd3ea3016d2Research Article
Protocol for Systematic Literature Review of Face
Recognition in Uncontrolled Environment
Bacha Khan University, Charsadda, KPK, Pakistan
('12144785', 'Faizan Ullah', 'faizan ullah')
('46463663', 'Sabir Shah', 'sabir shah')
('49669073', 'Dilawar Shah', 'dilawar shah')
('12579194', 'Shujaat Ali', 'shujaat ali')
faizanullah@bkuc.edu.pk
ec8ec2dfd73cf3667f33595fef84c95c42125945Pose-Invariant Face Alignment with a Single CNN
Michigan State University
2Visualization Group, Bosch Research and Technology Center North America
('2357264', 'Amin Jourabloo', 'amin jourabloo')
('3876303', 'Mao Ye', 'mao ye')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
('3334600', 'Liu Ren', 'liu ren')
1,2 {jourablo, liuxm}@msu.edu, {mao.ye2, liu.ren}@us.bosch.com
ec1e03ec72186224b93b2611ff873656ed4d2f74JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015
3D Reconstruction of “In-the-Wild” Faces in
Images and Videos
('47456731', 'James Booth', 'james booth')
('2931390', 'Anastasios Roussos', 'anastasios roussos')
('31243357', 'Evangelos Ververas', 'evangelos ververas')
('2015036', 'Stylianos Ploumpis', 'stylianos ploumpis')
('1780393', 'Yannis Panagakis', 'yannis panagakis')
ec12f805a48004a90e0057c7b844d8119cb21b4aDistance-Based Descriptors and Their
Application in the Task of Object Detection
Technical University of Ostrava, FEECS
17. Listopadu 15, 708 33 Ostrava-Poruba, Czech Republic
('2467747', 'Radovan Fusek', 'radovan fusek')
('2557877', 'Eduard Sojka', 'eduard sojka')
{radovan.fusek,eduard.sojka}@vsb.cz
ec22eaa00f41a7f8e45ed833812d1ac44ee1174e
ec54000c6c0e660dd99051bdbd7aed2988e27ab8TWO IN ONE: JOINT POSE ESTIMATION AND FACE RECOGNITION WITH P2CA1
*Dept. Teoria del Senyal i Comunicacions - Universitat Politècnica de Catalunya, Barcelona, Spain
+Dipartimento di Elettronica e Informazione - Politecnico di Milano, Milan, Italy
('2771575', 'Francesc Tarres', 'francesc tarres')
('31936578', 'Antonio Rama', 'antonio rama')
('2158932', 'Davide Onofrio', 'davide onofrio')
('1729506', 'Stefano Tubaro', 'stefano tubaro')
{tarres, alrama}@gps.tsc.upc.edu
{d.onofrio, tubaro}@elet.polimi.it
ec0104286c96707f57df26b4f0a4f49b774c486b
An Ensemble CNN2ELM for Age Estimation
('40402919', 'Mingxing Duan', 'mingxing duan')
('39893222', 'Kenli Li', 'kenli li')
('34373985', 'Keqin Li', 'keqin li')
ec05078be14a11157ac0e1c6b430ac886124589bLongitudinal Face Aging in the Wild - Recent Deep Learning Approaches
Concordia University
Montreal, Quebec, Canada
Concordia University
Montreal, Quebec, Canada
CyLab Biometrics Center
Dept. of Electrical and Computer Engineering
Carnegie Mellon University Pittsburgh, PA, USA
Concordia University
Montreal, Quebec, Canada
('1876581', 'Chi Nhan Duong', 'chi nhan duong')
('2687827', 'Kha Gia Quach', 'kha gia quach')
('1769788', 'Khoa Luu', 'khoa luu')
('1699922', 'Tien D. Bui', 'tien d. bui')
Email: c duon@encs.concordia.ca
Email: k q@encs.concordia.ca
Email: kluu@andrew.cmu.edu
Email: bui@encs.concordia.ca
4e7ed13e541b8ed868480375785005d33530e06dFace Recognition Using Deep Multi-Pose Representations
Ram Nevatiab Gerard Medionib
Prem Natarajana
aInformation Sciences Institute
University of Southern California
Marina Del Rey, CA
b Institute for Robotics and Intelligent Systems
University of Southern California
Los Angeles, California
cThe Open University
Raanana, Israel
('1746738', 'Yue Wu', 'yue wu')
('38696444', 'Stephen Rawls', 'stephen rawls')
('35840854', 'Shai Harel', 'shai harel')
('11269472', 'Iacopo Masi', 'iacopo masi')
('1689391', 'Jongmoo Choi', 'jongmoo choi')
('2955822', 'Jatuporn Toy Leksut', 'jatuporn toy leksut')
('5911467', 'Jungyeon Kim', 'jungyeon kim')
('1756099', 'Tal Hassner', 'tal hassner')
4e30107ee6a2e087f14a7725e7fc5535ec2f5a5fNews Stories Representation Using Event Photos
© M.M. Postnikov
© B.V. Dobrov
Lomonosov Moscow State University,
Faculty of Computational Mathematics and Cybernetics,
Moscow, Russia
Abstract. We consider the task of annotating a news story with images associated with specific
texts within the story. We introduce the notion of an "event photo", an image carrying concrete
information that complements the story's text. The task is solved with neural networks using
transfer learning (Inception v3) on a specially annotated collection of 4,114 images. The average
precision of the results exceeded 94.7%.
Keywords: event photo, news illustrations, transfer learning.
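The transfer-learning setup described in the abstract (a frozen Inception v3 backbone with a small classifier trained on top) can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: random vectors stand in for the 2048-dimensional Inception v3 embeddings, and only the linear head is trained.

```python
import numpy as np

def train_linear_head(features, labels, lr=0.5, epochs=300):
    """Logistic-regression head on frozen CNN features: only these
    weights are learned; the backbone that produced `features` is not."""
    w = np.zeros(features.shape[1])
    b = 0.0
    n = len(labels)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # sigmoid
        grad = p - labels                              # dL/dz per sample
        w -= lr * features.T @ grad / n
        b -= lr * grad.mean()
    return w, b

# stand-in features: in the paper these would be Inception v3 embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # "event photo" vs. not
w, b = train_linear_head(X, y)
train_acc = (((X @ w + b) > 0) == y).mean()
```

The design point is that retraining only the head lets a 4,114-image collection suffice, since the backbone's features were already learned on a much larger dataset.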
News Stories Representation Using Event Photos
© M.M. Postnikov
© B.V. Dobrov
Lomonosov Moscow State University, Faculty of Computational Mathematics and Cybernetics
Moscow, Russia
mihanlg@yandex.ru
dobrov_bv@mail.ru
4e5dc3b397484326a4348ccceb88acf309960e86Hindawi Publishing Corporation
e Scientific World Journal
Volume 2014, Article ID 219732, 12 pages
http://dx.doi.org/10.1155/2014/219732
Research Article
Secure Access Control and Large Scale Robust Representation
for Online Multimedia Event Detection
School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
School of Computer Science, Wuyi University, Jiangmen 529020, China
State Key Laboratory of Pulp and Paper Engineering, South China University of Technology, Guangzhou 510640, China
Received 2 April 2014; Accepted 30 June 2014; Published 22 July 2014
Academic Editor: Vincenzo Eramo
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We developed an online multimedia event detection (MED) system. However, integrating traditional event detection
algorithms into the online environment raises a secure access control issue and a large-scale robust representation issue. For
the first issue, we proposed a tree proxy-based and service-oriented access control (TPSAC) model based on the traditional role
based access control model. Verification experiments were conducted on the CloudSim simulation platform, and the results showed
that the TPSAC model is suitable for the access control of dynamic online environments. For the second issue, inspired by the
object-bank scene descriptor, we proposed a 1000-object-bank (1000OBK) event descriptor. Feature vectors of the 1000OBK were
extracted from response pyramids of 1000 generic object detectors which were trained on standard annotated image datasets, such
as the ImageNet dataset. A spatial bag of words tiling approach was then adopted to encode these feature vectors for bridging the gap
between the objects and events. Furthermore, we performed experiments in the context of event classification on the challenging
TRECVID MED 2012 dataset, and the results showed that the robust 1000OBK event descriptor outperforms the state-of-the-art
approaches.
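The spatial bag-of-words tiling step of the 1000OBK descriptor can be sketched as follows: each object detector's response map is max-pooled over a coarse spatial grid, and the pooled values are concatenated into one vector. This is a simplified, hypothetical illustration; the paper pools over response pyramids from 1000 trained detectors.

```python
import numpy as np

def spatial_tiling_descriptor(response_maps, grid=(2, 2)):
    """Encode detector response maps into one event descriptor by
    max-pooling each map over a grid of spatial tiles."""
    gh, gw = grid
    feats = []
    for m in response_maps:                  # one map per object detector
        h, w = m.shape
        for i in range(gh):
            for j in range(gw):
                tile = m[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw]
                feats.append(float(tile.max()))
    return np.asarray(feats)                 # length = n_maps * gh * gw

# e.g. 3 detectors on a 2x2 grid -> a 12-dimensional descriptor
maps = [np.random.rand(8, 8) for _ in range(3)]
desc = spatial_tiling_descriptor(maps)
```

Tiling preserves coarse layout (where in the frame each object responds), which is what bridges the gap between object responses and event-level structure.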
1. Introduction
As one of the most interesting aspects of multimedia content
analysis, the multimedia event detection (MED) is becoming
an important research area for computer vision in recent
years. According to the definition by the National Institute
of Standards and Technology (NIST) [1], an event (1) is
a complex activity occurring at a specific place and time,
(2) involves people interacting with other people and/or
objects, (3) consists of a number of human actions, processes,
and activities that are loosely or tightly organized and that
have significant temporal and semantic relationships to the
overarching activity, and (4) is directly observable. A MED
task is to indicate whether an event occurs in a specified
test clip based on a standard event kit [1], which includes an
event name, a textual definition, a textual explication with an
attribute list, an evidential description, and a set of illustrative
video examples. Although there are many other definitions
available, such as the MED definitions from the NIST, the
research on MED is still far from mature.
Most current research focuses on specific areas,
such as sports video [2], news video [3], and surveillance
video [4]. These approaches do not perform well when used
for online or web-based event detection due to two types
of issues, which are the secure access control issue and the
large scale robust representation issue. Thus, we developed an
online multimedia event detection system, trying to provide
general MED services.
The first issue is how to obtain secure access
control for the online multimedia event detection system.
Unlike in traditional distributed systems, the relationship
between access control subjects and objects in the online
system is service-oriented. Services can be established,
recombined, destroyed, and even inherited efficiently
according to requested parameters, which cannot be
handled well by traditional access control models, such as
('1706701', 'Changyu Liu', 'changyu liu')
('40371462', 'Bin Lu', 'bin lu')
('1780591', 'Huiling Li', 'huiling li')
('1706701', 'Changyu Liu', 'changyu liu')
Correspondence should be addressed to Bin Lu; lbscut@gmail.com
4e6c17966efae956133bf8f22edeffc24a0470c1Face Classification: A Specialized Benchmark
Study
1School of Electronic, Electrical and Communication Engineering
2Center for Biometrics and Security Research & National Laboratory of Pattern Recognition
University of Chinese Academy of Sciences
Institute of Automation, Chinese Academy of Sciences
Macau University of Science and Technology
('37614515', 'Jiali Duan', 'jiali duan')
('40397682', 'Shengcai Liao', 'shengcai liao')
('2950852', 'Shuai Zhou', 'shuai zhou')
('34679741', 'Stan Z. Li', 'stan z. li')
{jli.duan,shuaizhou.palm}@gmail.com, {scliao,szli}@nlpr.ia.ac.cn
4e1836914bbcf94dc00e604b24b1b0d6d7b61e66Dynamic Facial Expression Recognition Using Boosted
Component-based Spatiotemporal Features and
Multi-Classifier Fusion
1. Machine Vision Group, Department of Electrical and Information Engineering,
University of Oulu, Finland
Research Center for Learning Science, Southeast University, China
http://www.ee.oulu.fi/mvg
('18780812', 'Xiaohua Huang', 'xiaohua huang')
('1757287', 'Guoying Zhao', 'guoying zhao')
('40608983', 'Wenming Zheng', 'wenming zheng')
{huang.xiaohua,gyzhao,mkp}@ee.oulu.fi
wenming_zheng@seu.edu.cn
4e4fa167d772f34dfffc374e021ab3044566afc3Learning Low-Rank Representations with Classwise
Block-Diagonal Structure for Robust Face Recognition
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
School of Computer Science, Nanjing University of Science and Technology
University of Maryland, College Park
('1689181', 'Yong Li', 'yong li')
('38188270', 'Jing Liu', 'jing liu')
('3233021', 'Zechao Li', 'zechao li')
('34868330', 'Yangmuzi Zhang', 'yangmuzi zhang')
('1694235', 'Hanqing Lu', 'hanqing lu')
('38450168', 'Songde Ma', 'songde ma')
{yong.li,jliu,luhq}@nlpr.ia.ac.cn, zechao.li@gmail.com, ymzhang@umiacs.umd.edu, masd@most.cn
4e32fbb58154e878dd2fd4b06398f85636fd0cf4A Hierarchical Matcher using Local Classifier Chains
L. Zhang and I.A. Kakadiaris
Computational Biomedicine Lab, 4849 Calhoun Rd, Rm 373, Houston, TX 77204
4ed54d5093d240cc3644e4212f162a11ae7d1e3bLearning Visual Compound Models from Parallel
Image-Text Datasets
Bielefeld University
University of Toronto
('2872318', 'Jan Moringen', 'jan moringen')
('1724954', 'Sven Wachsmuth', 'sven wachsmuth')
('1792908', 'Suzanne Stevenson', 'suzanne stevenson')
{jmoringe,swachsmu}@techfak.uni-bielefeld.de
{sven,suzanne}@cs.toronto.edu
4e8c608fc4b8198f13f8a68b9c1a0780f6f50105How Related Exemplars Help Complex Event Detection in Web Videos?
ITEE, The University of Queensland, Australia
ECE, National University of Singapore, Singapore
School of Computer Science, Carnegie Mellon University, USA
('39033919', 'Yi Yang', 'yi yang')
('1727419', 'Zhigang Ma', 'zhigang ma')
('7661726', 'Alexander G. Hauptmann', 'alexander g. hauptmann')
('2351434', 'Zhongwen Xu', 'zhongwen xu')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
{yiyang,kevinma,alex}@cs.cmu.edu z.xu3@uq.edu.au
4ea53e76246afae94758c1528002808374b75cfaLasbela, U. J.Sci. Techl., vol.IV , pp. 57-70, 2015
Review ARTICLE
A Review of Scholastic Examination and Models for Face Recognition
ISSN 2306-8256
and Retrieval in Video

SBK Women's University, Quetta, Balochistan
University of Balochistan, Quetta
University of Balochistan, Quetta
Institute of Biochemistry, University of Balochistan, Quetta
('35415301', 'Varsha Sachdeva', 'varsha sachdeva')
('2139801', 'Junaid Baber', 'junaid baber')
('3343681', 'Maheen Bakhtyar', 'maheen bakhtyar')
('1903979', 'Muzamil Bokhari', 'muzamil bokhari')
('1702753', 'Imran Ali', 'imran ali')
4ed2d7ecb34a13e12474f75d803547ad2ad811b2Common Action Discovery and Localization in Unconstrained Videos
School of Electrical and Electronic Engineering
Nanyang Technological University, Singapore
('1691251', 'Jiong Yang', 'jiong yang')
('34316743', 'Junsong Yuan', 'junsong yuan')
yang0374@e.ntu.edu.sg, jsyuan@ntu.edu.sg
4e97b53926d997f451139f74ec1601bbef125599Discriminative Regularization for Generative Models
Montreal Institute for Learning Algorithms, Université de Montréal
('2059369', 'Alex Lamb', 'alex lamb')
('3074927', 'Vincent Dumoulin', 'vincent dumoulin')
FIRST.LAST@UMONTREAL.CA
4e8168fbaa615009d1618a9d6552bfad809309e9Deep Convolutional Neural Network Features and the Original Image
School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA
University of Maryland, College Park, USA
('7493834', 'Connor J. Parde', 'connor j. parde')
('3363752', 'Matthew Q. Hill', 'matthew q. hill')
('15929465', 'Y. Ivette Colon', 'y. ivette colon')
('2716670', 'Swami Sankaranarayanan', 'swami sankaranarayanan')
('36407236', 'Jun-Cheng Chen', 'jun-cheng chen')
4e0636a1b92503469b44e2807f0bb35cc0d97652Adversarial Localization Network
Tsinghua University
Stanford University
Stanford University
('2548303', 'Lijie Fan', 'lijie fan')
('3303970', 'Shengjia Zhao', 'shengjia zhao')
('2490652', 'Stefano Ermon', 'stefano ermon')
flj14@mails.tsinghua.edu.cn
sjzhao@stanford.edu
ermon@stanford.edu
4e27fec1703408d524d6b7ed805cdb6cba6ca132SSD-Sface: Single shot multibox detector for small faces
C. Thuis
4e6c9be0b646d60390fe3f72ce5aeb0136222a10Long-term Temporal Convolutions
for Action Recognition
('1785596', 'Ivan Laptev', 'ivan laptev')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
4ea4116f57c5d5033569690871ba294dc3649ea5Multi-View Face Alignment Using 3D Shape Model for
View Estimation
Tsinghua University
2Core Technology Center, Omron Corporation
('1739678', 'Yanchao Su', 'yanchao su')
('1679380', 'Haizhou Ai', 'haizhou ai')
('1710195', 'Shihong Lao', 'shihong lao')
ahz@mail.tsinghua.edu.cn
4e444db884b5272f3a41e4b68dc0d453d4ec1f4c
4ef0a6817a7736c5641dc52cbc62737e2e063420International Journal of Advanced Computer Research (ISSN (Print): 2249-7277 ISSN (Online): 2277-7970)
Volume-4 Number-4 Issue-17 December-2014
Study of Face Recognition Techniques
Received: 10-November-2014; Revised: 18-December-2014; Accepted: 23-December-2014
©2014 ACCENTS
('7874804', 'Sangeeta Kaushik', 'sangeeta kaushik')
('33551600', 'R. B. Dubey', 'r. b. dubey')
('1680807', 'Abhimanyu Madan', 'abhimanyu madan')
4e4d034caa72dce6fca115e77c74ace826884c66RESEARCH ARTICLE
Sex differences in facial emotion recognition
across varying expression intensity levels
from videos
University of Bath, Bath, Somerset, United Kingdom
☯ These authors contributed equally to this work.
¤ Current address: Social and Affective Neuroscience Laboratory, Centre for Health and Biological Sciences,
Mackenzie Presbyterian University, São Paulo, São Paulo, Brazil
('2708124', 'Chris Ashwin', 'chris ashwin')
('39455300', 'Mark Brosnan', 'mark brosnan')
* tanja.wingenbach@bath.edu
4e7ebf3c4c0c4ecc48348a769dd6ae1ebac3bf1b
4e0e49c280acbff8ae394b2443fcff1afb9bdce6Automatic learning of gait signatures for people identification
F.M. Castro
Univ. of Malaga
fcastro@uma.es
M.J. Marín-Jiménez
Univ. of Cordoba
mjmarin@uco.es
N. Guil
Univ. of Malaga
nguil@uma.es
N. Pérez de la Blanca
Univ. of Granada
nicolas@ugr.es
4e4e8fc9bbee816e5c751d13f0d9218380d74b8f
20a88cc454a03d62c3368aa1f5bdffa73523827b
20a432a065a06f088d96965f43d0055675f0a6c1In: Proc. of the 25th Int. Conference on Artificial Neural Networks (ICANN)
Part II, LNCS 9887, pp. 80-87, Barcelona, Spain, September 2016
The final publication is available at Springer via
http://dx.doi.org/10.1007/978-3-319-44781-0_10
The Effects of Regularization on Learning Facial
Expressions with Convolutional Neural Networks

Vogt-Koelln-Strasse 30, 22527 Hamburg, Germany
http://www.informatik.uni-hamburg.de/WTM
('11634287', 'Tobias Hinz', 'tobias hinz')
('1736513', 'Stefan Wermter', 'stefan wermter')
{4hinz,barros,wermter}@informatik.uni-hamburg.de
20a3ce81e7ddc1a121f4b13e439c4cbfb01adfbaSparse-MVRVMs Tree for Fast and Accurate
Head Pose Estimation in the Wild
Augmented Vision Research Group,
German Research Center for Artificial Intelligence (DFKI),
Trippstadter Str. 122, 67663 Kaiserslautern, Germany
Technical University of Kaiserslautern
http://www.av.dfki.de
('2585383', 'Mohamed Selim', 'mohamed selim')
('1771057', 'Alain Pagani', 'alain pagani')
('1807169', 'Didier Stricker', 'didier stricker')
{mohamed.selim,alain.pagani,didier.stricker}@dfki.uni-kl.de
20b994a78cd1db6ba86ea5aab7211574df5940b3Enriched Long-term Recurrent Convolutional Network
for Facial Micro-Expression Recognition
Faculty of Computing and Informatics, Multimedia University, Malaysia
Faculty of Engineering, Multimedia University, Malaysia
Shanghai Jiao Tong University, China
('30470673', 'Huai-Qian Khor', 'huai-qian khor')
('2339975', 'John See', 'john see')
('8131625', 'Weiyao Lin', 'weiyao lin')
Emails: 1hqkhor95@gmail.com, 2johnsee@mmu.edu.my, 3raphael@mmu.edu.my, 4wylin@sjtu.edu.cn
2004afb2276a169cdb1f33b2610c5218a1e47332Hindawi
Computational Intelligence and Neuroscience
Volume 2018, Article ID 3803627, 11 pages
https://doi.org/10.1155/2018/3803627
Research Article
Deep Convolutional Neural Network Used in Single Sample per
Person Face Recognition
School of Information Engineering, Wuyi University, Jiangmen 529020, China
Received 27 November 2017; Revised 23 May 2018; Accepted 26 July 2018; Published 23 August 2018
Academic Editor: José Alfredo Hernández-Pérez
Face recognition (FR) with a single sample per person (SSPP) is a challenge in computer vision. Since there is only one sample to train on, facial variations such as pose, illumination, and disguise are difficult to predict. To overcome this problem, this paper proposes a scheme combining traditional and deep learning (TDL) methods. First, it proposes an expanding-sample method based on a traditional approach. Compared with other expanding-sample methods, this method can be applied easily and conveniently, and it can generate samples exhibiting disguise, expression, and mixed variations. Second, it uses transfer learning: a well-trained deep convolutional neural network (DCNN) model is introduced, and selected expanded samples are used to fine-tune it. Third, the fine-tuned model is used in the experiments. Experimental results on the AR, Extended Yale B, FERET, and LFW face databases demonstrate that TDL achieves state-of-the-art performance in SSPP FR.
1. Introduction
As artificial intelligence (AI) becomes more and more popular, computer vision (CV) has also proved to be a very active academic topic, covering face recognition [1], facial expression recognition [2], and object recognition [3]. It is well known that a basic and important prerequisite in CV is a large number of training samples. But in real scenarios such as immigration management, fugitive tracing, and video surveillance, there may be only one sample per subject, which leads to the single sample per person (SSPP) problem in tasks such as gait recognition [4], face recognition (FR) [5, 6], and low-resolution face recognition [7]. With the wide use of the second-generation ID card, from which a single face image is convenient to collect, SSPP FR has become one of the most popular topics in both academia and industry.
Beymer and Poggio [8] proposed the one-example-view problem in 1996. In [8], they investigated how to perform face recognition (FR) using a single example view. First, prior knowledge was exploited to generate multiple virtual views. Then, the example view and these virtual views were used together as example views in a view-based, pose-invariant face recognizer. SSPP FR subsequently became a popular research topic at the beginning of the 21st century.
Recently, many methods have been proposed. Generally speaking, they can be grouped into five basic categories: the direct method, the generic learning method, the patch-based method, the expanding-sample method, and the deep learning (DL) method. The direct method applies an algorithm to the single sample directly. The generic learning method uses an auxiliary dataset to build a generic dataset from which variation information can be learned for the single sample. The patch-based method first partitions the single sample into several patches, then extracts features from each patch, and finally performs classification. The expanding-sample method uses special means such as perturbation-based methods [9, 10], photometric transforms, and geometric distortions [11] to increase the number of samples, so that abundant training samples become available for the task. The DL method uses a DL model to perform the research.
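The expanding-sample idea above can be sketched in a few lines. The transforms below (brightness scaling, mirroring, small translations) are illustrative stand-ins for the photometric and geometric distortions mentioned; the paper's exact transform set is not specified in this excerpt:

```python
import numpy as np

def expand_single_sample(face, n_shifts=2, brightness_factors=(0.8, 1.2)):
    """Expand one gallery image (values in [0, 1]) into several virtual
    training samples. Hypothetical sketch of the expanding-sample method."""
    samples = [face]
    # Photometric transforms: global brightness scaling.
    for f in brightness_factors:
        samples.append(np.clip(face * f, 0.0, 1.0))
    # Geometric distortion: horizontal mirror.
    samples.append(face[:, ::-1])
    # Geometric distortion: small horizontal translations (zero padding).
    for dx in range(1, n_shifts + 1):
        right = np.zeros_like(face)
        right[:, dx:] = face[:, :-dx]
        samples.append(right)
        left = np.zeros_like(face)
        left[:, :-dx] = face[:, dx:]
        samples.append(left)
    return np.stack(samples)

face = np.random.default_rng(0).random((4, 4))  # stand-in face image
virtual = expand_single_sample(face)
print(virtual.shape)  # (8, 4, 4): 1 original + 2 brightness + 1 mirror + 4 shifts
```

In the scheme described above, such virtual samples would then be used to fine-tune a pretrained DCNN via transfer learning.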
Attracted by the good performance of DCNNs, inspired by [12], and driven by AI, this paper proposes a scheme combining traditional and deep learning (TDL) methods.
('9363278', 'Junying Zeng', 'junying zeng')
('12054657', 'Xiaoxiao Zhao', 'xiaoxiao zhao')
('2926767', 'Junying Gan', 'junying gan')
('40552250', 'Chaoyun Mai', 'chaoyun mai')
('1716453', 'Fan Wang', 'fan wang')
('3003242', 'Yikui Zhai', 'yikui zhai')
('9363278', 'Junying Zeng', 'junying zeng')
Correspondence should be addressed to Xiaoxiao Zhao; xiaoxiao-zhao@foxmail.com
20e504782951e0c2979d9aec88c76334f7505393Robust LSTM-Autoencoders for Face De-Occlusion
in the Wild
('37182704', 'Fang Zhao', 'fang zhao')
('33221685', 'Jiashi Feng', 'jiashi feng')
('39913117', 'Jian Zhao', 'jian zhao')
('1898172', 'Wenhan Yang', 'wenhan yang')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
209324c152fa8fab9f3553ccb62b693b5b10fb4dCROWDSOURCED VISUAL KNOWLEDGE REPRESENTATIONS
VISUAL GENOME
A THESIS
SUBMITTED TO THE DEPARTMENT OF COMPUTER SCIENCE
AND THE COMMITTEE ON GRADUATE STUDIES
OF STANFORD UNIVERSITY
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF
MASTERS OF SCIENCE
March 2016
('2580593', 'Ranjay Krishna', 'ranjay krishna')
2050847bc7a1a0453891f03aeeb4643e360fde7dAccio: A Data Set for Face Track Retrieval
in Movies Across Age
Istanbul Technical University, Istanbul, Turkey
Karlsruhe Institute of Technology, Karlsruhe, Germany
('2398366', 'Esam Ghaleb', 'esam ghaleb')
('2103464', 'Makarand Tapaswi', 'makarand tapaswi')
('2256981', 'Ziad Al-Halah', 'ziad al-halah')
('1742325', 'Rainer Stiefelhagen', 'rainer stiefelhagen')
{ghalebe, ekenel}@itu.edu.tr, {tapaswi, ziad.al-halah, rainer.stiefelhagen}@kit.edu
20ade100a320cc761c23971d2734388bfe79f7c5IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Subspace Clustering via Good Neighbors
('1755872', 'Jufeng Yang', 'jufeng yang')
('1780418', 'Jie Liang', 'jie liang')
('39329211', 'Kai Wang', 'kai wang')
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
202d8d93b7b747cdbd6e24e5a919640f8d16298aFace Classification via Sparse Approximation
Bilgi University, Dolapdere, Istanbul, TR
Boğaziçi University, Istanbul, TR
Yıldız Teknik University, Istanbul, TR
('2804969', 'Songul Albayrak', 'songul albayrak')
20767ca3b932cbc7b8112db21980d7b9b3ea43a3
20a16efb03c366fa4180659c2b2a0c5024c679daSCREENING RULES FOR OVERLAPPING GROUP LASSO
Carnegie Mellon University
Recently, to solve large-scale lasso and group lasso problems,
screening rules have been developed, the goal of which is to reduce
the problem size by efficiently discarding zero coefficients using simple
rules independently of the others. However, screening for overlapping
group lasso remains an open challenge because the overlaps between
groups make it infeasible to test each group independently. In this
paper, we develop screening rules for overlapping group lasso. To ad-
dress the challenge arising from groups with overlaps, we take into
account overlapping groups only if they are inclusive of the group
being tested, and then we derive screening rules, adopting the dual
polytope projection approach. This strategy allows us to screen each
group independently of each other. In our experiments, we demon-
strate the efficiency of our screening rules on various datasets.
1. Introduction. We propose efficient screening rules for regression
with the overlapping group lasso penalty. Our goal is to develop simple
rules to discard groups with zero coefficients in the optimization problem
with the following form:
(1.1)    min_β ‖y − Xβ‖₂² + λ ∑_{g∈G} √n_g ‖β_g‖₂,
where X ∈ RN×J is the input data for J inputs and N samples, y ∈ RN×1
is the output vector, β ∈ RJ×1 is the vector of regression coefficients, ng
is the size of group g, and λ is a regularization parameter that determines
the sparsity of β. In this setting, G represents a set of groups of coefficients,
defined a priori, and we allow arbitrary overlap between different groups,
hence “overlapping” group lasso. Overlapping group lasso is a general model
that subsumes lasso (Tibshirani, 1996), group lasso (Yuan and Lin, 2006),
sparse group lasso (Simon et al., 2013), composite absolute penalties (Zhao,
Rocha and Yu, 2009), and tree lasso (Zhao, Rocha and Yu, 2009; Kim et al.,
2012) with the ℓ1/ℓ2 penalty because they are a specific form of overlapping
group lasso.
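As a concrete reading of problem (1.1), the sketch below evaluates the overlapping group lasso objective for a candidate β; the function name and the toy data are illustrative, not from the paper:

```python
import numpy as np

def ogl_objective(X, y, beta, groups, lam):
    """Evaluate Eq. (1.1): ||y - X beta||_2^2 + lam * sum_g sqrt(n_g) ||beta_g||_2,
    where `groups` is a list of (possibly overlapping) index arrays."""
    resid = y - X @ beta
    penalty = sum(np.sqrt(len(g)) * np.linalg.norm(beta[g]) for g in groups)
    return float(resid @ resid + lam * penalty)

# Toy problem: two groups sharing coefficient index 1 ("overlapping").
X = np.eye(3)
y = np.array([1.0, 0.0, 0.0])
beta = np.array([1.0, 0.0, 0.0])
groups = [np.array([0, 1]), np.array([1, 2])]
obj = ogl_objective(X, y, beta, groups, lam=2.0)
# Residual is zero, so the objective reduces to 2 * sqrt(2) * ||beta_{g1}||_2.
```

The second group's coefficients are all zero here, so it contributes nothing to the penalty; screening rules aim to detect such groups cheaply, before solving the full problem.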
In this paper, we do not consider the latent group lasso proposed by
Jacob et al. (Jacob, Obozinski and Vert, 2009), where support is defined
by the union of groups with nonzero coefficients. Instead, we consider the
('1918078', 'Seunghak Lee', 'seunghak lee')
('1752601', 'Eric P. Xing', 'eric p. xing')
205b34b6035aa7b23d89f1aed2850b1d3780de35
2014 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP)
978-1-4799-2893-4/14/$31.00 ©2014 IEEE
†Shenzhen Key Lab. of Information Sci&Tech,
Nagaoka University of Technology, Japan
RECOGNITION
1. INTRODUCTION
20c2a5166206e7ffbb11a23387b9c5edf42b5230
20e505cef6d40f896e9508e623bfc01aa1ec3120Fast Online Incremental Attribute-based Object
Classification using Stochastic Gradient Descent and Self-
Organizing Incremental Neural Network
Department of Computational Intelligence and Systems Science,
Tokyo Institute of Technology
4259 Nagatsuta, Midori-ku, Yokohama, 226-8503 JAPAN
('2641676', 'Sirinart Tangruamsub', 'sirinart tangruamsub')
('1711160', 'Aram Kawewong', 'aram kawewong')
('1727786', 'Osamu Hasegawa', 'osamu hasegawa')
(tangruamsub.s.aa, kawewong.a.aa, hasegawa.o.aa)@m.titech.ac.jp
205e4d6e0de81c7dd6c83b737ffdd4519f4f7ffaA Model-Based Facial Expression Recognition
Algorithm using Principal Components Analysis
N. Vretos, N. Nikolaidis and I. Pitas
Informatics and Telematics Institute
Centre for Research and Technology Hellas, Greece
Aristotle University of Thessaloniki
Thessaloniki 54124, Greece Tel,Fax: +30-2310996304
e-mail: vretos,nikolaid,pitas@aiia.csd.auth.gr
2098983dd521e78746b3b3fa35a22eb2fa630299
20b437dc4fc44c17f131713ffcbb4a8bd672ef00Head pose tracking from RGBD sensor based on
direct motion estimation
Warsaw University of Technology, Poland
('1899063', 'Adam Strupczewski', 'adam strupczewski')
('2393538', 'Marek Kowalski', 'marek kowalski')
('1930272', 'Jacek Naruniec', 'jacek naruniec')
206e24f7d4b3943b35b069ae2d028143fcbd0704Learning Structure and Strength of CNN Filters for Small Sample Size Training
IIIT-Delhi, India
('3390448', 'Rohit Keshari', 'rohit keshari')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
('39129417', 'Richa Singh', 'richa singh')
{rohitk, mayank, rsingh}@iiitd.ac.in
208a2c50edb5271a050fa9f29d3870f891daa4dchttp://www.journalofvision.org/content/11/13/24
The resolution of facial expressions of emotion
Aleix M. Martinez
The Ohio State University, Columbus, OH, USA
The Ohio State University, Columbus, OH, USA
Much is known on how facial expressions of emotion are produced, including which individual muscles are most active in
each expression. Yet, little is known on how this information is interpreted by the human visual system. This paper presents
a systematic study of the image dimensionality of facial expressions of emotion. In particular, we investigate how recognition
degrades when the resolution of the image (i.e., number of pixels when seen as a 5.3 by 8 degree stimulus) is reduced. We
show that recognition is only impaired in practice when the image resolution goes below 20 × 30 pixels. A study of the
confusion tables demonstrates that each expression of emotion is consistently confused by a small set of alternatives and
that the confusion is not symmetric, i.e., misclassifying emotion a as b does not imply we will mistake b for a. This
asymmetric pattern is consistent over the different image resolutions and cannot be explained by the similarity of muscle
activation. Furthermore, although women are generally better at recognizing expressions of emotion at all resolutions, the
asymmetry patterns are the same. We discuss the implications of these results for current models of face perception.
Keywords: resolution, facial expressions, emotion
http://www.journalofvision.org/content/11/13/24, doi:10.1167/11.13.24.
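The resolution-reduction manipulation described in the abstract can be sketched minimally as below, using nearest-neighbor subsampling; the study's actual resampling filter is not stated in this excerpt, so this is an illustrative stand-in:

```python
import numpy as np

def reduce_resolution(img, out_h, out_w):
    """Subsample a grayscale image to out_h x out_w pixels by
    nearest-neighbor index selection (illustrative stand-in for the
    stimulus-degradation step)."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[np.ix_(rows, cols)]

face = np.arange(240 * 160, dtype=float).reshape(240, 160)  # stand-in image
low = reduce_resolution(face, 30, 20)  # roughly the 20 x 30 pixel regime studied
print(low.shape)  # (30, 20)
```

Stimuli degraded this way would then be shown at a fixed visual angle, so fewer pixels means coarser spatial information at the same retinal size.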
Introduction
Emotions are fundamental in studies of cognitive science
(Damasio, 1995), neuroscience (LeDoux, 2000), social
psychology (Adolphs, 2003), sociology (Massey, 2002),
economics (Connolly & Zeelenberg, 2002), human evo-
lution (Schmidt & Cohn, 2001), and engineering and
computer science (Pentland, 2000). Emotional states and
emotional analysis are known to influence or mediate
behavior and cognitive processing. Many of these emo-
tional processes may be hidden to an outside observer,
whereas others are visible through facial expressions of
emotion.
Facial expressions of emotion are a consequence of the
movement of the muscles underneath the skin of our face
(Duchenne, 1862/1990). The movement of these muscles
causes the skin of the face to deform in ways that an
external observer can use to interpret the emotion of that
person. Each muscle employed to create these facial
constructs is referred to as an Action Unit (AU). Ekman
and Friesen (1978) identified those AUs responsible for
generating the emotions most commonly seen in the
majority of cultures: anger, sadness, fear, surprise,
happiness, and disgust. For example, happiness generally
involves an upper–backward movement of the mouth
corners; while the mouth is upturned (to produce the
smile), the cheeks lift and the upper corner of the eyes
wrinkle. This is known as the Duchenne (1862/1990)
smile. It requires the activation of two facial muscles:
the zygomatic major (AU 12) to raise the corners of the
mouth and the orbicularis oculi (AU 42) to uplift the
cheeks and form the eye corner wrinkles. The muscles and
mechanisms used to produce the abovementioned facial
expressions of emotion are now quite well understood and
it has been shown that the AUs used in each expression
are relatively consistent from person to person and among
distinct cultures (Burrows & Cohn, 2009).
Yet, as much as we understand the generative process
of facial expressions of emotion, much still needs to be
learned about their interpretation by our cognitive system.
Thus, an important open problem is to define the
computational (cognitive) space of facial expressions of
emotion of the human visual system. In the present paper,
we study the limits of this visual processing of facial
expressions of emotion and what it tells us about how
emotions are represented and recognized by our visual
system. Note that the term “computational space” is used
here to specify the combination of features (dimensions)
used by the cognitive system to determine (i.e., analyze and classify) the appropriate label for each facial expression of emotion.
To properly address the problem stated in the preceding
paragraph, it is worth recalling that some facial expressions
of emotion may have evolved to enhance or reduce our
sensory inputs (Susskind et al., 2008). For example, fear is
associated with a facial expression with open mouth,
nostrils, and eyes and an inhalation of air, as if to enhance
the perception of our environment, while the expression of
disgust closes these channels (Chapman, Kim, Susskind,
& Anderson, 2009). Other emotions, though, may have
evolved for communication purposes (Schmidt & Cohn,
2001). Under this assumption,
the evolution of this
capacity to express emotions had to be accompanied by
doi: 10.1167/11.13.24
Received January 25, 2011; published November 30, 2011
ISSN 1534-7362 * ARVO
('2323717', 'Shichuan Du', 'shichuan du')
207798603e3089a1c807c93e5f36f7767055ec06Modeling the Correlation between
Modality Semantics and Facial Expressions
* Key Laboratory of Pervasive Computing, Ministry of Education
Tsinghua National Laboratory for Information Science and Technology (TNList)
Tsinghua University, Beijing 100084, China
Tsinghua-CUHK Joint Research Center for Media Sciences, Technologies and Systems
Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055, China
† Human-Computer Communications Laboratory, Department of Systems Engineering and Engineering Management,
The Chinese University of Hong Kong, Hong Kong SAR, China
('25714033', 'Jia Jia', 'jia jia')
('37783013', 'Xiaohui Wang', 'xiaohui wang')
('3860920', 'Zhiyong Wu', 'zhiyong wu')
('7239047', 'Lianhong Cai', 'lianhong cai')
Contact E-mail: # zywu@sz.tsinghua.edu.cn, * jjia@tsinghua.edu.cn
20be15dac7d8a5ba4688bf206ad24cab57d532d6Face Shape Recovery and Recognition Using a
Surface Gradient Based Statistical Model
1 Centro de Investigación y Estudios Avanzados del I.P.N., Ramos Arizpe 25900,
Coahuila, Mexico
The University of York, Heslington, York YO10 5DD, United Kingdom
('1679753', 'Edwin R. Hancock', 'edwin r. hancock')
mario.castelan@cinvestav.edu.mx
erh@cs.york.ac.uk
2059d2fecfa61ddc648be61c0cbc9bc1ad8a9f5bTRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 23, NO. 4, APRIL 2015
Co-Localization of Audio Sources in Images Using
Binaural Features and Locally-Linear Regression
∗ INRIA Grenoble Rhˆone-Alpes, Montbonnot Saint-Martin, France
† Univ. Grenoble Alpes, GIPSA-Lab, France
‡ Dept. Electrical Eng., Technion-Israel Inst. of Technology, Haifa, Israel
('3307172', 'Antoine Deleforge', 'antoine deleforge')
206fbe6ab6a83175a0ef6b44837743f8d5f9b7e8
2042aed660796b14925db17c0a8b9fbdd7f3ebacSaliency in Crowd
Department of Electrical and Computer Engineering
National University of Singapore, Singapore
('40452812', 'Ming Jiang', 'ming jiang')
('1946538', 'Juan Xu', 'juan xu')
('3243515', 'Qi Zhao', 'qi zhao')
eleqiz@nus.edu.sg
202dc3c6fda654aeb39aee3e26a89340fb06802aSpatio-Temporal Instance Learning:
Action Tubes from Class Supervision
University of Amsterdam
('2606260', 'Pascal Mettes', 'pascal mettes')
20111924fbf616a13d37823cd8712a9c6b458cd6International Journal of Computer Applications (0975 – 8887)
Volume 130 – No.11, November2015
Linear Regression Line based Partial Face Recognition
Naveena M.
Department of Studies in
Computer Science,
Manasagagothri,
Mysore.
Department of Studies in
Computer Science,
Manasagagothri,
Mysore.
P. Nagabhushan
Department of Studies in
Computer Science,
Manasagagothri,
Mysore.
('33377948', 'G. Hemantha Kumar', 'g. hemantha kumar')
20ebbcb6157efaacf7a1ceb99f2f3e2fdf1384e6Appears in the Second International Conference on Audio- and Video-based Biometric Person Authentication, AVBPA’99, Washington D. C. USA, March 22-24, 1999.
Comparative Assessment of Independent Component
Analysis (ICA) for Face Recognition
George Mason University
University Drive, Fairfax, VA 22030-4444, USA
{cliu, wechsler}@cs.gmu.edu
('39664966', 'Chengjun Liu', 'chengjun liu')
('1781577', 'Harry Wechsler', 'harry wechsler')
20532b1f80b509f2332b6cfc0126c0f80f438f10A deep matrix factorization method for learning
attribute representations
Björn W. Schuller, Senior member, IEEE
('2814229', 'George Trigeorgis', 'george trigeorgis')
('2732737', 'Konstantinos Bousmalis', 'konstantinos bousmalis')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
205af28b4fcd6b569d0241bb6b255edb325965a4Intel Serv Robotics (2008) 1:143–157
DOI 10.1007/s11370-007-0014-z
SPECIAL ISSUE
Facial expression recognition and tracking for intelligent human-robot
interaction
Received: 27 June 2007 / Accepted: 6 December 2007 / Published online: 23 January 2008
© Springer-Verlag 2008
('1716880', 'Y. Yang', 'y. yang')
20cfb4136c1a984a330a2a9664fcdadc2228b0bcSparse Coding Trees with Application to Emotion Classification
Harvard University, Cambridge, MA
('3144257', 'Hsieh-Chung Chen', 'hsieh-chung chen')
('2512314', 'Marcus Z. Comiter', 'marcus z. comiter')
('1731308', 'H. T. Kung', 'h. t. kung')
('1841852', 'Bradley McDanel', 'bradley mcdanel')
20c02e98602f6adf1cebaba075d45cef50de089fVideo Jigsaw: Unsupervised Learning of Spatiotemporal Context for Video
Action Recognition
Georgia Institute of Technology
Carnegie Mellon University
Irfan Essa
Georgia Institute of Technology
('2308598', 'Unaiza Ahsan', 'unaiza ahsan')
('37714701', 'Rishi Madhok', 'rishi madhok')
uahsan3@gatech.edu
rmadhok@andrew.cmu.edu
irfan@gatech.edu
2020e8c0be8fa00d773fd99b6da55029a6a83e3dAn Evaluation of the Invariance Properties
of a Biologically-Inspired System
for Unconstrained Face Recognition
Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Rowland Institute at Harvard, Cambridge, MA 02142, USA
('30017846', 'Nicolas Pinto', 'nicolas pinto')
pinto@mit.edu
cox@rowland.harvard.edu
20a0b23741824a17c577376fdd0cf40101af5880Learning to track for spatio-temporal action localization
Zaid Harchaouia,b
b NYU
a Inria∗
('2492127', 'Philippe Weinzaepfel', 'philippe weinzaepfel')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
firstname.lastname@inria.fr
18c72175ddbb7d5956d180b65a96005c100f6014IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 23, NO. 6,
JUNE 2001
643
From Few to Many: Illumination Cone
Models for Face Recognition under
Variable Lighting and Pose
('3230391', 'Athinodoros S. Georghiades', 'athinodoros s. georghiades')
('1767767', 'Peter N. Belhumeur', 'peter n. belhumeur')
('1765887', 'David J. Kriegman', 'david j. kriegman')
18636347b8741d321980e8f91a44ee054b051574978-1-4244-5654-3/09/$26.00 ©2009 IEEE
37
ICIP 2009
18206e1b988389eaab86ef8c852662accf3c3663
189b1859f77ddc08027e1e0f92275341e5c0fdc6Sparse Representations and Distance Learning for
Attribute based Category Recognition
1 Center for Imaging Science, 2 Department of Computer Engineering
Rochester Institute of Technology, Rochester, NY
('2272443', 'Grigorios Tsagkatakis', 'grigorios tsagkatakis')
{gxt6260, andreas.savakis}@rit.edu
18a9f3d855bd7728ed4f988675fa9405b5478845ISSN: 0976-9102 (ONLINE)
DOI: 10.21917/ijivp.2013.0103
ICTACT JOURNAL ON IMAGE AND VIDEO PROCESSING, NOVEMBER 2013, VOLUME: 04, ISSUE: 02
AN ILLUMINATION INVARIANT TEXTURE BASED FACE RECOGNITION
J. P. College of Engineering, India
Manonmaniam Sundaranar University, India
St. Xavier's Catholic College of Engineering, India
('2792485', 'K. Meena', 'k. meena')
('3311251', 'A. Suruliandi', 'a. suruliandi')
('1998086', 'Reena Rose', 'reena rose')
E-mail: meen.nandhu@gmail.com
E-mail: suruliandi@yahoo.com
E-mail: mailtoreenarose@yahoo.in
181045164df86c72923906aed93d7f2f987bce6cRHEINISCH-WESTFÄLISCHE TECHNISCHE
HOCHSCHULE AACHEN
KNOWLEDGE-BASED SYSTEMS GROUP
Detection and Recognition of Human
Faces using Random Forests for a
Mobile Robot
MASTER OF SCIENCE THESIS
MATRICULATION NUMBER: 26 86 51
SUPERVISOR:
SECOND SUPERVISOR:
PROF. ENRICO BLANZIERI, PH. D.
ADVISERS:
('1779592', 'GERHARD LAKEMEYER', 'gerhard lakemeyer')
('2181555', 'VAISHAK BELLE', 'vaishak belle')
('1779592', 'GERHARD LAKEMEYER', 'gerhard lakemeyer')
('1686596', 'STEFAN SCHIFFER', 'stefan schiffer')
('1879646', 'THOMAS DESELAERS', 'thomas deselaers')
18166432309000d9a5873f989b39c72a682932f5LEARNING A WARPED SUBSPACE MODEL OF FACES
WITH IMAGES OF UNKNOWN POSE AND
ILLUMINATION
GRASP Laboratory, University of Pennsylvania, 3330 Walnut Street, Philadelphia, PA, USA
Keywords:
('2720935', 'Jihun Ham', 'jihun ham')
('1732066', 'Daniel D. Lee', 'daniel d. lee')
jhham@seas.upenn.edu, ddlee@seas.upenn.edu
18d5b0d421332c9321920b07e0e8ac4a240e5f1fCollaborative Representation Classification
Ensemble for Face Recognition
('2972883', 'Suah Kim', 'suah kim')
('2434811', 'Run Cui', 'run cui')
('1730037', 'Hyoung Joong Kim', 'hyoung joong kim')
18d51a366ce2b2068e061721f43cb798177b4bb7Cognition and Emotion
ISSN: 0269-9931 (Print) 1464-0600 (Online) Journal homepage: http://www.tandfonline.com/loi/pcem20
Looking into your eyes: observed pupil size
influences approach-avoidance responses
eyes: observed pupil size influences approach-avoidance responses, Cognition and Emotion, DOI:
10.1080/02699931.2018.1472554
To link to this article: https://doi.org/10.1080/02699931.2018.1472554
View supplementary material
Published online: 11 May 2018.
Submit your article to this journal
View related articles
View Crossmark data
Full Terms & Conditions of access and use can be found at
http://www.tandfonline.com/action/journalInformation?journalCode=pcem20
('47930228', 'Marco Brambilla', 'marco brambilla')
('41074530', 'Marco Biella', 'marco biella')
('47930228', 'Marco Brambilla', 'marco brambilla')
('41074530', 'Marco Biella', 'marco biella')
18c6c92c39c8a5a2bb8b5673f339d3c26b8dcaaeLearning invariant representations and applications
to face verification
Center for Brains, Minds and Machines
McGovern Institute for Brain Research
Massachusetts Institute of Technology
Cambridge MA 02139
('1694846', 'Qianli Liao', 'qianli liao')
lql@mit.edu, jzleibo@mit.edu, tp@ai.mit.edu
185263189a30986e31566394680d6d16b0089772Efficient Annotation of Objects for Video Analysis
Thesis submitted in partial fulfillment
of the requirements for the degree of
MS in Computer Science and Engineering
by
Research
by
Sirnam Swetha
201303014
International Institute of Information Technology
Hyderabad - 500 032, INDIA
June 2018
sirnam.swetha@research.iiit.ac.in
1885acea0d24e7b953485f78ec57b2f04e946eafCombining Local and Global Features for 3D Face Tracking
Megvii (face++) Research
('40448951', 'Pengfei Xiong', 'pengfei xiong')
('1775836', 'Guoqing Li', 'guoqing li')
('3756559', 'Yuhang Sun', 'yuhang sun')
{xiongpengfei, liguoqing, sunyuhang}@megvii.com
184750382fe9b722e78d22a543e852a6290b3f70
18b9dc55e5221e704f90eea85a81b41dab51f7daAttention-based Temporal Weighted
Convolutional Neural Network for
Action Recognition
Xi an Jiaotong University, Xi an, Shannxi 710049, P.R.China
2HERE Technologies, Chicago, IL 60606, USA
3Alibaba Group, Hangzhou, Zhejiang 311121, P.R.China
4Microsoft Research, Redmond, WA 98052, USA
('14800230', 'Jinliang Zang', 'jinliang zang')
('40367806', 'Le Wang', 'le wang')
('46324995', 'Qilin Zhang', 'qilin zhang')
('1786361', 'Zhenxing Niu', 'zhenxing niu')
('1745420', 'Gang Hua', 'gang hua')
('1715389', 'Nanning Zheng', 'nanning zheng')
18a849b1f336e3c3b7c0ee311c9ccde582d7214fInt J Comput Vis
DOI 10.1007/s11263-012-0564-1
Efficiently Scaling up Crowdsourced Video Annotation
A Set of Best Practices for High Quality, Economical Video Labeling
Received: 31 October 2011 / Accepted: 20 August 2012
© Springer Science+Business Media, LLC 2012
('1856025', 'Carl Vondrick', 'carl vondrick')
18cd79f3c93b74d856bff6da92bfc87be1109f80International Journal of Advances in Engineering & Technology, May 2012.
©IJAET ISSN: 2231-1963
AN APPLICATION TO HUMAN FACE PHOTO-SKETCH
SYNTHESIS AND RECOGNITION
1Student and 2Professor & Head,
Bharti Vidyapeeth Deemed University, Pune, India
('35541779', 'Amit R. Sharma', 'amit r. sharma')
('2731104', 'Prakash. R. Devale', 'prakash. r. devale')
182470fd0c18d0c5979dff75d089f1da176ceeebA Multimodal Annotation Schema for Non-Verbal Affective
Analysis in the Health-Care Domain
Federico M. Sukno
Adrià Ruiz
Department of Information and Communication Technologies
Pompeu Fabra University, Spain
Human-Centered Multimedia
Augsburg University, Germany
Louisa Praagst
Institute of Communications Engineering
Ulm University, Germany
Information Technologies Institute
Centre for Research & Technology Hellas, Greece
('33451278', 'Mónica Domínguez', 'mónica domínguez')
('34326647', 'Dominik Schiller', 'dominik schiller')
('2565410', 'Florian Lingenfelser', 'florian lingenfelser')
('8632684', 'Ekeni Kamateri', 'ekeni kamateri')
1862cb5728990f189fa91c67028f6d77b5ac94f6Speeding Up Tracking by Ignoring Features
Hamdi Dibeklioğlu
Pattern Recognition and Bioinformatics Group, Delft University of Technology
Mekelweg 4, 2628 CD Delft, The Netherlands
('2883723', 'Lu Zhang', 'lu zhang')
('1803520', 'Laurens van der Maaten', 'laurens van der maaten')
{lu.zhang, h.dibeklioglu, l.j.p.vandermaaten}@tudelft.nl
1862bfca2f105fddfc79941c90baea7db45b8b16Annotator Rationales for Visual Recognition
University of Texas at Austin
('7408951', 'Jeff Donahue', 'jeff donahue')
('1794409', 'Kristen Grauman', 'kristen grauman')
{jdd,grauman}@cs.utexas.edu
1886b6d9c303135c5fbdc33e5f401e7fc4da6da4Knowledge Guided Disambiguation for Large-Scale
Scene Classification with Multi-Resolution CNNs
('39709927', 'Limin Wang', 'limin wang')
('2072196', 'Sheng Guo', 'sheng guo')
('1739171', 'Weilin Huang', 'weilin huang')
('3331521', 'Yuanjun Xiong', 'yuanjun xiong')
('40285012', 'Yu Qiao', 'yu qiao')
1888bf50fd140767352158c0ad5748b501563833PART 1
THE BASICS
187d4d9ba8e10245a34f72be96dd9d0fb393b1aaGAIDON et al.: MINING VISUAL ACTIONS FROM MOVIES
Mining visual actions from movies
http://lear.inrialpes.fr/people/gaidon/
Marcin Marszałek2
http://www.robots.ox.ac.uk/~marcin/
http://lear.inrialpes.fr/people/schmid/
1 LEAR
INRIA, LJK
Grenoble, France
2 Visual Geometry Group
University of Oxford
Oxford, UK
('1799820', 'Adrien Gaidon', 'adrien gaidon')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
182f3aa4b02248ff9c0f9816432a56d3c8880706Sparse Coding for Classification via Discrimination Ensemble∗
1School of Computer Science & Engineering, South China Univ. of Tech., Guangzhou 510006, China
2School of Automation Science & Engineering, South China Univ. of Tech., Guangzhou 510006, China
National University of Singapore, Singapore
('2217653', 'Yuhui Quan', 'yuhui quan')
('1725160', 'Yong Xu', 'yong xu')
('2111796', 'Yuping Sun', 'yuping sun')
('34881546', 'Yan Huang', 'yan huang')
('39689301', 'Hui Ji', 'hui ji')
{csyhquan@scut.edu.cn, yxu@scut.edu.cn, ausyp@scut.edu.cn, matjh@nus.edu.sg}
18941b52527e6f15abfdf5b86a0086935706e83bDeepGUM: Learning Deep Robust Regression with a
Gaussian-Uniform Mixture Model
1 Inria Grenoble Rhône-Alpes, Montbonnot-Saint-Martin, France,
University of Granada, Granada, Spain
University of Trento, Trento, Italy
('2793152', 'Pablo Mesejo', 'pablo mesejo')
('1780201', 'Xavier Alameda-Pineda', 'xavier alameda-pineda')
('1794229', 'Radu Horaud', 'radu horaud')
firstname.name@inria.fr
185360fe1d024a3313042805ee201a75eac50131
Person De-Identification in Videos
('35624289', 'Prachi Agrawal', 'prachi agrawal')
('1729020', 'P. J. Narayanan', 'p. j. narayanan')
1824b1ccace464ba275ccc86619feaa89018c0adOne Millisecond Face Alignment with an Ensemble of Regression Trees
KTH, Royal Institute of Technology
Computer Vision and Active Perception Lab
Teknikringen 14, Stockholm, Sweden
('2626422', 'Vahid Kazemi', 'vahid kazemi')
('1736906', 'Josephine Sullivan', 'josephine sullivan')
{vahidk,sullivan}@csc.kth.se
18dfc2434a95f149a6cbb583cca69a98c9de9887
27a00f2490284bc0705349352d36e9749dde19abVoxCeleb2: Deep Speaker Recognition
Visual Geometry Group, Department of Engineering Science,
University of Oxford, UK
('2863890', 'Joon Son Chung', 'joon son chung')
('19263506', 'Arsha Nagrani', 'arsha nagrani')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
{joon,arsha,az}@robots.ox.ac.uk
271e2856e332634eccc5e80ba6fa9bbccf61f1be3D Spatio-Temporal Face Recognition Using Dynamic Range Model Sequences
Department of Computer Science
State University of New York at Binghamton, Binghamton, NY
('1681656', 'Yi Sun', 'yi sun')
('8072251', 'Lijun Yin', 'lijun yin')
27846b464369095f4909f093d11ed481277c8bbaJournal of Signal and Information Processing, 2017, 8, 99-112
http://www.scirp.org/journal/jsip
ISSN Online: 2159-4481
ISSN Print: 2159-4465
Real-Time Face Detection and Recognition in
Complex Background
Illinois Institute of Technology, Chicago, Illinois, USA
How to cite this paper: Zhang, X., Gonnot, T. and Saniie, J. (2017) Real-Time Face Detection and Recognition in Complex Background. Journal of Signal and Information Processing, 8, 99-112.
https://doi.org/10.4236/jsip.2017.82007
Received: March 25, 2017
Accepted: May 16, 2017
Published: May 19, 2017
Copyright © 2017 by authors and
Scientific Research Publishing Inc.
This work is licensed under the Creative
Commons Attribution International
License (CC BY 4.0).
http://creativecommons.org/licenses/by/4.0/

Open Access
('1682913', 'Xin Zhang', 'xin zhang')
('2324553', 'Thomas Gonnot', 'thomas gonnot')
('1691321', 'Jafar Saniie', 'jafar saniie')
27eb7a6e1fb6b42516041def6fe64bd028b7614dJoint Unsupervised Deformable Spatio-Temporal Alignment of Sequences
Imperial College London, UK
University of Twente, The Netherlands
Center for Machine Vision and Signal Analysis, University of Oulu, Finland
('1786302', 'Lazaros Zafeiriou', 'lazaros zafeiriou')
('2788012', 'Epameinondas Antonakos', 'epameinondas antonakos')
('1694605', 'Maja Pantic', 'maja pantic')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
⋆{l.zafeiriou12, e.antonakos, s.zafeiriou, m.pantic}@imperial.ac.uk, †PanticM@cs.utwente.nl
2717998d89d34f45a1cca8b663b26d8bf10608a9Real-time Action Recognition with Enhanced Motion Vector CNNs
Shenzhen key lab of Comp. Vis. and Pat. Rec., Shenzhen Institutes of Advanced Technology, CAS, China
Key Laboratory of Embedded System and Service Computing, Ministry of Education, Tongji University, Shanghai, China
3Computer Vision Lab, ETH Zurich, Switzerland
('3047890', 'Bowen Zhang', 'bowen zhang')
('33345248', 'Limin Wang', 'limin wang')
('1915826', 'Zhe Wang', 'zhe wang')
('33427555', 'Yu Qiao', 'yu qiao')
('2774427', 'Hanli Wang', 'hanli wang')
27c66b87e0fbb39f68ddb783d11b5b7e807c76e8Fast Simplex-HMM for One-Shot Learning Activity Recognition
Zaragoza University
Zaragoza, Spain.
Kingston University
London, UK.
('1783769', 'Carlos Medrano', 'carlos medrano')
('1687002', 'Dimitrios Makris', 'dimitrios makris')
[mrodrigo, corrite, ctmedra]@unizar.es
D.Makris@kingston.ac.uk
27a0a7837f9114143717fc63294a6500565294c2Face Recognition in Unconstrained Environments: A
Comparative Study
To cite this version:
Face Recognition in Unconstrained Environments: A Comparative Study. Workshop on Faces in
’Real-Life’ Images: Detection, Alignment, and Recognition, Oct 2008, Marseille, France. 2008.
HAL Id: inria-00326730
https://hal.inria.fr/inria-00326730
Submitted on 5 Oct 2008
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
('1689681', 'Rodrigo Verschae', 'rodrigo verschae')
('1737300', 'Javier Ruiz-Del-Solar', 'javier ruiz-del-solar')
('34047285', 'Mauricio Correa', 'mauricio correa')
27d709f7b67204e1e5e05fe2cfac629afa21699d
271df16f789bd2122f0268c3e2fa46bc0cb5f195Mining Discriminative Co-occurrence Patterns for Visual Recognition
School of EEE
Nanyang Technological University
Singapore 639798
Dept. of Media Analytics
NEC Laboratories America
Cupertino, CA, 95014 USA
EECS Dept.
Northwestern University
Evanston, IL, 60208 USA
('34316743', 'Junsong Yuan', 'junsong yuan')
('40634508', 'Ming Yang', 'ming yang')
('39955137', 'Ying Wu', 'ying wu')
jsyuan@ntu.edu.sg
myang@sv.nec-labs.com
yingwu@eecs.northwestern.edu
275b5091c50509cc8861e792e084ce07aa906549Institut für Informatik
der Technischen
Universität München
Dissertation
Leveraging the User’s Face as a Known Object
in Handheld Augmented Reality
Sebastian Bernhard Knorr
27218ff58c3f0e7d7779fba3bb465d746749ed7cActive Learning for Image Ranking
Over Relative Visual Attributes
by
Department of Computer Science
University of Texas at Austin
('2548555', 'Lucy Liang', 'lucy liang')
('1794409', 'Kristen Grauman', 'kristen grauman')
276dbb667a66c23545534caa80be483222db77693D Res. 2, 03(2011)4
10.1007/3DRes.03(2011)4
3DR REVIEW
An Introduction to Image-based 3D Surface Reconstruction and a
Survey of Photometric Stereo Methods
Received: 21 February 2011 / Revised: 20 March 2011 / Accepted: 11 May 2011
3D Research Center, Kwangwoon University and Springer
('1908324', 'Steffen Herbort', 'steffen herbort')
270733d986a1eb72efda847b4b55bc6ba9686df4We are IntechOpen,
the first native scientific
publisher of Open Access books
27c6cd568d0623d549439edc98f6b92528d39bfeRegressive Tree Structured Model for Facial Landmark Localization
Artificial Vision Lab., Dept Mechanical Engineering
National Taiwan University of Science and Technology
('2329565', 'Kai-Hsiang Chang', 'kai-hsiang chang')
('2421405', 'Shih-Chieh Huang', 'shih-chieh huang')
jison@mail.ntust.edu.tw
273b0511588ab0a81809a9e75ab3bd93d6a0f1e3The final publication is available at Springer via http://dx.doi.org/10.1007/s11042-016-3428-9
Recognition of Facial Expressions Based on Salient
Geometric Features and Support Vector Machines
Korea Electronics Technology Institute, Jeonju-si, Jeollabuk-do 561-844, Rep. of Korea; E-Mails: (deepak, shjeong)@keti.re.kr
Division of Computer Engineering, Chonbuk National University, Jeonju-si, Jeollabuk-do, Rep. of Korea; E-Mail: chlee@jbnu.ac.kr
School of Computing Science, Simon Fraser University, Burnaby, B.C., Canada; E-Mail: li@cs.sfu.ca
('32322842', 'Deepak Ghimire', 'deepak ghimire')
('2034182', 'Joonwhoan Lee', 'joonwhoan lee')
('1689656', 'Ze-Nian Li', 'ze-nian li')
('31984909', 'SungHwan Jeong', 'sunghwan jeong')
* Author to whom correspondence should be addressed; E-Mail: chlee@jbnu.ac.kr; Tel.: +82-63-270-2406; Fax: +82-63-270-2394.
27169761aeab311a428a9dd964c7e34950a62a6bInternational Journal of the Physical Sciences Vol. 5(13), pp. 2020 -2029, 18 October, 2010
Available online at http://www.academicjournals.org/IJPS
ISSN 1992 - 1950 ©2010 Academic Journals
Full Length Research Paper
Face recognition using 3D head scan data based on
Procrustes distance
Kongju National University, South Korea
Korean Research Institute of Standards and Science (KRISS), Korea
Accepted 6 July, 2010
Recently, face recognition has attracted significant attention from researchers and scientists in
various fields, such as biomedical informatics, pattern recognition, and vision, due to its
applications in commercially available systems and for defense and security purposes. In this paper a practical
method for face recognition utilizing head cross-section data based on Procrustes analysis is
proposed. The proposed method relies on shape signatures of the contours extracted from face data.
The shape signatures are created by calculating the centroid distance of the boundary points, which is
a translation- and rotation-invariant signature. The shape signatures for a selected region of interest
(ROI) are used as feature vectors, and authentication is performed using them. After extracting feature
vectors, a comparison analysis is performed utilizing the Procrustes distance to differentiate face
patterns from each other. The proposed scheme attains an equal error rate (EER) of 4.563% for the 400
head scans from 100 subjects. The performance of face recognition was analyzed using a K-nearest-neighbour
classifier. The experimental results presented here verify that the proposed method
is considerably effective.
Key words: Face, biometrics, Procrustes distance, equal error rate, k nearest classifier.
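The centroid-distance signature and Procrustes-style comparison described in the abstract can be sketched as follows; the function names, the fixed resampling length, and the simplified 1-D normalization are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def centroid_distance_signature(boundary, n_samples=64):
    """Translation- and rotation-invariant shape signature:
    distance of each boundary point to the contour centroid,
    resampled to a fixed length so signatures are comparable."""
    boundary = np.asarray(boundary, dtype=float)
    d = np.linalg.norm(boundary - boundary.mean(axis=0), axis=1)
    idx = np.linspace(0, len(d) - 1, n_samples)
    return np.interp(idx, np.arange(len(d)), d)

def procrustes_distance(sig_a, sig_b):
    """Simplified Procrustes-style disparity between two signatures:
    remove mean and scale, then compare point-wise."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    return float(np.linalg.norm(a - b))
```

Matching would then assign a probe to the subject whose gallery signatures minimize this distance, e.g. with a K-nearest-neighbour vote as in the paper.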
INTRODUCTION
Perhaps the face is the easiest means by which one person identifies another. In general,
humans can identify themselves and others by faces in a scene without much
effort, but face recognition systems that implement this task are very challenging to design.
The challenges are even greater when there is a wide range of variation
due to imaging conditions. Both inter- and intra-subject
variations are associated with face images. Physical similarity among individuals is responsible for
inter-subject variation, whereas intra-subject variation depends on
aspects such as age, head pose, facial appearance, lighting, and the presence of other
objects/people. However, in face recognition it has been
observed that inter-person variations arise from
variations in local geometric features. Automatic face
recognition has been widely studied during the last few
decades. It is an active research area spanning many
disciplines such as image processing, pattern recognition,
computer vision, neural networks, artificial intelligence,
and biometrics.
Many researchers from these different disciplines work
toward the goal of endowing machines or computers with
the ability to recognize human faces as we human beings
do, effortlessly, in our everyday life (Brunelli and Poggio,
1993; Samaria, 1994; Wiskott et al., 1997; Turk and
Pentland, 1991; Belhumeur et al., 1997; He et al., 2005;
Wiskott et al., 1997; Lanitis et al., 1995; Cootes et al.,
2001; Brunelli and Poggio, 1993; Turk, 1991; Bellhumer
et al., 1997). Face recognition has a wide range of
potential applications
for commercial, security, and
forensic purposes. These applications include automated
crowd
shot
identification (e.g., for issuing driver licenses), credit card
authorization, ATM machine access control, design of
human computer interfaces, etc. The rapid evaluation in
face recognition research can be found by the progress
of systematic evaluation standards that includes the
FERET, FRVT 2000, FRVT 2002, and XM2VTS
protocols, and many existing software packages for
example FaceIt, FaceVACS, FaceSnap Recorder,
control, mug
surveillance,
access
('3222448', 'Sikyung Kim', 'sikyung kim')
('2387342', 'Se Jin Park', 'se jin park')
*Corresponding author. E-mail: mynudding@yahoo.com.
27da432cf2b9129dce256e5bf7f2f18953eef5a5
27961bc8173ac84fdbecacd01e5ed6f7ed92d4bdTo Appear in The IEEE 6th International Conference on Biometrics: Theory, Applications and
Systems (BTAS), Sept. 29-Oct. 2, 2013, Washington DC, USA
Automatic Multi-view Face Recognition via 3D Model Based Pose Regularization
Department of Computer Science and Engineering
Michigan State University, East Lansing, MI, U.S.A
('1883998', 'Koichiro Niinuma', 'koichiro niinuma')
('34393045', 'Hu Han', 'hu han')
('6680444', 'Anil K. Jain', 'anil k. jain')
{niinumak, hhan, jain}@msu.edu
27173d0b9bb5ce3a75d05e4dbd8f063375f24bb5ISSN : 2248-9622, Vol. 4, Issue 10( Part - 3), October 2014, pp.40-44
RESEARCH ARTICLE
OPEN ACCESS
Effect of Different Occlusion on Facial Expressions Recognition
RGPV University, Indore
RGPV University, Indore
('2890210', 'Ramchand Hablani', 'ramchand hablani')
2784d9212dee2f8a660814f4b85ba564ec333720Learning Class-Specific Image Transformations with Higher-Order Boltzmann
Machines
Erik Learned-Miller
University of Massachusetts Amherst
Amherst, MA
('3219900', 'Gary B. Huang', 'gary b. huang')
{gbhuang,elm}@cs.umass.edu
2717b044ae9933f9ab87f16d6c611352f66b2033GNAS: A Greedy Neural Architecture Search Method for
Multi-Attribute Learning
Zhejiang University, 2Southwest Jiaotong University, 3Carnegie Mellon University
('2986516', 'Siyu Huang', 'siyu huang')
('50079147', 'Xi Li', 'xi li')
('1720488', 'Zhongfei Zhang', 'zhongfei zhang')
{siyuhuang,xilizju,zhongfei}@zju.edu.cn,zhiqicheng@gmail.com,alex@cs.cmu.edu
2770b095613d4395045942dc60e6c560e882f887GridFace: Face Rectification via Learning Local
Homography Transformations
Face++, Megvii Inc.
('1848243', 'Erjin Zhou', 'erjin zhou')
('2695115', 'Zhimin Cao', 'zhimin cao')
('40055995', 'Jian Sun', 'jian sun')
{zej,czm,sunjian}@megvii.com
27cccf992f54966feb2ab4831fab628334c742d8International Journal of Computer Applications (0975 – 8887)
Volume 64– No.18, February 2013
Facial Expression Recognition by Statistical, Spatial
Features and using Decision Tree
Assistant Professor
CSIT Department
GGV BIlaspur, Chhattisgarh
India
Assistant Professor
Electronics (ECE) Department
JECRC Jaipur, Rajasthan India
IshanBhardwaj
Student of Ph.D.
Electrical Department
NIT Raipur, Chhattisgarh India
('8836626', 'Nazil Perveen', 'nazil perveen')
('2092589', 'Darshan Kumar', 'darshan kumar')
27883967d3dac734c207074eed966e83afccb8c3This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication.
Two-dimensional Maximum Local Variation based on Image Euclidean Distance for Face
Recognition
State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an 710071, China
State Key Laboratory of CAD and CG, Zhejiang University, Hangzhou 310058, China
The Chinese University of Hong Kong, Hong Kong
('38469552', 'Quanxue Gao', 'quanxue gao')
270e5266a1f6e76954dedbc2caf6ff61a5fbf8d0EmotioNet Challenge: Recognition of facial expressions of emotion in the wild
Dept. Electrical and Computer Engineering
The Ohio State University
('8038057', 'Ramprakash Srinivasan', 'ramprakash srinivasan')
('9947018', 'Qianli Feng', 'qianli feng')
('1678691', 'Yan Wang', 'yan wang')
27f8b01e628f20ebfcb58d14ea40573d351bbaadDEPARTMENT OF INFORMATION ENGINEERING AND COMPUTER SCIENCE
ICT International Doctoral School
Events based Multimedia Indexing
and Retrieval
SUBMITTED TO THE DEPARTMENT OF
INFORMATION ENGINEERING AND COMPUTER SCIENCE (DISI)
IN THE PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE
OF
DOCTOR OF PHILOSOPHY
Advisor:
Examiners: Prof. Marco Carli, Universit`a degli Studi di Roma Tre, Italy
Prof. Nicola Conci, Universit`a degli Studi di Trento, Italy
Prof. Pietro Zanuttigh, Universit`a degli Studi di Padova, Italy
Prof. Giulia Boato, Universit`a degli Studi di Trento, Italy
December 2017
('36296712', 'Kashif Ahmad', 'kashif ahmad')
2742a61d32053761bcc14bd6c32365bfcdbefe35Submitted 9/13; Revised 6/14; Published 2/15
Learning Transformations for Clustering and Classification
Department of Electrical and Computer Engineering
Duke University
Durham, NC 27708, USA
Department of Electrical and Computer Engineering
Department of Computer Science
Department of Biomedical Engineering
Duke University
Durham, NC 27708, USA
Editor: Ben Recht
('2077648', 'Qiang Qiu', 'qiang qiu')
('1699339', 'Guillermo Sapiro', 'guillermo sapiro')
qiang.qiu@duke.edu
guillermo.sapiro@duke.edu
27dafedccd7b049e87efed72cabaa32ec00fdd45Unsupervised Visual Alignment with Similarity Graphs
Tampere University of Technology, Finland
('2416841', 'Fatemeh Shokrollahi Yancheshmeh', 'fatemeh shokrollahi yancheshmeh')
('40394658', 'Ke Chen', 'ke chen')
{fatemeh.shokrollahiyancheshmeh, ke.chen, joni.kamarainen}@tut.fi
27a299b834a18e45d73e0bf784bbb5b304c197b3Social Role Discovery in Human Events
Stanford University
Figure: Social Role Model with pairwise interaction features among roles (bridesmaids, bride, groom, groomsman) and weights α (unary) and β (interaction).
Introduction
• Social roles describe humans in an event
• Social roles of humans are dependent on
- their actions in a social setting
- their interactions with other roles
• Obtaining role annotations for training is expensive
• Goal: discover role clusters in a social event based on role-specific interactions
1. Input: videos with human tracks
2. Extract unary and interaction features
3. Output: cluster people into social roles
Our Approach
- Does not require role annotations
- Clusters people into roles based on interactions as well as person-specific features
Results: Clustering Accuracy
• New YouTube dataset: ~40 videos with 160-240 people per event
• Human tracks and ground-truth roles annotated
Table: clustering accuracy (%) of the prior baseline, K-means, Only unary, Interaction as context, No spatial, No proxemic, and the Full Model on the Birthday, Wedding, Award Function, and Physical Training events (reported values range from 20.17% to 83.12%).
• Only unary – no interaction feature
• Interaction as context – average interaction as unary
• No spatial – only proxemic interaction
• No proxemic – only spatial interaction
Unary features Ψu:
- HOG3D and trajectory features to capture action
- Gender and color histogram features
- Object interaction features
Pairwise features Ψp:
- Spatio-temporal trajectory features
- Proxemic [2] interaction features
Notation: si is the social role assignment, with a reference role assignment per event; α are the unary feature weights and β the interaction feature weights. Role assignments and weights are jointly inferred by variational inference, with interactions restricted to the reference role for tractable inference.
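As a toy illustration of jointly assigning roles from unary and interaction costs with a fixed reference role: this is an iterated-conditional-modes sketch with made-up cost matrices, not the authors' variational inference scheme.

```python
import numpy as np

def assign_roles(unary_cost, pair_cost, n_iters=5):
    """Assign each person a role by minimizing unary + interaction cost.
    unary_cost: (n_people, n_roles) person-specific costs.
    pair_cost:  (n_roles, n_roles) cost of pairing a candidate role
                (row) with the reference role (column); role 0 is
                taken as the reference role here."""
    n_people, n_roles = unary_cost.shape
    roles = unary_cost.argmin(axis=1)      # greedy initialization
    ref_role = 0
    for _ in range(n_iters):               # iterated conditional modes
        for i in range(n_people):
            cost = unary_cost[i] + pair_cost[:, ref_role]
            roles[i] = int(np.argmin(cost))
    return roles
```

The interaction term can override a person's individually best role, which is the point of modeling role-specific interactions rather than clustering on unary features alone.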
Results: Role Clusters
• Spatial relations in wedding: the cross-arrow marks the position of the reference role (groom); the color of a cross represents the ground-truth role for wrong assignments.
• Discovered roles include bride, groom, priest, groomsmen, bridesmaids, birthday person, parent, friends, guest, presenter, recipient, host, and distributor.
[1] V. Ramanathan, B. Yao, L. Fei-Fei. Social Role Discovery in Human Events. In CVPR, 2013.
[2] Y. Yang, S. Baker, A. Kannan, and D. Ramanan. Recognizing proxemics in personal photos. In CVPR, 2012.
This work was supported in part by DARPA Minds Eye, NSF, Intel, Microsoft Research, Google Research and the Intelligence Advanced
Research Projects Activity* (IARPA) via Department of Interior National Business Center contract number D11PC20069.
* The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright thereon. Disclaimer: The views and conclusions contained herein are
those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/NBC, or the U.S. Government.
('34066479', 'Vignesh Ramanathan', 'vignesh ramanathan')
('38916673', 'Bangpeng Yao', 'bangpeng yao')
('3216322', 'Li Fei-Fei', 'li fei-fei')
{vigneshr, bangpeng, feifeili}@cs.stanford.edu
27b1670e1b91ab983b7b1ecfe9eb5e6ba951e0baComparison between k-nn and svm method
for speech emotion recognition
Anjuman College of Engineering and Technology, Sadar, Nagpur, India
('27879696', 'Muzaffar Khan', 'muzaffar khan')
274f87ad659cd90382ef38f7c6fafc4fc7f0d74d
27ee8482c376ef282d5eb2e673ab042f5ded99d7Scale Normalization for the Distance Maps AAM.
Avenue de la Boulaie, BP 81127,
35511 Cesson-Sévigné, France
Supélec, IETR-SCEE Team
('31491147', 'Denis Giri', 'denis giri')
('2861129', 'Maxime Rosenwald', 'maxime rosenwald')
('32420329', 'Benjamin Villeneuve', 'benjamin villeneuve')
('3353560', 'Sylvain Le Gallou', 'sylvain le gallou')
Email: {denis.giri, maxime.rosenwald, benjamin.villeneuve, sylvain.legallou, renaud.seguier}@supelec.fr
4b4106614c1d553365bad75d7866bff0de6056edUnconstrained Facial Images: Database for Face
Recognition under Real-world Conditions⋆
1 Dept. of Computer Science & Engineering
University of West Bohemia
Plzeň, Czech Republic
2 NTIS - New Technologies for the Information Society
University of West Bohemia
Plzeň, Czech Republic
('2628715', 'Ladislav Lenc', 'ladislav lenc')
{llenc,pkral}@kiv.zcu.cz
4bb03b27bc625e53d8d444c0ba3ee235d2f17e86Reading Between The Lines: Object Localization
Using Implicit Cues from Image Tags
Department of Computer Science
University of Texas at Austin
('35788904', 'Sung Ju Hwang', 'sung ju hwang')
('1794409', 'Kristen Grauman', 'kristen grauman')
{sjhwang,grauman}@cs.utexas.edu
4b89cf7197922ee9418ae93896586c990e0d2867LATEX Author Guidelines for CVPR Proceedings
First Author
Institution1
Institution1 address
firstauthor@i1.org
4bc9a767d7e63c5b94614ebdc24a8775603b15c9University of Trento
Doctoral Thesis
Understanding Visual Information:
from Unsupervised Discovery to
Minimal Effort Domain Adaptation
Author:
Supervisor:
Dr. Nicu Sebe
A thesis submitted in fulfilment of the requirements
for the degree of Doctor of Philosophy
in the
International Doctorate School in Information and Communication Technologies
Department of Information Engineering and Computer Science
Multimedia and Human Understanding Group (MHUG)
April 2015
('2933565', 'Gloria Zen', 'gloria zen')
4b519e2e88ccd45718b0fc65bfd82ebe103902f7A Discriminative Model for Age Invariant Face
Recognition
Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, China
Michigan State University, E. Lansing, MI 48823, USA
Korea University, Seoul 136-713, Korea
('1911510', 'Zhifeng Li', 'zhifeng li')
('2222919', 'Unsang Park', 'unsang park')
('6680444', 'Anil K. Jain', 'anil k. jain')
4b3f425274b0c2297d136f8833a31866db2f2aecThis is a pre-print of the original paper accepted for publication in the CVPR 2017 Biometrics Workshop.
Toward Open-Set Face Recognition
Manuel Günther
Vision and Security Technology Lab, University of Colorado Colorado Springs
('39616991', 'Steve Cruz', 'steve cruz')
('1760117', 'Terrance E. Boult', 'terrance e. boult')
('39886114', 'Ethan M. Rudd', 'ethan m. rudd')
{mgunther,scruz,erudd,tboult}@vast.uccs.edu
4b7c110987c1d89109355b04f8597ce427a7cd72ORIGINAL RESEARCH ARTICLE
published: 16 October 2014
doi: 10.3389/fnhum.2014.00804
Feature- and Face-Exchange illusions: new insights and
applications for the study of the binding problem
American University, Washington, DC, USA
University of Nevada, Reno, Reno, NV, USA
Edited by:
Baingio Pinna, University of
Sassari, Italy
Reviewed by:
Stephen Louis Macknik, Barrow
Neurological Institute, USA
Susana Martinez-Conde, Barrow
Neurological Institute, USA
*Correspondence:
Psychology, American University
4400 Massachusetts Avenue NW,
Washington, DC 20016, USA
The binding problem is a longstanding issue in vision science: i.e., how are humans able to
maintain a relatively stable representation of objects and features even though the visual
system processes many aspects of the world separately and in parallel? We previously
investigated this issue with a variant of the bounce-pass paradigm, which consists of two
rectangular bars moving in opposite directions; if the bars are identical and never overlap,
the motion could equally be interpreted as bouncing or passing. Although bars of different
colors should be seen as passing each other (since the colors provide more information
about the bars’ paths), we found “Feature Exchange”: observers reported the paradoxical
perception that the bars appear to bounce off of each other and exchange colors. Here we
extend our previous findings with three demonstrations. “Peripheral Feature-Exchange”
consists of two colored bars that physically bounce (they continually meet in the middle
of the monitor and return to the sides). When viewed in the periphery, the bars appear
to stream past each other even though this percept relies on the exchange of features
and contradicts the information provided by the color of the bars. In “Face-Exchange”
two different faces physically pass each other. When fixating centrally, observers typically
report the perception of bouncing faces that swap features, indicating that the Feature
Exchange effect can occur even with complex objects. In “Face-Go-Round,” one face
repeatedly moves from left to right on the top of the monitor, and the other from right
to left at the bottom of the monitor. Observers typically perceive the faces moving in a
circle—a percept that contradicts information provided by the identity of the faces. We
suggest that Feature Exchange and the paradigms used to elicit it can be useful for the
investigation of the binding problem as well as other contemporary issues of interest to
vision science.
Keywords: motion perception, object perception, binding problem, visual periphery, animation, bouncing
streaming illusions, illusion of causality
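A small sketch makes the ambiguity at the heart of the bounce-pass display concrete: with identical, never-overlapping bars, the frame-by-frame positions under a "pass" interpretation and a "bounce" interpretation form the same set, so the stimulus alone cannot decide between the two percepts (the one-dimensional coordinate convention below is an assumption for illustration):

```python
# One spatial dimension, time t in [0, 1]; bars start at positions 0 and 1.

def pass_positions(t):
    """Bar A streams left-to-right, bar B right-to-left."""
    return (t, 1.0 - t)

def bounce_positions(t):
    """Both bars meet in the middle at t = 0.5 and return home."""
    a = t if t <= 0.5 else 1.0 - t
    b = 1.0 - t if t <= 0.5 else t
    return (a, b)

# The set of occupied positions is identical in every frame.
for t in [i / 20 for i in range(21)]:
    assert sorted(pass_positions(t)) == sorted(bounce_positions(t))
```

When identical bars additionally exchange places exactly at the crossing, as in the Feature-Exchange displays, even the momentary-overlap cue is removed.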
INTRODUCTION
The “binding problem” refers to the observation that the brain
processes many aspects of the visual world separately and in
parallel, yet we perceive a unified world, populated by coherent
objects (James, 1890; Treisman, 1996; Holcombe et al., 2009). The
implication is that the visual system binds together the output of
separate processes (which presumably compute features, textures,
colors, motion gradients, etc.) prior to creating our object-centric
perceptual world. Two fundamental questions of the binding
problem can be summarized as follows: (1) How, and under
what conditions, does the brain combine (or fail to combine) the
outputs of these separate processes to construct an object representation? (2) How are object representations maintained over
time and space?
We recently examined the spatiotemporal conditions and the
role feature-level processes play in representing and maintaining
objects (Caplovitz et al., 2011) using a variant of the “bounce-pass paradigm” (Metzger, 1934; Michotte, 1946/1963; Kanizsa,
1969). In a typical version of the bounce-pass paradigm, the
interpretation of motion direction and object correspondence
direction is intrinsically ambiguous, and the degree to which
observers report one or the other of the potential percepts has
been used to study a range of perceptual and cognitive processes.
For example, versions of this basic paradigm have been used to
study properties of cross-modal interactions and motion perception as well as object representations (Bertenthal et al., 1993;
Watanabe and Shimojo, 1998; Sekuler and Sekuler, 1999; Mitroff
et al., 2005; Feldman and Tremoulet, 2006).
The basic paradigm (illustrated in Figure 1A) consists of two
rectangles; one that moves from right to left while the other moves
from left to right. The display is ambiguous because the stimulus
is wholly consistent with each rectangle passing from one side of
the screen to the other (i.e., the perception of streaming) or as
bouncing off of the other rectangle and returning to its point of
origin (i.e., the perception of bouncing). If, at the point of intersection, one rectangle overlaps with the other rectangle, observers
will commonly perceive streaming (Sekuler and Sekuler, 1999).
In our experiments, this potential cue is removed: at the critical
point of intersection, the rectangles exactly exchange places and
thus never have an overlapping edge. When the two rectangles are
Frontiers in Human Neuroscience
www.frontiersin.org
October 2014 | Volume 8 | Article 804 | 1
HUMAN NEUROSCIENCE
('31981243', 'Arthur G. Shapiro', 'arthur g. shapiro')
('8369036', 'Gideon P. Caplovitz', 'gideon p. caplovitz')
('23863232', 'Erica L. Dixon', 'erica l. dixon')
('31981243', 'Arthur G. Shapiro', 'arthur g. shapiro')
e-mail: arthur.shapiro@american.edu
4bd088ba3f42aa1e43ae33b1988264465a643a1fTechnical Report, IDE0852, May 2008
Multiview Face Detection Using
Gabor Filters and
Support Vector Machine
Bachelor’s Thesis in Computer Systems Engineering
School of Information Science, Computer and Electrical Engineering

Halmstad University
4bfce41cc72be315770861a15e467aa027d91641Active Annotation Translation
Caltech
Kristján Eldjárn Hjörleifsson
University of Iceland
Caltech
('3251767', 'Steve Branson', 'steve branson')
('1690922', 'Pietro Perona', 'pietro perona')
sbranson@caltech.edu
keh4@hi.is
perona@caltech.edu
4b61d8490bf034a2ee8aa26601d13c83ad7f843aA Modulation Module for Multi-task Learning with
Applications in Image Retrieval
Northwestern University
2 AIBee
3 Bytedance AI Lab
Carnegie Mellon University
('8343585', 'Xiangyun Zhao', 'xiangyun zhao')
4bd3de97b256b96556d19a5db71dda519934fd53Latent Factor Guided Convolutional Neural Networks for Age-Invariant Face
Recognition
School of Electronic and Information Engineering, South China University of Technology
Shenzhen Key Lab of Comp. Vis. and Pat. Rec., Shenzhen Institutes of Advanced Technology, CAS, China
('2512949', 'Yandong Wen', 'yandong wen')
('32787758', 'Zhifeng Li', 'zhifeng li')
('33427555', 'Yu Qiao', 'yu qiao')
yd.wen@siat.ac.cn, zhifeng.li@siat.ac.cn, yu.qiao@siat.ac.cn
4b04247c7f22410681b6aab053d9655cf7f3f888Robust Face Recognition by Constrained Part-based
Alignment
('1692992', 'Yuting Zhang', 'yuting zhang')
('2370507', 'Kui Jia', 'kui jia')
('7135663', 'Yueming Wang', 'yueming wang')
('1734380', 'Gang Pan', 'gang pan')
('1926757', 'Tsung-Han Chan', 'tsung-han chan')
('1700297', 'Yi Ma', 'yi ma')
4b60e45b6803e2e155f25a2270a28be9f8bec130Attribute Based Object Identification ('1686318', 'Yuyin Sun', 'yuyin sun')
('1766509', 'Liefeng Bo', 'liefeng bo')
('1731079', 'Dieter Fox', 'dieter fox')
4b48e912a17c79ac95d6a60afed8238c9ab9e553JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015
Minimum Margin Loss for Deep Face Recognition
('49141822', 'Xin Wei', 'xin wei')
('3552546', 'Hui Wang', 'hui wang')
('2986129', 'Huan Wan', 'huan wan')
4b5eeea5dd8bd69331bd4bd4c66098b125888deaHuman Activity Recognition Using Conditional
Random Fields and Privileged Information
submitted to
the Inquiry Committee designated by the General Assembly
Composition of the Department of Computer Science & Engineering
by
in partial fulfillment of the Requirements for the Degree of
DOCTOR OF PHILOSOPHY
February 2016
('2045915', 'Michalis Vrigkas', 'michalis vrigkas')
4bbbee93519a4254736167b31be69ee1e537f942
4b74f2d56cd0dda6f459319fec29559291c61bffCHIACHIA ET AL.: PERSON-SPECIFIC SUBSPACES FOR FAMILIAR FACES
Person-Specific Subspace Analysis for
Unconstrained Familiar Face Identification
David Cox2
Institute of Computing
University of Campinas
Campinas, Brazil
Rowland Institute
Harvard University
Cambridge, USA
McGovern Institute
Massachusetts Institute of Technology
Cambridge, USA
4 Department of Computer Science
Universidade Federal de Minas Gerais
Belo Horizonte, Brazil
('1761151', 'Giovani Chiachia', 'giovani chiachia')
('30017846', 'Nicolas Pinto', 'nicolas pinto')
('1679142', 'William Robson Schwartz', 'william robson schwartz')
('2145405', 'Anderson Rocha', 'anderson rocha')
('1716806', 'Alexandre X. Falcão', 'alexandre x. falcão')
giovanichiachia@gmail.com
pinto@mit.edu
william@dcc.ufmg.br
anderson.rocha@ic.unicamp.br
afalcao@ic.unicamp.br
davidcox@fas.harvard.edu
4ba38262fe20fab3e4c80215147b498f83843b93MAKI AND CIPOLLA: OBTAINING THE SHAPE OF A MOVING OBJECT
Obtaining the Shape of a Moving Object
with a Specular Surface
Toshiba Research Europe
Cambridge Research Laboratory
Department of Engineering
University of Cambridge
('1801052', 'Atsuto Maki', 'atsuto maki')
('1745672', 'Roberto Cipolla', 'roberto cipolla')
atsuto.maki@crl.toshiba.co.uk
cipolla@cam.ac.uk
4bbe460ab1b279a55e3c9d9f488ff79884d01608GAGAN: Geometry-Aware Generative Adversarial Networks
Jean Kossaifi∗
Middlesex University London
Imperial College London
('47801605', 'Linh Tran', 'linh tran')
('1780393', 'Yannis Panagakis', 'yannis panagakis')
('1694605', 'Maja Pantic', 'maja pantic')
{jean.kossaifi;linh.tran;i.panagakis;m.pantic}@imperial.ac.uk
4b3eaedac75ac419c2609e131ea9377ba8c3d4b8FAST NEWTON ACTIVE APPEARANCE MODELS
Jean Kossaifi*
* Imperial College London, UK
University of Lincoln, UK
University of Twente, The Netherlands
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
('1694605', 'Maja Pantic', 'maja pantic')
4b507a161af8a7dd41e909798b9230f4ac779315A Theory of Multiplexed Illumination
Dept. Electrical Engineering
Technion - Israel Inst. Technology
Haifa 32000, ISRAEL
Dept. Computer Science
Columbia University
New York, NY 10027
('2159538', 'Yoav Y. Schechner', 'yoav y. schechner')
('1750470', 'Shree K. Nayar', 'shree k. nayar')
('1767767', 'Peter N. Belhumeur', 'peter n. belhumeur')
yoav@ee.technion.ac.il
{nayar,belhumeur}@cs.columbia.edu
4b02387c2db968a70b69d98da3c443f139099e91Detecting facial landmarks in the video based on a hybrid framework
School of Information Engineering, Guangdong University of Technology, 510006 Guangzhou, China
School of Electromechanical Engineering, Guangdong University of Technology, 510006 Guangzhou, China
('1850205', 'Nian Cai', 'nian cai')
('3468993', 'Zhineng Lin', 'zhineng lin')
('2686365', 'Fu Zhang', 'fu zhang')
('39038751', 'Guandong Cen', 'guandong cen')
('40465036', 'Han Wang', 'han wang')
4b6be933057d939ddfa665501568ec4704fabb39
4b71d1ff7e589b94e0f97271c052699157e6dc4aHindawi Publishing Corporation
EURASIP Journal on Advances in Signal Processing
Volume 2008, Article ID 748483, 18 pages
doi:10.1155/2008/748483
Research Article
Pose-Encoded Spherical Harmonics for Face Recognition and
Synthesis Using a Single Image
Center for Automation Research, University of Maryland, College Park, MD 20742, USA
2 Vision Technologies Lab, Sarnoff Corporation, Princeton, NJ 08873, USA
Received 1 May 2007; Accepted 4 September 2007
Recommended by Juwei Lu
Face recognition under varying pose is a challenging problem, especially when illumination variations are also present. In this
paper, we propose to address one of the most challenging scenarios in face recognition. That is, to identify a subject from a test
image that is acquired under a different pose and illumination condition, given only one training sample (also known as a gallery
image) of this subject in the database. For example, the test image could be semifrontal and illuminated by multiple lighting
sources while the corresponding training image is frontal under a single lighting source. Under the assumption of Lambertian
reflectance, the spherical harmonics representation has proved to be effective in modeling illumination variations for a fixed pose.
In this paper, we extend the spherical harmonics representation to encode pose information. More specifically, we utilize the fact
that 2D harmonic basis images at different poses are related by closed-form linear transformations, and give a more convenient
transformation matrix to be directly used for basis images. An immediate application is that we can easily synthesize a different
view of a subject under arbitrary lighting conditions by changing the coefficients of the spherical harmonics representation. A
more important result is an efficient face recognition method, based on the orthonormality of the linear transformations, for
solving the above-mentioned challenging scenario: we directly project a nonfrontal view test image onto the space of frontal
view harmonic basis images. The impact of some empirical factors due to the projection is embedded in a sparse warping matrix;
for most cases, we show that the recognition performance does not deteriorate after warping the test image to the frontal view.
Very good recognition results are obtained using this method for both synthetic and challenging real images.
This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
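The recognition scheme summarized above (project the test image onto each gallery subject's harmonic subspace and rank subjects by reconstruction residual) can be illustrated with a small numerical sketch. The random orthonormal bases below are hypothetical stand-ins for real harmonic basis images; this is not the authors' code:

```python
import numpy as np

# Hypothetical sketch: each gallery subject is represented by an orthonormal
# 9-dimensional "harmonic basis" (random stand-ins here, not real data), and
# a test image is assigned to the subject whose subspace reconstructs it best.
rng = np.random.default_rng(0)
n_pixels, n_harmonics, n_subjects = 1024, 9, 5

# Gallery: one orthonormalized basis (n_pixels x 9) per subject.
gallery = [np.linalg.qr(rng.standard_normal((n_pixels, n_harmonics)))[0]
           for _ in range(n_subjects)]

def recognize(test_image, bases):
    """Least-squares projection onto each subject's subspace; because each
    basis is orthonormal, the projection coefficients are just B.T @ image."""
    residuals = np.array([np.linalg.norm(test_image - B @ (B.T @ test_image))
                          for B in bases])
    return int(residuals.argmin()), residuals

# A test image under arbitrary lighting lies in the true subject's subspace.
test_image = gallery[2] @ rng.standard_normal(n_harmonics)
best, residuals = recognize(test_image, gallery)   # best == 2
```

Because each subspace is spanned by an orthonormal basis, multiplying it by an orthonormal pose transformation leaves the projection residual unchanged, which is why a single set of frontal-view basis images can suffice.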
1.
INTRODUCTION
Face recognition is one of the most successful applications
of image analysis and understanding [1]. Given a database of
training images (sometimes called a gallery set, or gallery im-
ages), the task of face recognition is to determine the facial ID
of an incoming test image. Built upon the success of earlier
efforts, recent research has focused on robust face recogni-
tion to handle the issue of significant difference between a
test image and its corresponding training images (i.e., they
belong to the same subject). Despite significant progress, ro-
bust face recognition under varying lighting and different
pose conditions remains a challenging problem. The
problem becomes even more difficult when only one train-
ing image per subject is available. Recently, methods have
been proposed to handle the combined pose and illumina-
tion problem when only one training image is available, for
example, the method based on morphable models [2] and its
extension [3] that proposes to handle the complex illumina-
tion problem by integrating spherical harmonics representa-
tion [4, 5]. In these methods, either arbitrary illumination
conditions cannot be handled [2] or the expensive computa-
tion of harmonic basis images is required for each pose per
subject [3].
Under the assumption of Lambertian reflectance, the
spherical harmonics representation has proved to be effec-
tive in modelling illumination variations for a fixed pose. In
this paper, we extend the harmonic representation to encode
pose information. We utilize the fact that all the harmonic
basis images of a subject at various poses are related to each
other via closed-form linear transformations [6, 7], and de-
rive a more convenient transformation matrix to analytically
synthesize basis images of a subject at various poses from
just one set of basis images at a fixed pose, say, the frontal
('39265975', 'Zhanfeng Yue', 'zhanfeng yue')
('38480590', 'Wenyi Zhao', 'wenyi zhao')
('9215658', 'Rama Chellappa', 'rama chellappa')
Correspondence should be addressed to Zhanfeng Yue, zyue@cfar.umd.edu
4b0a2937f64df66cadee459a32ad7ae6e9fd7ed2Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset
João Carreira†
†DeepMind
University of Oxford
('1688869', 'Andrew Zisserman', 'andrew zisserman')joaoluis@google.com
zisserman@google.com
4b4ecc1cb7f048235605975ab37bb694d69f63e5Nonlinear Embedding Transform for
Unsupervised Domain Adaptation
Center for Cognitive Ubiquitous Computing
Arizona State University, AZ, USA
('3151995', 'Hemanth Venkateswara', 'hemanth venkateswara')
('2471253', 'Shayok Chakraborty', 'shayok chakraborty')
('1743991', 'Sethuraman Panchanathan', 'sethuraman panchanathan')
{hemanthv,schakr10,panch}@asu.edu
4be03fd3a76b07125cd39777a6875ee59d9889bdCONTENT-BASED ANALYSIS FOR ACCESSING AUDIOVISUAL ARCHIVES:
ALTERNATIVES FOR CONCEPT-BASED INDEXING AND SEARCH
ESAT/PSI - IBBT
KU Leuven, Belgium
('1704728', 'Tinne Tuytelaars', 'tinne tuytelaars')Tinne.Tuytelaars@esat.kuleuven.be
4be774af78f5bf55f7b7f654f9042b6e288b64bdVariational methods for Conditional Multimodal Learning:
Generating Human Faces from Attributes
Indian Institute of Science
Bangalore, India
('2686270', 'Gaurav Pandey', 'gaurav pandey')
('2440174', 'Ambedkar Dukkipati', 'ambedkar dukkipati')
{gp88, ad}@csa.iisc.ernet.in
4b321065f6a45e55cb7f9d7b1055e8ac04713b41Affective Computing Models for Character
Animation
School of Computing and Mathematical Sciences
Liverpool John Moores University
Byrom Street, L3 3AF, Liverpool, UK
('1794784', 'Abdennour El Rhalibi', 'abdennour el rhalibi')
('36782007', 'Christopher Carter', 'christopher carter')
('1768270', 'Madjid Merabti', 'madjid merabti')
R.L.Duarte@2010.ljmu.ac.uk;{A.Elrhalibi; C.J.Carter;M.Merabti}@ljmu.ac.uk
4b605e6a9362485bfe69950432fa1f896e7d19bfTo appear in the CVPR Workshop on Biometrics, June 2016
A Comparison of Human and Automated Face Verification Accuracy on
Unconstrained Image Sets∗
Noblis
Noblis
Noblis
Noblis
Michigan State University
('1917247', 'Austin Blanton', 'austin blanton')
('7996649', 'Kristen C. Allen', 'kristen c. allen')
('15282121', 'Tim Miller', 'tim miller')
('1718102', 'Nathan D. Kalka', 'nathan d. kalka')
('6680444', 'Anil K. Jain', 'anil k. jain')
imaus10@gmail.com
kristen.allen@noblis.org
timothy.miller@noblis.org
nathan.kalka@noblis.org
jain@cse.msu.edu
4b3dd18882ff2738aa867b60febd2b35ab34dffcFACIAL FEATURE ANALYSIS OF
SPONTANEOUS FACIAL EXPRESSION
Computer Laboratory
University of Cambridge
William Gates Building,
Cambridge CB3 0FD UK
Department of Computer Science
The American University in Cairo
113 Kasr Al Aini Street,
P.O. Box 2511, Cairo, Egypt
('1754451', 'Rana El Kaliouby', 'rana el kaliouby')
('3337337', 'Amr Goneid', 'amr goneid')
rana.el-kaliouby@cl.cam.ac.uk
goneid@aucegypt.edu
11a2ef92b6238055cf3f6dcac0ff49b7b803aee3TOWARDS REDUCTION OF THE TRAINING AND SEARCH RUNNING TIME
COMPLEXITIES FOR NON-RIGID OBJECT SEGMENTATION
Instituto de Sistemas e Robótica, Instituto Superior Técnico, 1049-001 Lisboa, Portugal (a)
Australian Centre for Visual Technologies, The University of Adelaide, Australia (b)
('3259175', 'Jacinto C. Nascimento', 'jacinto c. nascimento')
('3265767', 'Gustavo Carneiro', 'gustavo carneiro')
11dc744736a30a189f88fa81be589be0b865c9faA Unified Multiplicative Framework for Attribute Learning
1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing, 100190, China
University of Chinese Academy of Sciences, Beijing 100049, China
('2582309', 'Kongming Liang', 'kongming liang')
('1783542', 'Hong Chang', 'hong chang')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
{kongming.liang, hong.chang, shiguang.shan, xilin.chen}@vipl.ict.ac.cn
11a210835b87ccb4989e9ba31e7559bb7a9fd292A fuzzy approximator with Gaussian membership functions
to estimate a human's head pose
Baradaran-K, M., Toosizadeh, S., Akbarzadeh-T, M.-R., Shekofteh, S.K.
Islamic Azad University, Mashhad Branch, Mashhad, Iran
Ferdowsi University of Mashhad, Mashhad, Iran
Proceedings of the 2010 10th International Conference on Intelligent Systems Design and
Applications, ISDA'10
2010, Article number 5687029, Pages 1154-1158
Cairo, 29 November 2010 through 1 December 2010
ISBN: 978-142448135-4
DOI: 10.1109/ISDA.2010.5687029
Document Type: Conference Paper
Sponsors: Machine Intelligence Research Labs (MIR Labs)
118ca3b2e7c08094e2a50137b1548ada7935e505Workshop track - ICLR 2018
A DATASET TO EVALUATE THE REPRESENTATIONS
LEARNED BY VIDEO PREDICTION MODELS
Toyota Research Institute, Cambridge, MA 2 University of Michigan, Ann Arbor, MI
('34246012', 'Ryan Szeto', 'ryan szeto')
('2307158', 'Simon Stent', 'simon stent')
('3587688', 'Jason J. Corso', 'jason j. corso')
{szetor,jjcorso}@umich.edu
{simon.stent,german.ros}@tri.global
11aa527c01e61ec3a7a67eef8d7ffe9d9ce63f1dAutomated measurement of mouse social behaviors
using depth sensing, video tracking, and
machine learning
and David J. Andersona,1
aDivision of Biology and Biological Engineering 156-29, Howard Hughes Medical Institute, California Institute of Technology, Pasadena, CA
and bDivision of Engineering and Applied Sciences 136-93, California Institute of Technology, Pasadena, CA
Contributed by David J. Anderson, August 16, 2015 (sent for review May 20, 2015)
A lack of automated, quantitative, and accurate assessment of social
behaviors in mammalian animal models has limited progress toward
understanding mechanisms underlying social interactions and their
disorders such as autism. Here we present a new integrated hard-
ware and software system that combines video tracking, depth
sensing, and machine learning for automatic detection and quanti-
fication of social behaviors involving close and dynamic interactions
between two mice of different coat colors in their home cage. We
designed a hardware setup that integrates traditional video cameras
with a depth camera, developed computer vision tools to extract the
body “pose” of individual animals in a social context, and used a
supervised learning algorithm to classify several well-described so-
cial behaviors. We validated the robustness of the automated classi-
fiers in various experimental settings and used them to examine how
genetic background, such as that of Black and Tan Brachyury (BTBR)
mice (a previously reported autism model), influences social behavior.
Our integrated approach allows for rapid, automated measurement
of social behaviors across diverse experimental designs and also af-
fords the ability to develop new, objective behavioral metrics.
social behavior | behavioral tracking | machine vision | depth sensing |
supervised machine learning
Social behaviors are critical for animals to survive and re-
produce. Although many social behaviors are innate, they
must also be dynamic and flexible to allow adaptation to a rap-
idly changing environment. The study of social behaviors in model
organisms requires accurate detection and quantification of such
behaviors (1–3). Although automated systems for behavioral
scoring in rodents are available (4–8), they are generally limited to
single-animal assays, and their capabilities are restricted either to
simple tracking or to specific behaviors that are measured using a
dedicated apparatus (6–11) (e.g., elevated plus maze, light-dark
box, etc.). By contrast, rodent social behaviors are typically scored
manually. This is slow, highly labor-intensive, and subjective,
resulting in analysis bottlenecks as well as inconsistencies between
different human observers. These issues limit progress toward
understanding the function of neural circuits and genes controlling
social behaviors and their dysfunction in disorders such as autism
(1, 12). In principle, these obstacles could be overcome through
the development of automated systems for detecting and mea-
suring social behaviors.
Automating tracking and behavioral measurements during
social interactions poses a number of challenges not encountered
in single-animal assays, however, especially in the home cage
environment (2). During many social behaviors, such as aggression
or mating, two animals are in close proximity and often cross or
touch each other, resulting in partial occlusion. This makes track-
ing body positions, distinguishing each mouse, and detecting be-
haviors particularly difficult. This is compounded by the fact that
such social interactions are typically measured in the animals’
home cage, where bedding, food pellets, and other moveable items
can make tracking difficult. Nevertheless a home-cage environment
is important for studying social behaviors, because it avoids the
stress imposed by an unfamiliar testing environment.
Recently several techniques have been developed to track
social behaviors in animals with rigid exoskeletons, such as the
fruit fly Drosophila, which have relatively few degrees of freedom
in their movements (13–23). These techniques have had a trans-
formative impact on the study of social behaviors in that species
(2). Accordingly, the development of similar methods for mam-
malian animal models, such as the mouse, could have a similar
impact as well. However, endoskeletal animals exhibit diverse and
flexible postures, and their actions during any one social behavior,
such as aggression, are much less stereotyped than in flies. This
presents a dual challenge to automated behavior classification:
first, to accurately extract a representation of an animal’s posture
from observed data, and second, to map that representation to the
correct behavior (24–27). Current machine vision algorithms that
track social interactions in mice mainly use the relative positions of
two animals (25, 28–30); this approach generally cannot discrimi-
nate social interactions that involve close proximity and vigorous
physical activity, or identify specific behaviors such as aggression
and mounting. In addition, existing algorithms that measure social
interactions use a set of hardcoded, “hand-crafted” (i.e., pre-
defined) parameters that make them difficult to adapt to new ex-
perimental setups and conditions (25, 31).
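As a toy illustration of the supervised-classification step described above, the following sketch labels frames with a nearest-centroid rule over simple two-animal pose features. The features, distance ranges, and class names are illustrative assumptions, not the actual pose representation or classifiers of the system:

```python
import numpy as np

# Purely hypothetical sketch: derive simple per-frame features from the two
# animals' poses and assign a behavior label with a nearest-centroid rule.
rng = np.random.default_rng(1)

def pose_features(m1, m2):
    """Features from two (x, y, heading) poses: inter-animal distance and
    how directly each animal faces the other (cosine of bearing offset)."""
    dx, dy = m2[0] - m1[0], m2[1] - m1[1]
    bearing = np.arctan2(dy, dx)
    return np.array([np.hypot(dx, dy),
                     np.cos(bearing - m1[2]),
                     np.cos(bearing - m2[2])])

# Toy training frames: mice close together vs. far apart.
X_close = np.stack([pose_features((0, 0, 0), (rng.uniform(1, 3), 0, np.pi))
                    for _ in range(50)])
X_apart = np.stack([pose_features((0, 0, 0), (rng.uniform(20, 30), 0, np.pi))
                    for _ in range(50)])
centroids = {"investigation": X_close.mean(0), "apart": X_apart.mean(0)}

def classify(frame_pose_pair):
    """Assign the behavior label whose feature centroid is nearest."""
    f = pose_features(*frame_pose_pair)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))
```

In practice, richer pose features and a trained classifier would replace the centroid rule; the point is only the per-frame mapping from pose to behavior label.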
In this study, we combined 3D tracking and machine learning
in an integrated system that can automatically detect, classify,
and quantify distinct social behaviors, including those involving
Significance
Accurate, quantitative measurement of animal social behaviors
is critical, not only for researchers in academic institutions
studying social behavior and related mental disorders, but also for
pharmaceutical companies developing drugs to treat disorders
affecting social interactions, such as autism and schizophrenia.
Here we describe an integrated hardware and software system
that combines video tracking, depth-sensing technology, machine
vision, and machine learning to automatically detect and score
innate social behaviors, such as aggression, mating, and social
investigation, between mice in a home-cage environment. This
technology has the potential to have a transformative impact on
the study of the neural mechanisms underlying social behavior
and the development of new drug therapies for psychiatric dis-
orders in humans.
Author contributions: W.H., P.P., and D.J.A. designed research; W.H. performed research;
W.H., X.P.B.-A., and S.G.N. contributed new reagents/analytic tools; W.H., A.K., M.Z., P.P.,
and D.J.A. analyzed data; and W.H., A.K., M.Z., P.P., and D.J.A. wrote the paper.
The authors declare no conflict of interest.
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.
1073/pnas.1515982112/-/DCSupplemental.
www.pnas.org/cgi/doi/10.1073/pnas.1515982112
PNAS | Published online September 9, 2015 | E5351–E5360
('4502168', 'Weizhe Hong', 'weizhe hong')
('6201086', 'Ann Kennedy', 'ann kennedy')
('4195968', 'Moriel Zelikowsky', 'moriel zelikowsky')
('1690922', 'Pietro Perona', 'pietro perona')
1To whom correspondence may be addressed. Email: whong@caltech.edu, perona@
caltech.edu, or wuwei@caltech.edu.
113c22eed8383c74fe6b218743395532e2897e71MODEC: Multimodal Decomposable Models for Human Pose Estimation
Ben Sapp
Google, Inc
University of Washington
('1685978', 'Ben Taskar', 'ben taskar')bensapp@google.com
taskar@cs.washington.edu
11408af8861fb0a977412e58c1a23d61b8df458cA Robust Learning Algorithm Based on
SURF and PSM for Facial Expression Recognition
Graduate School of Engineering, Kobe University, Kobe, 657-8501, Japan
Graduate School of System Informatics, Kobe University, Kobe, 657-8501, Japan
('2866465', 'Jinhui Chen', 'jinhui chen')
('39484328', 'Xiaoyan Lin', 'xiaoyan lin')
('1744026', 'Tetsuya Takiguchi', 'tetsuya takiguchi')
('1678564', 'Yasuo Ariki', 'yasuo ariki')
ianchen@me.cs.scitec.kobe-u.ac.jp, {takigu,ariki}@kobe-u.ac.jp
11cc0774365b0cc0d3fa1313bef3d32c345507b1Face Recognition Using Active Near-IR
Illumination
Centre for Vision, Speech and Signal Processing
University of Surrey, United Kingdom
x.zou, j.kittler, k.messer
('38746097', 'Xuan Zou', 'xuan zou')
('1748684', 'Josef Kittler', 'josef kittler')
('2173900', 'Kieron Messer', 'kieron messer')
@surrey.ac.uk
11f7f939b6fcce51bdd8f3e5ecbcf5b59a0108f5Rolling Riemannian Manifolds to Solve the Multi-class Classification Problem
Institute of Systems and Robotics - University of Coimbra, Portugal
Portugal
('2117944', 'Rui Caseiro', 'rui caseiro')
('39458914', 'Pedro Martins', 'pedro martins')
('36478254', 'João F. Henriques', 'joão f. henriques')
{ruicaseiro, pedromartins, henriques, batista}@isr.uc.pt, fleite@mat.uc.pt
11269e98f072095ff94676d3dad34658f4876e0eFacial Expression Recognition with Multithreaded
Cascade of Rotation-invariant HOG
Graduate School of System Informatics
Graduate School of System Informatics
Graduate School of System Informatics
Kobe University
Kobe, 657-8501, Japan
Kobe University
Kobe, 657-8501, Japan
Kobe University
Kobe, 657-8501, Japan
In this paper, we propose a novel framework that adopts a
robust feature representation for training the multithreaded
boosting cascade. We adopt rotation-invariant HOG (Ri-HOG)
as features, which is reminiscent of Dalal et al.'s HOG [9].
However, in this paper, we noticeably enhance the conventional
HOG in rotation invariance and feature extraction speed. We
carry out a detailed study of the effects of various
implementation choices on descriptor performance. We subdivide
the local patch into annular spatial bins to achieve spatial
binning invariance. Besides, we apply the radial gradient to attain
gradient binning invariance, which is inspired by Takacs et
al.'s RGT (Radial Gradient Transform) [10].
('2866465', 'Jinhui Chen', 'jinhui chen')
('1744026', 'Tetsuya Takiguchi', 'tetsuya takiguchi')
('1678564', 'Yasuo Ariki', 'yasuo ariki')
Email: ianchen@me.cs.scitec.kobe-u.ac.jp
Email: takigu@kobe-u.ac.jp
Email: ariki@kobe-u.ac.jp
113e5678ed8c0af2b100245057976baf82fcb907Facing Imbalanced Data
Recommendations for the Use of Performance Metrics
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
('1707876', 'Fernando De la Torre', 'fernando de la torre')
1Carnegie Mellon University, Pittsburgh, PA, laszlo.jeni@ieee.org,ftorre@cs.cmu.edu
2University of Pittsburgh, Pittsburgh, PA, jeffcohn@cs.cmu.edu
11691f1e7c9dbcbd6dfd256ba7ac710581552baaSoccerNet: A Scalable Dataset for Action Spotting in Soccer Videos
King Abdullah University of Science and Technology (KAUST), Saudi Arabia
('22314218', 'Silvio Giancola', 'silvio giancola')
('41022271', 'Mohieddine Amine', 'mohieddine amine')
('41015552', 'Tarek Dghaily', 'tarek dghaily')
('2931652', 'Bernard Ghanem', 'bernard ghanem')
silvio.giancola@kaust.edu.sa, maa249@mail.aub.edu, tad05@mail.aub.edu, bernard.ghanem@kaust.edu.sa
11c04c4f0c234a72f94222efede9b38ba6b2306cReal-Time Human Action Recognition by Luminance Field
Trajectory Analysis
Dept of Computing
Kowloon, Hong Kong
+852 2766-7316
Hong Kong Polytechnic University
University of Illinois at Urbana
National University of Singapore
Dept of ECE
Champaign, USA
+1 217-244-2960
Dept of ECE
Singapore
+65 6516-2116
('2659956', 'Zhu Li', 'zhu li')
('1708679', 'Yun Fu', 'yun fu')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
zhu.li@ieee.org
{yunfu2,huang}@ifp.uiuc.edu
elesyan@ece.nus.edu.sg
1128a4f57148cec96c0ef4ae3b5a0fbf07efbad9Action Recognition by Learning Deep Multi-Granular
Spatio-Temporal Video Representation∗
University of Science and Technology of China, Hefei 230026, P. R. China
2 Microsoft Research, Beijing 100080, P. R. China
University of Rochester, NY 14627, USA
('35539590', 'Qing Li', 'qing li')
('3430743', 'Zhaofan Qiu', 'zhaofan qiu')
('2053452', 'Ting Yao', 'ting yao')
('1724211', 'Tao Mei', 'tao mei')
('3663422', 'Yong Rui', 'yong rui')
('33642939', 'Jiebo Luo', 'jiebo luo')
{sealq, qiudavy}@mail.ustc.edu.cn; {tiyao, tmei, yongrui}@microsoft.com;
jluo@cs.rochester.edu
1149c6ac37ae2310fe6be1feb6e7e18336552d95Proc. Int. Conf. on Artificial Neural Networks (ICANN’05), Warsaw, LNCS 3696, vol. I, pp. 569-574, Springer Verlag 2005
Classification of Face Images for Gender, Age,
Facial Expression, and Identity1
Department of Neuroinformatics and Cognitive Robotics
Ilmenau Technical University, P.O.Box 100565, 98684 Ilmenau, Germany
('34420922', 'Torsten Wilhelm', 'torsten wilhelm')
11f17191bf74c80ad0b16b9f404df6d03f7c8814Recognition of Visually Perceived Compositional
Human Actions by Multiple Spatio-Temporal Scales
Recurrent Neural Networks
('1754201', 'Minju Jung', 'minju jung')
('1780524', 'Jun Tani', 'jun tani')
11367581c308f4ba6a32aac1b4a7cdb32cd63137
11a47a91471f40af5cf00449954474fd6e9f7694Article
NIRFaceNet: A Convolutional Neural Network for
Near-Infrared Face Identification
Southwest University, Chongqing 400715, China
† These authors contribute equally to this work.
Academic Editor: Willy Susilo
Received: 16 July 2016; Accepted: 24 October 2016; Published: 27 October 2016
('34063916', 'Min Peng', 'min peng')
('8206607', 'Chongyang Wang', 'chongyang wang')
('34520676', 'Tong Chen', 'tong chen')
('2373829', 'Guangyuan Liu', 'guangyuan liu')
peng2014m@email.swu.edu.cn (M.P.); mvrjustid520@email.swu.edu.cn (C.W.); liugy@swu.edu.cn (G.L.)
* Correspondence: c_tong@swu.edu.cn; Tel.: +86-23-6825-4309
1198572784788a6d2c44c149886d4e42858d49e4Learning Discriminative Features using Encoder/Decoder type Deep
Neural Nets
('2162592', 'Vishwajeet Singh', 'vishwajeet singh')
('40835709', 'Killamsetti Ravi Kumar', 'killamsetti ravi kumar')
1ALPES, Bolarum, Hyderabad 500010, vsthakur@gmail.com
2ALPES, Bolarum, Hyderabad 500010, ravi.killamsetti@gmail.com
3SNIST, Ghatkesar, Hyderabad 501301, kumar.e@gmail.com
11fe6d45aa2b33c2ec10d9786a71c15ec4d3dca8970
JUNE 2008
Tied Factor Analysis for Face Recognition
across Large Pose Differences
('1792404', 'James H. Elder', 'james h. elder')
('1734784', 'Jonathan Warrell', 'jonathan warrell')
('2338011', 'Fatima M. Felisberti', 'fatima m. felisberti')
1134a6be0f469ff2c8caab266bbdacf482f32179IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
FACIAL EXPRESSION IDENTIFICATION USING FOUR-BIT CO-
OCCURRENCE MATRIXFEATURES AND K-NN CLASSIFIER
Aditya College of Engineering, Surampalem, East Godavari
District, Andhra Pradesh, India
('8118823', 'Bala Shankar', 'bala shankar')
('27686729', 'S R Kumar', 's r kumar')
11b3877df0213271676fa8aa347046fd4b1a99adUnsupervised Identification of Multiple Objects of
Interest from Multiple Images: dISCOVER
Carnegie Mellon University
('1713589', 'Devi Parikh', 'devi parikh')
('1746230', 'Tsuhan Chen', 'tsuhan chen')
{dparikh,tsuhan}@cmu.edu
112780a7fe259dc7aff2170d5beda50b2bfa7bda
1130c38e88108cf68b92ecc61a9fc5aeee8557c9Dynamically Encoded Actions based on Spacetime Saliency
Institute of Electrical Measurement and Measurement Signal Processing, TU Graz, Austria
York University, Toronto, Canada
('2322150', 'Christoph Feichtenhofer', 'christoph feichtenhofer')
('1718587', 'Axel Pinz', 'axel pinz')
('1709096', 'Richard P. Wildes', 'richard p. wildes')
{feichtenhofer, axel.pinz}@tugraz.at
wildes@cse.yorku.ca
11b89011298e193d9e6a1d99302221c1d8645bdaStructured Feature Selection
Rensselaer Polytechnic Institute, USA
('39965604', 'Tian Gao', 'tian gao')
('2860279', 'Ziheng Wang', 'ziheng wang')
('1726583', 'Qiang Ji', 'qiang ji')
{gaot, wangz10, jiq}@rpi.edu
111a9645ad0108ad472b2f3b243ed3d942e7ff16Facial Expression Classification Using
Combined Neural Networks
DEE/PUC-Rio, Marquês de São Vicente 225, Rio de Janeiro – RJ - Brazil
('14032279', 'Rafael V. Santos', 'rafael v. santos')
('1744578', 'Marley M.B.R. Vellasco', 'marley m.b.r. vellasco')
('34686777', 'Raul Q. Feitosa', 'raul q. feitosa')
('1687882', 'Ricardo Tanscheit', 'ricardo tanscheit')
marley@ele.puc-rio.br
1177977134f6663fff0137f11b81be9c64c1f424Multi-Manifold Deep Metric Learning for Image Set Classification
1Advanced Digital Sciences Center, Singapore
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
School of ICE, Beijing University of Posts and Telecommunications, Beijing, China
University of Illinois at Urbana-Champaign, Urbana, IL, USA
Tsinghua University, Beijing, China
('1697700', 'Jiwen Lu', 'jiwen lu')
('22804340', 'Gang Wang', 'gang wang')
('1774956', 'Weihong Deng', 'weihong deng')
('1742248', 'Pierre Moulin', 'pierre moulin')
('39491387', 'Jie Zhou', 'jie zhou')
jiwen.lu@adsc.com.sg; wanggang@ntu.edu.sg; whdeng@bupt.edu.cn;
moulin@ifp.uiuc.edu; jzhou@tsinghua.edu.cn
1190cba0cae3c8bb81bf80d6a0a83ae8c41240bcSquared Earth Mover’s Distance Loss for Training
Deep Neural Networks on Ordered-Classes
Dept. of Computer Science
Stony Brook University
Chen-Ping Yu
Phiar Technologies, Inc
('2321406', 'Le Hou', 'le hou')
111d0b588f3abbbea85d50a28c0506f74161e091International Journal of Computer Applications (0975 – 8887)
Volume 134 – No.10, January 2016
Facial Expression Recognition from Visual Information
using Curvelet Transform
Surabhi Group of Institution Bhopal
('6837599', 'Pratiksha Singh', 'pratiksha singh')
11ac88aebe0230e743c7ea2c2a76b5d4acbfecd0Hybrid Cascade Model for Face Detection in the Wild
Based on Normalized Pixel Difference and a Deep
Convolutional Neural Network
Darijan Marčetić[0000-0002-6556-665X], Martin Soldić[0000-0002-4031-0404]
and Slobodan Ribarić[0000-0002-8708-8513]
University of Zagreb, Faculty of Electrical Engineering and Computing, Croatia
{darijan.marcetic, martin.soldic, slobodan.ribaric}@fer.hr
117f164f416ea68e8b88a3005e55a39dbdf32ce4Neuroaesthetics in Fashion: Modeling the Perception of Fashionability
Institut de Robòtica i Informàtica Industrial (CSIC-UPC),
University of Toronto
('3114470', 'Edgar Simo-Serra', 'edgar simo-serra')
('37895334', 'Sanja Fidler', 'sanja fidler')
('1994318', 'Francesc Moreno-Noguer', 'francesc moreno-noguer')
('2422559', 'Raquel Urtasun', 'raquel urtasun')
7dda2eb0054eb1aeda576ed2b27a84ddf09b07d42010 The 3rd International Conference on Machine Vision (ICMV 2010)
Face Recognition and Representation by Tensor-based MPCA Approach
Dept. of Control, Instrumentation, and Robot
Engineering
Chosun University
Gwangju, Korea
('2806903', 'Yun-Hee Han', 'yun-hee han')Yhhan1059@gmail.com
7d2556d674ad119cf39df1f65aedbe7493970256Now You Shake Me: Towards Automatic 4D Cinema
University of Toronto
Vector Institute
http://www.cs.toronto.edu/~henryzhou/movie4d/
('2481662', 'Yuhao Zhou', 'yuhao zhou')
('37895334', 'Sanja Fidler', 'sanja fidler')
{henryzhou, makarand, fidler}@cs.toronto.edu
7d94fd5b0ca25dd23b2e36a2efee93244648a27bConvolutional Network for Attribute-driven and Identity-preserving Human Face
Generation
The Hong Kong Polytechnic University, Hong Kong
bSchool of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
('1701799', 'Mu Li', 'mu li')
('1724520', 'Wangmeng Zuo', 'wangmeng zuo')
('1698371', 'David Zhang', 'david zhang')
7d8c2d29deb80ceed3c8568100376195ce0914cbIdentity-Aware Textual-Visual Matching with Latent Co-attention
The Chinese University of Hong Kong
('1700248', 'Shuang Li', 'shuang li')
('1721881', 'Tong Xiao', 'tong xiao')
('1764548', 'Hongsheng Li', 'hongsheng li')
('1742383', 'Wei Yang', 'wei yang')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
{sli,xiaotong,hsli,wyang,xgwang}@ee.cuhk.edu.hk
7d306512b545df98243f87cb8173df83b4672b18Flag Manifolds for the Characterization of
Geometric Structure in Large Data Sets
T. Marrinan, J. R. Beveridge, B. Draper, M. Kirby, and C. Peterson
Colorado State University, Fort Collins, Colorado, USA
kirby@math.colostate.edu
7d98dcd15e28bcc57c9c59b7401fa4a5fdaa632bFACE APPEARANCE FACTORIZATION FOR EXPRESSION ANALYSIS AND SYNTHESIS
Heudiasyc Laboratory, CNRS, University of Technology of Compiègne
BP 20529, 60205 COMPIEGNE Cedex, FRANCE.
('2371236', 'Bouchra Abboud', 'bouchra abboud')
('1742818', 'Franck Davoine', 'franck davoine')
E-mail: Bouchra.Abboud@hds.utc.fr
7d41b67a641426cb8c0f659f0ba74cdb60e7159aSoft Biometric Retrieval to Describe and Identify Surveillance Images
School of Electronics and Computer Science,
University of Southampton, United Kingdom
('3408521', 'Daniel Martinho-Corbishley', 'daniel martinho-corbishley')
('1727698', 'Mark S. Nixon', 'mark s. nixon')
('3000521', 'John N. Carter', 'john n. carter')
{dmc,msn,jnc}@ecs.soton.ac.uk
7d1688ce0b48096e05a66ead80e9270260cb8082Real vs. Fake Emotion Challenge: Learning to Rank Authenticity From Facial
Activity Descriptors
Otto von Guericke University
Magdeburg, Germany
('2441656', 'Frerk Saxen', 'frerk saxen')
('1783606', 'Philipp Werner', 'philipp werner')
('1741165', 'Ayoub Al-Hamadi', 'ayoub al-hamadi')
{Frerk.Saxen, Philipp.Werner, Ayoub.Al-Hamadi}@ovgu.de
7d53678ef6009a68009d62cd07c020706a2deac3Facial Feature Point Extraction using
the Adaptive Mean Shape in Active Shape Model
Hanyang University
Haengdang-dong, Seongdong-gu, Seoul, South Korea
Giheung-eup, Yongin-si, Gyeonggi-do, Seoul, Korea
Samsung Advanced Institute of Technology
('2771795', 'Hyoung-Joon Kim', 'hyoung-joon kim')
('34600044', 'Wonjun Hwang', 'wonjun hwang')
('2077154', 'Seok-Cheol Kee', 'seok-cheol kee')
('2982904', 'Whoi-Yul Kim', 'whoi-yul kim')
('40370422', 'Hyun-Chul Kim', 'hyun-chul kim')
{hckim, khjoon}@vision.hanyang.ac.kr, wykim@hanyang.ac.kr
{wj.hwang, sckee}@samsung.com
7d7be6172fc2884e1da22d1e96d5899a29831ad2L2GSCI: Local to Global Seam Cutting and Integrating for
Accurate Face Contour Extraction
South China University of Technology
South China University of Technology
Kitware, Inc
The Education University of Hong Kong
South China University of Technology
('37221211', 'Yongwei Nie', 'yongwei nie')
('37579534', 'Xu Cao', 'xu cao')
('2792312', 'Chengjiang Long', 'chengjiang long')
('2420746', 'Ping Li', 'ping li')
('4882057', 'Guiqing Li', 'guiqing li')
nieyongwei@scut.edu.cn
7de6e81d775e9cd7becbfd1bd685f4e2a5eebb22Labeled Faces in the Wild: A Survey ('1714536', 'Erik Learned-Miller', 'erik learned-miller')
('1799600', 'Gary Huang', 'gary huang')
('2895705', 'Aruni RoyChowdhury', 'aruni roychowdhury')
('3131569', 'Haoxiang Li', 'haoxiang li')
('1745420', 'Gang Hua', 'gang hua')
7d73adcee255469aadc5e926066f71c93f51a1a5978-1-4799-9988-0/16/$31.00 ©2016 IEEE
ICASSP 2016
7df4f96138a4e23492ea96cf921794fc5287ba72A Jointly Learned Deep Architecture for Facial Attribute Analysis and Face
Detection in the Wild
Fudan University
('37391748', 'Keke He', 'keke he')
('35782003', 'Yanwei Fu', 'yanwei fu')
('1713721', 'Xiangyang Xue', 'xiangyang xue')
{kkhe15, yanweifu, xyxue}@fudan.edu.cn
7d9fe410f24142d2057695ee1d6015fb1d347d4aFacial Expression Feature Extraction Based on
FastLBP
Beijing, China
Beijing, China
facial expression
('1921151', 'Ya Zheng', 'ya zheng')
('2780963', 'Xiuxin Chen', 'xiuxin chen')
('2671173', 'Chongchong Yu', 'chongchong yu')
('39681852', 'Cheng Gao', 'cheng gao')
Email: zy_lovedabao@163.com
Email: chenxx1979@126.com, chongzhy@vip.sina.com, gcandgh@163.com
7dd578878e84337d6d0f5eb593f22cabeacbb94cClassifiers for Driver Activity Monitoring
Department of Computer Science and Engineering
University of Minnesota
('3055503', 'Harini Veeraraghavan', 'harini veeraraghavan')
('32975623', 'Nathaniel Bird', 'nathaniel bird')
('1734862', 'Stefan Atev', 'stefan atev')
('1696163', 'Nikolaos Papanikolopoulos', 'nikolaos papanikolopoulos')
harini@cs.umn.edu bird@cs.umn.edu atev@cs.umn.edu npapas@cs.umn.edu
7dffe7498c67e9451db2d04bb8408f376ae86992LEAR-INRIA submission for the THUMOS workshop
LEAR, INRIA, France
('40465030', 'Heng Wang', 'heng wang')firstname.lastname@inria.fr
7df268a3f4da7d747b792882dfb0cbdb7cc431bcSemi-supervised Adversarial Learning to Generate
Photorealistic Face Images of New Identities from 3D
Morphable Model
Imperial College London, UK
Centre for Vision, Speech and Signal Processing, University of Surrey, UK
('2151914', 'Baris Gecer', 'baris gecer')
('48467774', 'Binod Bhattarai', 'binod bhattarai')
('1748684', 'Josef Kittler', 'josef kittler')
('1700968', 'Tae-Kyun Kim', 'tae-kyun kim')
{b.gecer,b.bhattarai,tk.kim}@imperial.ac.uk,
j.kittler@surrey.ac.uk
7d3f6dd220bec883a44596ddec9b1f0ed4f6aca22106
Linear Regression for Face Recognition
('2095953', 'Imran Naseem', 'imran naseem')
('2444665', 'Roberto Togneri', 'roberto togneri')
('1698675', 'Mohammed Bennamoun', 'mohammed bennamoun')
7de386bf2a1b2436c836c0cc1f1f23fccb24aad6Finding What the Driver Does
Final Report
Prepared by:
Artificial Intelligence, Robotics, and Vision Laboratory
Department of Computer Science and Engineering
University of Minnesota
CTS 05-03
HUMAN-CENTERED TECHNOLOGY TO ENHANCE SAFETY AND MOBILITY
('3055503', 'Harini Veeraraghavan', 'harini veeraraghavan')
('1734862', 'Stefan Atev', 'stefan atev')
('32975623', 'Nathaniel Bird', 'nathaniel bird')
('31791248', 'Paul Schrater', 'paul schrater')
('40654170', 'Nilolaos Papanikolopoulos', 'nilolaos papanikolopoulos')
29ce6b54a87432dc8371f3761a9568eb3c5593b0Kent Academic Repository
Full text document (pdf)
Citation for published version
Yassin, DK H. PHM and Hoque, Sanaul and Deravi, Farzin (2013) Age Sensitivity of Face Recognition
pp. 12-15.
DOI
https://doi.org/10.1109/EST.2013.8
Link to record in KAR
http://kar.kent.ac.uk/43222/
Document Version
Author's Accepted Manuscript
Copyright & reuse
Content in the Kent Academic Repository is made available for research purposes. Unless otherwise stated all
content is protected by copyright and in the absence of an open licence (eg Creative Commons), permissions
for further reuse of content should be sought from the publisher, author or other copyright holder.
Versions of research
The version in the Kent Academic Repository may differ from the final published version.
Users are advised to check http://kar.kent.ac.uk for the status of the paper. Users should always cite the
published version of record.
Enquiries
For any further enquiries regarding the licence status of this document, please contact:
If you believe this document infringes copyright then please contact the KAR admin team with the take-down
information provided at http://kar.kent.ac.uk/contact.html
researchsupport@kent.ac.uk
2914e8c62f0432f598251fae060447f98141e935University of Nebraska - Lincoln
Computer Science and Engineering: Theses,
Dissertations, and Student Research
Computer Science and Engineering, Department of
8-2016
ACTIVITY ANALYSIS OF SPECTATOR
PERFORMER VIDEOS USING MOTION
TRAJECTORIES
Follow this and additional works at: http://digitalcommons.unl.edu/computerscidiss
Part of the Computer Engineering Commons
Timsina, Anish, "ACTIVITY ANALYSIS OF SPECTATOR PERFORMER VIDEOS USING MOTION TRAJECTORIES" (2016).
Computer Science and Engineering: Theses, Dissertations, and Student Research. Paper 107.
http://digitalcommons.unl.edu/computerscidiss/107
Nebraska - Lincoln. It has been accepted for inclusion in Computer Science and Engineering: Theses, Dissertations, and Student Research by an
('2404944', 'Anish Timsina', 'anish timsina')DigitalCommons@University of Nebraska - Lincoln
University of Nebraska-Lincoln, timsina.anish@gmail.com
This Article is brought to you for free and open access by the Computer Science and Engineering, Department of at DigitalCommons@University of
authorized administrator of DigitalCommons@University of Nebraska - Lincoln.
292eba47ef77495d2613373642b8372d03f7062bDeep Secure Encoding: An Application to Face Recognition ('39192292', 'Rohit Pandey', 'rohit pandey')
('34872128', 'Yingbo Zhou', 'yingbo zhou')
('1723877', 'Venu Govindaraju', 'venu govindaraju')
29e96ec163cb12cd5bd33bdf3d32181c136abaf9Report No. UIUCDCS-R-2006-2748
UILU-ENG-2006-1788
Regularized Locality Preserving Projections with Two-Dimensional
Discretized Laplacian Smoothing
by
July 2006
('1724421', 'Deng Cai', 'deng cai')
('3945955', 'Xiaofei He', 'xiaofei he')
('39639296', 'Jiawei Han', 'jiawei han')
29e793271370c1f9f5ac03d7b1e70d1efa10577cInternational Journal of Signal Processing, Image Processing and Pattern Recognition
Vol.6, No.5 (2013), pp.423-436
http://dx.doi.org/10.14257/ijsip.2013.6.5.37
Face Recognition Based on Multi-classifierWeighted Optimization
and Sparse Representation
Institute of control science and engineering
University of Science and Technology Beijing
1,2,330 Xueyuan Road, Haidian District, Beijing 100083 P. R.China
('11241192', 'Deng Nan', 'deng nan')
('7814565', 'Zhengguang Xu', 'zhengguang xu')
1dengnan666666@163.com, 2xzg_1@263.net, 3 xiaobian@ustb.edu.cn
2902f62457fdf7e8e8ee77a9155474107a2f423eNon-rigid 3D Shape Registration using an
Adaptive Template
University of York, UK
('1694260', 'Hang Dai', 'hang dai')
('1737428', 'Nick Pears', 'nick pears')
('32131827', 'William Smith', 'william smith')
{hd816,nick.pears,william.smith}@york.ac.uk
29d3ed0537e9ef62fd9ccffeeb72c1beb049e1eaParametric Dictionaries and Feature Augmentation for
Continuous Domain Adaptation∗
Adobe Research
Bangalore, India
Light
Palo Alto, USA
University of Maryland
College Park, USA
('35223379', 'Sumit Shekhar', 'sumit shekhar')
('34711525', 'Nitesh Shroff', 'nitesh shroff')
('9215658', 'Rama Chellappa', 'rama chellappa')
sshekha@umiacs.umd.edu
nshroff@umiacs.umd.edu
rama@umiacs.umd.edu
29c7dfbbba7a74e9aafb6a6919629b0a7f576530Automatic Facial Expression Analysis and Emotional
Classification
by
Submitted to the Department of Math and Natural Sciences
in partial fulfillment of the requirements for the degree of a
Diplomingenieur der Optotechnik und Bildverarbeitung (FH)
(Diplom Engineer of Photonics and Image Processing)
at the
UNIVERSITY OF APPLIED SCIENCE DARMSTADT (FHD)
Accomplished and written at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY (MIT)
October 2004
Author: Department of Math and Natural Sciences, October 30, 2004
Certified by: Dr. Harald Scharfenberg, Professor at FHD, Thesis Supervisor
Accepted by: visiting scientist at MIT
('40163324', 'Robert Fischer', 'robert fischer')
('1684626', 'Bernd Heisele', 'bernd heisele')
292c6b743ff50757b8230395c4a001f210283a34Fast Violence Detection in Video
O. Deniz1, I. Serrano1, G. Bueno1 and T-K. Kim2
VISILAB group, University of Castilla-La Mancha, E.T.S.I.Industriales, Avda. Camilo Jose Cela s.n, 13071 Spain
Imperial College, South Kensington Campus, London SW7 2AZ, UK
Keywords:
action recognition, violence detection, fight detection
{oscar.deniz, ismael.serrano, gloria.bueno}@uclm.es, tk.kim@imperial.ac.uk
29fc4de6b680733e9447240b42db13d5832e408fInternational Journal of Multimedia and Ubiquitous Engineering
Vol. 10, No. 3 (2015), pp. 35-44
http://dx.doi.org/10.14257/ijmue.2015.10.3.04
Recognition of Facial Expressions Based on Tracking and
Selection of Discriminative Geometric Features
Korea Electronics Technology Institute, Jeonju-si, Jeollabuk-do 561-844, Rep. of
Korea
Chonbuk National University, Jeonju-si
Jeollabuk-do 561-756, Rep. of Korea
School of Computing Science, Simon Fraser University, Burnaby, B.C., Canada
('32322842', 'Deepak Ghimire', 'deepak ghimire')
('2034182', 'Joonwhoan Lee', 'joonwhoan lee')
('1689656', 'Ze-Nian Li', 'ze-nian li')
('1682436', 'Sunghwan Jeong', 'sunghwan jeong')
('1937680', 'Hyo Sub Choi', 'hyo sub choi')
deepak@keti.re.kr, chlee@jbnu.ac.kr, li@sfu.ca, shjeong@keti.re.kr,
shpark@keti.re.kr, hschoi@keti.re.kr
29c1f733a80c1e07acfdd228b7bcfb136c1dff98
29f27448e8dd843e1c4d2a78e01caeaea3f46a2d
294d1fa4e1315e1cf7cc50be2370d24cc6363a412008 SPIE Digital Library -- Subscriber Archive Copy
29d414bfde0dfb1478b2bdf67617597dd2d57fc6Multidim Syst Sign Process (2010) 21:213–229
DOI 10.1007/s11045-009-0099-y
Perfect histogram matching PCA for face recognition
Received: 10 August 2009 / Revised: 21 November 2009 / Accepted: 29 December 2009 /
Published online: 14 January 2010
© Springer Science+Business Media, LLC 2010
('2413241', 'Ana-Maria Sevcenco', 'ana-maria sevcenco')
2912c3ea67678a1052d7d5cbe734a6ad90fc360eFacial Feature Detection using a Virtual Structuring
Element
Intelligent Systems Lab Amsterdam,
University of Amsterdam
Kruislaan 403, 1098 SJ Amsterdam, The Netherlands
Keywords: Feature Detection, Active Appearance Models
('9301018', 'Roberto Valenti', 'roberto valenti')
('1703601', 'Nicu Sebe', 'nicu sebe')
('1695527', 'Theo Gevers', 'theo gevers')
rvalenti@science.uva.nl
nicu@science.uva.nl
gevers@science.uva.nl
29f4ac49fbd6ddc82b1bb697820100f50fa98ab6The Benefits and Challenges of Collecting Richer Object Annotations
Department of Computer Science
University of Illinois Urbana Champaign
('2831988', 'Ian Endres', 'ian endres')
('2270286', 'Ali Farhadi', 'ali farhadi')
('2433269', 'Derek Hoiem', 'derek hoiem')
('1744452', 'David A. Forsyth', 'david a. forsyth')
{iendres2,afarhad2,dhoiem,daf}@uiuc.edu
2910fcd11fafee3f9339387929221f4fc1160973Evaluating Open-Universe Face Identification on the Web
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA
Center for Research in Computer Vision, University of Central Florida, Orlando, FL
('16131262', 'Enrique G. Ortiz', 'enrique g. ortiz')brian@briancbecker.com and eortiz@cs.ucf.edu
29479bb4fe8c04695e6f5ae59901d15f8da6124bMultiple Instance Learning for Labeling Faces in
Broadcasting News Video
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
('38936351', 'Jun Yang', 'jun yang')
('2005689', 'Rong Yan', 'rong yan')
('7661726', 'Alexander G. Hauptmann', 'alexander g. hauptmann')
juny@cs.cmu.edu
yanrong@cs.cmu.edu
alex+@cs.cmu.edu
290136947fd44879d914085ee51d8a4f433765faOn a Taxonomy of Facial Features ('1817623', 'Brendan Klare', 'brendan klare')
('6680444', 'Anil K. Jain', 'anil k. jain')
2957715e96a18dbb5ed5c36b92050ec375214aa6Improving Face Attribute Detection with Race and Gender Diversity
InclusiveFaceNet:
('3766392', 'Hee Jung Ryu', 'hee jung ryu')
291f527598c589fb0519f890f1beb2749082ddfdSeeing People in Social Context: Recognizing
People and Social Relationships
University of Illinois at Urbana-Champaign, Urbana, IL
Kodak Research Laboratories, Rochester, NY
('22804340', 'Gang Wang', 'gang wang')
('33642939', 'Jiebo Luo', 'jiebo luo')
291265db88023e92bb8c8e6390438e5da148e8f5MS-Celeb-1M: A Dataset and Benchmark for
Large-Scale Face Recognition
Microsoft Research
('3133575', 'Yandong Guo', 'yandong guo')
('1684635', 'Lei Zhang', 'lei zhang')
('1689532', 'Yuxiao Hu', 'yuxiao hu')
('1722627', 'Xiaodong He', 'xiaodong he')
('1800422', 'Jianfeng Gao', 'jianfeng gao')
{yandong.guo,leizhang,yuxiao.hu,xiaohe,jfgao}@microsoft.com
29c340c83b3bbef9c43b0c50b4d571d5ed037cbdStacked Dense U-Nets with Dual
Transformers for Robust Face Alignment
https://github.com/deepinsight/insightface
https://jiankangdeng.github.io/
https://ibug.doc.ic.ac.uk/people/nxue
Stefanos Zafeiriou2
https://wp.doc.ic.ac.uk/szafeiri/
1 InsightFace
Shanghai, China
2 IBUG
Imperial College London
London, UK
('3007274', 'Jia Guo', 'jia guo')
('3234063', 'Jiankang Deng', 'jiankang deng')
('3007274', 'Jia Guo', 'jia guo')
('3234063', 'Jiankang Deng', 'jiankang deng')
('4091869', 'Niannan Xue', 'niannan xue')
297d3df0cf84d24f7efea44f87c090c7d9be4bedAppearance-based 3-D Face Recognition from
Video
University of Maryland, Center for Automation Research
A.V. Williams Building
College Park, MD
The Robotics Institute, Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh, PA 15213
('33731953', 'Ralph Gross', 'ralph gross')
('40039594', 'Simon Baker', 'simon baker')
29b86534d4b334b670914038c801987e18eb5532Total Cluster: A person agnostic clustering method for
broadcast videos
Computer Vision for Human Computer Interaction, Karlsruhe Institute of Technology, Germany
Visual Geometry Group, University of Oxford, UK
Center for Machine Vision Research, University of Oulu, Finland
('2103464', 'Makarand Tapaswi', 'makarand tapaswi')
('3188342', 'Omkar M. Parkhi', 'omkar m. parkhi')
('2827962', 'Esa Rahtu', 'esa rahtu')
('1741116', 'Eric Sommerlade', 'eric sommerlade')
('1742325', 'Rainer Stiefelhagen', 'rainer stiefelhagen')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
tapaswi@kit.edu, omkar@robots.ox.ac.uk, erahtu@ee.oulu.fi
eric@robots.ox.ac.uk, rainer.stiefelhagen@kit.edu, az@robots.ox.ac.uk
29631ca6cff21c9199c70bcdbbcd5f812d331a96RESEARCH ARTICLE
Error Rates in Users of Automatic Face
Recognition Software
School of Psychology, The University of New South Wales, Sydney, Australia, 2 School of Psychology
The University of Sydney, Sydney, Australia
('40404556', 'David White', 'david white')
('29329747', 'James D. Dunn', 'james d. dunn')
('5016966', 'Alexandra C. Schmid', 'alexandra c. schmid')
('3086646', 'Richard I. Kemp', 'richard i. kemp')
* david.white@unsw.edu.au
2965d092ed72822432c547830fa557794ae7e27bImproving Representation and Classification of Image and
Video Data for Surveillance Applications
BSc(Biol), MSc(Biol), MSc(CompSc)
A thesis submitted for the degree of Doctor of Philosophy at
The University of Queensland in
School of Information Technology and Electrical Engineering
('2706642', 'Andres Sanin', 'andres sanin')
2983efadb1f2980ab5ef20175f488f77b6f059d7ch04_88815.QXP 12/23/08 3:36 PM Page 53
◆ 4 ◆
EMOTION IN HUMAN–COMPUTER INTERACTION
Stanford University
Understanding Emotion
Distinguishing Emotion from Related Constructs
Mood
Sentiment
Effects of Affect
Attention
Memory
Performance
Assessment
Causes of Emotion
Needs and Goals
Appraisal Theories
Contagion
Moods and Sentiments
Previous Emotional State
Causes of Mood
Contagion
Color
Other Effects
Measuring Affect
Neurological Responses
Autonomic Activity
Facial Expression
Voice
Self-Report Measures
Affect Recognition by Users
Open Questions
1. With which emotion should HCI designers be most concerned?
2. When and how should interfaces attempt to directly address users' emotions and basic needs (vs. application-specific goals)?
3. How accurate must emotion recognition be to be useful as an interface technique?
4. When and how should users be informed that their affective states are being monitored and adapted to?
5. How does emotion play out in computer-mediated communication (CMC)?
Conclusion
Acknowledgments
References
('2739604', 'Scott Brave', 'scott brave')
('2029850', 'Clifford Nass', 'clifford nass')
2911e7f0fb6803851b0eddf8067a6fc06e8eadd6Joint Fine-Tuning in Deep Neural Networks
for Facial Expression Recognition
School of Electrical Engineering
Korea Advanced Institute of Science and Technology
('1800903', 'Heechul Jung', 'heechul jung')
('3249661', 'Junho Yim', 'junho yim')
{heechul, haeng, junho.yim, sunny0414, junmo.kim}@kaist.ac.kr
2921719b57544cfe5d0a1614d5ae81710ba804faFace Recognition Enhancement Based on Image
File Formats and Wavelet De-noising
('4050987', 'Jieqing Tan', 'jieqing tan')
('40160496', 'Zhengfeng Hou', 'zhengfeng hou')
29a013b2faace976f2c532533bd6ab4178ccd348This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
Hierarchical Manifold Learning With Applications
to Supervised Classification for High-Resolution
Remotely Sensed Images
('7192623', 'Hong-Bing Huang', 'hong-bing huang')
('3239427', 'Hong Huo', 'hong huo')
('1680725', 'Tao Fang', 'tao fang')
29921072d8628544114f68bdf84deaf20a8c8f91Multi-Task Curriculum Transfer Deep Learning of Clothing Attributes
School of EECS, Queen Mary University of London, UK
('40204089', 'Qi Dong', 'qi dong')
('2073354', 'Shaogang Gong', 'shaogang gong')
('2171228', 'Xiatian Zhu', 'xiatian zhu')
{q.dong, s.gong, xiatian.zhu}@qmul.ac.uk
2969f822b118637af29d8a3a0811ede2751897b5Cascaded Shape Space Pruning for Robust Facial Landmark Detection
Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing 100190, China
('1874505', 'Xiaowei Zhao', 'xiaowei zhao')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1695600', 'Xiujuan Chai', 'xiujuan chai')
('1710220', 'Xilin Chen', 'xilin chen')
{xiaowei.zhao,shiguang.shan,xiujuan.chai,xilin.chen}@vipl.ict.ac.cn
29756b6b16d7b06ea211f21cdaeacad94533e8b4Thresholding Approach based on GPU for Facial
Expression Recognition
1 Benemérita Universidad Autónoma de Puebla, Faculty of Computer Science, Puebla, México
2Instituto Tecnológico de Puebla, Puebla, México
('4348305', 'Jesús García-Ramírez', 'jesús garcía-ramírez')
('3430302', 'Adolfo Aguilar-Rico', 'adolfo aguilar-rico')
gr_jesus@outlook.com,{aolvera,iolmos}@cs.buap.mx
{kremhilda,adolforico2}@gmail.com
293193d24d5c4d2975e836034bbb2329b71c4fe7Building a Corpus of Facial Expressions
for Learning-Centered Emotions
Instituto Tecnológico de Culiacán, Culiacán, Sinaloa,
Mexico
('1744658', 'María Lucía Barrón-Estrada', 'maría lucía barrón-estrada')
('38814197', 'Bianca Giovanna Aispuro-Medina', 'bianca giovanna aispuro-medina')
('38906263', 'Elvia Minerva Valencia-Rodríguez', 'elvia minerva valencia-rodríguez')
('38797488', 'Ana Cecilia Lara-Barrera', 'ana cecilia lara-barrera')
{lbarron, rzatarain, m06170904, m95170906, m15171452} @itculiacan.edu.mx
294bd7eb5dc24052237669cdd7b4675144e22306International Journal of Science and Research (IJSR)
ISSN (Online): 2319-7064
Index Copernicus Value (2013): 6.14 | Impact Factor (2013): 4.438
Automatic Face Annotation

M.Tech Student, Mount Zion College of Engineering, Pathanamthitta, Kerala, India
2988f24908e912259d7a34c84b0edaf7ea50e2b3A Model of Brightness Variations Due to
Illumination Changes and Non-rigid Motion
Using Spherical Harmonics
José M. Buenaposada
Dep. Ciencias de la Computación,
U. Rey Juan Carlos, Spain
http://www.dia.fi.upm.es/~pcr
Inst. for Systems and Robotics
Inst. Superior Técnico, Portugal
http://www.isr.ist.utl.pt/~adb
Enrique Muñoz
Facultad de Informática,
U. Complutense de Madrid, Spain
Dep. de Inteligencia Artificial,
U. Politécnica de Madrid, Spain
http://www.dia.fi.upm.es/~pcr
('1714730', 'Alessio Del Bue', 'alessio del bue')
('1778998', 'Luis Baumela', 'luis baumela')
29156e4fe317b61cdcc87b0226e6f09e416909e0
29f0414c5d566716a229ab4c5794eaf9304d78b6Hindawi Publishing Corporation
EURASIP Journal on Advances in Signal Processing
Volume 2008, Article ID 579416, 17 pages
doi:10.1155/2008/579416
Review Article
Biometric Template Security
Michigan State University, 3115 Engineering Building
East Lansing, MI 48824, USA
Received 2 July 2007; Revised 28 September 2007; Accepted 4 December 2007
Recommended by Arun Ross
Biometric recognition offers a reliable solution to the problem of user authentication in identity management systems. With the widespread deployment of biometric systems in various applications, there are increasing concerns about the security and privacy of biometric technology. Public acceptance of biometrics technology will depend on the ability of system designers to demonstrate that these systems are robust, have low error rates, and are tamper proof. We present a high-level categorization of the various vulnerabilities of a biometric system and discuss countermeasures that have been proposed to address these vulnerabilities. In particular, we focus on biometric template security, which is an important issue because, unlike passwords and tokens, compromised biometric templates cannot be revoked and reissued. Protecting the template is a challenging task due to intra-user variability in the acquired biometric traits. We present an overview of various biometric template protection schemes and discuss their advantages and limitations in terms of security, revocability, and impact on matching accuracy. A template protection scheme with provable security and acceptable recognition performance has thus far remained elusive. Development of such a scheme is crucial as biometric systems are beginning to proliferate into the core physical and information infrastructure of our society.
1. INTRODUCTION
A reliable identity management system is urgently needed in order to combat the epidemic growth in identity theft and to meet the increased security requirements in a variety of applications ranging from international border crossings to securing information in databases. Establishing the identity of a person is a critical task in any identity management system. Surrogate representations of identity such as passwords and ID cards are not sufficient for reliable identity determination because they can be easily misplaced, shared, or stolen. Biometric recognition is the science of establishing the identity of a person using his/her anatomical and behavioral traits. Commonly used biometric traits include fingerprint, face, iris, hand geometry, voice, palmprint, handwritten signatures, and gait (see Figure 1). Biometric traits have a number of desirable properties with respect to their use as an authentication token, namely, reliability, convenience, universality, and so forth. These characteristics have led to the widespread deployment of biometric authentication systems. But there are still some issues concerning the security of biometric recognition systems that need to be addressed in order to ensure the integrity and public acceptance of these systems.
There are five major components in a generic biomet-
ric authentication system, namely, sensor, feature extrac-
tor, template database, matcher, and decision module (see
Figure 2). Sensor is the interface between the user and the
authentication system and its function is to scan the bio-
metric trait of the user. Feature extraction module processes
the scanned biometric data to extract the salient information
(feature set) that is useful in distinguishing between differ-
ent users. In some cases, the feature extractor is preceded
by a quality assessment module which determines whether
the scanned biometric trait is of sufficient quality for fur-
ther processing. During enrollment, the extracted feature
set is stored in a database as a template (XT) indexed by
the user’s identity information. Since the template database
could be geographically distributed and contain millions of
records (e.g., in a national identification system), maintain-
ing its security is not a trivial task. The matcher module is
usually an executable program, which accepts two biomet-
ric feature sets XT and XQ (from template and query, resp.)
as inputs, and outputs a match score (S) indicating the sim-
ilarity between the two sets. Finally, the decision module
makes the identity decision and initiates a response to the
query.
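The five modules above can be sketched as a minimal verification pipeline. This is an illustrative sketch only: the flatten-and-normalize feature extractor, cosine-similarity matcher, and 0.9 threshold are placeholder choices, not the components of any specific deployed system.

```python
import numpy as np

def extract_features(scan: np.ndarray) -> np.ndarray:
    # Placeholder feature extractor: flatten the scanned trait and
    # L2-normalize it so the match score below is a cosine similarity.
    v = scan.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def match(x_t: np.ndarray, x_q: np.ndarray) -> float:
    # Matcher module: similarity score S between template X_T and query X_Q.
    return float(np.dot(x_t, x_q))

def decide(score: float, threshold: float = 0.9) -> bool:
    # Decision module: accept the claimed identity when S clears a threshold.
    return score >= threshold

# Enrollment: store the extracted feature set as a template X_T,
# indexed by the user's identity.
template_db = {"alice": extract_features(np.array([1.0, 2.0, 3.0]))}

# Authentication: extract X_Q from a fresh scan, match, and decide.
query = extract_features(np.array([1.0, 2.1, 2.9]))
score = match(template_db["alice"], query)
accepted = decide(score)
```

In a real system the template database lookup, quality assessment, and feature extraction would each be substantial components in their own right.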
('6680444', 'Anil K. Jain', 'anil k. jain')
('34633765', 'Karthik Nandakumar', 'karthik nandakumar')
('2743820', 'Abhishek Nagar', 'abhishek nagar')
('6680444', 'Anil K. Jain', 'anil k. jain')
Correspondence should be addressed to Karthik Nandakumar, nandakum@cse.msu.edu
293ade202109c7f23637589a637bdaed06dc37c9
7c61d21446679776f7bdc7afd13aedc96f9acac1Hierarchical Label Inference for Video Classification
Simon Fraser University
Simon Fraser University
Simon Fraser University
('3079079', 'Nelson Nauata', 'nelson nauata')
('2847110', 'Jonathan Smith', 'jonathan smith')
('10771328', 'Greg Mori', 'greg mori')
nnauata@sfu.ca
jws4@sfu.ca
mori@cs.sfu.ca
7cee802e083c5e1731ee50e731f23c9b12da7d362B3C: 2 Box 3 Crop of Facial Image for Gender Classification with Convolutional
Networks
Department of Electronics and Communication Engineering and
Computer Vision Group, L. D. College of Engineering, Ahmedabad, India
('23922616', 'Vandit Gajjar', 'vandit gajjar') gajjar.vandit.381@ldce.ac.in
7c47da191f935811f269f9ba3c59556c48282e80Robust Eye Centers Localization
with Zero–Crossing Encoded Image Projections
Image Processing and Analysis Laboratory
University Politehnica of Bucharest, Romania, Address Splaiul Independent ei
Image Processing and Analysis Laboratory
University Politehnica of Bucharest, Romania, Address Splaiul Independent ei
Image Processing and Analysis Laboratory
University Politehnica of Bucharest, Romania, Address Splaiul Independent ei
('2143956', 'Laura Florea', 'laura florea')
('2760434', 'Corneliu Florea', 'corneliu florea')
('2905899', 'Constantin Vertan', 'constantin vertan')
laura.florea@upb.ro
corneliu.florea@upb.ro
constantin.vertan@upb.ro
7c7ab59a82b766929defd7146fd039b89d67e984Improving Multiview Face Detection with
Multi-Task Deep Convolutional Neural Networks
Microsoft Research
One Microsoft Way, Redmond WA 98052
('1706673', 'Cha Zhang', 'cha zhang')
('1809184', 'Zhengyou Zhang', 'zhengyou zhang')
7ca337735ec4c99284e7c98f8d61fb901dbc9015Proceedings of the 8th International
IEEE Conference on Intelligent Transportation Systems
Vienna, Austria, September 13-16, 2005
TC4.2
Driver Activity Monitoring through Supervised and Unsupervised Learning
Harini Veeraraghavan Stefan Atev Nathaniel Bird Paul Schrater Nikolaos Papanikolopoulos†
Department of Computer Science and Engineering
University of Minnesota
{harini,atev,bird,schrater,npapas}@cs.umn.edu
7c1cfab6b60466c13f07fe028e5085a949ec8b30Deep Feature Consistent Variational Autoencoder
University of Nottingham, Ningbo China
Shenzhen University, Shenzhen China
University of Nottingham, Ningbo China
University of Nottingham, Ningbo China
('3468964', 'Xianxu Hou', 'xianxu hou')
('1687690', 'Linlin Shen', 'linlin shen')
('39508183', 'Ke Sun', 'ke sun')
('1698461', 'Guoping Qiu', 'guoping qiu')
xianxu.hou@nottingham.edu.cn
llshen@szu.edu.cn
ke.sun@nottingham.edu.cn
guoping.qiu@nottingham.edu.cn
7c45b5824645ba6d96beec17ca8ecfb22dfcdd7fNews image annotation on a large parallel text-image corpus
Universit´e de Rennes 1/IRISA, CNRS/IRISA, INRIA Rennes-Bretagne Atlantique
Campus de Beaulieu
35042 Rennes Cedex, France
('1694537', 'Pierre Tirilly', 'pierre tirilly')
('1735666', 'Vincent Claveau', 'vincent claveau')
('2436627', 'Patrick Gros', 'patrick gros')
ptirilly@irisa.fr, vclaveau@irisa.fr, pgros@inria.fr
7c17280c9193da3e347416226b8713b99e7825b8VideoCapsuleNet: A Simplified Network for Action
Detection
Kevin Duarte
Yogesh S Rawat
Center for Research in Computer Vision
University of Central Florida
Orlando, FL 32816
('1745480', 'Mubarak Shah', 'mubarak shah')kevin_duarte@knights.ucf.edu
yogesh@crcv.ucf.edu
shah@crcv.ucf.edu
7cffcb4f24343a924a8317d560202ba9ed26cd0bThe Unconstrained Ear Recognition Challenge
University of Ljubljana
Ljubljana, Slovenia
IIT Kharagpur
Kharagpur, India
University of Colorado Colorado Springs
Colorado Springs, CO, USA
Islamic Azad University
Qazvin, Iran
Imperial College London
London, UK
ITU Department of Computer Engineering
Istanbul, Turkey
('34862665', 'Peter Peer', 'peter peer')
('3110004', 'Anjith George', 'anjith george')
('2173052', 'Adil Ahmad', 'adil ahmad')
('39000630', 'Elshibani Omar', 'elshibani omar')
('1760117', 'Terrance E. Boult', 'terrance e. boult')
('3062107', 'Reza Safdari', 'reza safdari')
('47943220', 'Yuxiang Zhou', 'yuxiang zhou')
('23981209', 'Dogucan Yaman', 'dogucan yaman')
ziga.emersic@fri.uni-lj.si
7c0a6824b556696ad7bdc6623d742687655852db18th Telecommunications forum TELFOR 2010
Serbia, Belgrade, November 23-25, 2010.
MPCA+DATER: A Novel Approach for Face
Recognition Based on Tensor Objects
Ali. A. Shams Baboli, Member, IEEE, G. Rezai-rad, Member, IEEE, Aref. Shams Baboli
7c95449a5712aac7e8c9a66d131f83a038bb7caaThis is an author produced version of Facial first impressions from another angle: How
social judgements are influenced by changeable and invariant facial properties.
White Rose Research Online URL for this paper:
http://eprints.whiterose.ac.uk/102935/
Article:
Rhodes (2017) Facial first impressions from another angle: How social judgements are
influenced by changeable and invariant facial properties. British journal of psychology. pp.
397-415. ISSN 0007-1269
https://doi.org/10.1111/bjop.12206
promoting access to
White Rose research papers
http://eprints.whiterose.ac.uk/
('16854522', 'Clare', 'clare')
('9384336', 'Young', 'young')
eprints@whiterose.ac.uk
7c4c442e9c04c6b98cd2aa221e9d7be15efd8663Classifier Learning with Hidden Information
ECSE, Rensselaer Polytechnic Institute, Troy, NY
('2860279', 'Ziheng Wang', 'ziheng wang')
('1726583', 'Qiang Ji', 'qiang ji')
wangz10@rpi.edu
jiq@rpi.edu
7c3e09e0bd992d3f4670ffacb4ec3a911141c51fNoname manuscript No.
(will be inserted by the editor)
Transferring Object-Scene Convolutional Neural Networks for
Event Recognition in Still Images
Received: date / Accepted: date
('33345248', 'Limin Wang', 'limin wang')
7c2ec6f4ab3eae86e0c1b4f586e9c158fb1d719dDissimilarity-Based Classifications in Eigenspaces(cid:63)
Myongji University, Yongin, 449-728 South
Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of
('34959719', 'Sang-Woon Kim', 'sang-woon kim')
('1747298', 'Robert P. W. Duin', 'robert p. w. duin')
Korea. e-mail : kimsw@mju.ac.kr
Technology, The Netherlands. e-mail : r.p.w.duin@tudelft.nl
7cf8a841aad5b7bdbea46a7bb820790e9ce12d0bSUPERVISED HEAT KERNEL LPP
METHOD FOR FACE RECOGNITION
Utah State University, Logan UT
('1725739', 'Xiaojun Qi', 'xiaojun qi')cryshan@cc.usu.edu and xqi@cc.usu.edu
7c9622ad1d8971cd74cc9e838753911fe27ccac4Representation Learning with Smooth
Autoencoder
Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing, 100190, China
('2582309', 'Kongming Liang', 'kongming liang')
('1783542', 'Hong Chang', 'hong chang')
('10338111', 'Zhen Cui', 'zhen cui')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
{kongming.liang, hong.chang, zhen.cui, shiguang.shan, xilin.chen}@vipl.ict.ac.cn
7c2c9b083817f7a779d819afee383599d2e97ed8Disentangling Motion, Foreground and Background Features in Videos
Beihang University
Beijing, China
V´ıctor Campos
Xavier Giro-i-Nieto
Barcelona Supercomputing Center
Universitat Politecnica de Catalunya
Barcelona, Catalonia/Spain
Barcelona, Catalonia/Spain
Barcelona Supercomputing Center
Barcelona, Catalonia/Spain
Cristian Canton Ferrer
Facebook
Seattle (WA), USA
('10668384', 'Xunyu Lin', 'xunyu lin')
('1711068', 'Jordi Torres', 'jordi torres')
xunyulin2017@outlook.com
victor.campos@bsc.es
xavier.giro@upc.edu
jordi.torres@bsc.es
ccanton@fb.com
7c45339253841b6f0efb28c75f2c898c79dfd038Unsupervised Joint Alignment of Complex Images
University of Massachusetts Amherst
Amherst, MA
Erik Learned-Miller
('3219900', 'Gary B. Huang', 'gary b. huang')
('2246870', 'Vidit Jain', 'vidit jain')
fgbhuang,vidit,elmg@cs.umass.edu
7c825562b3ff4683ed049a372cb6807abb09af2aFinding Tiny Faces
Supplementary Materials
Robotics Institute
Carnegie Mellon University
1. Error analysis
Quantitative analysis We plot the distribution of error modes among false positives in Fig. 1 and the impact of object
characteristics on detection performance in Fig. 2 and Fig. 3.
Qualitative analysis We show top 20 scoring false positives in Fig. 4.
2. Experimental details
Multi-scale features Inspired by the way [3] trains “FCN-8s at-once”, we scale the learning rate of predictor built on
top of each layer by a fixed constant. Specifically, we use a scaling factor of 1 for res4, 0.1 for res3, and 0.01 for res2.
One more difference between our model and [3]: instead of predicting at the original resolution, our model predicts
at the resolution of the res3 feature (downsampled by 8x compared to the input resolution).
Input sampling We first randomly re-scale the input image by 0.5X, 1X, or 2X. Then we randomly crop a 500x500
image region out of the re-scaled input. We pad with average RGB value (prior to average subtraction) when cropping
outside image boundary.
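A rough sketch of this sampling step, under stated assumptions: the nearest-neighbor resize, the crop-offset range, and the function and array names below are illustrative choices, not details taken from the paper.

```python
import numpy as np

def sample_input(img: np.ndarray, rng: np.random.Generator,
                 crop: int = 500) -> np.ndarray:
    """Randomly re-scale img by 0.5x, 1x, or 2x, then crop a fixed-size
    region, padding with the average RGB value where the crop falls
    outside the re-scaled image."""
    scale = rng.choice([0.5, 1.0, 2.0])
    h, w = img.shape[:2]
    nh, nw = max(1, int(h * scale)), max(1, int(w * scale))
    # Nearest-neighbor resize via index lookup keeps the sketch dependency-free.
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    scaled = img[ys][:, xs]
    # Canvas pre-filled with the per-image average RGB (the pad value).
    mean_rgb = img.reshape(-1, img.shape[-1]).mean(axis=0)
    canvas = np.tile(mean_rgb, (crop, crop, 1))
    # Random crop origin; the crop window may extend beyond the image.
    y0 = int(rng.integers(-(crop // 2), max(1, nh - crop // 2)))
    x0 = int(rng.integers(-(crop // 2), max(1, nw - crop // 2)))
    ya, yb = max(y0, 0), min(y0 + crop, nh)
    xa, xb = max(x0, 0), min(x0 + crop, nw)
    if ya < yb and xa < xb:
        canvas[ya - y0:yb - y0, xa - x0:xb - x0] = scaled[ya:yb, xa:xb]
    return canvas
```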
Border cases Similar to [2], we ignore gradients coming from heatmap locations whose detection windows cross the
image boundary. The only difference is that we treat padded average pixels (as described in Input sampling) as outside
the image boundary as well.
Online hard mining and balanced sampling We apply hard mining on both positive and negative examples. Our
implementation is simpler yet still effective compared to [4]. We set a small threshold (0.03) on the classification loss
to filter out easy locations. Then we sample at most 128 locations each for positives and negatives from the
remaining ones whose losses are above the threshold. We compare training with and without hard mining on validation
performance in Table 1.
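The mining step above can be sketched as follows; the flat per-location arrays and names are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def hard_mine(loss: np.ndarray, is_pos: np.ndarray,
              rng: np.random.Generator,
              thresh: float = 0.03, k: int = 128) -> np.ndarray:
    """Return the indices of heatmap locations kept for the update:
    drop easy locations (classification loss <= thresh), then sample at
    most k positives and k negatives from the remaining hard ones."""
    hard = loss > thresh
    kept = []
    for mask in (hard & is_pos, hard & ~is_pos):
        idx = np.flatnonzero(mask)
        if len(idx) > k:  # balanced sampling caps each class at k
            idx = rng.choice(idx, size=k, replace=False)
        kept.append(idx)
    return np.concatenate(kept)
```

With k = 128 this yields at most 256 locations per image, keeping the positive and negative contributions to the gradient balanced.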
Loss function Our loss function is formulated in the same way as [2]. Note that we also use Huber loss as the loss
function for bounding box regression.
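The Huber loss used for box regression has a standard closed form, sketched below; delta = 1 is a commonly assumed setting that the text does not state.

```python
import numpy as np

def huber(residual: np.ndarray, delta: float = 1.0) -> np.ndarray:
    # Quadratic for small residuals, linear beyond delta, so large
    # regression errors contribute bounded gradients.
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))
```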
Bounding box regression Our bounding box regression is formulated as [2] and trained jointly with classification
using stochastic gradient descent. We compare testing with and without regression in terms of performance
on the WIDER FACE validation set.
('1770537', 'Deva Ramanan', 'deva ramanan'){peiyunh,deva}@cs.cmu.edu
7c7b0550ec41e97fcfc635feffe2e53624471c591051-4651/14 $31.00 © 2014 IEEE
DOI 10.1109/ICPR.2014.124
7ce03597b703a3b6754d1adac5fbc98536994e8f
7c36afc9828379de97f226e131390af719dbc18dUnsupervised Face-Name Association
via Commute Distance
1Zhejiang Provincial Key Laboratory of Service Robot
College of Computer Science, Zhejiang University, Hangzhou, China
State Key Lab of CADandCG, College of Computer Science, Zhejiang University, Hangzhou, China
('4140420', 'Jiajun Bu', 'jiajun bu')
('40155478', 'Bin Xu', 'bin xu')
('2484982', 'Chenxia Wu', 'chenxia wu')
('2588203', 'Chun Chen', 'chun chen')
('1704030', 'Jianke Zhu', 'jianke zhu')
('1724421', 'Deng Cai', 'deng cai')
('3945955', 'Xiaofei He', 'xiaofei he')
{bjj,xbzju,chenxiawu,chenc,jkzhu}@zju.edu.cn
{dengcai,xiaofeihe}@cad.zju.edu.cn
7c119e6bdada2882baca232da76c35ae9b5277f8Facial Expression Recognition Using Embedded
Hidden Markov Model
Intelligence Computing Research Center
HIT Shenzhen Graduate School
Shenzhen, China
('24233679', 'Languang He', 'languang he')
('1747105', 'Xuan Wang', 'xuan wang')
('10106946', 'Chenglong Yu', 'chenglong yu')
('38700402', 'Kun Wu', 'kun wu')
{telent, wangxuan, ycl, wukun} @cs.hitsz.edu.cn
7ca7255c2e0c86e4adddbbff2ce74f36b1dc522dStereo Matching for Unconstrained Face Recognition
Ph.D. Proposal
University of Maryland
Department of Computer Science
College Park, MD
May 10, 2009
('38171682', 'Carlos D. Castillo', 'carlos d. castillo')carlos@cs.umd.edu
7c42371bae54050dbbf7ded1e7a9b4109a23a482The International Arab Journal of Information Technology, Vol. 12, No. 2, March 2015 183
Optimized Features Selection using Hybrid PSO-
GA for Multi-View Gender Classification
Foundation University Rawalpindi Campus, Pakistan
University of Central Punjab, Pakistan
University of Dammam, Saudi Arabia
4Department of Computer Science, SZABIST, Pakistan
('1723986', 'Muhammad Nazir', 'muhammad nazir')
('11616523', 'Muhammad Khan', 'muhammad khan')
7c953868cd51f596300c8231192d57c9c514ae17Detecting and Aligning Faces by Image Retrieval
Zhe Lin2
Northwestern University
2Adobe Research
2145 Sheridan Road, Evanston, IL 60208
345 Park Ave, San Jose, CA 95110
('1720987', 'Xiaohui Shen', 'xiaohui shen')
('1721019', 'Jonathan Brandt', 'jonathan brandt')
('1736695', 'Ying Wu', 'ying wu')
{xsh835, yingwu}@eecs.northwestern.edu
{zlin, jbrandt}@adobe.com
7c6dbaebfe14878f3aee400d1378d90d61373921A Novel Biometric Feature Extraction Algorithm using Two
Dimensional Fisherface in 2DPCA subspace for Face Recognition
School of Electrical, Electronic and Computer Engineering
University of Newcastle
Newcastle upon Tyne, NE1 7RU
UNITED KINGDOM
('3156162', 'R. M. MUTELO', 'r. m. mutelo')
7c9a65f18f7feb473e993077d087d4806578214eSpringerLink - Journal Article
http://www.springerlink.com/content/93hr862660nl1164/?p=abe5352...
Private emotions versus social interaction: a data-driven approach towards
analysing emotion in speech
Journal: User Modeling and User-Adapted Interaction
Publisher: Springer Netherlands
ISSN: 0924-1868 (Print) 1573-1391 (Online)
Issue: Volume 18, Numbers 1-2 / February 2008
Category: Original Paper
DOI: 10.1007/s11257-007-9039-4
Pages: 175-206
Subject Collection: Computer Science
SpringerLink Date: Friday, October 12, 2007
(1) Lehrstuhl für Mustererkennung, FAU Erlangen – Nürnberg, Martensstr. 3, 91058 Erlangen,
Germany
Received: 3 July 2006 Accepted: 14 January 2007 Published online: 12 October 2007
('1745089', 'Anton Batliner', 'anton batliner')
('1732747', 'Stefan Steidl', 'stefan steidl')
('2596771', 'Christian Hacker', 'christian hacker')
('1739326', 'Elmar Nöth', 'elmar nöth')
7c1e1c767f7911a390d49bed4f73952df8445936NON-RIGID OBJECT DETECTION WITH LOCAL INTERLEAVED SEQUENTIAL ALIGNMENT (LISA)
Non-Rigid Object Detection with Local
Interleaved Sequential Alignment (LISA)
and Tom´aˇs Svoboda, Member, IEEE
('35274952', 'Karel Zimmermann', 'karel zimmermann')
('2687885', 'David Hurych', 'david hurych')
7cf579088e0456d04b531da385002825ca6314e2Emotion Detection on TV Show Transcripts with
Sequence-based Convolutional Neural Networks
Mathematics and Computer Science
Mathematics and Computer Science
Emory University
Atlanta, GA 30322, USA
Emory University
Atlanta, GA 30322, USA
('10669356', 'Sayyed M. Zahiri', 'sayyed m. zahiri')
('4724587', 'Jinho D. Choi', 'jinho d. choi')
sayyed.zahiri@emory.edu
jinho.choi@emory.edu
7c80d91db5977649487388588c0c823080c9f4b4DocFace: Matching ID Document Photos to Selfies∗
Michigan State University
East Lansing, Michigan, USA
('9644181', 'Yichun Shi', 'yichun shi')
('1739705', 'Anil K. Jain', 'anil k. jain')
shiyichu@msu.edu, jain@cse.msu.edu
7c349932a3d083466da58ab1674129600b12b81c
7c30ea47f5ae1c5abd6981d409740544ed16ed16ROITBERG, AL-HALAH, STIEFELHAGEN: NOVELTY DETECTION FOR ACTION RECOGNITION
Informed Democracy: Voting-based Novelty
Detection for Action Recognition
Karlsruhe Institute of Technology
76131 Karlsruhe,
Germany
('33390229', 'Alina Roitberg', 'alina roitberg')
('2256981', 'Ziad Al-Halah', 'ziad al-halah')
('1742325', 'Rainer Stiefelhagen', 'rainer stiefelhagen')
alina.roitberg@kit.edu
ziad.al-halah@kit.edu
rainer.stiefelhagen@kit.edu
1648cf24c042122af2f429641ba9599a2187d605Boosting Cross-Age Face Verification via Generative Age Normalization
(cid:2) Orange Labs, 4 rue Clos Courtel, 35512 Cesson-S´evign´e, France
† Eurecom, 450 route des Chappes, 06410 Biot, France
('3116433', 'Grigory Antipov', 'grigory antipov')
('1709849', 'Jean-Luc Dugelay', 'jean-luc dugelay')
('2341854', 'Moez Baccouche', 'moez baccouche')
{grigory.antipov,moez.baccouche}@orange.com
jean-luc.dugelay@eurecom.fr
162403e189d1b8463952fa4f18a291241275c354Action Recognition with Spatio-Temporal
Visual Attention on Skeleton Image Sequences
With their strong ability to model sequential data, Recur-
rent Neural Networks (RNN) with Long Short-Term Memory
(LSTM) neurons outperform the previous hand-crafted feature
based methods [9], [10]. Each skeleton frame is converted into
a feature vector and the whole sequence is fed into the RNN.
Despite the strong ability in modeling temporal sequences,
RNN structures lack the ability to efficiently learn the spatial
relations between the joints. To better use spatial information,
a hierarchical structure is proposed in [11], [12] that feeds
the joints into the network as several pre-defined body part
groups. However, the pre-defined body regions still limit
the effectiveness of representing spatial relations. A spatio-
temporal 2D LSTM (ST-LSTM) network [13] is proposed
to learn the spatial and temporal relations simultaneously.
Furthermore, a two-stream RNN structure [14] is proposed to
learn the spatio-temporal relations with two RNN branches.
('21518096', 'Zhengyuan Yang', 'zhengyuan yang')
('3092578', 'Yuncheng Li', 'yuncheng li')
('1706007', 'Jianchao Yang', 'jianchao yang')
('33642939', 'Jiebo Luo', 'jiebo luo')
160259f98a6ec4ec3e3557de5e6ac5fa7f2e7f2bDiscriminant Multi-Label Manifold Embedding for Facial Action Unit
Detection
Signal Procesing Laboratory (LTS5), ´Ecole Polytechnique F´ed´erale de Lausanne, Switzerland
('1697965', 'Hua Gao', 'hua gao')
('1710257', 'Jean-Philippe Thiran', 'jean-philippe thiran')
anil.yuce@epfl.ch, hua.gao@epfl.ch, jean-philippe.thiran@epfl.ch
16671b2dc89367ce4ed2a9c241246a0cec9ec10e2006
Detecting the Number of Clusters
in n-Way Probabilistic Clustering
('1788526', 'Zhaoshui He', 'zhaoshui he')
('1747156', 'Andrzej Cichocki', 'andrzej cichocki')
('1795838', 'Shengli Xie', 'shengli xie')
('1775180', 'Kyuwan Choi', 'kyuwan choi')
16fdd6d842475e6fbe58fc809beabbed95f0642eLearning Temporal Embeddings for Complex Video Analysis
Stanford University, 2Simon Fraser University
('34066479', 'Vignesh Ramanathan', 'vignesh ramanathan')
('10771328', 'Greg Mori', 'greg mori')
('3216322', 'Li Fei-Fei', 'li fei-fei')
{vigneshr, kdtang}@cs.stanford.edu, mori@cs.sfu.ca, feifeili@cs.stanford.edu
16bce9f940bb01aa5ec961892cc021d4664eb9e4Mutual Component Analysis for Heterogeneous Face Recognition
Heterogeneous face recognition, also known as cross-modality face recognition or inter-modality face recogni-
tion, refers to matching two face images from alternative image modalities. Since face images from different
image modalities of the same person are associated with the same face object, there should be mutual com-
ponents that reflect those intrinsic face characteristics that are invariant to the image modalities. Motivated
by this rationale, we propose a novel approach called mutual component analysis (MCA) to infer the mu-
tual components for robust heterogeneous face recognition. In the MCA approach, a generative model is first
proposed to model the process of generating face images in different modalities, and then an Expectation
Maximization (EM) algorithm is designed to iteratively learn the model parameters. The learned generative
model is able to infer the mutual components (which we call the hidden factor, where hidden means the
factor is unreachable and invisible, and can only be inferred from observations) that are associated with
the person’s identity, thus enabling fast and effective matching for cross-modality face recognition. To en-
hance recognition performance, we propose an MCA-based multi-classifier framework using multiple local
features. Experimental results show that our new approach significantly outperforms the state-of-the-art
results on two typical application scenarios, sketch-to-photo and infrared-to-visible face recognition.
Categories and Subject Descriptors: I.5.1 [Pattern Recognition]: Models
General Terms: Design, Algorithms, Performance
Additional Key Words and Phrases: Face recognition, heterogeneous face recognition, mutual component
analysis (MCA)
ACM Reference Format:
Heterogeneous Face Recognition ACM Trans. Intell. Syst. Technol. 9, 4, Article 39 (July 2015), 22 pages.
DOI: http://dx.doi.org/10.1145/2807705
This work was supported by grants from National Natural Science Foundation of China (61103164 and
61125106), Natural Science Foundation of Guangdong Province (2014A030313688), Australian Research
Council Projects (FT-130101457 and LP-140100569), Key Laboratory of Human-Machine Intelligence-
Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Guangdong
Innovative Research Team Program (No.201001D0104648280), the Key Research Program of the Chinese
Academy of Sciences (Grant No. KGZD-EW-T03), and project MMT-8115038 of the Shun Hing Institute of
Advanced Engineering, The Chinese University of Hong Kong.
Authors' addresses: Z. Li and D. Gong, Shenzhen Institutes of Advanced Technology, Chinese Academy
of Sciences, P. R. China; e-mail: {zhifeng.li, dh.gong}@siat.ac.cn; Q. Li and D. Tao, Centre for Quantum
Computation and Intelligent Systems, Faculty of Engineering and Information Technology, University of
Technology Sydney, 81 Broadway, Ultimo, NSW 2007, Australia; e-mail: qiang.li-2@student.uts.edu.au,
dacheng.tao@uts.edu.au; X. Li, the Center for OPTical IMagery Analysis and Learning (OPTIMAL), State
Key Laboratory of Transient Optics and Photonics, Xi'an Institute of Optics and Precision Mechanics,
Chinese Academy of Sciences, Xi'an 710119, Shaanxi, China; e-mail: xuelong li@opt.ac.cn.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted
without fee provided that copies are not made or distributed for profit or commercial advantage and that
copies bear this notice and the full citation on the first page. Copyrights for components of this work owned
('1911510', 'Zhifeng Li', 'zhifeng li')
('2856494', 'Dihong Gong', 'dihong gong')
('20638185', 'Qiang Li', 'qiang li')
('1692693', 'Dacheng Tao', 'dacheng tao')
('1720243', 'Xuelong Li', 'xuelong li')
16de1324459fe8fdcdca80bba04c3c30bb789bdf
16892074764386b74b6040fe8d6946b67a246a0b
16395b40e19cbc6d5b82543039ffff2a06363845Action Recognition in Video Using Sparse Coding and Relative Features
Anal´ı Alfaro
P. Universidad Catolica de Chile
P. Universidad Catolica de Chile
P. Universidad Catolica de Chile
Santiago, Chile
Santiago, Chile
Santiago, Chile
('1797475', 'Domingo Mery', 'domingo mery')
('7263603', 'Alvaro Soto', 'alvaro soto')
ajalfaro@uc.cl
dmery@ing.puc.cl
asoto@ing.uc.cl
1677d29a108a1c0f27a6a630e74856e7bddcb70dEfficient Misalignment-Robust Representation
for Real-Time Face Recognition
The Hong Kong Polytechnic University, Hong Kong
('5828998', 'Meng Yang', 'meng yang')
('36685537', 'Lei Zhang', 'lei zhang')
('1698371', 'David Zhang', 'david zhang')
{csmyang,cslzhang}@comp.polyu.edu.hk
16b9d258547f1eccdb32111c9f45e2e4bbee79af2006 Xiyuan Ave.
Chengdu, Sichuan 611731
2006 Xiyuan Ave.
Chengdu, Sichuan 611731
University of Electronic Science and Technology of China
Johns Hopkins University
3400 N. Charles St.
Baltimore, Maryland 21218
Johns Hopkins University
3400 N. Charles St.
Baltimore, Maryland 21218
NormFace: L2 Hypersphere Embedding for Face Verification
University of Electronic Science and Technology of China
('1709439', 'Jian Cheng', 'jian cheng')
('40031188', 'Xiang Xiang', 'xiang xiang')
('1746141', 'Alan L. Yuille', 'alan l. yuille')
('39369840', 'Feng Wang', 'feng wang')
feng.w(cid:29)@gmail.com
chengjian@uestc.edu.cn
xxiang@cs.jhu.edu
alan.yuille@jhu.edu
16c884be18016cc07aec0ef7e914622a1a9fb59dUNIVERSITÉ DE GRENOBLE
Number assigned by the library
THESIS
to obtain the degree of
DOCTOR OF THE UNIVERSITÉ DE GRENOBLE
Speciality: Mathematics and Computer Science
prepared at the Laboratoire Jean Kuntzmann
within the Doctoral School of Mathematics,
Information Sciences and Technologies, Computer Science
presented and publicly defended
by
on 27 September 2010
Exploiting Multimodal Data for Image Understanding
Données multimodales pour l'analyse d'image
Thesis advisors: Cordelia Schmid and Jakob Verbeek
JURY
M. Éric Gaussier, Université Joseph Fourier, President
M. Antonio Torralba, Massachusetts Institute of Technology, Reviewer
Mme Tinne Tuytelaars, Katholieke Universiteit Leuven, Reviewer
M. Mark Everingham, University of Leeds, Examiner
Mme Cordelia Schmid, INRIA Grenoble, Examiner
M. Jakob Verbeek, INRIA Grenoble, Examiner
('2737253', 'Matthieu Guillaumin', 'matthieu guillaumin')
162dfd0d2c9f3621d600e8a3790745395ab25ebcHead Pose Estimation Based on Multivariate Label Distribution
School of Computer Science and Engineering
Southeast University, Nanjing, China
('1735299', 'Xin Geng', 'xin geng')
('40228279', 'Yu Xia', 'yu xia')
{xgeng, xiayu}@seu.edu.cn
16f940b4b5da79072d64a77692a876627092d39cA Framework for Automated Measurement of the Intensity of Non-Posed Facial
Action Units
University of Denver, Denver, CO
University of Miami, Coral Gables, FL
University of Miami, Coral Gables, FL
University of Pittsburgh, Pittsburgh, PA
Emails:
('3093835', 'Mohammad H. Mahoor', 'mohammad h. mahoor')
('2897823', 'Steven Cadavid', 'steven cadavid')
('1874236', 'Daniel S. Messinger', 'daniel s. messinger')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
mmahoor@du.edu, scadavid@umsis.miami.edu, dmessinger@miami.edu, and jeffcohn@pitt.edu
16572c545384174f8136d761d2b0866e968120a8Sequential Max-Margin Event Detectors
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, 15213. USA
('39792229', 'Dong Huang', 'dong huang')
('2583890', 'Shitong Yao', 'shitong yao')
('1734275', 'Yi Wang', 'yi wang')
('1707876', 'Fernando De la Torre', 'fernando de la torre')
16820ccfb626dcdc893cc7735784aed9f63cbb70Real-time Embedded Age and Gender Classification in Unconstrained Video
School of Electrical Engineering and Computer Science
University of Ottawa
Ottawa, ON K1N 6N5 Canada
CogniVue Corporation
Gatineau, QC, Canada
('2014654', 'Ramin Azarmehr', 'ramin azarmehr')
('1807494', 'Won-Sook Lee', 'won-sook lee')
('2551825', 'Christina Xu', 'christina xu')
('32944169', 'Daniel Laroche', 'daniel laroche')
{razar033,laganier,wslee}@uottawa.ca
{cxu,dlaroche}@cognivue.com
1630e839bc23811e340bdadad3c55b6723db361dSONG, TAN, CHEN: EXPLOITING RELATIONSHIP BETWEEN ATTRIBUTES
Exploiting Relationship between Attributes for
Improved Face Verification
Department of Computer Science and
Technology, Nanjing University of Aero
nautics and Astronautics, Nanjing 210016,
P.R. China
('3075941', 'Fengyi Song', 'fengyi song')
('2248421', 'Xiaoyang Tan', 'xiaoyang tan')
('1680768', 'Songcan Chen', 'songcan chen')
f.song@nuaa.edu.cn
x.tan@nuaa.edu.cn
s.chen@nuaa.edu.cn
164b0e2a03a5a402f66c497e6c327edf20f8827bProceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)
Sparse Deep Transfer Learning for
Convolutional Neural Network
The Chinese University of Hong Kong, Hong Kong
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
('2335888', 'Jiaming Liu', 'jiaming liu')
('47903936', 'Yali Wang', 'yali wang')
('33427555', 'Yu Qiao', 'yu qiao')
jiaming.liu@email.ucr.edu, {yl.wang, yu.qiao}@siat.ac.cn
16286fb0f14f6a7a1acc10fcd28b3ac43f12f3ebJ Nonverbal Behav
DOI 10.1007/s10919-008-0059-5
O R I G I N A L P A P E R
All Smiles are Not Created Equal: Morphology
and Timing of Smiles Perceived as Amused, Polite,
and Embarrassed/Nervous
Ó Springer Science+Business Media, LLC 2008
('2059653', 'Zara Ambadar', 'zara ambadar')
1667a77db764e03a87a3fd167d88b060ef47bb56Alternative Semantic Representations for
Zero-Shot Human Action Recognition
School of Computer Science, The University of Manchester
Manchester, M13 9PL, UK
('1729612', 'Qian Wang', 'qian wang')
('32811782', 'Ke Chen', 'ke chen')
{qian.wang,ke.chen}@manchester.ac.uk
169618b8dc9b348694a31c6e9e17b989735b4d39Unsupervised Representation Learning by Sorting Sequences
University of California, Merced
Maneesh Singh3
2Virginia Tech
3Verisk Analytics
http://vllab1.ucmerced.edu/˜hylee/OPN/
('2837591', 'Hsin-Ying Lee', 'hsin-ying lee')
('3068086', 'Jia-Bin Huang', 'jia-bin huang')
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
16e95a907b016951da7c9327927bb039534151daJOURNAL OF INFORMATION SCIENCE AND ENGINEERING 32, XXXX-XXXX (2016)
3D Face Recognition Using Spherical Vector Norms Map *
a Beijing Key Laboratory of Information Service Engineering,
Beijing Union University, 100101, China
b Computer Technology Institute, Beijing Union University, 100101, China
c Beijing Advanced Innovation Center for Imaging Technology,
Capital Normal University, 100048, China
In this paper, we introduce a novel, automatic method for 3D face recognition. A
new feature called a spherical vector norms map of a 3D face is created using the normal
vector of each point. This feature contains more detailed information than the original
depth image in regions such as the eyes and nose. For certain flat areas of a 3D face, such
as the forehead and cheeks, this map could increase the distinguishability of different
points. In addition, this feature is robust to facial expression due to an adjustment that is
made in the mouth region. Then, the facial representations, which are based on Histo-
grams of Oriented Gradients, are extracted from the spherical vector norms map and the
original depth image. A new partitioning strategy is proposed to produce the histogram
of eight patches of a given image, in which all of the pixels are binned based on the
magnitude and direction of their gradients. In this study, SVNs map and depth image are
represented compactly with two histograms of oriented gradients; this approach is com-
pleted by Linear Discriminant Analysis and a Nearest Neighbor classifier.
Keywords: spherical vector norms map, Histograms of Oriented Gradients, 3D face
recognition, Linear Discriminant Analysis, Face Recognition Grand Challenge database
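A rough sketch of the per-patch gradient binning described above; the bin count, normalization, and function name are assumptions of this illustration, not details of the authors' partitioning strategy.

```python
import numpy as np

def orientation_histogram(patch: np.ndarray, bins: int = 8) -> np.ndarray:
    """Bin every pixel of a (depth or SVN-map) patch by its gradient
    direction, weighted by gradient magnitude, as in a HOG descriptor."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)  # gradient direction in [0, 2*pi)
    idx = np.minimum((ang / (2 * np.pi) * bins).astype(int), bins - 1)
    hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
    return hist / (hist.sum() + 1e-12)      # normalize the descriptor
```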
1. INTRODUCTION
With the rapidly decreasing costs of 3D capturing devices, many researchers are in-
vestigating 3D face recognition systems, because they can overcome limitations of illu-
mination and make-up; however, they still suffer from limitations due mostly to facial
expression. We summarize a smaller subset of expression-robust methods below:
1. Deformable template-based approaches: Berretti et al. [1] proposed an approach
that describes the geometric information of a 3D face using a surface graph form, and
the relevant information among the neighboring points could be encoded into a compact
representation. 3DWWs (3D Weighted Walkthroughs) descriptors were proposed to
demonstrate the mutual spatial displacement among pairwise arcs of points of the corre-
sponding stripes. An 81.2% verification rate at a 0.1% FAR was achieved on the all vs.
all experiment. The advantage of the method is the computational complexity is low.
Kakadiaris et al. [2] mapped 3D geometry information onto a 2D regular grid using
an elastically adapted deformable model; advanced wavelet analysis was then used for
recognition, achieving good performance.
Drira et al. [3] used radial curves emanating from (already provided) nose tips and
applied elastic shape analysis of these curves to develop a Riemannian framework,
in which they analyze the shapes of full facial surfaces.
('3282147', 'Xue-Qiao Wang', 'xue-qiao wang')
('2130097', 'Jia-Zheng Yuan', 'jia-zheng yuan')
('1930238', 'Qing Li', 'qing li')
E-mail: {ldxueqiao; jiazheng; liqing10}@buu.edu.cn
166186e551b75c9b5adcc9218f0727b73f5de899Volume 4, Issue 2, February 2016
International Journal of Advance Research in
Computer Science and Management Studies
Research Article / Survey Paper / Case Study
Available online at: www.ijarcsms.com
ISSN: 2321-7782 (Online)
Automatic Age and Gender Recognition in Human Face Image
Dataset using Convolutional Neural Network System
Subhani Shaik1
Assoc. Prof & Head of the Department
Department of CSE,
Associate Professor
Department of CSE,
St.Mary’s Group of Institutions Guntur
St.Mary’s Group of Institutions Guntur
Chebrolu(V&M),Guntur(Dt),
Andhra Pradesh - India
Chebrolu(V&M),Guntur(Dt),
Andhra Pradesh - India
('39885231', 'Anto A. Micheal', 'anto a. micheal')
16d6737b50f969247339a6860da2109a8664198aConvolutional Neural Networks
for Age and Gender Classification
Stanford University
('22241470', 'Ari Ekmekji', 'ari ekmekji')aekmekji@stanford.edu
16d9b983796ffcd151bdb8e75fc7eb2e31230809EUROGRAPHICS 2018 / D. Gutierrez and A. Sheffer
(Guest Editors)
Volume 37 (2018), Number 2
GazeDirector: Fully Articulated Eye Gaze Redirection in Video
ID: paper1004
1679943d22d60639b4670eba86665371295f52c3
162c33a2ec8ece0dc96e42d5a86dc3fedcf8cd5eMygdalis, V., Iosifidis, A., Tefas, A., & Pitas, I. (2016). Large-Scale
Classification by an Approximate Least Squares One-Class Support Vector
of a meeting held 20-22 August 2015, Helsinki, Finland (Vol. 2, pp. 6-10).
Institute of Electrical and Electronics Engineers (IEEE). DOI
10.1109/Trustcom.2015.555
Peer reviewed version
Link to published version (if available):
10.1109/Trustcom.2015.555
Link to publication record in Explore Bristol Research
PDF-document
University of Bristol - Explore Bristol Research
General rights
This document is made available in accordance with publisher policies. Please cite only the published
version using the reference above. Full terms of use are available:
http://www.bristol.ac.uk/pure/about/ebr-terms
1610d2d4947c03a89c0fda506a74ba1ae2bc54c2Robust Real-Time 3D Face Tracking from RGBD Videos under Extreme Pose,
Depth, and Expression Variations
Hai X. Pham
Rutgers University, USA
('1736042', 'Vladimir Pavlovic', 'vladimir pavlovic'){hxp1,vladimir}@cs.rutgers.edu
1659a8b91c3f428f1ba6aeba69660f2c9d0a85c6Recent Developments in Social Signal Processing
Institute of Informatics - ISLA
University of Amsterdam, Amsterdam, The Netherlands
†Department of Computing
Imperial College London, London, UK
EEMCS, University of Twente Enschede, The Netherlands
University of Glasgow
Glasgow, Scotland
('1764521', 'Albert Ali Salah', 'albert ali salah')
('1694605', 'Maja Pantic', 'maja pantic')
('1719436', 'Alessandro Vinciarelli', 'alessandro vinciarelli')
Email: a.a.salah@uva.nl
Email: m.pantic@imperial.ac.uk
Email: vincia@dcs.gla.ac.uk
169076ffe5e7a2310e98087ef7da25aceb12b62d
167736556bea7fd57cfabc692ec4ae40c445f144METHODS
published: 13 January 2016
doi: 10.3389/fict.2015.00028
Improved Motion Description for
Action Classification
Inria, Centre Rennes – Bretagne Atlantique, Rennes, France
Even though the importance of explicitly integrating motion characteristics in video
descriptions has been demonstrated by several recent papers on action classification, our
current work concludes that adequately decomposing visual motion into dominant and
residual motions, i.e., camera and scene motion, significantly improves action recognition
algorithms. This holds true both for the extraction of space-time trajectories and for the
computation of descriptors. We designed a new motion descriptor – the DCS descriptor –
that captures additional information on local motion patterns enhancing results based on
differential motion scalar quantities, divergence, curl, and shear features. Finally, applying
the recent VLAD coding technique proposed in image retrieval provides a substantial
improvement for action recognition. These findings are complementary to each other
and they outperformed all previously reported results by a significant margin on three
challenging datasets: Hollywood 2, HMDB51, and Olympic Sports as reported in Jain
et al. (2013). These results were further improved by Oneata et al. (2013), Wang and
Schmid (2013), and Zhu et al. (2013) through the use of the Fisher vector encoding. We
therefore also employ Fisher vector in this paper, and we further enhance our approach by
combining trajectories from both optical flow and compensated flow. We as well provide
additional details of DCS descriptors, including visualization. For extending the evaluation
a novel dataset with 101 action classes, UCF101, was added.
Keywords: action classification, camera motion, optical flow, motion trajectories, motion descriptors
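The VLAD coding mentioned in the abstract aggregates, for each visual word of a codebook, the residuals of the local descriptors assigned to it. A minimal NumPy sketch, assuming a precomputed codebook; the function name and the signed-square-root normalisation step are standard choices, not details taken from this paper:

```python
import numpy as np

def vlad_encode(descriptors, centroids):
    """VLAD: for each codebook centroid, sum the residuals (x - c) of the
    local descriptors hard-assigned to it, concatenate, and normalise.
    descriptors: (n, d) local features; centroids: (k, d) visual codebook.
    Returns a k*d-dimensional global vector."""
    k, d = centroids.shape
    # Hard-assign each descriptor to its nearest centroid.
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    v = np.zeros((k, d))
    for i in range(k):
        members = descriptors[assign == i]
        if len(members):
            v[i] = (members - centroids[i]).sum(axis=0)
    v = v.ravel()
    # Power normalisation (signed square root) followed by global L2.
    v = np.sign(v) * np.sqrt(np.abs(v))
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```

Fisher vector encoding, also used in the paper, generalises this by aggregating first- and second-order statistics under a Gaussian mixture instead of hard assignment to k-means centroids.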
1. INTRODUCTION
The recognition of human actions in unconstrained videos remains a challenging problem in
computer vision, even though human actions often constitute the essential meaningful
content of such videos. The field receives sustained attention due to its potential applications,
such as designing video-surveillance systems, providing automatic annotation of video
archives, and improving human–computer interaction. The solutions proposed
to address these problems were inherited from techniques first designed for image search
and classification.
Successful local features were developed to describe image patches (Schmid and Mohr, 1997;
Lowe, 2004) and were translated into the 2D + t domain as spatio-temporal local descriptors
(Laptev et al., 2008; Wang et al., 2009), now including the motion cues of Wang et al. (2011).
These descriptors are often extracted at spatial–temporal interest points (Laptev and
Lindeberg, 2003; Willems et al., 2008). Furthermore, several approaches assume an
underlying temporal motion model involving
trajectories (Hervieu et al., 2008; Matikainen et al., 2009; Messing et al., 2009; Sun et al., 2009;
Brox and Malik, 2010; Wang et al., 2011; Wu et al., 2011; Gaidon et al., 2012; Wang and Schmid,
2013).
Edited by:
Jean-Marc Odobez,
Idiap Research Institute, Switzerland
Reviewed by:
Thanh Duc Ngo,
Ho Chi Minh City University of
Information Technology, Vietnam
Jean Martinet,
Lille 1 University, France
*Correspondence:
Specialty section:
This article was submitted to
Computer Image Analysis, a section
of the journal Frontiers in ICT
Received: 16 April 2015
Accepted: 22 December 2015
Published: 13 January 2016
Citation:
Jain M, Jégou H and Bouthemy P
(2016) Improved Motion Description
for Action Classification.
doi: 10.3389/fict.2015.00028
Frontiers in ICT | www.frontiersin.org
January 2016 | Volume 2 | Article 28
('40027484', 'Mihir Jain', 'mihir jain')
('1681054', 'Hervé Jégou', 'hervé jégou')
('1716733', 'Patrick Bouthemy', 'patrick bouthemy')
('40027484', 'Mihir Jain', 'mihir jain')
m.jain@uva.nl
167ea1631476e8f9332cef98cf470cb3d4847bc6Visual Search at Pinterest
1Visual Discovery, Pinterest
University of California, Berkeley
('39554931', 'Yushi Jing', 'yushi jing')
('1911082', 'Dmitry Kislyuk', 'dmitry kislyuk')
('39835325', 'Andrew Zhai', 'andrew zhai')
('2560579', 'Jiajing Xu', 'jiajing xu')
('7408951', 'Jeff Donahue', 'jeff donahue')
('2608161', 'Sarah Tavel', 'sarah tavel')
{jing, dliu, dkislyuk, andrew, jiajing, jdonahue, sarah}@pinterest.com
161eb88031f382e6a1d630cd9a1b9c4bc6b476521
Automatic Facial Expression Recognition
Using Features of Salient Facial Patches
('2680543', 'Aurobinda Routray', 'aurobinda routray')
420782499f38c1d114aabde7b8a8104c9e40a974Joint Ranking and Classification using Weak Data for Feature Extraction
Fashion Style in 128 Floats:
Department of Computer Science and Engineering
Waseda University, Tokyo, Japan
('3114470', 'Edgar Simo-Serra', 'edgar simo-serra')
('1692113', 'Hiroshi Ishikawa', 'hiroshi ishikawa')
esimo@aoni.waseda.jp
hfs@waseda.jp
4209783b0cab1f22341f0600eed4512155b1dee6Accurate and Efficient Similarity Search for Large Scale Face Recognition
BUPT
BUPT
BUPT
('49712251', 'Ce Qi', 'ce qi')
('35963823', 'Zhizhong Liu', 'zhizhong liu')
('1684263', 'Fei Su', 'fei su')
42e3dac0df30d754c7c7dab9e1bb94990034a90dPANDA: Pose Aligned Networks for Deep Attribute Modeling
2EECS, UC Berkeley
1Facebook AI Research
('40565777', 'Ning Zhang', 'ning zhang')
('2210374', 'Manohar Paluri', 'manohar paluri')
('1753210', 'Trevor Darrell', 'trevor darrell')
{mano, ranzato, lubomir}@fb.com
{nzhang, trevor}@eecs.berkeley.edu
4217473596b978f13a211cdf47b7d3f6588c785fAn Efficient Approach for Clustering Face Images
Michigan State University
Noblis
Anil Jain
Michigan State Universtiy
('40653304', 'Charles Otto', 'charles otto')
('1817623', 'Brendan Klare', 'brendan klare')
ottochar@msu.edu
Brendan.Klare@noblis.org
jain@msu.edu
4223666d1b0b1a60c74b14c2980069905088edc6A Convergent Incoherent Dictionary Learning
Algorithm for Sparse Coding
Department of Mathematics
National University of Singapore
('3183763', 'Chenglong Bao', 'chenglong bao')
('2217653', 'Yuhui Quan', 'yuhui quan')
('39689301', 'Hui Ji', 'hui ji')
42afe6d016e52c99e2c0d876052ade9c192d91e7Spontaneous vs. Posed Facial Behavior:
Automatic Analysis of Brow Actions
Imperial College London, UK
Faculty of EEMCS, University of Twente, The Netherlands
Psychology and Psychiatry, University of Pittsburgh, USA
('1795528', 'Michel F. Valstar', 'michel f. valstar')
('1694605', 'Maja Pantic', 'maja pantic')
('2059653', 'Zara Ambadar', 'zara ambadar')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
{michel.valstar,m.pantic}@imperial.ac.uk, {ambadar,jeffcohn}@pitt.edu,
42765c170c14bd58e7200b09b2e1e17911eed42b2
Feature Extraction Based on Wavelet
Moments and Moment Invariants in
Machine Vision Systems
G.A. Papakostas, D.E. Koulouriotis and V.D. Tourassis
Democritus University of Thrace
Department of Production Engineering and Management
Greece
1. Introduction
Recently, there has been increasing interest in modern machine vision systems for
industrial and commercial purposes. More and more products that make use of visual
information captured by a camera in order to perform a specific task are introduced in
the market. Such machine vision systems are used for detecting and/or recognizing a face
in an unconstrained environment for security purposes, for analysing the emotional state of
a human by processing his or her facial expressions, or for providing a vision-based interface
in the context of human–computer interaction (HCI).
In almost all modern machine vision systems there is a common processing step
called feature extraction, which deals with the appropriate representation of the visual
information. This task has two simultaneous objectives: describing the useful information
compactly by a set of numbers (features) while keeping the dimension as low as possible.
Image moments constitute an important feature extraction method (FEM) that generates
highly discriminative features, able to capture the particular characteristics of the described
pattern that distinguish it from similar or totally different objects. Their ability to fully
describe an image by encoding its contents in a compact way makes them suitable for many
engineering disciplines, such as image analysis (Sim et al., 2004), image
watermarking (Papakostas et al., 2010a), and pattern recognition (Papakostas et al., 2007,
2009a, 2010b).
Among the several moment families introduced in the past, the orthogonal moments are
the most popular and are widely used in many applications, owing to the
orthogonality property of the polynomials used as kernel
functions, which constitute an orthogonal basis. As a result, the orthogonal moments
have minimum information redundancy, meaning that different moment orders describe
different parts of the image.
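The orthogonality property can be illustrated with Legendre moments, one common orthogonal moment family (a generic sketch, not the chapter's wavelet moments; the function name is illustrative, while the normalisation follows the standard Legendre-moment definition):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_moments(image, max_order):
    """Orthogonal (Legendre) image moments L_pq up to a given order.
    The kernels P_p(x) P_q(y) form an orthogonal basis on [-1, 1]^2,
    so different orders capture non-redundant parts of the image."""
    h, w = image.shape
    x = np.linspace(-1, 1, w)
    y = np.linspace(-1, 1, h)
    # P_p evaluated on the pixel grid, one row per order.
    Px = np.vstack([legendre.legval(x, np.eye(max_order + 1)[p])
                    for p in range(max_order + 1)])
    Py = np.vstack([legendre.legval(y, np.eye(max_order + 1)[q])
                    for q in range(max_order + 1)])
    dx, dy = 2.0 / w, 2.0 / h
    M = np.zeros((max_order + 1, max_order + 1))
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            # (2p+1)(2q+1)/4 normalisation times the discrete inner product.
            M[p, q] = ((2 * p + 1) * (2 * q + 1) / 4.0) * dx * dy * (Py[q] @ image @ Px[p])
    return M
```

For a constant image, only the zeroth-order moment is non-zero: higher orders vanish because their kernels integrate to zero against a constant, which is exactly the non-redundancy described above.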
In order to use moments to classify visual objects, they have to ensure high recognition
rates for all possible object orientations. This requirement constitutes a significant
operational feature of every modern pattern recognition system, and it can be satisfied during
www.intechopen.com
429c3588ce54468090cc2cf56c9b328b549a86dc
42cc9ea3da1277b1f19dff3d8007c6cbc0bb9830Coordinated Local Metric Learning
Inria∗
('2143851', 'Shreyas Saxena', 'shreyas saxena')
('34602236', 'Jakob Verbeek', 'jakob verbeek')
42350e28d11e33641775bef4c7b41a2c3437e4fd212
Multilinear Discriminant Analysis
for Face Recognition
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('38188040', 'Dong Xu', 'dong xu')
('1706370', 'Qiang Yang', 'qiang yang')
('39089563', 'Lei Zhang', 'lei zhang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
42e155ea109eae773dadf74d713485be83fca105
4223917177405eaa6bdedca061eb28f7b440ed8eB-spline Shape from Motion & Shading: An Automatic Free-form Surface
Modeling for Face Reconstruction
School of Computer Science, Tianjin University
School of Computer Science, Tianjin University
School of Software, Tianjin University
('1919846', 'Weilong Peng', 'weilong peng')
('1683334', 'Zhiyong Feng', 'zhiyong feng')
('29962190', 'Chao Xu', 'chao xu')
wlpeng@tju.edu.cn
zyfeng@tju.edu.cn
42eda7c20db9dc0f42f72bb997dd191ed8499b10Gaze Embeddings for Zero-Shot Image Classification
Max Planck Institute for Informatics
Saarland Informatics Campus
2Amsterdam Machine Learning Lab
University of Amsterdam
('7789181', 'Nour Karessli', 'nour karessli')
('3194727', 'Andreas Bulling', 'andreas bulling')
42c9394ca1caaa36f535721fa9a64b2c8d4e0deeLabel Efficient Learning of Transferable
Representations across Domains and Tasks
Stanford University
Virginia Tech
University of California, Berkeley
('3378742', 'Zelun Luo', 'zelun luo')
('8299168', 'Yuliang Zou', 'yuliang zou')
('4742485', 'Judy Hoffman', 'judy hoffman')
zelunluo@stanford.edu
ylzou@vt.edu
jhoffman@eecs.berkeley.edu
4270460b8bc5299bd6eaf821d5685c6442ea179aInt J Comput Vis (2009) 84: 163–183
DOI 10.1007/s11263-008-0147-3
Partial Similarity of Objects, or How to Compare a Centaur
to a Horse
Received: 30 September 2007 / Accepted: 3 June 2008 / Published online: 26 July 2008
© Springer Science+Business Media, LLC 2008
('1731883', 'Alexander M. Bronstein', 'alexander m. bronstein')
('1692832', 'Ron Kimmel', 'ron kimmel')
4205cb47ba4d3c0f21840633bcd49349d1dc02c1ACTION RECOGNITION WITH GRADIENT BOUNDARY CONVOLUTIONAL NETWORK
Research Institute of Shenzhen, Wuhan University, Shenzhen, China
National Engineering Research Center for Multimedia Software, Wuhan University, Wuhan, China
Center for Research in Computer Vision, University of Central Florida, Orlando, USA
('2559431', 'Huafeng Chen', 'huafeng chen')
('1736897', 'Jun Chen', 'jun chen')
('1732874', 'Chen Chen', 'chen chen')
('37254976', 'Ruimin Hu', 'ruimin hu')
42ded74d4858bea1070dadb08b037115d9d15db5Exigent: An Automatic Avatar Generation System
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139, USA
('2852664', 'Dominic Kao', 'dominic kao')
('1709421', 'D. Fox Harrell', 'd. fox harrell')
{dkao,fox.harrell}@mit.edu
42ea8a96eea023361721f0ea34264d3d0fc49ebdParameterized Principal Component Analysis
Florida State University, USA
('2109527', 'Ajay Gupta', 'ajay gupta')
('2455529', 'Adrian Barbu', 'adrian barbu')
42f6f5454dda99d8989f9814989efd50fe807ee8Conditional generative adversarial nets for convolutional face generation
Symbolic Systems Program, Natural Language Processing Group
Stanford University
('24339276', 'Jon Gauthier', 'jon gauthier')jgauthie@stanford.edu
429d4848d03d2243cc6a1b03695406a6de1a7abdFace Recognition based on Logarithmic Fusion
International Journal of Soft Computing and Engineering (IJSCE)
ISSN: 2231-2307, Volume-2, Issue-3, July 2012
of SVD and KT
Ramachandra A C, Raja K B, Venugopal K R, L M Patnaik
42dc36550912bc40f7faa195c60ff6ffc04e7cd6Hindawi Publishing Corporation
ISRN Machine Vision
Volume 2013, Article ID 579126, 10 pages
http://dx.doi.org/10.1155/2013/579126
Research Article
Visible and Infrared Face Identification via
Sparse Representation
LITIS EA 4108-QuantIF Team, University of Rouen, 22 Boulevard Gambetta, 76183 Rouen Cedex, France
GREYC UMR CNRS 6072 ENSICAEN-Image Team, University of Caen Basse-Normandie, 6 Boulevard Mar echal Juin
14050 Caen, France
Received 4 April 2013; Accepted 27 April 2013
Academic Editors: O. Ghita, D. Hernandez, Z. Hou, M. La Cascia, and J. M. Tavares
Copyright © 2013 P. Buyssens and M. Revenu. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
We present a facial recognition technique based on sparse facial representation. A dictionary is learned from data, and patches
extracted from a face are decomposed in a sparse manner onto this dictionary. We particularly focus on the design of the dictionaries,
which play a crucial role in the final identification rates. Applied to various databases and modalities, this approach
yields promising performance. We also propose a score-fusion framework that quantifies the saliency of the classifier outputs
and merges them according to these saliencies.
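The sparse decomposition of face patches onto a learned dictionary can be sketched with orthogonal matching pursuit, one standard greedy solver for this step (a generic illustration, not the authors' exact algorithm; the function name and sparsity level are assumptions):

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Sparse decomposition of a patch x onto dictionary D (columns = atoms)
    via orthogonal matching pursuit: greedily pick the atom most correlated
    with the residual, then refit the selected atoms by least squares."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Atom with the highest absolute correlation with the residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef
```

Identification then reduces to comparing sparse codes, or the reconstruction errors obtained with per-subject sub-dictionaries, across gallery identities.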
1. Introduction
Face recognition is a topic of increasing inter-
est during the last two decades due to a vast number of pos-
sible applications: biometrics, video surveillance, advanced
HMI, or image/video indexing. Although considerable
progress has been made in this domain, especially with the
development of powerful methods (such as the Eigenfaces
or the Elastic Bunch Graph Matching methods), automatic
face recognition is not yet accurate enough in uncontrolled envi-
ronments for widespread use. Many factors can degrade the per-
formance of a facial biometric system: illumination variation
creates artificial shadows, changing locally the appearance of
the face; head pose modifies the distance between localized
features; facial expression introduces global changes; and worn
accessories, such as glasses or a scarf, may hide parts of the face.
For the particular case of illumination, a lot of work has
been done on preprocessing the images to reduce
the effect of illumination on the face. Another approach is
to use other imagery, such as infrared, which has been shown
to be a promising alternative. An infrared capture of a face is
nearly invariant to illumination changes and allows a system
to operate under all illumination conditions, including total
darkness at night.
While visual cameras measure the electromagnetic
energy in the visible spectrum (0.4–0.7 𝜇m), sensors in the
IR respond to thermal radiation in the infrared spectrum
(0.7–14.0 𝜇m). The infrared spectrum can mainly be divided
into reflected IR (Figure 1(b)) and emissive IR (Figure 1(c)).
Reflected IR contains near infrared (NIR) (0.7–0.9 𝜇m)
and short-wave infrared (SWIR) (0.9–2.4 𝜇m). The ther-
mal IR band is associated with thermal radiation emitted
by the objects. It contains the midwave infrared (MWIR)
(3.0–5.0 𝜇m) and long-wave infrared (LWIR) (8.0–14.0 𝜇m).
Although the reflected IR is by far the most studied, we use
thermal long-wave IR in this study.
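For reference, the band boundaries quoted above can be collected into a small lookup. Treating wavelengths outside the listed ranges (e.g. 2.4-3.0 um, between reflected and thermal IR) as unclassified is an assumption of this sketch:

```python
# Spectral bands and wavelength ranges (in micrometers) as listed above.
IR_BANDS = {
    "visible": (0.4, 0.7),
    "NIR":  (0.7, 0.9),   # near infrared (reflected IR)
    "SWIR": (0.9, 2.4),   # short-wave infrared (reflected IR)
    "MWIR": (3.0, 5.0),   # midwave infrared (thermal/emissive IR)
    "LWIR": (8.0, 14.0),  # long-wave infrared (thermal/emissive IR)
}

def band_of(wavelength_um):
    """Return the band containing the given wavelength, or None for the
    gaps between the listed ranges (e.g. 2.4-3.0 um and 5.0-8.0 um)."""
    for name, (lo, hi) in IR_BANDS.items():
        if lo <= wavelength_um <= hi:
            return name
    return None
```

The study's thermal long-wave imagery corresponds to the 8.0-14.0 um LWIR entry.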
Despite the advantages of the infrared modality, infrared im-
agery has its own limitations. Since a face captured in this
modality renders its thermal patterns, a temperature screen
placed in front of the face will totally occlude it. This phe-
nomenon appears when a subject simply wears glasses: in this
case, the captured face has two black holes, corresponding to
the glasses, which is far more inconvenient than in the visible
('2825139', 'Pierre Buyssens', 'pierre buyssens')Correspondence should be addressed to Pierre Buyssens; pierre.buyssens@gmail.com
424259e9e917c037208125ccc1a02f8276afb667
42ecfc3221c2e1377e6ff849afb705ecd056b6ffPose Invariant Face Recognition under Arbitrary
Unknown Lighting using Spherical Harmonics
Department of Computer Science,
SUNY at Stony Brook, NY, 11790
('38323599', 'Lei Zhang', 'lei zhang')
('1686020', 'Dimitris Samaras', 'dimitris samaras')
{lzhang, samaras}@cs.sunysb.edu
421955c6d2f7a5ffafaf154a329a525e21bbd6d3570
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 22, NO. 6,
JUNE 2000
Evolutionary Pursuit and Its
Application to Face Recognition
('39664966', 'Chengjun Liu', 'chengjun liu')
('1781577', 'Harry Wechsler', 'harry wechsler')
42e0127a3fd6a96048e0bc7aab6d0ae88ba00fb0
42df75080e14d32332b39ee5d91e83da8a914e344280
Illumination Compensation Using Oriented
Local Histogram Equalization and
Its Application to Face Recognition
('1822733', 'Ping-Han Lee', 'ping-han lee')
('2250469', 'Szu-Wei Wu', 'szu-wei wu')
('1732064', 'Yi-Ping Hung', 'yi-ping hung')
4276eb27e2e4fc3e0ceb769eca75e3c73b7f2e99Face Recognition From Video
1Siemens Corporate Research
College Road East, Princeton, NJ
2Center for Automation Research (CfAR) and
Department of Electrical and Computer Engineering
University of Maryland, College Park, MD
I. INTRODUCTION
While face recognition (FR) from a single still image has been studied extensively [13], [57], FR based on a
video sequence is an emerging topic, as evidenced by the growing body of literature. It is predictable that, with
the ubiquity of video sequences, FR based on video will become more and more popular. In this chapter,
we also address FR based on a group of still images (also referred to as multiple still images). Multiple still images
are not necessarily from a video sequence; they can come from multiple independent still captures.
It is obvious that multiple still images or a video sequence can be regarded, in a degenerate
manner, as a single still image. More specifically, suppose that we have a group of face images {y1, . . . , yT} and a single-still-image-based
FR algorithm A (the base algorithm); we can then construct a recognition algorithm based on multiple still images
or a video sequence by fusing multiple base algorithms, denoted Ai, each taking a different single image
yi as input. The fusion rule can be additive, multiplicative, and so on.
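A fused recognizer of this kind can be sketched directly under the stated additive and multiplicative rules (a minimal sketch; the score-matrix layout and function names are assumptions, not the chapter's notation):

```python
import numpy as np

def fuse_scores(per_frame_scores, rule="sum"):
    """Combine the per-frame similarity scores produced by a base
    still-image FR algorithm on {y1, ..., yT} into one score per gallery
    identity, with an additive or multiplicative rule.
    per_frame_scores: (T, n_gallery) array, one row per frame yi."""
    s = np.asarray(per_frame_scores, dtype=float)
    if rule == "sum":        # additive fusion
        return s.sum(axis=0)
    if rule == "product":    # multiplicative fusion (independent likelihoods)
        return s.prod(axis=0)
    raise ValueError(f"unknown rule: {rule}")

def identify(per_frame_scores, rule="sum"):
    """Index of the gallery identity with the best fused score."""
    return int(np.argmax(fuse_scores(per_frame_scores, rule)))
```

As the text notes, the quality of such a fusion is bounded by the base algorithm that produces each row of scores, since the rule itself uses no cross-frame information.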
Even though such a fusion algorithm might work well in practice, the overall recognition performance clearly
depends solely on the base algorithm, and hence designing the base algorithm A (or the similarity function k) is of ultimate
importance. However, the fused algorithms neglect additional properties manifested in multiple still images or video
sequences. Generally speaking, algorithms that judiciously exploit these properties will perform better in terms of
recognition accuracy, computational efficiency, etc.
There are three additional properties available from multiple still images and/or video sequences:
- [P 1: Set of observations]. This property is directly exploited by the fused algorithms. One main disadvantage
may be the ad hoc nature of the combination rule. However, theoretical analysis based on a set of observations
can be performed. For example, a set of observations can be summarized using quantities like matrix, probability
density function, manifold, etc. Hence, corresponding knowledge can be utilized to match two sets.
- [P 2: Temporal continuity/Dynamics]. Successive frames in the video sequences are continuous in the
temporal dimension. Such continuity, coming from facial expression, geometric continuity related to head
July 14, 2008
DRAFT
('1682187', 'Shaohua Kevin Zhou', 'shaohua kevin zhou')
('9215658', 'Rama Chellappa', 'rama chellappa')
('1867477', 'Gaurav Aggarwal', 'gaurav aggarwal')
Email: shaohua.zhou@siemens.com, rama@cfar.umd.edu, gaurav@cs.umd.edu
89945b7cd614310ebae05b8deed0533a9998d212Divide-and-Conquer Method for L1 Norm Matrix
Factorization in the Presence of Outliers and
Missing Data
('1803714', 'Deyu Meng', 'deyu meng')
89de30a75d3258816c2d4d5a733d2bef894b66b9
89002a64e96a82486220b1d5c3f060654b24ef2aPIEFA: Personalized Incremental and Ensemble Face Alignment
Yang Yu⋆
Rutgers University
Piscataway, NJ, 08854
The University of North Carolina at Charlotte
Charlotte, NC, 28223
('4340744', 'Xi Peng', 'xi peng')
('1753384', 'Shaoting Zhang', 'shaoting zhang')
('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')
xpeng.nb,yyu,dnm@cs.rutgers.edu
szhang16@uncc.edu
89c84628b6f63554eec13830851a5d03d740261aImage Enhancement and Automated Target Recognition
Techniques for Underwater Electro-Optic Imagery
Metron, Inc
11911 Freedom Dr., Suite 800
Reston, VA 20190
Contract Number N00014-07-C-0351
http:www.metsci.com
LONG TERM GOALS
The long-term goal of this project is to provide a flexible, accurate and extensible automated target
recognition (ATR) system for use with a variety of imaging and non-imaging sensors. Such an ATR
system, once it achieves a high level of performance, can relieve human operators from the tedious
business of poring over vast quantities of mostly mundane data, calling the operator in only when the
computer's assessment involves an unacceptable level of ambiguity. The ATR system will provide
leading-edge algorithms for detection, segmentation, and classification while incorporating many novel
algorithms that we are developing at Metron. To address one of the most critical challenges in ATR
technology, the system will also provide powerful feature extraction routines designed for specific
applications of current interest.
OBJECTIVES
The main objective of this project is to develop a complete, flexible, and extensible modular automated
target recognition (MATR) system for computer aided detection and classification (CAD/CAC) of
target objects from within cluttered and possibly noisy image data. The MATR system framework is
designed to be applicable to a wide range of situations, each with its own challenges, and so is
organized in such a way that the constituent algorithms are interchangeable and can be selected based
on their individual suitability to the particular task within the specific application. The ATR system
designer can select combinations of algorithms, many of which are being developed at Metron, to
produce a variety of systems, each tailored to specific needs. While the development of the system is
still ongoing, results for mine countermeasures (MCM) applications using electro-optical (EO) image
data have been encouraging. A brief description of the system framework, some of the novel
algorithms, and preliminary test results are provided in this interim report.
APPROACH
The MATR system is composed of several modules, as depicted in Figure 1, reflecting the sequence of
steps in the ATR process. The detection step is concerned with finding portions of an image that
contain possible objects of interest, or targets, that merit further attention. During the localization and
segmentation phase the position and approximate size and shape of the object is estimated and a
portion of the image, or “snippet,” containing the object is extracted. At this stage, image processing
may be performed on the snippet to reorient the target, mitigate noise, accentuate edge detail, etc.
('2395986', 'Thomas Giddings', 'thomas giddings')
('2386585', 'Cetin Savkli', 'cetin savkli')
('2632462', 'Joseph Shirron', 'joseph shirron')
phone: (703) 437-2428 fax: (703) 787-3518 email: giddings@metsci.com
89c51f73ec5ebd1c2a9000123deaf628acf3cdd8American Journal of Applied Sciences 5 (5): 574-580, 2008
ISSN 1546-9239
© 2008 Science Publications
Face Recognition Based on Nonlinear Feature Approach
1Eimad E.A. Abusham, 1Andrew T.B. Jin, 1Wong E. Kiong and 2G. Debashis
1Faculty of Information Science and Technology,
Faculty of Engineering and Technology, Multimedia University (Melaka Campus
Jalan Ayer Keroh Lama, 75450 Bukit Beruang, Melaka, Malaysia
89c73b1e7c9b5e126a26ed5b7caccd7cd30ab199Application of an Improved Mean Shift Algorithm
in Real-time Facial Expression Recognition
School of Computer and Communication, Hunan University of Technology, Hunan, Zhuzhou, 412008 china
School of Electrical and Information Engineering, Hunan University of Technology, Hunan, Zhuzhou, 412008 china
School of Computer and Communication, Hunan University of Technology, Hunan, Zhuzhou, 412008 china
Yan-hui ZHU
School of Computer and Communication, Hunan University of Technology, Hunan, Zhuzhou, 412008 china
facial
real-time
expression
('1719090', 'Zhao-yi Peng', 'zhao-yi peng')
('1696179', 'Yu Zhou', 'yu zhou')
('2276926', 'Zhi-qiang Wen', 'zhi-qiang wen')
Email:pengzhaoyi@163.com
Email:zypzy@163.com
Email: swayhzhu@163.com
Email: zhqwen20001@163.com
89e7d23e0c6a1d636f2da68aaef58efee36b718bLucas-Kanade Scale Invariant Feature Transform for
Uncontrolled Viewpoint Face Recognition
1Division of Computer Science and Engineering,
2Center for Advanced Image and Information Technology
Chonbuk National University, Jeonju 561-756, Korea
('2642847', 'Yongbin Gao', 'yongbin gao')
('4292934', 'Hyo Jong Lee', 'hyo jong lee')
893239f17dc2d17183410d8a98b0440d98fa2679UvA-DARE (Digital Academic Repository)
Expression-Invariant Age Estimation
Published in:
Proceedings of the British Machine Vision Conference 2014
DOI:
10.5244/C.28.14
Link to publication
Citation for published version (APA):
French, & T. Pridmore (Eds.), Proceedings of the British Machine Vision Conference 2014 (pp. 14.1-14.11).
BMVA Press. DOI: 10.5244/C.28.14
General rights
It is not permitted to download or to forward/distribute the text or part of it without the consent of the author(s) and/or copyright holder(s),
other than for strictly personal, individual use, unless the work is under an open content license (like Creative Commons).
Disclaimer/Complaints regulations
If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating
your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask
the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam
The Netherlands. You will be contacted as soon as possible.
Download date: 04 Aug 2017
UvA-DARE is a service provided by the library of the University of Amsterdam (http://dare.uva.nl
('49776777', 'Alvarez Lopez', 'alvarez lopez')
89f4bcbfeb29966ab969682eae235066a89fc151A Comparison of Photometric Normalisation Algorithms for Face Verification
Centre for Vision, Speech and Signal Processing
University of Surrey
Guildford, Surrey, GU2 7XH, UK
('39213687', 'James Short', 'james short')
('1748684', 'Josef Kittler', 'josef kittler')
('2173900', 'Kieron Messer', 'kieron messer')
(cid:0)j.short,j.kittler,k.messer(cid:1)@eim.surrey.ac.uk
892c911ca68f5b4bad59cde7eeb6c738ec6c4586RESEARCH ARTICLE
The Ryerson Audio-Visual Database of
Emotional Speech and Song (RAVDESS): A
dynamic, multimodal set of facial and vocal
expressions in North American English
Ryerson University, Toronto, Canada
Information Systems, University of Wisconsin-River Falls, Wisconsin, WI, United States of America
('2940438', 'Frank A. Russo', 'frank a. russo')* steven.livingstone@uwrf.edu
8913a5b7ed91c5f6dec95349fbc6919deee4fc75BigBIRD: A Large-Scale 3D Database of Object Instances ('37248999', 'Arjun Singh', 'arjun singh')
('1905626', 'James Sha', 'james sha')
('39537097', 'Karthik S. Narayan', 'karthik s. narayan')
('2461427', 'Tudor Achim', 'tudor achim')
('1689992', 'Pieter Abbeel', 'pieter abbeel')
8986585975c0090e9ad97bec2ba6c4b437419daeUnsupervised Hard Example Mining from
Videos for Improved Object Detection
College of Information and Computer Sciences, University of Massachusetts, Amherst
{souyoungjin,arunirc,hzjiang,ashishsingh,
('24525313', 'SouYoung Jin', 'souyoung jin')
('2895705', 'Aruni RoyChowdhury', 'aruni roychowdhury')
('40175280', 'Huaizu Jiang', 'huaizu jiang')
('1785936', 'Ashish Singh', 'ashish singh')
('39087749', 'Aditya Prasad', 'aditya prasad')
('32315404', 'Deep Chakraborty', 'deep chakraborty')
aprasad,dchakraborty,elm}@cs.umass.edu
89cabb60aa369486a1ebe586dbe09e3557615ef8Bayesian Networks as Generative
Models for Face Recognition
IDIAP RESEARCH INSTITUTE
ÉCOLE POLYTECHNIQUE FÉDÉRALE DE LAUSANNE
supervised by:
Dr. S. Marcel
Prof. H. Bourlard
2009
('16602458', 'Guillaume Heusch', 'guillaume heusch')
89d3a57f663976a9ac5e9cdad01267c1fc1a7e06Neural Class-Specific Regression for face
verification
('38813382', 'Guanqun Cao', 'guanqun cao')
('9219875', 'Moncef Gabbouj', 'moncef gabbouj')
8983485996d5d9d162e70d66399047c5d01ac451Deep Feature-based Face Detection on Mobile Devices
Center for Automation Research, University of Maryland, College Park, MD
Rutgers University, Piscataway, NJ
('40599829', 'Sayantan Sarkar', 'sayantan sarkar')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
('9215658', 'Rama Chellappa', 'rama chellappa')
{ssarkar2, rama}@umiacs.umd.edu
vishal.m.patel@rutgers.edu
89bc311df99ad0127383a9149d1684dfd8a5aa34Towards ontology driven learning of
visual concept detectors
Dextro Robotics, Inc. 101 Avenue of the Americas, New York, USA
('3407640', 'Sanchit Arora', 'sanchit arora')
('21781318', 'Chuck Cho', 'chuck cho')
('1810102', 'Paul Fitzpatrick', 'paul fitzpatrick')
8981be3a69cd522b4e57e9914bf19f034d4b530cFast Automatic Video Retrieval using Web Images
Center For Automation Research, University of Maryland, College Park
('2257769', 'Xintong Han', 'xintong han')
('47679939', 'Bharat Singh', 'bharat singh')
('2852035', 'Vlad I. Morariu', 'vlad i. morariu')
('1693428', 'Larry S. Davis', 'larry s. davis')
{xintong,bharat,morariu,lsd}@umiacs.umd.edu
898a66979c7e8b53a10fd58ac51fbfdb6e6e6e7cDynamic vs. Static Recognition of Facial
Expressions
No Author Given
No Institute Given
89d7cc9bbcd2fdc4f4434d153ecb83764242227b(IJERA) ISSN: 2248-9622 www.ijera.com
Vol. 3, Issue 2, March -April 2013, pp.351-355
Face-Name Graph Matching For The Personalities In Movie
Screen
VelTech HighTech Dr. Rangarajan Dr.Sakunthala Engineering College
Final Year Student, M.Tech IT, Vel Tech Dr. RR and Dr. SR Technical University, Chennai
896f4d87257abd0f628c1ffbbfdac38c86a56f50Action and Gesture Temporal Spotting with
Super Vector Representation
Southwest Jiaotong University, Chengdu, China
The Chinese University of Hong Kong
Shenzhen Key Lab of CVPR, Shenzhen Institutes of Advanced Technology, CAS
('1766837', 'Xiaojiang Peng', 'xiaojiang peng')
('33345248', 'Limin Wang', 'limin wang')
('2985266', 'Zhuowei Cai', 'zhuowei cai')
('33427555', 'Yu Qiao', 'yu qiao')
891b10c4b3b92ca30c9b93170ec9abd71f6099c4Facial landmark detection using structured output deep
neural networks
Soufiane Belharbi∗1, Clément Chatelain∗1, Romain Hérault∗1, and Sébastien
1LITIS EA 4108, INSA de Rouen, Saint-Étienne-du-Rouvray 76800, France
2LITIS EA 4108, UFR des Sciences, Université de Rouen, France.
September 24, 2015
('49529671', 'Adam', 'adam')
451b6409565a5ad18ea49b063561a2645fa4281bAction Sets: Weakly Supervised Action Segmentation without Ordering
Constraints
University of Bonn, Germany
('32774629', 'Alexander Richard', 'alexander richard')
('51267303', 'Hilde Kuehne', 'hilde kuehne')
('2946643', 'Juergen Gall', 'juergen gall')
{richard,kuehne,gall}@iai.uni-bonn.de
45c340c8e79077a5340387cfff8ed7615efa20fd
455204fa201e9936b42756d362f62700597874c4A REGION BASED METHODOLOGY FOR FACIAL
EXPRESSION RECOGNITION
Medical School, University of Ioannina, Ioannina, Greece
Unit of Medical Technology and Intelligent Information Systems, Dept. of Computer Science
University of Ioannina, Ioannina, Greece
Keywords:
Facial expression recognition, Gabor filters, filter bank, artificial neural networks, Japanese Female Facial
Expression Database (JAFFE).
('2059518', 'Anastasios C. Koutlas', 'anastasios c. koutlas')
('1692818', 'Dimitrios I. Fotiadis', 'dimitrios i. fotiadis')
me01697@cc.uoi.gr
fotiadis@cs.uoi.gr
4541c9b4b7e6f7a232bdd62ae653ba5ec0f8bbf6The role of structural facial asymmetry in asymmetry of
peak facial expressions
Karen L. Schmidt
University of Pittsburgh, PA, USA
Carnegie Mellon University, Pittsburgh, PA, USA
Jeffrey F. Cohn
University of Pittsburgh, PA, USA
Asymmetric facial expression is generally attributed to asymmetry in movement,
but structural asymmetry in the face may also affect asymmetry of expression.
Asymmetry in posed expressions was measured using image-based approaches in
digitised sequences of facial expression in 55 individuals, N = 16 men, N = 39
women. Structural asymmetry (at neutral expression) was higher in men than
women and accounted for .54, .62, and .66 of the variance in asymmetry at peak
expression for joy, anger, and disgust expressions, respectively. Movement
asymmetry (measured by change in pixel values over time) was found, but was
unrelated to peak asymmetry in joy or anger expressions over the whole face and in
facial subregions relevant to the expression. Movement asymmetry was negatively
related to peak asymmetry in disgust expressions. Sidedness of movement
asymmetry (defined as the ratio of summed movement on the left to movement
on the right) was consistent across emotions within individuals. Sidedness was
found only for joy expressions, which had significantly more movement on the left.
The significant role of structural asymmetry in asymmetry of emotion expression
and the exploration of facial expression asymmetry have important implications for
evolutionary interpretations of facial signalling and facial expressions in general.
Address correspondence to: Karen L. Schmidt, University of
This study is part of a larger programme of research that is ongoing in the Department of
Psychiatry at the University of Pittsburgh
Science and the Robotics Institute at Carnegie Mellon University. This study was supported in part
by grants from the National Institute of Mental Health (MH 15279, MH067976 (K. Schmidt),
and MH51435 (J. Cohn)). Additional support for this project was received from the Office of Naval
Research (HID 29-203). The authors acknowledge the contribution of Rebecca McNutt to this
article. A preliminary version of these results was presented at the Tenth Annual Conference: Facial
Measurement and Meaning in Rimini, Italy, September 2003.
# 2006 Psychology Press, an imprint of the Taylor & Francis Group, an informa business
DOI: 10.1080/13576500600832758
Pittsburgh, 121 University Place, Pittsburgh PA 15217, USA. E-mail: kschmidt@pitt.edu
('1689241', 'Yanxi Liu', 'yanxi liu')
4552f4d46a2cc67ccc4dd8568e5c95aa2eedb4ecDisentangling Features in 3D Face Shapes
for Joint Face Reconstruction and Recognition∗
College of Computer Science, Sichuan University
Michigan State University
('1734409', 'Feng Liu', 'feng liu')
('1778454', 'Ronghang Zhu', 'ronghang zhu')
('39422721', 'Dan Zeng', 'dan zeng')
('7345195', 'Qijun Zhao', 'qijun zhao')
('38284381', 'Xiaoming Liu', 'xiaoming liu')
459960be65dd04317dd325af5b7cbb883d822ee4The Meme Quiz: A Facial Expression Game Combining
Human Agency and Machine Involvement
Department of Computer Science and Engineering
University of Washington
('3059933', 'Kathleen Tuite', 'kathleen tuite')
{ktuite,kemelmi}@cs.washington.edu
45f858f9e8d7713f60f52618e54089ba68dfcd6dWhat Actions are Needed for Understanding Human Actions in Videos?
Carnegie Mellon University
github.com/gsig/actions-for-actions
('34280810', 'Gunnar A. Sigurdsson', 'gunnar a. sigurdsson')
45e7ddd5248977ba8ec61be111db912a4387d62fCHEN ET AL.: ADVERSARIAL POSENET
Adversarial Learning of Structure-Aware Fully
Convolutional Networks for Landmark
Localization
('50579509', 'Yu Chen', 'yu chen')
('1780381', 'Chunhua Shen', 'chunhua shen')
('2126047', 'Xiu-Shen Wei', 'xiu-shen wei')
('2161037', 'Lingqiao Liu', 'lingqiao liu')
('49499405', 'Jian Yang', 'jian yang')
45215e330a4251801877070c85c81f42c2da60fbDomain Adaptive Dictionary Learning
Center for Automation Research, UMIACS, University of Maryland, College Park
Arts Media and Engineering, Arizona State University
('2077648', 'Qiang Qiu', 'qiang qiu')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
('9215658', 'Rama Chellappa', 'rama chellappa')
qiu@cs.umd.edu, {pvishalm, rama}@umiacs.umd.edu, pturaga@asu.edu
457cf73263d80a1a1338dc750ce9a50313745d1dPublished as a conference paper at ICLR 2017
DECOMPOSING MOTION AND CONTENT FOR
NATURAL VIDEO SEQUENCE PREDICTION
University of Michigan, Ann Arbor, USA
2Adobe Research, San Jose, CA 95110
3POSTECH, Pohang, Korea
Beihang University, Beijing, China
5Google Brain, Mountain View, CA 94043
('2241528', 'Seunghoon Hong', 'seunghoon hong')
('10668384', 'Xunyu Lin', 'xunyu lin')
('1697141', 'Honglak Lee', 'honglak lee')
('1768964', 'Jimei Yang', 'jimei yang')
('1711926', 'Ruben Villegas', 'ruben villegas')
4526992d4de4da2c5fae7a5ceaad6b65441adf9dSystem for Medical Mask Detection
in the Operating Room Through
Facial Attributes
A. Nieto-Rodríguez, M. Mucientes(B), and V.M. Brea
Center for Research in Information Technologies (CiTIUS),
University of Santiago de Compostela, Santiago de Compostela, Spain
{adrian.nietorodriguez,manuel.mucientes,victor.brea}@usc.es
45e616093a92e5f1e61a7c6037d5f637aa8964afFine-grained Evaluation on Face Detection in the Wild
Center for Biometrics and Security Research & National Laboratory of Pattern Recognition
Institute of Automation, Chinese Academy of Sciences, China
('1716231', 'Bin Yang', 'bin yang')
('1721677', 'Junjie Yan', 'junjie yan')
('1718623', 'Zhen Lei', 'zhen lei')
('34679741', 'Stan Z. Li', 'stan z. li')
{yb.derek,yanjjie}@gmail.com
{zlei,szli}@nlpr.ia.ac.cn
45efd6c2dd4ca19eed38ceeb7c2c5568231451e1Comparative Analysis of Statistical Approach
for Face Recognition
CMR Institute of Technology, Hyderabad, India
('39463904', 'M.Janga Reddy', 'm.janga reddy')
45f3bf505f1ce9cc600c867b1fb2aa5edd5feed8
4560491820e0ee49736aea9b81d57c3939a69e12Investigating the Impact of Data Volume and
Domain Similarity on Transfer Learning
Applications
State Farm Insurance, Bloomington IL 61710, USA,
('30492517', 'Michael Bernico', 'michael bernico')
('50024782', 'Yuntao Li', 'yuntao li')
('41092475', 'Dingchao Zhang', 'dingchao zhang')
michael.bernico.qepz@statefarm.com
4571626d4d71c0d11928eb99a3c8b10955a74afeGeometry Guided Adversarial Facial Expression Synthesis
1National Laboratory of Pattern Recognition, CASIA
2Center for Research on Intelligent Perception and Computing, CASIA
3Center for Excellence in Brain Science and Intelligence Technology, CAS
('3051419', 'Lingxiao Song', 'lingxiao song')
('9702077', 'Zhihe Lu', 'zhihe lu')
('1705643', 'Ran He', 'ran he')
('1757186', 'Zhenan Sun', 'zhenan sun')
('1688870', 'Tieniu Tan', 'tieniu tan')
4534d78f8beb8aad409f7bfcd857ec7f19247715Under review as a conference paper at ICLR 2017
TRANSFORMATION-BASED MODELS OF VIDEO
SEQUENCES
Facebook AI Research
('39248118', 'Anitha Kannan', 'anitha kannan')
('3149531', 'Arthur Szlam', 'arthur szlam')
('1687325', 'Du Tran', 'du tran')
joost@joo.st, {akannan, ranzato, aszlam, trandu, soumith}@fb.com
459e840ec58ef5ffcee60f49a94424eb503e8982One-shot Face Recognition by Promoting Underrepresented Classes
Microsoft
One Microsoft Way, Redmond, Washington, United States
('3133575', 'Yandong Guo', 'yandong guo')
('1684635', 'Lei Zhang', 'lei zhang')
{yandong.guo, leizhang}@microsoft.com
45fbeed124a8956477dbfc862c758a2ee2681278
451c42da244edcb1088e3c09d0f14c064ed9077e1964
© EURASIP, 2011 - ISSN 2076-1465
19th European Signal Processing Conference (EUSIPCO 2011)
INTRODUCTION
4568063b7efb66801e67856b3f572069e774ad33Correspondence Driven Adaptation for Human Profile Recognition
NEC Laboratories America, Inc
2Huawei Technologies (USA)
Cupertino, CA 95014
Santa Clara, CA 95050
('2909406', 'Ming Yang', 'ming yang')
('1682028', 'Shenghuo Zhu', 'shenghuo zhu')
('39157653', 'Fengjun Lv', 'fengjun lv')
('38701713', 'Kai Yu', 'kai yu')
{myang,zsh,kyu}@sv.nec-labs.com
felix.Lv@huawei.com
45c31cde87258414f33412b3b12fc5bec7cb3ba9Coding Facial Expressions with Gabor Wavelets
ATR Human Information Processing Research Laboratory
2-2 Hikaridai, Seika-cho
Soraku-gun, Kyoto 619-02, Japan
Kyushu University
('34801422', 'Shigeru Akamatsu', 'shigeru akamatsu')
('40533190', 'Miyuki Kamachi', 'miyuki kamachi')
('8365437', 'Jiro Gyoba', 'jiro gyoba')
mlyons@hip.atr.co.jp
4542273a157bfd4740645a6129d1784d1df775d2FaceRipper
Automatic Face Indexer and Tagger for Personal
Albums and Videos
A PROJECT REPORT
SUBMITTED IN PARTIAL FULFILMENT OF THE
REQUIREMENTS FOR THE DEGREE OF
Master of Engineering
IN
COMPUTER SCIENCE AND ENGINEERING
by
Computer Science and Automation
Indian Institute of Science
BANGALORE – 560 012
July 2007
('2819449', 'Mehul Parsana', 'mehul parsana')
4511e09ee26044cb46073a8c2f6e1e0fbabe33e8
45513d0f2f5c0dac5b61f9ff76c7e46cce62f402LEE,GRAUMAN:FACEDISCOVERYWITHSOCIALCONTEXT
Face Discovery with Social Context
https://webspace.utexas.edu/yl3663/~ylee/
http://www.cs.utexas.edu/~grauman/
University of Texas at Austin
Austin, TX, USA
('1883898', 'Yong Jae Lee', 'yong jae lee')
('1794409', 'Kristen Grauman', 'kristen grauman')
45e459462a80af03e1bb51a178648c10c4250925LCrowdV: Generating Labeled Videos for
Simulation-based Crowd Behavior Learning
The University of North Carolina at Chapel Hill
('3422427', 'Ernest Cheung', 'ernest cheung')
('3422442', 'Tsan Kwong Wong', 'tsan kwong wong')
('2718563', 'Aniket Bera', 'aniket bera')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('1699159', 'Dinesh Manocha', 'dinesh manocha')
458677de7910a5455283a2be99f776a834449f61Face Image Retrieval Using Facial Attributes By
K-Means
[1] I. Sudha, [2] V. Saradha, [3] M. Tamilselvi, [4] D. Vennila
[1] AP, Department of CSE, [2][3][4] B.Tech (CSE)
Achariya college of Engineering Technology
Puducherry
45a6333fc701d14aab19f9e2efd59fe7b0e89fecHAND POSTURE DATASET CREATION FOR GESTURE
RECOGNITION
Luis Anton-Canalis
Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria
Campus Universitario de Tafira, 35017 Gran Canaria, Spain
Elena Sanchez-Nielsen
Departamento de E.I.O. y Computacion
38271 Universidad de La Laguna, Spain
Keywords:
Image understanding, Gesture recognition, Hand dataset.
450c6a57f19f5aa45626bb08d7d5d6acdb863b4bTowards Interpretable Face Recognition
Michigan State University
2 Adobe Inc.
3 Aibee
('32032812', 'Bangjie Yin', 'bangjie yin')
('1849929', 'Luan Tran', 'luan tran')
('3131569', 'Haoxiang Li', 'haoxiang li')
('1720987', 'Xiaohui Shen', 'xiaohui shen')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
{yinbangj, tranluan, liuxm}@msu.edu, xshen@adobe.com, lhxustcer@gmail.com
1f9b2f70c24a567207752989c5bd4907442a9d0fDeep Representations to Model User ‘Likes’
School of Computer Engineering, Nanyang Technological University, Singapore
Institute for Infocomm Research, Singapore
QCIS, University of Technology, Sydney
('2731733', 'Sharath Chandra Guntuku', 'sharath chandra guntuku')
('10638646', 'Joey Tianyi Zhou', 'joey tianyi zhou')
('1872875', 'Sujoy Roy', 'sujoy roy')
('1807998', 'Ivor W. Tsang', 'ivor w. tsang')
sharathc001@e.ntu.edu.sg, tzhou1@ntu.edu.sg, wslin@ntu.edu.sg
sujoy@i2r.a-star.edu.sg
ivor.tsang@uts.edu.au
1fe1bd6b760e3059fff73d53a57ce3a6079adea1SINGH ET AL.: SCALING BAG-OF-VISUAL-WORDS GENERATION
Fast-BoW: Scaling Bag-of-Visual-Words
Generation
Visual Learning & Intelligence Group
Department of Computer Science and
Engineering
Indian Institute of Technology
Hyderabad
Kandi, Sangareddy, Telangana, India
('40624178', 'Dinesh Singh', 'dinesh singh')
('51292354', 'Abhijeet Bhure', 'abhijeet bhure')
('51305895', 'Sumit Mamtani', 'sumit mamtani')
('34358756', 'C. Krishna Mohan', 'c. krishna mohan')
cs14resch11003@iith.ac.in
cs15btech11001@iith.ac.in
cs15btech11022@iith.ac.in
ckm@iith.ac.in
1f05473c587e2a3b587f51eb808695a1c10bc153Towards Good Practices for Very Deep Two-Stream ConvNets
The Chinese University of Hong Kong, Hong Kong
Shenzhen key lab of Comp. Vis. and Pat. Rec., Shenzhen Institutes of Advanced Technology, CAS, China
('33345248', 'Limin Wang', 'limin wang')
('3331521', 'Yuanjun Xiong', 'yuanjun xiong')
('1915826', 'Zhe Wang', 'zhe wang')
('33427555', 'Yu Qiao', 'yu qiao')
{07wanglimin,bitxiong,buptwangzhe2012}@gmail.com, yu.qiao@siat.ac.cn
1fa3948af1c338f9ae200038c45adadd2b39a3e4Computational Explorations of Split Architecture in Modeling Face and Object
Recognition
University of California San Diego
9500 Gilman Drive #0404, La Jolla, CA 92093, USA
University of California San Diego
9500 Gilman Drive #0515, La Jolla, CA 92093, USA
Janet Hui-wen Hsiao (jhsiao@cs.ucsd.edu)
Garrison W. Cottrell (gary@ucsd.edu)
Danke Shieh (danke@ucsd.edu)
1ffe20eb32dbc4fa85ac7844178937bba97f4bf0Face Clustering: Representation and Pairwise
Constraints
('9644181', 'Yichun Shi', 'yichun shi')
('40653304', 'Charles Otto', 'charles otto')
('6680444', 'Anil K. Jain', 'anil k. jain')
1f8304f4b51033d2671147b33bb4e51b9a1e16feNoname manuscript No.
(will be inserted by the editor)
Beyond Trees:
MAP Inference in MRFs via Outer-Planar Decomposition
Received: date / Accepted: date
('1746610', 'Dhruv Batra', 'dhruv batra')
1f89439524e87a6514f4fbe7ed34bda4fd1ce286Carnegie Mellon University
Department of Statistics
Dietrich College of Humanities and Social Sciences
9-2005
Devising Face Authentication System and
Performance Evaluation Based on Statistical
Models
Carnegie Mellon University
Follow this and additional works at: http://repository.cmu.edu/statistics
Part of the Statistics and Probability Commons
('2046854', 'Sinjini Mitra', 'sinjini mitra')
('1680307', 'Anthony Brockwell', 'anthony brockwell')
('1794486', 'Marios Savvides', 'marios savvides')
('1684961', 'Stephen E. Fienberg', 'stephen e. fienberg')
Research Showcase @ CMU
Carnegie Mellon University, abrock@stat.cmu.edu
Carnegie Mellon University, msavvid@cs.cmu.edu
Carnegie Mellon University, fienberg@stat.cmu.edu
This Technical Report is brought to you for free and open access by the Dietrich College of Humanities and Social Sciences at Research Showcase @
CMU. It has been accepted for inclusion in Department of Statistics by an authorized administrator of Research Showcase @ CMU. For more
information, please contact research-showcase@andrew.cmu.edu.
1f9ae272bb4151817866511bd970bffb22981a49An Iterative Regression Approach for Face Pose Estimation from RGB Images
This paper presents an iterative optimization method, explicit shape regression (ESR), for face pose
detection and localization. The regression function is learned to predict the entire facial shape
and minimize the alignment error. A cascaded learning framework is employed to enforce the
shape constraint during detection. A combination of a two-level boosted regression, shape-indexed
features, and correlation-based feature selection further improves performance. In this paper, we
explain the advantages of ESR for deformable objects such as faces and discuss generic applications
of the method. In the experiments, we compare the results with prior work and demonstrate the
accuracy and robustness of ESR in different scenarios.
Introduction
Pose estimation is an important problem in computer vision and has enabled many practical ap-
plications, from facial expression analysis [1] to activity tracking [2]. Researchers designed a new
algorithm called explicit shape regression (ESR) to perform face alignment on an image [3]. Figure 1
shows how the system uses ESR to learn the shape of a human face image. A simple way to identify
a face is to locate facial landmarks such as the eyes, nose, mouth, and chin. The researchers define a
face shape S composed of Nfp facial landmarks; therefore, S = [x1, y1, ..., xNfp, yNfp]^T. The
objective is to estimate the shape S of a face image. The way to know the accuracy
('3988780', 'Wenye He', 'wenye he')
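The cascaded, additive update described in the abstract (each stage refines the current shape estimate, S_{t+1} = S_t + R_t(S_t)) can be sketched as a toy in a few lines. The fixed half-step regressor and the target shape below are hypothetical stand-ins for ESR's learned, shape-indexed stage regressors; this is a sketch of the cascade structure only, not the paper's method.

```python
# Toy cascaded shape regression in the spirit of ESR. The shape vector
# S = [x1, y1, ..., x_Nfp, y_Nfp] is refined additively, stage by stage.
# Real ESR learns each stage from shape-indexed pixel features; here each
# stage is a hypothetical fixed step toward a known target shape.

def run_cascade(init_shape, stages):
    S = list(init_shape)
    for regress in stages:                       # S_{t+1} = S_t + R_t(S_t)
        S = [s + d for s, d in zip(S, regress(S))]
    return S

target = [10.0, 20.0, 30.0, 40.0]                # Nfp = 2 landmarks: [x1, y1, x2, y2]
half_step = lambda S: [0.5 * (t - s) for t, s in zip(target, S)]
final = run_cascade([0.0] * 4, [half_step] * 4)  # residual halves at every stage
```

After four stages the residual error has shrunk by a factor of 0.5^4, illustrating how accuracy accrues across a cascade rather than in a single regression step.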
1fd6004345245daf101c98935387e6ef651cbb55Learning Symmetry Features for Face Detection
Based on Sparse Group Lasso
Center for Research on Intelligent Perception and Computing,
National Laboratory of Pattern Recognition, Institute of Automation
Chinese Academy of Sciences, Beijing, China
('39763795', 'Qi Li', 'qi li')
('1757186', 'Zhenan Sun', 'zhenan sun')
('1705643', 'Ran He', 'ran he')
('1688870', 'Tieniu Tan', 'tieniu tan')
{qli,znsun,rhe,tnt}@nlpr.ia.ac.cn
1fc249ec69b3e23856b42a4e591c59ac60d77118Evaluation of a 3D-aided Pose Invariant 2D Face Recognition System
Computational Biomedicine Lab
4800 Calhoun Rd. Houston, TX, USA
('5084124', 'Xiang Xu', 'xiang xu')
('26401746', 'Ha A. Le', 'ha a. le')
('39634395', 'Pengfei Dou', 'pengfei dou')
('2461369', 'Yuhang Wu', 'yuhang wu')
('1706204', 'Ioannis A. Kakadiaris', 'ioannis a. kakadiaris')
{xxu18, hale4, pdou, ywu35, ikakadia}@central.uh.edu
1fbde67e87890e5d45864e66edb86136fbdbe20eThe Action Similarity Labeling Challenge
('3294355', 'Orit Kliper-Gross', 'orit kliper-gross')
('1756099', 'Tal Hassner', 'tal hassner')
('1776343', 'Lior Wolf', 'lior wolf')
1f41a96589c5b5cee4a55fc7c2ce33e1854b09d6Demographic Estimation from Face Images:
Human vs. Machine Performance
('34393045', 'Hu Han', 'hu han')
('40653304', 'Charles Otto', 'charles otto')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
('6680444', 'Anil K. Jain', 'anil k. jain')
1fd2ed45fb3ba77f10c83f0eef3b66955645dfe0
1fe59275142844ce3ade9e2aed900378dd025880Facial Landmark Detection via Progressive Initialization
National University of Singapore
Singapore 117576
('3124720', 'Shengtao Xiao', 'shengtao xiao')
xiao shengtao@u.nus.edu, eleyans@nus.edu.sg, ashraf@nus.edu.sg
1f2d12531a1421bafafe71b3ad53cb080917b1a7
1fe121925668743762ce9f6e157081e087171f4cUnsupervised Learning of Overcomplete Face Descriptors
Center for Machine Vision Research
University of Oulu
('32683737', 'Juha Ylioinas', 'juha ylioinas')
('1776374', 'Juho Kannala', 'juho kannala')
('1751372', 'Abdenour Hadid', 'abdenour hadid')
firstname.lastname@ee.oulu.fi
1fefb2f8dd1efcdb57d5c2966d81f9ab22c1c58dvExplorer: A Search Method to Find Relevant YouTube Videos for Health
Researchers
IBM Research, Cambridge, MA, USA
('1764750', 'Hillol Sarker', 'hillol sarker')
('3456866', 'Murtaza Dhuliawala', 'murtaza dhuliawala')
('31633051', 'Nicholas Fay', 'nicholas fay')
('15793829', 'Amar Das', 'amar das')
1fdeba9c4064b449231eac95e610f3288801fd3eFine-Grained Head Pose Estimation Without Keypoints
Georgia Institute of Technology
('31601235', 'Nataniel Ruiz', 'nataniel ruiz')
('39832600', 'Eunji Chong', 'eunji chong')
('1692956', 'James M. Rehg', 'james m. rehg')
{nataniel.ruiz, eunjichong, rehg}@gatech.edu
1f8e44593eb335c2253d0f22f7f9dc1025af8c0dFine-tuning regression forests votes for object alignment in the wild.
Yang, H; Patras, I
© 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be
obtained for all other uses, in any current or future media, including reprinting/republishing
this material for advertising or promotional purposes, creating new collective works, for resale
or redistribution to servers or lists, or reuse of any copyrighted component of this work in
other works.
For additional information about this publication click this link.
http://qmro.qmul.ac.uk/xmlui/handle/123456789/22607
Information about this research object was correct at the time of download; we occasionally
make corrections to records, please therefore check the published record when citing. For
more information contact scholarlycommunications@qmul.ac.uk
1f94734847c15fa1da68d4222973950d6b683c9eEmbedding Label Structures for Fine-Grained Feature Representation
UNC Charlotte
Charlotte, NC 28223
NEC Lab America
Cupertino, CA 95014
NEC Lab America
Cupertino, CA 95014
UNC Charlotte
Charlotte, NC 28223
('2739998', 'Xiaofan Zhang', 'xiaofan zhang')
('1757386', 'Feng Zhou', 'feng zhou')
('1695082', 'Yuanqing Lin', 'yuanqing lin')
('1753384', 'Shaoting Zhang', 'shaoting zhang')
xzhang35@uncc.edu
feng@nec-labs.com
ylin@nec-labs.com
szhang16@uncc.edu
1f745215cda3a9f00a65166bd744e4ec35644b02Facial Cosmetics Database and Impact Analysis on
Automatic Face Recognition
# Computer Science Department, TU Muenchen
Boltzmannstr. 3, 85748 Garching b. Muenchen, Germany
∗ Multimedia Communications Department, EURECOM
450 Route des Chappes, 06410 Biot, France
('38996894', 'Marie-Lena Eckert', 'marie-lena eckert')
('1862703', 'Neslihan Kose', 'neslihan kose')
('1709849', 'Jean-Luc Dugelay', 'jean-luc dugelay')
1 marie-lena.eckert@mytum.de
2 kose@eurecom.fr
3 jld@eurecom.fr
1fff309330f85146134e49e0022ac61ac60506a9Data-Driven Sparse Sensor Placement for Reconstruction
('37119658', 'Krithika Manohar', 'krithika manohar')
('1824880', 'Bingni W. Brunton', 'bingni w. brunton')
('1937069', 'J. Nathan Kutz', 'j. nathan kutz')
('3083169', 'Steven L. Brunton', 'steven l. brunton')
∗Corresponding author: kmanohar@uw.edu
1fd3dbb6e910708fa85c8a86e17ba0b6fef5617cARISTOTLE UNIVERSITY OF THESSALONIKI
FACULTY OF SCIENCES
DEPARTMENT OF INFORMATICS
POSTGRADUATE STUDIES PROGRAMME
Age interval and gender prediction using PARAFAC2 on
speech recordings and face images
Supervisor: Professor Kotropoulos Constantine
A thesis submitted in partial fulfillment of the requirements
for the degree of Master of Science
July 2016
1f24cef78d1de5aa1eefaf344244dcd1972797e8Outlier-Robust Tensor PCA
National University of Singapore, Singapore
('33481412', 'Pan Zhou', 'pan zhou')
('33221685', 'Jiashi Feng', 'jiashi feng')
pzhou@u.nus.edu
elefjia@nus.edu.sg
1fe990ca6df273de10583860933d106298655ec8College of Information Science and Engineering
Hunan University
Changsha, 410082 P.R. China
In this paper, we propose a wavelet-based illumination normalization method for
face recognition against different directions and strengths of light. By a one-level
discrete wavelet transform, a given face image is first decomposed into low-frequency
and high-frequency components; the two components are then processed separately
through contrast enhancement to eliminate the effect of illumination variations and
enhance the detailed edge information. Finally, the normalized image is obtained
through the inverse discrete wavelet transform. Experimental results on the Yale B,
extended Yale B, and CMU PIE face databases show that the proposed method can
effectively reduce the effect of illumination variations on face recognition.
Keywords: face recognition, illumination normalization, discrete wavelet transform, edge
enhancement, face representation
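The abstract's pipeline (one-level DWT, separate contrast enhancement of the low- and high-frequency components, then inverse DWT) can be sketched with a hand-rolled Haar transform. The `gamma`/`gain` operators below are hypothetical stand-ins for the paper's unspecified contrast-enhancement steps, chosen only to show where lighting compression and edge boosting plug into the decomposition.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT: low-frequency (LL) and high-frequency
    (LH, HL, HH) components. Assumes even image dimensions."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 2, (a + b - c - d) / 2,
            (a - b + c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(LL, LH, HL, HH):
    """Inverse one-level 2D Haar DWT (exact reconstruction)."""
    out = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2
    out[0::2, 1::2] = (LL + LH - HL - HH) / 2
    out[1::2, 0::2] = (LL - LH + HL - HH) / 2
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return out

def normalize_illumination(img, gamma=0.6, gain=1.5):
    # gamma compresses the lighting range in LL; gain boosts edge detail
    # in the high-frequency bands. Both operators are illustrative only.
    LL, LH, HL, HH = haar_dwt2(img.astype(float))
    LL = np.power(LL / LL.max(), gamma) * LL.max()
    return haar_idwt2(LL, gain * LH, gain * HL, gain * HH)
```

With `gamma=1` and `gain=1` the pipeline reduces to forward-plus-inverse Haar and reconstructs the input exactly, which is a quick sanity check that the transform pair is correct.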
JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 31, 1711-1731 (2015)
A Wavelet-Based Image Preprocessing Method
for Illumination Insensitive Face Recognition
1. INTRODUCTION
Face recognition plays an important role in pattern recognition and computer vision
due to its wide applications in human computer interaction, information security and
access control, law enforcement and entertainment [1]. Various methods have been pro-
posed for face recognition, such as PCA [2], LDA [3], LFA [4], EBGM [5], probabilistic
and Bayesian matching [6] and SVM [7]. These methods can yield good performance
when face images are well frontally illuminated. Existing studies have proved that face
recognition for the same face with different illumination conditions is more difficult than
the perception of face identity [8, 9]. The reason is that an object's appearance largely
depends on the way in which it is viewed. Illumination variations mainly consist of the
lighting direction and the lighting intensity. Usually, slight changes in illumination produce
dramatic changes in the face appearance. Consequently, the performance of face recognition
is highly sensitive to the illumination condition. For example, the unsuitable lighting
direction and intensity may lead to underexposed or overexposed regions over the face,
and weaken the discrimination of face features such as skin texture, eye detail, etc.
Therefore, illumination normalization is a very important task for face recognition under
varying illumination.
To make face recognition relatively insensitive to illumination variations, many
methods have been proposed with the goal of illumination normalization, illumination-
invariant feature extraction, or illumination variation modeling [10]. Illumination-invariant
approaches generally fall into three classes. The first class is to preprocess face
images by using some simple techniques, such as logarithm transform and histogram
Received March 26, 2014; revised May 26, 2014; accepted July 17, 2014.
Communicated by Chung-Lin Huang.
('2078993', 'Xiaochao Zhao', 'xiaochao zhao')
('2138422', 'Yaping Lin', 'yaping lin')
('2431083', 'Bo Ou', 'bo ou')
('1824216', 'Junfeng Yang', 'junfeng yang')
E-mail: {s12103017; yplin; oubo; B12100031}@hnu.edu.cn
1feeab271621128fe864e4c64bab9b2e2d0ed1f1Article
Perception-Link Behavior Model: Supporting
a Novel Operator Interface for a Customizable
Anthropomorphic Telepresence Robot
BeingTogether Centre, Institute for Media Innovation, Singapore 637553, Singapore
Robotic Research Centre, Nanyang Technological University, Singapore 639798, Singapore
Received: 15 May 2017; Accepted: 15 July 2017; Published: 20 July 2017
('1768723', 'William Gu', 'william gu')
('9216152', 'Gerald Seet', 'gerald seet')
('1695679', 'Nadia Magnenat-Thalmann', 'nadia magnenat-thalmann')
mglseet@ntu.edu.sg (G.S.); NADIATHALMANN@ntu.edu.sg (N.M.-T.)
* Correspondence: GUYU0007@e.ntu.edu.sg
73b90573d272887a6d835ace89bfaf717747c59bFeature Disentangling Machine - A Novel
Approach of Feature Selection and Disentangling
in Facial Expression Analysis
University of South Carolina, USA
Center for Computational Intelligence, Nanyang Technology University, Singapore
3 Center for Quantum Computation and Intelligent Systems,
University of Technology, Australia
('40205868', 'Ping Liu', 'ping liu')
('10638646', 'Joey Tianyi Zhou', 'joey tianyi zhou')
('3091647', 'Zibo Meng', 'zibo meng')
('49107074', 'Shizhong Han', 'shizhong han')
('1686235', 'Yan Tong', 'yan tong')
73f467b4358ac1cafb57f58e902c1cab5b15c590ISSN 0976 3724
Combination of Dimensionality Reduction Techniques for Face
Image Retrieval: A Review
M.Tech Scholar, MES College of Engineering, Kuttippuram
Kerala
MES College of Engineering, Kuttippuram
Kerala
fousisadath@gmail.com
Jahfar.ali@gmail.com
7323b594d3a8508f809e276aa2d224c4e7ec5a80JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015
An Experimental Evaluation of Covariates
Effects on Unconstrained Face Verification
('2927406', 'Boyu Lu', 'boyu lu')
('36407236', 'Jun-Cheng Chen', 'jun-cheng chen')
('9215658', 'Rama Chellappa', 'rama chellappa')
732e8d8f5717f8802426e1b9debc18a8361c1782Unimodal Probability Distributions for Deep Ordinal Classification
('12757989', 'Christopher Beckham', 'christopher beckham')
73ed64803d6f2c49f01cffef8e6be8fc9b5273b8Noname manuscript No.
(will be inserted by the editor)
Cooking in the kitchen: Recognizing and Segmenting Human
Activities in Videos
Received: date / Accepted: date
('51267303', 'Hilde Kuehne', 'hilde kuehne')
7306d42ca158d40436cc5167e651d7ebfa6b89c1Noname manuscript No.
(will be inserted by the editor)
Transductive Zero-Shot Action Recognition by
Word-Vector Embedding
Received: date / Accepted: date
('47158489', 'Xun Xu', 'xun xu')
734cdda4a4de2a635404e4c6b61f1b2edb3f501dTie and Guan EURASIP Journal on Image and Video Processing 2013, 2013:8
http://jivp.eurasipjournals.com/content/2013/1/8
R ES EAR CH
Open Access
Automatic landmark point detection and tracking
for human facial expressions
('1721867', 'Ling Guan', 'ling guan')
739d400cb6fb730b894182b29171faaae79e3f01A New Regularized Orthogonal Local Fisher Discriminant Analysis for Image
Feature Extraction
dept. name of organization, name of organization, City, Country
School of Management Engineering, Henan Institute of Engineering, Zhengzhou 451191, P.R. China
Institute of Information Science, Beijing Jiaotong University, Beijing 100044, P.R. China
('2539310', 'ZHONGFENG WANG', 'zhongfeng wang')
('2539310', 'ZHONGFENG WANG', 'zhongfeng wang')
('1718667', 'Zhan WANG', 'zhan wang')
732e4016225280b485c557a119ec50cffb8fee98Are all training examples equally valuable?
Massachusetts Institute of Technology
Universitat Oberta de Catalunya
Agata Lapedriza
Computer Vision Center
Massachusetts Institute of Technology
Massachusetts Institute of Technology
Massachusetts Institute of Technology
('2367683', 'Hamed Pirsiavash', 'hamed pirsiavash')
('3326347', 'Zoya Bylinskii', 'zoya bylinskii')
('1690178', 'Antonio Torralba', 'antonio torralba')
hpirsiav@mit.edu
agata@mit.edu
zoya@mit.edu
torralba@mit.edu
7373c4a23684e2613f441f2236ed02e3f9942dd4This document is downloaded from DR-NTU, Nanyang Technological
University Library, Singapore
Title
Feature extraction through binary pattern of phase
congruency for facial expression recognition
Author(s)
Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Li, Jun;
Teoh, Eam Khwang
Citation
Shojaeilangari, S., Yau, W. Y., Li, J., & Teoh, E. K.
(2012). Feature extraction through binary pattern of
phase congruency for facial expression recognition. 12th
International Conference on Control Automation Robotics
& Vision (ICARCV), 166-170.
Date
2012
URL
http://hdl.handle.net/10220/18012
Rights
© 2012 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other
uses, in any current or future media, including
reprinting/republishing this material for advertising or
promotional purposes, creating new collective works, for
resale or redistribution to servers or lists, or reuse of any
copyrighted component of this work in other works. The
published version is available at:
[http://dx.doi.org/10.1109/ICARCV.2012.6485152].
732686d799d760ccca8ad47b49a8308b1ab381fbRunning head: TEACHERS’ DIFFERING BEHAVIORS
1
Graduate School of Psychology
RESEARCH MASTER'S PSYCHOLOGY THESIS REPORT
Teachers’ differing classroom behaviors:
The role of emotional sensitivity and cultural tolerance
Research Master’s, Social Psychology
Ethics Committee Reference Code: 2016-SP-7084
('7444483', 'Agneta Fischer', 'agneta fischer')
('22253276', 'Disa Sauter', 'disa sauter')
('2808612', 'Monique Volman', 'monique volman')
73fbdd57270b9f91f2e24989178e264f2d2eb7ae978-1-4673-0046-9/12/$26.00 ©2012 IEEE
1945
ICASSP 2012
738a985fba44f9f5acd516e07d0d9578f2ffaa4eMACHINE LEARNING TECHNIQUES FOR FACE ANALYSIS
Man Machine Interaction Group
Delft University of Technology
Mekelweg 4, 2628 CD Delft
The Netherlands
KEYWORDS
Machine learning, pattern recognition, classifiers, face detection, facial expression recognition.
('2866326', 'D. Datcu', 'd. datcu')E-mail: {D.Datcu, L.J.M.Rothkrantz}@ewi.tudelft.nl
73fd7e74457e0606704c5c3d3462549f1b2de1adLearning Predictable and Discriminative Attributes
for Visual Recognition
School of Software, Tsinghua University, Beijing 100084, China
('34811036', 'Yuchen Guo', 'yuchen guo')
('38329336', 'Guiguang Ding', 'guiguang ding')
('39665252', 'Xiaoming Jin', 'xiaoming jin')
('1751179', 'Jianmin Wang', 'jianmin wang')
yuchen.w.guo@gmail.com, {dinggg,xmjin,jimwang}@tsinghua.edu.cn,
73c5bab5c664afa96b1c147ff21439135c7d968bWhitened LDA for Face Recognition ∗
Ubiquitous Computing Lab
Kyung Hee University
Suwon, Korea
Ubiquitous Computing Lab
Kyung Hee University
Suwon, Korea
Mobile Computing Lab
SungKyunKwan University
Suwon, Korea
('1687579', 'Vo Dinh Minh Nhat', 'vo dinh minh nhat')
('1700806', 'Sungyoung Lee', 'sungyoung lee')
('1718666', 'Hee Yong Youn', 'hee yong youn')
vdmnhat@oslab.khu.ac.kr
sylee@oslab.khu.ac.kr
youn@ece.skku.ac.kr
73c9cbbf3f9cea1bc7dce98fce429bf0616a1a8c
877100f430b72c5d60de199603ab5c65f611ce17Within-person variability in men’s facial
width-to-height ratio
University of York, York, United Kingdom
('40598264', 'Robin S.S. Kramer', 'robin s.s. kramer')
870433ba89d8cab1656e57ac78f1c26f4998edfbRegressing Robust and Discriminative 3D Morphable Models
with a very Deep Neural Network
Institute for Robotics and Intelligent Systems, USC, CA, USA
Information Sciences Institute, USC, CA, USA
The Open University of Israel, Israel
('1756099', 'Tal Hassner', 'tal hassner')
('11269472', 'Iacopo Masi', 'iacopo masi')
872dfdeccf99bbbed7c8f1ea08afb2d713ebe085L2-constrained Softmax Loss for Discriminative Face Verification
Center for Automation Research, UMIACS, University of Maryland, College Park, MD
('48467498', 'Rajeev Ranjan', 'rajeev ranjan')
('38171682', 'Carlos D. Castillo', 'carlos d. castillo')
('9215658', 'Rama Chellappa', 'rama chellappa')
{rranjan1,carlos,rama}@umiacs.umd.edu
87e6cb090aecfc6f03a3b00650a5c5f475dfebe1KIM, BALTRUŠAITIS et al.: HOLISTICALLY CONSTRAINED LOCAL MODEL
Holistically Constrained Local Model:
Going Beyond Frontal Poses for Facial
Landmark Detection
Tadas Baltrušaitis2
Amir Zadeh2
Gérard Medioni1
Institute for Robotics and Intelligent
Systems
University of Southern California
Los Angeles, CA, USA
Language Technologies Institute
Carnegie Mellon University
Pittsburgh, PA, USA
('2792633', 'KangGeon Kim', 'kanggeon kim')
('1767184', 'Louis-Philippe Morency', 'louis-philippe morency')
kanggeon.kim@usc.edu
tbaltrus@cs.cmu.edu
abagherz@cs.cmu.edu
morency@cs.cmu.edu
medioni@usc.edu
8796f2d54afb0e5c924101f54d469a1d54d5775dJournal of Signal and Information Processing, 2012, 3, 45-50
http://dx.doi.org/10.4236/jsip.2012.31007 Published Online February 2012 (http://www.SciRP.org/journal/jsip)
45
Illumination Invariant Face Recognition Using Fuzzy LDA
and FFNN
School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
Received October 20th, 2011; revised November 24th, 2011; accepted December 10th, 2011
('1697559', 'Behzad Bozorgtabar', 'behzad bozorgtabar')
('3280435', 'Hamed Azami', 'hamed azami')
('3097307', 'Farzad Noorian', 'farzad noorian')
Email: b_bozorgtabar@elec.iust.ac.ir, hmdazami@gmail.com, fnoorian@ee.iust.ac.ir
87f285782d755eb85d8922840e67ed9602cfd6b9INCORPORATING BOLTZMANN MACHINE PRIORS
FOR SEMANTIC LABELING IN IMAGES AND VIDEOS
A Dissertation Presented
by
ANDREW KAE
Submitted to the Graduate School of the
University of Massachusetts Amherst in partial fulfillment
of the requirements for the degree of
DOCTOR OF PHILOSOPHY
May 2014
Computer Science
871f5f1114949e3ddb1bca0982086cc806ce84a8Discriminative Learning of Apparel Features
1 Computer Vision Laboratory, D-ITET, ETH Z¨urich, Switzerland
2 ESAT - PSI / IBBT, K.U. Leuven, Belgium
('2173683', 'Rasmus Rothe', 'rasmus rothe')
('2113583', 'Marko Ristin', 'marko ristin')
('1727791', 'Matthias Dantone', 'matthias dantone')
('1681236', 'Luc Van Gool', 'luc van gool')
{rrothe,ristin,mdantone,vangool}@vision.ee.ethz.ch
luc.vangool@esat.kuleuven.be
8724fc4d6b91eebb79057a7ce3e9dfffd3b1426fOrdered Pooling of Optical Flow Sequences for Action Recognition
1Data61/CSIRO, 2 Australian Center for Robotic Vision
Australian National University, Canberra, Australia
Fatih Porikli1,2,3
('48094509', 'Jue Wang', 'jue wang')
('2691929', 'Anoop Cherian', 'anoop cherian')
jue.wang@anu.edu.au
anoop.cherian@anu.edu.au
fatih.porikli@anu.edu.au
87bee0e68dfc86b714f0107860d600fffdaf7996Automated 3D Face Reconstruction from Multiple Images
using Quality Measures
Institute for Vision and Graphics, University of Siegen, Germany
('2712313', 'Marcel Piotraschke', 'marcel piotraschke')
('2880906', 'Volker Blanz', 'volker blanz')
piotraschke@nt.uni-siegen.de, blanz@informatik.uni-siegen.de
87309bdb2b9d1fb8916303e3866eca6e3452c27dKernel Coding: General Formulation and Special Cases
Australian National University, Canberra, ACT 0200, Australia
NICTA(cid:63), Locked Bag 8001, Canberra, ACT 2601, Australia
('2862871', 'Mathieu Salzmann', 'mathieu salzmann')
878169be6e2c87df2d8a1266e9e37de63b524ae7CBMM Memo No. 089
May 10, 2018
Image interpretation above and below the object level
('2507298', 'Guy Ben-Yosef', 'guy ben-yosef')
('1743045', 'Shimon Ullman', 'shimon ullman')
878301453e3d5cb1a1f7828002ea00f59cbeab06Faceness-Net: Face Detection through
Deep Facial Part Responses
('1692609', 'Shuo Yang', 'shuo yang')
('47571885', 'Ping Luo', 'ping luo')
('1717179', 'Chen Change Loy', 'chen change loy')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
87e592ee1a7e2d34e6b115da08700a1ae02e9355Deep Pictorial Gaze Estimation
AIT Lab, Department of Computer Science, ETH Zurich
('20466488', 'Seonwook Park', 'seonwook park')
('21195502', 'Adrian Spurr', 'adrian spurr')
('2531379', 'Otmar Hilliges', 'otmar hilliges')
{firstname.lastname}@inf.ethz.ch
87147418f863e3d8ff8c97db0b42695a1c28195bAttributes for Improved Attributes: A
Multi-Task Network for Attribute Classification
University of Maryland, College Park
('3351637', 'Emily M. Hand', 'emily m. hand')
('9215658', 'Rama Chellappa', 'rama chellappa')
87dd3fd36bccbe1d5f1484ac05f1848b51c6eab5SPATIO-TEMPORAL MAXIMUM AVERAGE CORRELATION
HEIGHT TEMPLATES IN ACTION RECOGNITION AND VIDEO
SUMMARIZATION
by
B.A. Earlham College, Richmond Indiana
M.S. University of Central Florida
A dissertation submitted in partial fulfillment of the requirements
for the degree of Doctor of Philosophy
in the School of Electrical Engineering and Computer Science
in the College of Engineering and Computer Science
at the University of Central Florida
Orlando, Florida
Summer Term
2010
Major Professor: Mubarak Shah
('35188194', 'MIKEL RODRIGUEZ', 'mikel rodriguez')
87bb183d8be0c2b4cfceb9ee158fee4bbf3e19fdCraniofacial Image Analysis ('1935115', 'Ezgi Mercan', 'ezgi mercan')
('1771661', 'Indriyati Atmosukarto', 'indriyati atmosukarto')
('10423763', 'Jia Wu', 'jia wu')
('1744684', 'Shu Liang', 'shu liang')
('1809809', 'Linda G. Shapiro', 'linda g. shapiro')
8006219efb6ab76754616b0e8b7778dcfb46603dCONTRIBUTIONSTOLARGE-SCALELEARNINGFORIMAGECLASSIFICATIONZeynepAkataPhDThesisl’´EcoleDoctoraleMath´ematiques,SciencesetTechnologiesdel’Information,InformatiquedeGrenoble
80193dd633513c2d756c3f568ffa0ebc1bb5213e
808b685d09912cbef4a009e74e10476304b4cccfFrom Understanding to Controlling Privacy
against Automatic Person Recognition in Social Media
Max Planck Institute for Informatics, Germany
('2390510', 'Seong Joon Oh', 'seong joon oh')
('1697100', 'Bernt Schiele', 'bernt schiele')
('1739548', 'Mario Fritz', 'mario fritz')
{joon,mfritz,schiele}@mpi-inf.mpg.de
804b4c1b553d9d7bae70d55bf8767c603c1a09e3978-1-4799-9988-0/16/$31.00 ©2016 IEEE
1831
ICASSP 2016
800cbbe16be0f7cb921842d54967c9a94eaa2a65MULTIMODAL RECOGNITION OF
EMOTIONS
80135ed7e34ac1dcc7f858f880edc699a920bf53EFFICIENT ACTION AND EVENT RECOGNITION IN VIDEOS USING
EXTREME LEARNING MACHINES
by
G¨ul Varol
B.S., Computer Engineering, Boğaziçi University
Submitted to the Institute for Graduate Studies in
Science and Engineering in partial fulfillment of
the requirements for the degree of
Master of Science
Graduate Program in Computer Engineering
Boğaziçi University
2015
803c92a3f0815dbf97e30c4ee9450fd005586e1aMax-Mahalanobis Linear Discriminant Analysis Networks ('19201674', 'Tianyu Pang', 'tianyu pang')
80277fb3a8a981933533cf478245f262652a33b5Synergy-based Learning of Facial Identity
Institute for Computer Graphics and Vision
Graz University of Technology, Austria
('1791182', 'Peter M. Roth', 'peter m. roth')
('3628150', 'Horst Bischof', 'horst bischof')
{koestinger,pmroth,bischof}@icg.tugraz.at
80840df0802399838fe5725cce829e1b417d7a2eFast Approximate L∞ Minimization: Speeding Up Robust Regression
School of Computer Science and Technology, Nanjing University of Science and Technology, China
School of Computer Science, The University of Adelaide, Australia
('2731972', 'Fumin Shen', 'fumin shen')
('1780381', 'Chunhua Shen', 'chunhua shen')
('26065407', 'Rhys Hill', 'rhys hill')
('5546141', 'Anton van den Hengel', 'anton van den hengel')
('3195119', 'Zhenmin Tang', 'zhenmin tang')
80c8d143e7f61761f39baec5b6dfb8faeb814be9Local Directional Pattern based Fuzzy Co-
occurrence Matrix Features for Face recognition
Professor, CSE Dept.
Gokaraju Rangaraju Institute of Engineering and Technology, Hyd
('39121253', 'P Chandra Sekhar Reddy', 'p chandra sekhar reddy')
809ea255d144cff780300440d0f22c96e98abd53ArcFace: Additive Angular Margin Loss for Deep Face Recognition
Imperial College London
UK
DeepInSight
China
Imperial College London
UK
('3234063', 'Jiankang Deng', 'jiankang deng')
('3007274', 'Jia Guo', 'jia guo')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
j.deng16@imperial.ac.uk
guojia@gmail.com
s.zafeiriou@imperial.ac.uk
80345fbb6bb6bcc5ab1a7adcc7979a0262b8a923Research Article
Soft Biometrics for a Socially Assistive Robotic
Platform
Open Access
('2104853', 'Pierluigi Carcagnì', 'pierluigi carcagnì')
('2417460', 'Dario Cazzato', 'dario cazzato')
('33097940', 'Marco Del Coco', 'marco del coco')
('35438199', 'Pier Luigi Mazzeo', 'pier luigi mazzeo')
('4730472', 'Marco Leo', 'marco leo')
('1741861', 'Cosimo Distante', 'cosimo distante')
80a6bb337b8fdc17bffb8038f3b1467d01204375Proceedings of the International Conference on Computer and Information Science and Technology
Ottawa, Ontario, Canada, May 11 – 12, 2015
Paper No. 126
Subspace LDA Methods for Solving the Small Sample Size
Problem in Face Recognition
101 KwanFu Rd., Sec. 2, Hsinchu, Taiwan
('2018515', 'Ching-Ting Huang', 'ching-ting huang')
('1830341', 'Chaur-Chin Chen', 'chaur-chin chen')
j60626j@gmail.com;cchen@cs.nthu.edu.tw
80be8624771104ff4838dcba9629bacfe6b3ea09Simultaneous Feature and Dictionary Learning
for Image Set Based Face Recognition
1 Advanced Digital Sciences Center, Singapore
Nanyang Technological University, Singapore
Beijing University of Posts and Telecommunications, Beijing, China
University of Illinois at Urbana-Champaign, IL USA
('1697700', 'Jiwen Lu', 'jiwen lu')
('39209795', 'Gang Wang', 'gang wang')
8000c4f278e9af4d087c0d0895fff7012c5e3d78Multi-Task Warped Gaussian Process for Personalized Age Estimation
Hong Kong University of Science and Technology
('36233573', 'Yu Zhang', 'yu zhang'){zhangyu,dyyeung}@cse.ust.hk
80097a879fceff2a9a955bf7613b0d3bfa68dc23Active Self-Paced Learning for Cost-Effective and
Progressive Face Identification
('1737218', 'Liang Lin', 'liang lin')
('3170394', 'Keze Wang', 'keze wang')
('1803714', 'Deyu Meng', 'deyu meng')
('1724520', 'Wangmeng Zuo', 'wangmeng zuo')
('36685537', 'Lei Zhang', 'lei zhang')
80bd795930837330e3ced199f5b9b75398336b87Relative Forest for Attribute Prediction
1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences
CAS), Institute of Computing Technology, CAS, Beijing, 100190, China
Graduate University of Chinese Academy of Sciences, Beijing 100049, China
('1688086', 'Shaoxin Li', 'shaoxin li')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
{shaoxin.li, shiguang.shan, xilin.chen}@vipl.ict.ac.cn
74de03923a069ffc0fb79e492ee447299401001fOn Film Character Retrieval in Feature-Length Films
1 Introduction
The problem of automatic face recognition (AFR) concerns matching a detected (roughly localized) face
against a database of known faces with associated identities. This task, although very intuitive to humans
and despite the vast amounts of research behind it, still poses a significant challenge to computer-based
methods. For reviews of the literature and commercial state-of-the-art see [5, 31] and [22, 23]. Much AFR
research has concentrated on the user authentication paradigm (e.g. [2, 8, 19]). In contrast, we consider the
content-based multimedia retrieval setup: our aim is to retrieve, and rank by confidence, film shots based on
the presence of specific actors. A query to the system consists of the user choosing the person of interest in
one or more keyframes. Possible applications include:
1. DVD browsing: Current DVD technology allows users to quickly jump to the chosen part of a film
using an on-screen index. However, the available locations are predefined. AFR technology could allow
the user to rapidly browse scenes by formulating queries based on the presence of specific actors.
2. Content-based web search: Many web search engines have very popular image search features (e.g.
http://www.google.co.uk/imghp). Currently, the search is performed based on the keywords
that appear in picture filenames or in the surrounding web page content. Face recognition can make the
retrieval much more accurate by focusing on the content of images.
We proceed from the face detection stage, assuming localized faces. Face detection technology is fairly
mature and a number of reliable face detectors have been built, see [17, 21, 25, 30]. We use a local implementation of the method of Schneiderman and Kanade [25] and consider a face to be correctly detected if
both eyes and the mouth are visible, see Figure 1. In a typical feature-length film, using every 10th frame,
we obtain 2000-5000 face detections which result from a cast of 10-20 primary and secondary characters
(see §3).
Problem challenges.
A number of factors other than identity influence the way a face appears in an image. Lighting conditions,
and especially light angle, drastically change the appearance of a face [1]. Facial expressions, including
closed or partially closed eyes, also complicate the problem, just as head pose does. Partial occlusions, be
they artefacts in front of a face or resulting from hair style change, or growing a beard or moustache also
('1688869', 'Andrew Zisserman', 'andrew zisserman')1 Department of Engineering, University of Cambridge, UK oa214@cam.ac.uk
2 Department of Engineering, University of Oxford, UK az@robots.ox.ac.uk
74f643579949ccd566f2638b85374e7a6857a9fcMonogenic Binary Pattern (MBP): A Novel Feature Extraction and
Representation Model for Face Recognition
Biometric Research Center, The Hong Kong Polytechnic University
Different from other face recognition methods, LBP methods use local structural information and histograms of sub-regions to extract and describe facial features. Following LBP, LGBPHS [6] was proposed to use Gabor filtering to enhance the facial features and then extract the local Gabor binary pattern histogram sequence, which greatly improves LBP's robustness to illumination changes. The Gabor phase was also used to improve the recognition rate [7-8]; a typical method of this class is HGPP [8], which captures the global Gabor phase and local Gabor phase variation. Despite the high accuracy, the expense of the above-mentioned Gabor-filter-based face recognition methods is also very high: both the computational cost and the storage space are large, because Gabor filtering is usually applied at five different scales and along eight different orientations, which limits the application of these methods.
This paper presents a new local facial feature extraction method, namely the monogenic binary pattern (MBP), based on the theory of monogenic signal analysis [9], and then proposes to use the histogram of MBP (HMBP) to describe facial features. The monogenic signal is a two-dimensional (2D) generalization of the one-dimensional analytic signal, through which the multi-resolution magnitude, orientation and phase of a 2D signal can be estimated. The proposed MBP combines monogenic orientation and monogenic magnitude information for face feature extraction and description. The advantage of MBP over other Gabor-based methods [4][6][8] is that it has much lower time and space complexity with better or comparable performance. This is mainly because monogenic signal analysis is itself a compact representation of features with little information loss; it does not use steerable filters to create multi-orientation features as Gabor filters do. HMBP is the sub-region spatial histogram sequence of MBP features, which is robust to face image variation of
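As a rough illustration of the sub-region histogram descriptors discussed above, the sketch below computes the classical LBP code and its histogram. This is plain LBP rather than MBP (the monogenic filtering stage is omitted), and the function names are hypothetical:

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours
    against the centre pixel and pack the bits into one byte."""
    center = patch[1, 1]
    # Neighbours taken clockwise, starting at the top-left corner.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    return sum(b << i for i, b in enumerate(bits))

def lbp_histogram(image, n_bins=256):
    """Histogram of LBP codes over all interior pixels: the kind of
    sub-region descriptor that LGBPHS/HMBP-style methods build on."""
    h, w = image.shape
    hist = np.zeros(n_bins, dtype=np.int64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_code(image[y - 1:y + 2, x - 1:x + 2])] += 1
    return hist
```

In the full pipeline, a descriptor like this would be computed per sub-region (and, for MBP, on the monogenic orientation/magnitude responses rather than raw intensities), then the sub-region histograms concatenated into one spatial histogram sequence.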
('5828998', 'Meng Yang', 'meng yang')
('36685537', 'Lei Zhang', 'lei zhang')
('40613710', 'Lin Zhang', 'lin zhang')
('1698371', 'David Zhang', 'david zhang')
E-mail: {csmyang, cslzhang, cslinzhang, csdzhang}@comp.polyu.edu.hk
74ce7e5e677a4925489897665c152a352c49d0a2SONG ET AL.: SEGMENTATION-GUIDED IMAGE INPAINTING
SPG-Net: Segmentation Prediction and
Guidance Network for Image Inpainting
University of Southern California
3740 McClintock Ave
Los Angeles, USA
2 Baidu Research
1195 Bordeaux Dr.,
Sunnyvale, USA
('3383051', 'Yuhang Song', 'yuhang song')
('1683340', 'Chao Yang', 'chao yang')
('8035191', 'Yeji Shen', 'yeji shen')
('1722767', 'Peng Wang', 'peng wang')
('38592052', 'Qin Huang', 'qin huang')
('9363144', 'C.-C. Jay Kuo', 'c.-c. jay kuo')
yuhangso@usc.edu
chaoy@usc.edu
yejishen@usc.edu
wangpeng54@baidu.com
qinhuang@usc.edu
cckuo@sipi.usc.edu
74408cfd748ad5553cba8ab64e5f83da14875ae8Facial Expressions Tracking and Recognition: Database Protocols for Systems Validation
and Evaluation
747d5fe667519acea1bee3df5cf94d9d6f874f20
74dbe6e0486e417a108923295c80551b6d759dbeInternational Journal of Computer Applications (0975 – 8887)
Volume 45– No.11, May 2012
An HMM based Model for Prediction of Emotional
Composition of a Facial Expression using both
Significant and Insignificant Action Units and
Associated Gender Differences
Department of Management and Information
Department of Management and Information
Systems Science
1603-1 Kamitomioka, Nagaoka
Niigata, Japan
Systems Science
1603-1 Kamitomioka, Nagaoka
Niigata, Japan
('2931637', 'Suvashis Das', 'suvashis das')
('1808643', 'Koichi Yamada', 'koichi yamada')
740e095a65524d569244947f6eea3aefa3cca526Towards Human-like Performance Face Detection: A
Convolutional Neural Network Approach
University of Twente
P.O. Box 217, 7500AE Enschede
The Netherlands
('2651432', 'Joshua van Kleef', 'joshua van kleef')j.a.vankleef-1@student.utwente.nl
74e869bc7c99093a5ff9f8cfc3f533ccf1b135d8Context and Subcategories for
Sliding Window Object Recognition
CMU-RI-TR-12-17
Submitted in partial fulfillment of the
requirements for the degree of
Doctor of Philosophy in Robotics
The Robotics Institute
School of Computer Science
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213
August 2012
Thesis Committee
Martial Hebert, Co-Chair
Alexei A. Efros, Co-Chair
Takeo Kanade
Deva Ramanan, University of California at Irvine
('2038685', 'Santosh K. Divvala', 'santosh k. divvala')
('2038685', 'Santosh K. Divvala', 'santosh k. divvala')
747c25bff37b96def96dc039cc13f8a7f42dbbc7EmoNets: Multimodal deep learning approaches for emotion
recognition in video
('3127597', 'Samira Ebrahimi Kahou', 'samira ebrahimi kahou')
('1748421', 'Vincent Michalski', 'vincent michalski')
('2488222', 'Nicolas Boulanger-Lewandowski', 'nicolas boulanger-lewandowski')
('1923596', 'David Warde-Farley', 'david warde-farley')
('1751762', 'Yoshua Bengio', 'yoshua bengio')
741485741734a99e933dd0302f457158c6842adf A Novel Automatic Facial Expression
Recognition Method Based on AAM
State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
('1703431', 'Li Wang', 'li wang')
('2677485', 'Ruifeng Li', 'ruifeng li')
('1751643', 'Ke Wang', 'ke wang')
Email: wangli-hb@163.com, lrf100@ hit.edu.cn, wangke@ hit.edu.cn
744fa8062d0ae1a11b79592f0cd3fef133807a03Aalborg Universitet
Deep Pain
Rodriguez, Pau; Cucurull, Guillem; Gonzàlez, Jordi; M. Gonfaus, Josep ; Nasrollahi, Kamal;
Moeslund, Thomas B.; Xavier Roca, F.
Published in:
I E E E Transactions on Cybernetics
DOI (link to publication from Publisher):
10.1109/TCYB.2017.2662199
Publication date:
2017
Document Version
Accepted author manuscript, peer reviewed version
Link to publication from Aalborg University
Citation for published version (APA):
Rodriguez, P., Cucurull, G., Gonzàlez, J., M. Gonfaus, J., Nasrollahi, K., Moeslund, T. B., & Xavier Roca, F.
(2017). Deep Pain: Exploiting Long Short-Term Memory Networks for Facial Expression Classification. I E E E
Transactions on Cybernetics, 1-11. DOI: 10.1109/TCYB.2017.2662199
General rights
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners
and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.
• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain.
• You may freely distribute the URL identifying the publication in the public portal.
Take down policy
If you believe that this document breaches copyright please contact us at vbn@aub.aau.dk providing details, and we will remove access to the work immediately and investigate your claim.
Downloaded from vbn.aau.dk on: March 22, 2018
743e582c3e70c6ec07094887ce8dae7248b970adInternational Journal of Signal Processing, Image Processing and Pattern Recognition
Vol.8, No.10 (2015), pp.29-38
http://dx.doi.org/10.14257/ijsip.2015.8.10.04
Face Recognition based on Deep Neural Network
Shandong Women s University
('9094473', 'Li Xinhua', 'li xinhua')
('29742002', 'Yu Qian', 'yu qian')
lixinhua@sdwu.edu.cn
74b0095944c6e29837c208307a67116ebe1231c8
74156a11c2997517061df5629be78428e1f09cbdCancún Center, Cancún, México, December 4-8, 2016
978-1-5090-4846-5/16/$31.00 ©2016 IEEE
2784
748e72af01ba4ee742df65e9c030cacec88ce506Discriminative Regions Selection for Facial Expression
Recognition
MIRACL-FSEG, University of Sfax
3018 Sfax, Tunisia
MIRACL-FS, University of Sfax
3018 Sfax, Tunisia
('2049116', 'Hazar Mliki', 'hazar mliki')
('1749733', 'Mohamed Hammami', 'mohamed hammami')
745b42050a68a294e9300228e09b5748d2d20b81
749d605dd12a4af58de1fae6f5ef5e65eb06540eMulti-Task Video Captioning with Video and Entailment Generation
UNC Chapel Hill
('10721120', 'Ramakanth Pasunuru', 'ramakanth pasunuru')
('7736730', 'Mohit Bansal', 'mohit bansal')
{ram, mbansal}@cs.unc.edu
749382d19bfe9fb8d0c5e94d0c9b0a63ab531cb7A Modular Framework to Detect and Analyze Faces for
Audience Measurement Systems
Fraunhofer Institute for Integrated Circuits IIS
Department Electronic Imaging
Am Wolfsmantel 33, 91058 Erlangen, Germany
('33046373', 'Andreas Ernst', 'andreas ernst')
('27421829', 'Tobias Ruf', 'tobias ruf')
{andreas.ernst, tobias.ruf, christian.kueblbeck}@iis.fraunhofer.de
74c19438c78a136677a7cb9004c53684a4ae56ffRESOUND: Towards Action Recognition
without Representation Bias
UC San Diego
('48513320', 'Yingwei Li', 'yingwei li')
('47002970', 'Yi Li', 'yi li')
('1699559', 'Nuno Vasconcelos', 'nuno vasconcelos')
{yil325,yil898,nvasconcelos}@ucsd.edu
74618fb4ce8ce0209db85cc6069fe64b1f268ff4Rendering and Animating Expressive
Caricatures
Mukundan
*HITLab New Zealand, University of Canterbury, Christchurch, New Zealand
†Computer Science and Software Engineering, University of Canterbury, New Zealand
A stroke-based non-photorealistic rendering (NPR) engine is developed to generate, from a given face image, a stylized rendering of the caricature that appears to be a sketch of the original, with control over the facial appearance of the generated caricature and over facial expressions using quadratic deformation.
('1761180', 'Mohammad Obaid', 'mohammad obaid')
('1684805', 'Mark Billinghurst', 'mark billinghurst')
Email: {mohammad.obaid, mark.billinghurst}@hitlabnz.org, mukund@cosc.canterbury.ac.nz
74875368649f52f74bfc4355689b85a724c3db47Object Detection by Labeling Superpixels
1National Laboratory of Pattern Recognition, Chinese Academy of Sciences
Institute of Data Science and Technology, Alibaba Group
Institute of Deep Learning, Baidu Research
('1721677', 'Junjie Yan', 'junjie yan')
('2278628', 'Yinan Yu', 'yinan yu')
('8362374', 'Xiangyu Zhu', 'xiangyu zhu')
('1718623', 'Zhen Lei', 'zhen lei')
('34679741', 'Stan Z. Li', 'stan z. li')
7492c611b1df6bce895bee6ba33737e7fc7f60a6The 3D Menpo Facial Landmark Tracking Challenge
Imperial College London, UK
Center for Machine Vision and Signal Analysis, University of Oulu, Finland
University of Exeter, UK
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('34586458', 'Grigorios G. Chrysos', 'grigorios g. chrysos')
('2931390', 'Anastasios Roussos', 'anastasios roussos')
('31243357', 'Evangelos Ververas', 'evangelos ververas')
('3234063', 'Jiankang Deng', 'jiankang deng')
('2814229', 'George Trigeorgis', 'george trigeorgis')
{s.zafeiriou, g.chrysos}@imperial.ac.uk
74eae724ef197f2822fb7f3029c63014625ce1caInternational Journal of Bio-Science and Bio-Technology
Vol. 5, No. 2, April, 2013
Feature Extraction based on Local Directional Pattern with SVM
Decision-level Fusion for Facial Expression Recognition
1Key Laboratory of Education Informalization for Nationalities, Ministry of
Education, Yunnan Normal University, Kunming, China
College of Information, Yunnan Normal University, Kunming, China
('2535958', 'Juxiang Zhou', 'juxiang zhou')
('3305175', 'Tianwei Xu', 'tianwei xu')
('2411704', 'Jianhou Gan', 'jianhou gan')
zjuxiang@126.com,xutianwei@ynnu.edu.cn,kmganjh@yahoo.com.cn
7480d8739eb7ab97c12c14e75658e5444b852e9fNEGREL ET AL.: REVISITED MLBOOST FOR FACE RETRIEVAL
MLBoost Revisited: A Faster Metric
Learning Algorithm for Identity-Based Face
Retrieval
Frederic Jurie
Normandie Univ, UNICAEN,
ENSICAEN, CNRS
France
('2838835', 'Romain Negrel', 'romain negrel')
('2504258', 'Alexis Lechervy', 'alexis lechervy')
romain.negrel@unicaen.fr
alexis.lechervy@unicaen.fr
frederic.jurie@unicaen.fr
74ba4ab407b90592ffdf884a20e10006d2223015Partial Face Detection in the Mobile Domain ('3152615', 'Upal Mahbub', 'upal mahbub')
('40599829', 'Sayantan Sarkar', 'sayantan sarkar')
('9215658', 'Rama Chellappa', 'rama chellappa')
7405ed035d1a4b9787b78e5566340a98fe4b63a0Self-Expressive Decompositions for
Matrix Approximation and Clustering
('1746363', 'Eva L. Dyer', 'eva l. dyer')
('3318961', 'Raajen Patel', 'raajen patel')
('1746260', 'Richard G. Baraniuk', 'richard g. baraniuk')
744db9bd550bf5e109d44c2edabffec28c867b91FX e-Makeup for Muscle Based Interaction
1 Department of Informatics, PUC-Rio, Rio de Janeiro, Brazil
2 Department of Mechanical Engineering, PUC-Rio, Rio de Janeiro, Brazil
3 Department of Administration, PUC-Rio, Rio de Janeiro, Brazil
('21852164', 'Abel Arrieta', 'abel arrieta')
('38047086', 'Felipe Esteves', 'felipe esteves')
('1805792', 'Hugo Fuks', 'hugo fuks')
{kvega,hugo}@inf.puc-rio.br
abel.arrieta@aluno.puc-rio.br
felipeesteves@aluno.puc-rio.br
74325f3d9aea3a810fe4eab8863d1a48c099de11Regression-Based Image Alignment
for General Object Categories
Queensland University of Technology (QUT
Brisbane QLD 4000, Australia
Carnegie Mellon University (CMU
Pittsburgh PA 15289, USA
('2266155', 'Hilton Bristow', 'hilton bristow')
('1820249', 'Simon Lucey', 'simon lucey')
744d23991a2c48d146781405e299e9b3cc14b731This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TIP.2016.2535284, IEEE
Transactions on Image Processing
Aging Face Recognition: A Hierarchical Learning
Model Based on Local Patterns Selection
('1911510', 'Zhifeng Li', 'zhifeng li')
('2856494', 'Dihong Gong', 'dihong gong')
('1720243', 'Xuelong Li', 'xuelong li')
('1692693', 'Dacheng Tao', 'dacheng tao')
1a45ddaf43bcd49d261abb4a27977a952b5fff12LDOP: Local Directional Order Pattern for Robust
Face Retrieval
('34992579', 'Shiv Ram Dubey', 'shiv ram dubey')
('34356161', 'Snehasis Mukherjee', 'snehasis mukherjee')
1a41e5d93f1ef5b23b95b7163f5f9aedbe661394Hindawi Publishing Corporation
e Scientific World Journal
Volume 2014, Article ID 903160, 9 pages
http://dx.doi.org/10.1155/2014/903160
Research Article
Alignment-Free and High-Frequency Compensation in
Face Hallucination
College of Computer Science and Information Technology, Central South University of Forestry and Technology, Hunan 410004, China
College of Information Science and Engineering, Ritsumeikan University, Shiga 525-8577, Japan
Received 25 August 2013; Accepted 21 November 2013; Published 12 February 2014
Academic Editors: S. Bourennane and J. Marot
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Face hallucination is one of the learning-based super-resolution techniques, focused on resolution enhancement of facial images. Although face hallucination is a powerful and useful technique, some detailed high-frequency components cannot be recovered, and it requires accurate alignment between training samples. In this paper, we propose a high-frequency compensation framework based on residual images for face hallucination, in order to improve reconstruction performance. The basic idea of the proposed framework is to reconstruct or estimate a residual image, which can be used to compensate the high-frequency components of the reconstructed high-resolution image. Three approaches based on our proposed framework are presented. We also propose a patch-based, alignment-free face hallucination. In the patch-based face hallucination, we first segment facial images into overlapping patches and construct training patch pairs. For an input low-resolution (LR) image, the overlapping patches are also used to obtain the corresponding high-resolution (HR) patches by face hallucination. The whole HR image can then be reconstructed by combining all of the HR patches. Experimental results show that the high-resolution images obtained using our proposed approaches improve upon those obtained by the conventional face hallucination method, even when the training data set is unaligned.
1. Introduction
There is a high demand for high-resolution (HR) images in
applications such as video surveillance, remote sensing, and
medical imaging, because high-resolution images reveal more
information than low-resolution ones. However, it is hard to
improve image resolution by replacing sensors, owing to the
high cost and the physical limits of hardware. Super-resolution
image reconstruction (SR) is one promising technique for solving
this problem [1, 2]. SR can be broadly classified into two families of
methods: (1) the classical multiframe super resolution [2] and
(2) the single-frame super resolution, also known as
example-based or learning-based super resolution [3–5]. In
the classical multi-image SR, the HR image is reconstructed
by combining subpixel-aligned LR images. In
the learning-based SR, the HR image is reconstructed by
learning the correspondence between low- and high-resolution
image patches from a database.
Face hallucination is a learning-based SR technique
proposed by Baker and Kanade [1, 6] that focuses on
resolution enhancement of facial images. To date, many
face hallucination algorithms have been proposed
[7–12]. Although face hallucination is a powerful and useful
technique, some detailed high-frequency components cannot
be recovered. In this paper, we propose a high-frequency
compensation framework based on residual images for face
hallucination in order to improve the reconstruction
performance. The basic idea of the proposed framework is to
reconstruct or estimate a residual image, which can be used
to compensate the high-frequency components of the recon-
structed high-resolution image. We develop three approaches
based on this framework. We also propose a
patch-based, alignment-free face hallucination method. In the
patch-based face hallucination, we first segment facial images
into overlapping patches and construct training patch pairs.
For an input LR image, the overlapping patches are also used
to obtain the corresponding HR patches by face hallucination.
The whole HR image can then be reconstructed by combining
all of the HR patches.
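The patch-based pipeline described above (segment into overlapping patches, look up HR counterparts from training pairs, blend the overlaps) can be sketched in a few lines. This is a toy nearest-neighbor illustration under our own assumptions, not the paper's actual hallucination model; the function name and parameters are illustrative.

```python
import numpy as np

def hallucinate(lr_img, lr_patches, hr_patches, patch=4, scale=2, stride=2):
    """Toy patch-based hallucination: for every overlapping LR patch of the
    input, substitute the HR patch paired with its nearest LR training patch,
    then average the overlapping HR contributions."""
    H, W = lr_img.shape
    out = np.zeros((H * scale, W * scale))
    weight = np.zeros_like(out)
    flat_train = lr_patches.reshape(len(lr_patches), -1)
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            q = lr_img[y:y + patch, x:x + patch].ravel()
            idx = np.argmin(((flat_train - q) ** 2).sum(axis=1))  # nearest LR training patch
            hy, hx, hp = y * scale, x * scale, patch * scale
            out[hy:hy + hp, hx:hx + hp] += hr_patches[idx]        # paste paired HR patch
            weight[hy:hy + hp, hx:hx + hp] += 1.0
    return out / np.maximum(weight, 1e-8)                         # average the overlaps
```

Because the lookup is per patch rather than per face, no global alignment between training faces is needed, which is the point of the alignment-free variant.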
('1699766', 'Yen-Wei Chen', 'yen-wei chen')
('2755407', 'So Sasatani', 'so sasatani')
('1707360', 'Xian-Hua Han', 'xian-hua han')
('1699766', 'Yen-Wei Chen', 'yen-wei chen')
Correspondence should be addressed to Yen-Wei Chen; chen@is.ritsumei.ac.jp
1a65cc5b2abde1754b8c9b1d932a68519bcb1adaLU, LIAN, YUILLE: PARSING SEMANTIC PARTS OF CARS
Parsing Semantic Parts of Cars Using
Graphical Models and Segment Appearance
Consistency
Alan Yuille2
1 Department of Electrical Engineering
Tsinghua University
2 Department of Statistics
University of California, Los Angeles
('2282045', 'Wenhao Lu', 'wenhao lu')
('5964529', 'Xiaochen Lian', 'xiaochen lian')
yourslewis@gmail.com
lianxiaochen@gmail.com
yuille@stat.ucla.edu
1aa766bbd49bac8484e2545c20788d0f86e73ec2
Baseline Face Detection, Head Pose Estimation, and Coarse
Direction Detection for Facial Data in the SHRP2 Naturalistic
Driving Study
J. Paone, D. Bolme, R. Ferrell, Member, IEEE, D. Aykac, and
T. Karnowski, Member, IEEE
Oak Ridge National Laboratory, Oak Ridge, TN
1a849b694f2d68c3536ed849ed78c82e979d64d5This is a repository copy of Symmetric Shape Morphing for 3D Face and Head Modelling.
White Rose Research Online URL for this paper:
http://eprints.whiterose.ac.uk/131760/
Version: Accepted Version
Proceedings Paper:
Dai, Hang, Pears, Nicholas Edwin orcid.org/0000-0001-9513-5634, Smith, William Alfred
Peter orcid.org/0000-0002-6047-0413 et al. (1 more author) (2018) Symmetric Shape
Morphing for 3D Face and Head Modelling. In: The 13th IEEE Conference on Automatic
Face and Gesture Recognition. IEEE .
Reuse
Items deposited in White Rose Research Online are protected by copyright, with all rights reserved unless
indicated otherwise. They may be downloaded and/or printed for private study, or other acts as permitted by
national copyright laws. The publisher or other rights holders may allow further reproduction and re-use of
the full text version. This is indicated by the licence information on the White Rose Research Online record
for the item.
Takedown
If you consider content in White Rose Research Online to be in breach of UK law, please notify us by
https://eprints.whiterose.ac.uk/
emailing eprints@whiterose.ac.uk including the URL of the record and the reason for the withdrawal request.
eprints@whiterose.ac.uk
1a46d3a9bc1e4aff0ccac6403b49a13c8a89fc1dOnline Robust Image Alignment via Iterative Convex Optimization
Center for Data Analytics & Biomedical Informatics, Computer & Information Science Department,
Temple University, Philadelphia, PA 19122, USA
School of Information and Control Engineering, Nanjing University of Information Science and Technology, Nanjing, 210044, China
Purdue University, West Lafayette, IN 47907, USA
('36578908', 'Yi Wu', 'yi wu')
('39274045', 'Bin Shen', 'bin shen')
('1805398', 'Haibin Ling', 'haibin ling')
fwuyi,hblingg@temple.edu, bshen@purdue.edu
1a878e4667fe55170252e3f41d38ddf85c87fcafDiscriminative Machine Learning with Structure
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2010-4
http://www.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-4.html
January 12, 2010
('1685481', 'Simon Lacoste-Julien', 'simon lacoste-julien')
1a41831a3d7b0e0df688fb6d4f861176cef97136massachusetts institute of technology artificial intelligence laboratory
A Biological Model of Object
Recognition with Feature Learning
AI Technical Report 2003-009
CBCL Memo 227
June 2003
© 2003 massachusetts institute of technology, cambridge, ma 02139 usa — www.ai.mit.edu
('1848733', 'Jennifer Louie', 'jennifer louie')@ MIT
1ac2882559a4ff552a1a9956ebeadb035cb6df5bHow much training data for facial action unit detection?
University of Pittsburgh, Pittsburgh, PA, USA
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
('36185909', 'Jeffrey M. Girard', 'jeffrey m. girard')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
('1820249', 'Simon Lucey', 'simon lucey')
('1707876', 'Fernando De la Torre', 'fernando de la torre')
1a7a17c4f97c68d68fbeefee1751d349b83eb14aIterative Hessian sketch: Fast and accurate solution
approximation for constrained least-squares
1Department of Electrical Engineering and Computer Science
2Department of Statistics
University of California, Berkeley
November 4, 2014
('3173667', 'Mert Pilanci', 'mert pilanci')
('1721860', 'Martin J. Wainwright', 'martin j. wainwright')
{mert, wainwrig}@berkeley.edu
1aef6f7d2e3565f29125a4871cd60c4d86c48361Natural Language Video Description using
Deep Recurrent Neural Networks
University of Texas at Austin
Doctoral Dissertation Proposal
('1811430', 'Subhashini Venugopalan', 'subhashini venugopalan')
('1797655', 'Raymond J. Mooney', 'raymond j. mooney')
vsub@cs.utexas.edu
1a6c3c37c2e62b21ebc0f3533686dde4d0103b3fInternational Journal of Linguistics and Computational Applications (IJLCA) ISSN 2394-6385 (Print)
Volume 4, Issue 1, January – March 2017 ISSN 2394-6393 (Online)
Implementation of Partial Face Recognition
using Directional Binary Code
N.Pavithra #1, A.Sivapriya*2, K.Hemalatha*3 , D.Lakshmi*4
Final Year, Panimalar Institute of Technology
Panimalar Institute of Technology, Tamilnadu, India
1a167e10fe57f6d6eff0bb9e45c94924d9347a3eBoosting VLAD with Double Assignment using
Deep Features for Action Recognition in Videos
University of Trento, Italy
Tuan A. Nguyen
University of Tokyo, Japan
University of Tokyo, Japan
University Politehnica of Bucharest, Romania
University of Trento, Italy
('3429470', 'Ionut C. Duta', 'ionut c. duta')
('1712839', 'Kiyoharu Aizawa', 'kiyoharu aizawa')
('1796198', 'Bogdan Ionescu', 'bogdan ionescu')
('1703601', 'Nicu Sebe', 'nicu sebe')
ionutcosmin.duta@unitn.it
t nguyen@hal.t.u-tokyo.ac.jp
aizawa@hal.t.u-tokyo.ac.jp
bionescu@imag.pub.ro
niculae.sebe@unitn.it
1a3eee980a2252bb092666cf15dd1301fa84860ePCA GAUSSIANIZATION FOR IMAGE PROCESSING
Image Processing Laboratory (IPL), Universitat de Val`encia
Catedr´atico A. Escardino - 46980 Paterna, Val`encia, Spain
('2732577', 'Valero Laparra', 'valero laparra')
('1684246', 'Gustavo Camps-Valls', 'gustavo camps-valls')
{lapeva,gcamps,jmalo}@uv.es
1a140d9265df8cf50a3cd69074db7e20dc060d14Face Parts Localization Using
Structured-Output Regression Forests
School of EECS, Queen Mary University of London
('2966679', 'Heng Yang', 'heng yang')
('1744405', 'Ioannis Patras', 'ioannis patras')
{heng.yang,i.patras}@eecs.qmul.ac.uk
1a85956154c170daf7f15f32f29281269028ff69Active Pictorial Structures
Imperial College London
180 Queens Gate, SW7 2AZ, London, U.K.
('2788012', 'Epameinondas Antonakos', 'epameinondas antonakos')
('2575567', 'Joan Alabort-i-Medina', 'joan alabort-i-medina')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
{e.antonakos, ja310, s.zafeiriou}@imperial.ac.uk
1a031378cf1d2b9088a200d9715d87db8a1bf041Workshop track - ICLR 2018
DEEP DICTIONARY LEARNING: SYNERGIZING RE-
CONSTRUCTION AND CLASSIFICATION
('3362896', 'Shahin Mahdizadehaghdam', 'shahin mahdizadehaghdam')
('1733181', 'Ashkan Panahi', 'ashkan panahi')
('1769928', 'Hamid Krim', 'hamid krim')
{smahdiz,apanahi,ahk}@ncsu.edu & liyi.dai.civ@mail.mil
1afd481036d57320bf52d784a22dcb07b1ca95e2The Computer Journal Advance Access published December 6, 2012
The Author 2012. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved
doi:10.1093/comjnl/bxs146
Automated Content Metadata Extraction
Services Based on MPEG Standards
D.C. Gibbon∗, Z. Liu, A. Basso and B. Shahraray
AT&T Labs Research, Middletown, NJ, USA
This paper is concerned with the generation, acquisition, standardized representation and transport
of video metadata. The use of MPEG standards in the design and development of interoperable
media architectures and web services is discussed. A high-level discussion of several algorithms
for metadata extraction is presented. Some architectural and algorithmic issues encountered when
designing services for real-time processing of video streams, as opposed to traditional offline media
processing, are addressed. A prototype real-time video analysis system for generating MPEG-7
Audiovisual Description Profile from MPEG-2 transport stream encapsulated video is presented.
Such a capability can enable a range of new services such as content-based personalization of live
broadcasts given that the MPEG-7 based data models fit in well with specifications for advanced
television services such as TV-Anytime andAlliance for Telecommunications Industry Solutions IPTV
Interoperability Forum.
Keywords: MPEG-7; MPEG-21; audiovisual description profile; video processing; automated metadata
extraction; video metadata, real-time media processing
Received 1 March 2012; revised 11 September 2012; accepted 9 October 2012
Handling editor: Marios Angelides
1.
INTRODUCTION
Content descriptors have gained considerable prominence
in the content ecosystem in the last decade. This growing
significance stems from the fact that rich metadata promotes
user engagement, enables fine-grained access to content and
allows more intelligent and targeted access to content.
Effective utilization of content descriptors involves three
basic steps, namely generation, representation and transport.
In traditional broadcasting,
the generation of the content
descriptions has been a manual process in which individuals
would access the content and would index it according to
specific rules (i.e. annotation guides). While in the past this
was a viable option due to the limited amount of available
content, with the large volumes of content that are generated
today (e.g. YouTube uploads have currently surpassed 1 h of
video every second), manual indexing is no longer a viable
option. Research in multimedia content analysis has generated a
variety of algorithms for content feature extraction in the visual,
text, music and speech domains. Such algorithms provide
descriptions with different levels of confidence and are often
combined to improve their accuracy and descriptive power.
Despite the enormous progress that has been made in this area,
content description generation is not yet sufficiently advanced
to be fully automated for all applications and types of content.
However, for a subset of content types and certain applications,
the current state of the art in automated content processing has
proven sufficient.
Another important consideration in effective and widespread
utilization of content metadata is the adoption of appropriate
representations for the metadata. Historically, the represen-
tation of content metadata has been specialized to specific
representation and service needs (i.e. the asset distribution
interface from CableLabs for traditional paid video on demand
services). Recently, in the context of MPEG, a standardization
effort has been undertaken to create more general represen-
tations of content descriptors that are independent of any
particular application and to enable interoperability among
metadata generation systems and applications.
Finally, for a certain class of applications and services,
real-time delivery or transport of metadata is critical, but
is an area that is still in its infancy. For example, today’s
systems for delivering television electronic program guide
(EPG) information make efficient use of multicast delivery,
but the data are largely static (the data may only change
The Computer Journal, 2012
For Permissions, please email: journals.permissions@oup.com
Corresponding author: dcg@research.att.com
1a9337d70a87d0e30966ecd1d7a9b0bbc7be161f
1a4b6ee6cd846ef5e3030a6ae59f026e5f50eda6Deep Learning for Video Classification and Captioning
Fudan University, 2Microsoft Research Asia, 3University of Maryland
1. Introduction
Today’s digital contents are inherently multimedia: text, audio, image,
and video. Video, in particular, has become a new way of communication
between Internet users with the proliferation of sensor-rich mobile devices.
Accelerated by the tremendous increase in Internet bandwidth and storage
space, video data has been generated, published and spread explosively, be-
coming an indispensable part of today’s big data. This has encouraged the
development of advanced techniques for a broad range of video understand-
ing applications. A fundamental issue that underlies the success of these
technological advances is the understanding of video contents. Recent ad-
vances in deep learning in image [41, 68, 17, 50] and speech [21, 27] domain
have encouraged techniques to learn robust video feature representations to
effectively exploit abundant multimodal clues in video data.
In this paper, we focus on reviewing two lines of research aiming to stimu-
late the comprehension of videos with deep learning: video classification and
video captioning. While video classification concentrates on automatically
labeling video clips based on their semantic contents like human actions or
complex events, video captioning attempts to generate a complete and nat-
ural sentence, enriching the single label as in video classification, to capture
the most informative dynamics in videos.
There have been several efforts to survey the literature on video content
understanding. Most of the approaches surveyed in these works adopted
hand-crafted features coupled with typical machine learning pipelines for
action recognition and event detection [1, 88, 61, 35]. In contrast, this paper
focuses on discussing state-of-the-art deep learning techniques not only for
video classification but also video captioning. As deep learning for video
analysis is an emerging and vibrant field, we hope this paper could help
stimulate future research along the line.
('3099139', 'Zuxuan Wu', 'zuxuan wu')
('2053452', 'Ting Yao', 'ting yao')
('35782003', 'Yanwei Fu', 'yanwei fu')
('1717861', 'Yu-Gang Jiang', 'yu-gang jiang')
zxwu@cs.umd.edu, tiyao@microsoft.com, {ygj, yanweifu}@fudan.edu.cn
1a9a192b700c080c7887e5862c1ec578012f9ed1IEEE TRANSACTIONS ON SYSTEM, MAN AND CYBERNETICS, PART B
Discriminant Subspace Analysis for Face
Recognition with Small Number of Training
Samples
('1844328', 'Hui Kong', 'hui kong')
('1786811', 'Xuchun Li', 'xuchun li')
('1752714', 'Matthew Turk', 'matthew turk')
('1708413', 'Chandra Kambhamettu', 'chandra kambhamettu')
1af52c853ff1d0ddb8265727c1d70d81b4f9b3a9ARTICLE
International Journal of Advanced Robotic Systems
Face Recognition Under Illumination
Variation Using Shadow Compensation
and Pixel Selection
Regular Paper
Dankook University, 126 Jukjeon-dong, Suji-gu, Yongin-si, Gyeonggi-do, Korea
Received 14 Jun 2012; Accepted 31 Aug 2012
DOI: 10.5772/52939
© 2012 Choi; licensee InTech. This is an open access article distributed under the terms of the Creative
Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use,
distribution, and reproduction in any medium, provided the original work is properly cited.
('1737997', 'Sang-Il Choi', 'sang-il choi')* Corresponding author E-mail: choisi@dankook.ac.kr
1a8ccc23ed73db64748e31c61c69fe23c48a2bb1Extensive Facial Landmark Localization
with Coarse-to-fine Convolutional Network Cascade
Megvii Inc.
('1848243', 'Erjin Zhou', 'erjin zhou'){zej,fhq,czm,jyn,yq}@megvii.com
1a40092b493c6b8840257ab7f96051d1a4dbfeb2Sports Videos in the Wild (SVW): A Video Dataset for Sports Analysis
Michigan State University, East Lansing, MI, USA
2 TechSmith Corporation, Okemos, MI, USA
('2941187', 'Seyed Morteza Safdarnejad', 'seyed morteza safdarnejad')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
('1938832', 'Lalita Udpa', 'lalita udpa')
('40467330', 'Brooks Andrus', 'brooks andrus')
('1678721', 'John Wood', 'john wood')
('37008125', 'Dean Craven', 'dean craven')
1ad97cce5fa8e9c2e001f53f6f3202bddcefba22Grassmann Averages for Scalable Robust PCA
DIKU and MPIs T¨ubingen∗
Denmark and Germany
DTU Compute∗
Lyngby, Denmark
('1808965', 'Aasa Feragen', 'aasa feragen')
('2142792', 'Søren Hauberg', 'søren hauberg')
aasa@diku.dk
sohau@dtu.dk
1a1118cd4339553ad0544a0a131512aee50cf7de
1a6c9ef99bf0ab9835a91fe5f1760d98a0606243ConceptMap:
Mining Noisy Web Data for Concept Learning
Bilkent University, 06800 Cankaya, Turkey
('2540074', 'Eren Golge', 'eren golge')
1afdedba774f6689eb07e048056f7844c9083be9Markov Random Field Structures for Facial Action Unit Intensity Estimation
∗Department of Computing
Imperial College London
180 Queen’s Gate
London, UK
†EEMCS
University of Twente
7522 NB Enschede
Netherlands
('3007548', 'Georgia Sandbach', 'georgia sandbach')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1694605', 'Maja Pantic', 'maja pantic')
{gls09,s.zafeiriou,m.pantic}@imperial.ac.uk
1a2b3fa1b933042687eb3d27ea0a3fcb67b66b43WANG AND MORI: MAX-MARGIN LATENT DIRICHLET ALLOCATION
Max-Margin Latent Dirichlet Allocation for
Image Classification and Annotation
University
of Illinois at Urbana Champaign
School of Computing Science, Simon
Fraser University
('40457160', 'Yang Wang', 'yang wang')
('10771328', 'Greg Mori', 'greg mori')
yangwang@uiuc.edu
mori@cs.sfu.ca
1a7a2221fed183b6431e29a014539e45d95f0804Person Identification Using Text and Image Data
David S. Bolme, J. Ross Beveridge and Adele E. Howe
Computer Science Department
Colorado State Univeristy
Fort Collins, Colorado 80523
[bolme,ross,howe]@cs.colostate.edu
1a5b39a4b29afc5d2a3cd49087ae23c6838eca2bCompetitive Game Designs for Improving the Cost
Effectiveness of Crowdsourcing
L3S Research Center, Hannover, Germany
('2993225', 'Markus Rokicki', 'markus rokicki')
('3257370', 'Sergiu Chelaru', 'sergiu chelaru')
('2553718', 'Sergej Zerr', 'sergej zerr')
('1745880', 'Stefan Siersdorfer', 'stefan siersdorfer')
{rokicki,chelaru,siersdorfer,zerr}@L3S.de
2878b06f3c416c98496aad6fc2ddf68d2de5b8f6Available online at www.sciencedirect.com
Computer Vision and Image Understanding 110 (2008) 91–101
www.elsevier.com/locate/cviu
Two-stage optimal component analysis
Florida State University, Tallahassee, FL 32306, USA
Florida State University, Tallahassee, FL 32306, USA
c School of Computational Science, Florida State University, Tallahassee, FL 32306, USA
Received 26 September 2006; accepted 30 April 2007
Available online 8 June 2007
('2207859', 'Yiming Wu', 'yiming wu')
('1800002', 'Xiuwen Liu', 'xiuwen liu')
('2436294', 'Washington Mio', 'washington mio')
287795991fad3c61d6058352879c7d7ae1fdd2b6International Journal of Computer Applications (0975 – 8887)
Volume 66– No.8, March 2013
Biometrics Security: Facial Marks Detection from the
Low Quality Images
and facial marks are detected using LoG with a morphological
operator. This method, however, was not sufficient to detect
facial marks in low-quality images [7]. Facial marks have
nevertheless been used to speed up the retrieval process in
order to differentiate human faces [15].
B.S.Abdur Rahman University B.S.Abdur Rahman University
Dept. Of Information Technology Dept. Of Computer Science & Engineering
Chennai, India Chennai, India
('9401261', 'Ziaul Haque Choudhury', 'ziaul haque choudhury')
28a900a07c7cbce6b6297e4030be3229e094a950382 The International Arab Journal of Information Technology, Vol. 9, No. 4, July 2012
Local Directional Pattern Variance (LDPv): A
Robust Feature Descriptor for Facial
Expression Recognition
Kyung Hee University, South Korea
('3182680', 'Taskeed Jabid', 'taskeed jabid')
('1685505', 'Oksam Chae', 'oksam chae')
282503fa0285240ef42b5b4c74ae0590fe169211Feeding Hand-Crafted Features for Enhancing the Performance of
Convolutional Neural Networks
Seoul National University
Seoul Nat’l Univ.
Seoul National University
('35453923', 'Sepidehsadat Hosseini', 'sepidehsadat hosseini')
('32193683', 'Seok Hee Lee', 'seok hee lee')
('1707645', 'Nam Ik Cho', 'nam ik cho')
sepid@ispl.snu.ac.kr
seokheel@snu.ac.kr
nicho@snu.ac.kr
28e0ed749ebe7eb778cb13853c1456cb6817a166
28b9d92baea72ec665c54d9d32743cf7bc0912a7
283d226e346ac3e7685dd9a4ba8ae55ee4f2fe43BAYESIAN DATA ASSOCIATION FOR TEMPORAL SCENE
UNDERSTANDING
by
A Dissertation Submitted to the Faculty of the
DEPARTMENT OF COMPUTER SCIENCE
In Partial Fulfillment of the Requirements
For the Degree of
DOCTOR OF PHILOSOPHY
In the Graduate College
THE UNIVERSITY OF ARIZONA
2013
('10399726', 'Ernesto Brau Avila', 'ernesto brau avila')
28d7029cfb73bcb4ad1997f3779c183972a406b4Discriminative Nonlinear Analysis Operator
Learning: When Cosparse Model Meets Image
Classification
('2833510', 'Zaidao Wen', 'zaidao wen')
('1940528', 'Biao Hou', 'biao hou')
('1734497', 'Licheng Jiao', 'licheng jiao')
280d59fa99ead5929ebcde85407bba34b1fcfb59978-1-4799-9988-0/16/$31.00 ©2016 IEEE
2662
ICASSP 2016
28f5138d63e4acafca49a94ae1dc44f7e9d84827Journal of Machine Learning Research xx (2012) xx-xx
Submitted xx/xx; Published xx/xx
MahNMF: Manhattan Non-negative Matrix Factorization
Center for Quantum Computation and Intelligent Systems
Faculty of Engineering and Information Technology
University of Technology, Sydney
Sydney, NSW 2007, Australia
Center for Quantum Computation and Intelligent Systems
Faculty of Engineering and Information Technology
University of Technology, Sydney
Sydney, NSW 2007, Australia
School of Computer Science
National University of Defense Technology
Changsha, Hunan 410073, China
Centre for Computational Statistics and Machine Learning (CSML)
Department of Computer Science
University College London
Gower Street, London WC1E 6BT, United Kingdom
Editor: xx
('2067095', 'Naiyang Guan', 'naiyang guan')
('1692693', 'Dacheng Tao', 'dacheng tao')
('1764542', 'Zhigang Luo', 'zhigang luo')
('1792322', 'John Shawe-Taylor', 'john shawe-taylor')
Guan.Naiyang@uts.edu.au
dacheng.tao@uts.edu.au
zgluo@nudt.edu.cn
J.Shawe-Taylor@cs.ucl.ac.uk
28e1668d7b61ce21bf306009a62b06593f1819e3RESEARCH ARTICLE
Validation of the Amsterdam Dynamic Facial
Expression Set – Bath Intensity Variations
(ADFES-BIV): A Set of Videos Expressing Low,
Intermediate, and High Intensity Emotions
University of Bath, Bath, United Kingdom
☯ These authors contributed equally to this work.
('7249951', 'Tanja S. H. Wingenbach', 'tanja s. h. wingenbach')
('2708124', 'Chris Ashwin', 'chris ashwin')
('39455300', 'Mark Brosnan', 'mark brosnan')
* tshw20@bath.ac.uk
28cd46a078e8fad370b1aba34762a874374513a5CVPAPER.CHALLENGE IN 2016, JULY 2017
cvpaper.challenge in 2016: Futuristic Computer
Vision through 1,600 Papers Survey
('1730200', 'Hirokatsu Kataoka', 'hirokatsu kataoka')
('1713046', 'Yun He', 'yun he')
('9935341', 'Shunya Ueta', 'shunya ueta')
('5014206', 'Teppei Suzuki', 'teppei suzuki')
('3408038', 'Kaori Abe', 'kaori abe')
('2554424', 'Asako Kanezaki', 'asako kanezaki')
('22219521', 'Toshiyuki Yabe', 'toshiyuki yabe')
('10800402', 'Yoshihiro Kanehara', 'yoshihiro kanehara')
('22174281', 'Hiroya Yatsuyanagi', 'hiroya yatsuyanagi')
('1692565', 'Shinya Maruyama', 'shinya maruyama')
('3217653', 'Masataka Fuchida', 'masataka fuchida')
('2642022', 'Yudai Miyashita', 'yudai miyashita')
('34935749', 'Kazushige Okayasu', 'kazushige okayasu')
('20505300', 'Yuta Matsuzaki', 'yuta matsuzaki')
286adff6eff2f53e84fe5b4d4eb25837b46cae23Single-Image Depth Perception in the Wild
University of Michigan, Ann Arbor
('1732404', 'Weifeng Chen', 'weifeng chen')
('8342699', 'Jia Deng', 'jia deng')
('2097755', 'Zhao Fu', 'zhao fu')
('2500067', 'Dawei Yang', 'dawei yang')
{wfchen,zhaofu,ydawei,jiadeng}@umich.edu
286812ade95e6f1543193918e14ba84e5f8e852eDOU, WU, SHAH, KAKADIARIS: 3D FACE RECONSTRUCTION FROM 2D LANDMARKS
Robust 3D Face Shape Reconstruction from
Single Images via Two-Fold Coupled
Structure Learning
Computational Biomedicine Lab
Department of Computer Science
University of Houston
Houston, TX, USA
('39634395', 'Pengfei Dou', 'pengfei dou')
('2461369', 'Yuhang Wu', 'yuhang wu')
('1706204', 'Ioannis A. Kakadiaris', 'ioannis a. kakadiaris')
bensondou@gmail.com
yuhang@cbl.uh.edu
sshah@central.uh.edu
ioannisk@uh.edu
282a3ee79a08486f0619caf0ada210f5c3572367
288dbc40c027af002298b38954d648fddd4e2fd3
28f311b16e4fe4cc0ff6560aae3bbd0cb6782966Learning Language from Perceptual Context
Department of Computer Science
University of Texas at Austin
David L. Chen
Austin, TX 78712
Doctoral Dissertation Proposal
('1797655', 'Raymond J. Mooney', 'raymond j. mooney')dlcc@cs.utexas.edu
28312c3a47c1be3a67365700744d3d6665b86f22
28d06fd508d6f14cd15f251518b36da17909b79eWhat’s in a Name? First Names as Facial Attributes
Stanford University
Cornell University
Stanford University
('2896700', 'Huizhong Chen', 'huizhong chen')
('39460815', 'Andrew C. Gallagher', 'andrew c. gallagher')
('1739786', 'Bernd Girod', 'bernd girod')
hchen2@stanford.edu
andrew.c.gallagher@cornell.edu
bgirod@stanford.edu
28b5b5f20ad584e560cd9fb4d81b0a22279b2e7bA New Fuzzy Stacked Generalization Technique
and Analysis of its Performance
('2159942', 'Mete Ozay', 'mete ozay')
('7158165', 'Fatos T. Yarman Vural', 'fatos t. yarman vural')
281486d172cf0c78d348ce7d977a82ff763efccdMining a Deep And-OR Object Semantics from Web Images via Cost-Sensitive
Question-Answer-Based Active Annotations
Shanghai Jiao Tong University
University of California, Los Angeles
Chongqing University of Posts and Telecommunications
('22063226', 'Quanshi Zhang', 'quanshi zhang')
('39092098', 'Ying Nian Wu', 'ying nian wu')
('3133970', 'Song-Chun Zhu', 'song-chun zhu')
288964068cd87d97a98b8bc927d6e0d2349458a2Mean-Variance Loss for Deep Age Estimation from a Face
1Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing, 100190, China
University of Chinese Academy of Sciences, Beijing, 100049, China
3CAS Center for Excellence in Brain Science and Intelligence Technology
('34393045', 'Hu Han', 'hu han')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
hongyu.pan@vipl.ict.ac.cn, {hanhu,sgshan,xlchen}@ict.ac.cn
28bc378a6b76142df8762cd3f80f737ca2b79208Understanding Objects in Detail with Fine-grained Attributes
Ross Girshick5
David Weiss7
('1687524', 'Andrea Vedaldi', 'andrea vedaldi')
('2585200', 'Siddharth Mahendran', 'siddharth mahendran')
('2381485', 'Stavros Tsogkas', 'stavros tsogkas')
('35208858', 'Subhransu Maji', 'subhransu maji')
('1776374', 'Juho Kannala', 'juho kannala')
('2827962', 'Esa Rahtu', 'esa rahtu')
('1758219', 'Matthew B. Blaschko', 'matthew b. blaschko')
('1685978', 'Ben Taskar', 'ben taskar')
('2362960', 'Naomi Saphra', 'naomi saphra')
('2920190', 'Sammy Mohamed', 'sammy mohamed')
('2010660', 'Iasonas Kokkinos', 'iasonas kokkinos')
('34838386', 'Karen Simonyan', 'karen simonyan')
287900f41dd880802aa57f602e4094a8a9e5ae56
28c0cb56e7f97046d6f3463378d084e9ea90a89aAutomatic Face Recognition for Film Character Retrieval in Feature-Length
Films
Ognjen Arandjelovi´c
University of Oxford, UK
('1688869', 'Andrew Zisserman', 'andrew zisserman')E-mail: oa214@cam.ac.uk,az@robots.ox.ac.uk
28be652db01273289499bc6e56379ca0237506c0FaLRR: A Fast Low Rank Representation Solver
School of Computer Engineering, Nanyang Technological University, Singapore
‡Centre for Quantum Computation & Intelligent Systems and the Faculty
of Engineering and Information Technology, University of Technology, Sydney, Australia
In this paper, we develop a fast solver of low rank representation (LRR) [3]
called FaLRR, which achieves order-of-magnitude speedup over existing
LRR solvers, and is theoretically guaranteed to obtain a global optimum.
LRR [3] has shown promising performance for various computer vision
applications such as face clustering. Let X = [x1, . . . ,xn] ∈ Rd×n be a set
of data samples drawn from a union of several subspaces, where d is the
feature dimension and n is the total number of data samples. LRR seeks
a low-rank data representation matrix Z ∈ Rn×n such that X can be self-
expressed (i.e., X = XZ) when the data is clean. Considering that input
data may contain outliers (i.e., some columns of X are corrupted), the LRR
problem can be formulated as,
(cid:107)Z(cid:107)∗ + λ(cid:107)E(cid:107)2,1
min
Z,E
s.t. X = XZ + E,
(1)
where λ is a tradeoff parameter and E ∈ Rd×n denotes the representation
error. The nuclear norm based term (cid:107)Z(cid:107)∗ acts as an approximation of the
rank regularizer, and the (cid:96)2,1 norm based term (cid:107)E(cid:107)2,1 encourages E to be
column-sparse.
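As a concrete reading of (1): given a candidate pair $(Z, E)$ with $E = X - XZ$, the objective is the sum of the singular values of $Z$ plus $\lambda$ times the sum of the column $\ell_2$ norms of $E$. A minimal numpy sketch of evaluating this objective (our own helper, not the FaLRR solver):

```python
import numpy as np

def lrr_objective(X, Z, lam):
    """Objective of (1): ||Z||_* + lam * ||E||_{2,1}, with E = X - X @ Z."""
    E = X - X @ Z                                         # self-expression residual
    nuclear = np.linalg.svd(Z, compute_uv=False).sum()    # nuclear norm ||Z||_*
    l21 = np.linalg.norm(E, axis=0).sum()                 # sum of column l2 norms
    return nuclear + lam * l21
```

With $Z = I_n$ the data is represented exactly ($E = 0$) and the objective reduces to $\|I_n\|_* = n$, which shows why the nuclear norm is needed: it penalizes this trivial full-rank solution in favor of low-rank ones.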
Regarding optimization, several algorithms [2, 3, 4] were proposed to
exactly solve LRR. Moreover, a distributed framework [5] was developed to
efficiently obtain an approximate solution of LRR. However, the existing
algorithms are usually based on the original formulation in (1) or a similar
variant [4], which are two-variable problems with regard to the original data
matrix. In this paper, we develop a fast LRR solver named FaLRR, which
is based on a new reformulation of LRR as an optimization problem with
regard to factorized data (obtained by skinny SVD on the original
data matrix).
Reformulation. Specifically, we study a more general formulation of
LRR as follows,

min_{Z ∈ R^{n×m}, E ∈ R^{d×m}} ‖Z‖_* + λ‖E‖_{2,1}   s.t.   XD = XZ + E,    (2)
which includes (1) as a special case (taking D = I_n). Let r denote the rank of X. Moreover,
let us factorize X via the skinny singular value decomposition (SVD):
X = U_r S_r V_r', where U_r ∈ R^{d×r} and V_r ∈ R^{n×r} are two column-wise orthogonal
matrices that satisfy U_r'U_r = V_r'V_r = I_r, and S_r ∈ R^{r×r} is a diagonal matrix
defined as S_r = diag([σ_1, …, σ_r]'), in which {σ_i}_{i=1}^{r} are the r positive
singular values of X sorted in descending order. Based on the definitions above,
we present the reformulation in the following theorem:
Theorem 1 Let W* denote an optimal solution of the following problem,

min_{W ∈ R^{r×m}} ‖W‖_* + λ‖S_r(V_r'D − W)‖_{2,1}.    (3)

Then {Z*, E*}, defined as Z* = V_r W* and E* = XD − X V_r W*, is an
optimal solution of the problem in (2). In particular, ‖Z*‖_* = ‖W*‖_* and
‖E*‖_{2,1} = ‖S_r(V_r'D − W*)‖_{2,1} always hold, implying that the two problems
in (2) and (3) have equal optimal objective values.
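The skinny SVD underlying this reformulation can be checked numerically. The NumPy sketch below (illustrative only, not the authors' code; matrix sizes are arbitrary) verifies that X = U_r S_r V_r' with column-wise orthogonal factors:

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a rank-3 data matrix X in R^{6x8} as a product of thin factors
X = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 8))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))             # numerical rank of X
Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vt[:r, :].T

assert r == 3
assert np.allclose(X, Ur @ Sr @ Vr.T)         # X = U_r S_r V_r'
assert np.allclose(Ur.T @ Ur, np.eye(r))      # U_r' U_r = I_r
assert np.allclose(Vr.T @ Vr, np.eye(r))      # V_r' V_r = I_r
```

Since the unknown in (3) lives in R^{r×m} rather than R^{n×m}, the cost of each solver iteration scales with the rank r instead of the sample count n, which is the source of the speedup when r ≪ n.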
Optimization. In terms of optimization, we rewrite the problem in (3)
as follows by introducing another variable Q ∈ R^{r×m}:

min_{W, Q ∈ R^{r×m}} ‖W‖_* + λ‖S_r Q‖_{2,1}   s.t.   W + Q = V_r'D,    (4)

and develop an efficient algorithm based on the alternating direction method
(ADM) [1, 2], in which both resultant subproblems can be solved exactly.
The corresponding augmented Lagrangian [1] w.r.t. (4) is

L_ρ(W, Q, L) = ‖W‖_* + λ‖S_r Q‖_{2,1} + ⟨L, V_r'D − W − Q⟩ + (ρ/2)‖V_r'D − W − Q‖_F²,
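In a standard ADM scheme of this form, the W-subproblem is a proximal step on the nuclear norm, which is solved exactly by singular value thresholding. A generic NumPy sketch of that operator (textbook SVT under the usual prox formulation, not the authors' implementation):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the exact minimizer of
    tau * ||W||_* + (1/2) * ||W - M||_F^2 over W."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    # Soft-threshold the singular values, then reassemble the matrix
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# On a diagonal matrix, SVT simply shrinks the diagonal entries:
M = np.diag([3.0, 1.0])
W = svt(M, 2.0)  # diag(1, 0): sigma=3 shrinks to 1, sigma=1 is zeroed
```

Because the SVT here acts on an r×m matrix rather than an n×n one, each ADM iteration stays cheap, consistent with the claimed speedup.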
('2518469', 'Shijie Xiao', 'shijie xiao')
('12135788', 'Wen Li', 'wen li')
('38188040', 'Dong Xu', 'dong xu')
('1692693', 'Dacheng Tao', 'dacheng tao')
28bcf31f794dc27f73eb248e5a1b2c3294b3ec9dInternational Journal of Computer Applications (0975 – 8887)
Volume 96– No.13, June 2014
Improved Combination of LBP plus LFDA for Facial
Expression Recognition using SRC
Research Scholar, CSE Department,
Government College of Engineering, Aurangabad
human facial expression recognition
2836d68c86f29bb87537ea6066d508fde838ad71Personalized Age Progression with Aging Dictionary
School of Computer Science and Engineering, Nanjing University of Science and Technology
National University of Singapore
Figure 1. A personalized aging face by the proposed method. The personalized aging face contains the aging layer (e.g.,
wrinkles) and the personalized layer (e.g., mole). The former can be seen as the corresponding face in a linear combination
of the aging patterns, while the latter is invariant in the aging process. For a better view, please see the ×3 original color PDF.
('2287686', 'Xiangbo Shu', 'xiangbo shu')
('8053308', 'Jinhui Tang', 'jinhui tang')
('2356867', 'Hanjiang Lai', 'hanjiang lai')
('1776665', 'Luoqi Liu', 'luoqi liu')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
{shuxb104,laihanj}@gmail.com, jinhuitang@njust.edu.cn, {liuluoqi, eleyans}@nus.edu.sg
28de411a5b3eb8411e7bcb0003c426aa91f33e97 Volume 4, Issue 4, April 2014 ISSN: 2277 128X
International Journal of Advanced Research in
Computer Science and Software Engineering
Research Paper
Available online at: www.ijarcsse.com
Emotion Detection Using Facial Expressions -A Review

M Tech Student, Department of Computer Science and Application, Kurukshetra University, Kurukshetra, Haryana (India)
Assistant Professor, Department of Computer Science and Application, Kurukshetra University, Kurukshetra, Haryana (India)
('2234813', 'Jyoti Rani', 'jyoti rani')
('39608299', 'Kanwal Garg', 'kanwal garg')
28b26597a7237f9ea6a9255cde4e17ee18122904Cerebral Cortex September 2015;25:2876–2882
doi:10.1093/cercor/bhu083
Advance Access publication April 25, 2014
Network Interactions Explain Sensitivity to Dynamic Faces in the Superior Temporal Sulcus
1MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, UK and 2Wellcome Centre for Imaging Neuroscience,
University College London, 12 Queen Square, London WC1N 3BG, UK
The superior temporal sulcus (STS) in the human and monkey is sen-
sitive to the motion of complex forms such as facial and bodily
actions. We used functional magnetic resonance imaging (fMRI) to
explore network-level explanations for how the form and motion
information in dynamic facial expressions might be combined in the
human STS. Ventral occipitotemporal areas selective for facial form
were localized in occipital and fusiform face areas (OFA and FFA),
and motion sensitivity was localized in the more dorsal temporal
area V5. We then tested various connectivity models that modeled
communication between the ventral form and dorsal motion path-
ways. We show that facial form information modulated transmission
of motion information from V5 to the STS, and that this face-
selective modulation likely originated in OFA. This finding shows that
form-selective motion sensitivity in the STS can be explained in
terms of modulation of gain control on information flow in the motion
pathway, and provides a substantial constraint for theories of the
perception of faces and biological motion.
Keywords: biological motion, dynamic causal modeling, face perception,
functional magnetic resonance imaging, superior temporal sulcus
Introduction
Humans and other animals effortlessly recognize facial iden-
tities and actions such as emotional expressions even when
faces continuously move. Brain representations of dynamic
faces may be manifested as greater responses in the superior
temporal sulcus (STS) to facial motion than motion of nonface
objects (Pitcher et al. 2011), suggesting localized representa-
tions that combine information about motion and facial form.
This finding relates to a considerable literature on “biological
motion,” which studies how the complex forms of bodily actions
are perceived from only the motion of light points fixed to limb
joints, with form-related texture cues removed (Johansson 1973).
Perception of such stimuli has been repeatedly associated with
the human posterior STS (Vaina et al. 2001; Vaina and Gross
2004; Giese and Poggio 2003; Hein and Knight 2008; Jastorff
and Orban 2009) with similar results observed in potentially cor-
responding areas of the macaque STS (Oram and Perrett 1994;
Jastorff et al. 2012). The STS has been described as integrating
form and motion information (Vaina et al. 2001; Giese and
Poggio 2003), containing neurons that code for conjunctions of
certain forms and movements (Oram and Perrett 1996). Never-
theless, the mechanisms by which STS neurons come to be sensi-
tive to the motion of some forms, but not others, remains a
matter of speculation (Giese and Poggio 2003).
We propose that network interactions can provide a mech-
anistic explanation for STS sensitivity to motion that is selective
to certain forms, in this case, faces. Specifically, STS responses
to dynamic faces could result from communicative interactions
between pathways sensitive to motion and facial form. Such in-
teractions can occur when one pathway modulates or “gates”
the ability of the other pathway to transmit information to the
STS. Using functional magnetic resonance imaging (fMRI), we
localized face-selective motion sensitivity in the STS of the
human and then used causal connectivity analyses to model
how these STS responses are influenced by areas sensitive to
motion and areas selective to facial form. We localized ventral
occipital and fusiform face areas (OFA and FFA) (Kanwisher
et al. 1997), which selectively respond to facial form versus
other objects (Calder and Young 2005; Calder 2011). We also
localized motion sensitivity to faces and nonfaces in the more
dorsal temporal hMT+/V5 complex (hereafter, V5). Together,
these areas provide ventral and dorsal pathways to the STS.
The ventral pathway transmits facial form information, via OFA
and FFA, and the dorsal pathway transmits motion informa-
tion, via V5. We then compared combinations of bilinear and
nonlinear dynamic causal models (Friston et al. 2003) to iden-
tify connectivity models that optimally explain how interac-
tions between these form and motion pathways could generate
STS responses to dynamic faces. We found that information
about facial form, most likely originating in the OFA, gates the
transmission of information about motion from V5 to the STS.
Thus, integrated facial form and motion information in the STS
can arise due to network interactions, where form and motion
pathways play distinct roles.
Materials and Methods
Participants
fMRI data were collected from 18 healthy, right-handed participants
(over 18 years, 13 females) with normal or corrected-to-normal vision.
Experimental procedures were approved by the Cambridge Psych-
ology Research Ethics Committee.
Imaging Acquisition
A 3T Siemens Tim Trio MRI scanner with a 32-channel head coil was
used for data acquisition. We collected a structural T1-weighted MPRAGE
image (1-mm isotropic voxels). Functional data consisted of whole-brain
T2*-weighted echo-planar imaging volumes with 32 oblique axial slices
that were 3.5 mm thick, in-plane 64 × 64 matrix with resolution of 3 × 3
mm, TR 2 s, TE 30 ms, flip angle 78°. We discarded the first 5 “dummy”
volumes to ensure magnetic equilibration.
Experimental Design
The experiment used a block design with 2 runs (229 scans per run),
which were collected as the localizer for another experiment (Furl,
Henson, et al. 2013). Note that the dynamic causal modeling (DCM)
analyses reported in Furl, Henson et al. (2013) used independent data
(from separate runs using different stimuli) to address a different phe-
nomenon than considered here. All blocks were 11 s, comprised
The Author 2014. Published by Oxford University Press
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted
reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
('3162581', 'Nicholas Furl', 'nicholas furl')
('1690599', 'Richard N. Henson', 'richard n. henson')
('1737497', 'Karl J. Friston', 'karl j. friston')
('2825775', 'Andrew J. Calder', 'andrew j. calder')
('3162581', 'Nicholas Furl', 'nicholas furl')
UK. E-mail: nick.furl@mrc-cbu.cam.ac.uk
28fe6e785b32afdcd2c366c9240a661091b850cfInternational Journal of Applied Information Systems (IJAIS) – ISSN : 2249-0868
Foundation of Computer Science FCS, New York, USA
Volume 10 – No.7, March 2016 – www.ijais.org
Facial Expression Recognition using Patch based Gabor
Features
Electronics & Telecommunication Engg
Electronics & Telecommunication Engg
St. Francis Institute of Technology
St. Francis Institute of Technology
Department
Mumbai, India
Department
Mumbai, India
('40187425', 'Vaqar Ansari', 'vaqar ansari')
('9390824', 'Anju Chandran', 'anju chandran')
28c9198d30447ffe9c96176805c1cd81615d98c8rsos.royalsocietypublishing.org
Research
Cite this article: Saunders TJ, Taylor AH,
Atkinson QD. 2016 No evidence that a range of
artificial monitoring cues influence online
donations to charity in an MTurk sample.
R. Soc. open sci. 3: 150710.
http://dx.doi.org/10.1098/rsos.150710
Received: 22 December 2015
Accepted: 13 September 2016
Subject Category:
Psychology and cognitive neuroscience
Subject Areas:
behaviour/psychology/evolution
Keywords:
prosociality, eye images, charity donation,
reputation, online behaviour
Author for correspondence:
Quentin D. Atkinson
No evidence that a range of
artificial monitoring cues
influence online donations
to charity in an MTurk
sample
Timothy J. Saunders, Alex H. Taylor and
Quentin D. Atkinson
School of Psychology, University of Auckland, Auckland, New Zealand
AHT, 0000-0003-3492-7667
Monitoring cues, such as an image of a face or pair of
eyes, have been found to increase prosocial behaviour in
several studies. However, other studies have found little
or no support for this effect. Here, we examined whether
monitoring cues affect online donations to charity while
manipulating the emotion displayed, the number of watchers
and the cue type. We also include as statistical controls a
range of likely covariates of prosocial behaviour. Using the
crowdsourcing Internet marketplace, Amazon Mechanical Turk
(MTurk), 1535 participants completed our survey and were
given the opportunity to donate to charity while being shown
an image prime. None of the monitoring primes we tested
had a significant effect on charitable giving. By contrast, the
control variables of culture, age, sex and previous charity
giving frequency did predict donations. This work supports
the importance of cultural differences and enduring individual
differences in prosocial behaviour and shows that a range of
artificial monitoring cues do not reliably boost online charity
donation on MTurk.
Introduction
1.
Humans care deeply about their reputations [1]. If we know
our choices will be made public, we act more prosocially [2–6].
Recent work has shown that simple but evolutionarily significant
artificial monitoring cues, such as an image of a pair of eyes,
can promote cooperation [7–22]. While an image alone cannot
monitor behaviour, the evolutionary legacy hypothesis holds that
humans possess an evolved proximate mechanism that causes us
to react to monitoring cues as if our reputations are at stake [9].
Work using a range of economic games has shown that people act
2016 The Authors. Published by the Royal Society under the terms of the Creative Commons
Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted
use, provided the original author and source are credited.
e-mail: q.atkinson@auckland.ac.nz
28d4e027c7e90b51b7d8908fce68128d1964668a
2866cbeb25551257683cf28f33d829932be651feIn Proceedings of the 2018 IEEE International Conference on Image Processing (ICIP)
The final publication is available at: http://dx.doi.org/10.1109/ICIP.2018.8451026
A TWO-STEP LEARNING METHOD FOR DETECTING LANDMARKS
ON FACES FROM DIFFERENT DOMAINS
Erickson R. Nascimento
Universidade Federal de Minas Gerais (UFMG), Brazil
('2749017', 'Bruna Vieira Frade', 'bruna vieira frade'){brunafrade, erickson}@dcc.ufmg.br
28d99dc2d673d62118658f8375b414e5192eac6fUsing Ranking-CNN for Age Estimation
1Department of Computer Science
2Department of Mathematics
3Research & Innovation Center
Wayne State University
Wayne State University
Ford Motor Company
('15841224', 'Shixing Chen', 'shixing chen')
('28887876', 'Jialiang Le', 'jialiang le')
{schen, czhang, mdong}@wayne.edu
{jle1, mrao}@ford.com
280bc9751593897091015aaf2cab39805768b463U.U.Tariq et al. / Carpathian Journal of Electronic and Computer Engineering 6/1 (2013) 8-15 8
________________________________________________________________________________________________________
Gender Perception From Faces Using Boosted LBPH
(Local Binary Patten Histograms)
COMSATS Institute of Information Technology
Department of Electrical Engineering
Abbottabad, Pakistan
Umair_tariq29@yahoo.com
28aa89b2c827e5dd65969a5930a0520fdd4a3dc7
28b061b5c7f88f48ca5839bc8f1c1bdb1e6adc68Predicting User Annoyance Using Visual Attributes
Virginia Tech
Goibibo
Virginia Tech
Virginia Tech
('1755657', 'Gordon Christie', 'gordon christie')
('2076800', 'Amar Parkash', 'amar parkash')
('3051209', 'Ujwal Krothapalli', 'ujwal krothapalli')
('1713589', 'Devi Parikh', 'devi parikh')
gordonac@vt.edu
amar08007@iiitd.ac.in
ujjwal@vt.edu
parikh@vt.edu
288d2704205d9ca68660b9f3a8fda17e18329c13Studying Very Low Resolution Recognition Using Deep Networks
Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
('2969311', 'Zhangyang Wang', 'zhangyang wang')
('3307026', 'Shiyu Chang', 'shiyu chang')
('2680237', 'Yingzhen Yang', 'yingzhen yang')
('1771885', 'Ding Liu', 'ding liu')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
{zwang119, chang87, yyang58, dingliu2, t-huang1}@illinois.edu
17b46e2dad927836c689d6787ddb3387c6159eceGeoFaceExplorer: Exploring the Geo-Dependence of
Facial Attributes
University of Kentucky
UNC Charlotte
UNC Charlotte
University of Kentucky
('2121759', 'Connor Greenwell', 'connor greenwell')
('1690110', 'Richard Souvenir', 'richard souvenir')
('1715594', 'Scott Spurlock', 'scott spurlock')
('1990750', 'Nathan Jacobs', 'nathan jacobs')
csgr222@uky.edu
souvenir@uncc.edu
sspurloc@uncc.edu
jacobs@cs.uky.edu
17a85799c59c13f07d4b4d7cf9d7c7986475d01c
WARNING. By consulting this thesis you accept the following conditions of use:
dissemination of this thesis via the TDX service (www.tesisenxarxa.net) has been authorized by the
holders of the intellectual property rights solely for private uses within research and teaching
activities. Reproduction for profit is not authorized, nor is its dissemination and availability
from a site other than the TDX service. Presenting its content in a window or frame external to the
TDX service (framing) is not authorized. These rights cover the presentation summary of the
thesis as well as its contents. When using or citing parts of the thesis, the author's name
must be indicated.
1768909f779869c0e83d53f6c91764f41c338ab5A Large-Scale Car Dataset for Fine-Grained Categorization and Verification
The Chinese University of Hong Kong
Shenzhen Key Lab of CVPR, Shenzhen Institutes of Advanced Technology
Chinese Academy of Sciences, Shenzhen, China
('2889075', 'Linjie Yang', 'linjie yang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
('1717179', 'Chen Change Loy', 'chen change loy')
('1693209', 'Ping Luo', 'ping luo')
{yl012,pluo,ccloy,xtang}@ie.cuhk.edu.hk
171ca25bc2cdfc79cad63933bcdd420d35a541abCalibration-Free Gaze Estimation Using Human Gaze Patterns
University of Amsterdam
Amsterdam, The Netherlands
('1765602', 'Fares Alnajar', 'fares alnajar')
('1695527', 'Theo Gevers', 'theo gevers')
('9301018', 'Roberto Valenti', 'roberto valenti')
('1682828', 'Sennay Ghebreab', 'sennay ghebreab')
{f.alnajar,th.gevers,r.valenti,s.ghebreab}@uva.nl
176bd61cc843d0ed6aa5af83c22e3feb13b89fe114
Investigating Spontaneous Facial Action
Recognition through
AAM Representations of the Face
Carnegie Mellon University
USA
1. Introduction
The Facial Action Coding System (FACS) [Ekman et al., 2002] is the leading method for
measuring facial movement in behavioral science. FACS has been successfully applied to, among
other tasks, identifying the differences between simulated and genuine pain, between truthful
and deceptive statements, and between suicidal and non-suicidal patients [Ekman and Rosenberg,
2005]. Successfully recognizing facial actions is regarded as one of the major hurdles to
overcome for automated expression recognition.
How one should represent the face for effective action unit recognition is the main topic of
interest in this chapter. This interest is motivated by the plethora of work in existence in
other areas of face analysis, such as face recognition [Zhao et al., 2003], that demonstrate the
benefit of representation when performing recognition tasks. It is well understood in the
field of statistical pattern recognition [Duda et al., 2001] that, given a fixed classifier and
training set, how one represents a pattern can greatly affect recognition performance. The face
can be represented in a myriad of ways. Much work in facial action recognition has centered
solely on the appearance (i.e., pixel values) of the face given quite a basic alignment (e.g.,
eyes and nose). In our work we investigate the employment of the Active Appearance
Model (AAM) framework [Cootes et al., 2001, Matthews and Baker, 2004] in order to derive
effective representations for facial action recognition. Some of the representations we will be
employing can be seen in Figure 1.
Experiments in this chapter are run across two action unit databases. The Cohn-Kanade
FACS-Coded Facial Expression Database [Kanade et al., 2000] is employed to investigate the
effect of face representation on posed facial action unit recognition. Posed facial actions are
those that have been elicited by asking subjects to deliberately make specific facial actions or
expressions. Facial actions are typically recorded under controlled circumstances that
include full-face frontal view, good lighting, constrained head movement and selectivity in
terms of the type and magnitude of facial actions. Almost all work in automatic facial
expression analysis has used posed image data and the Cohn-Kanade database may be the
database most widely used [Tian et al., 2005]. The RU-FACS Spontaneous Expression
Database is employed to investigate how these same representations affect spontaneous facial
action unit recognition. Spontaneous facial actions are representative of “real-world” facial
Source: Face Recognition, Book edited by: Kresimir Delac and Mislav Grgic, ISBN 978-3-902613-03-5, pp.558, I-Tech, Vienna, Austria, June 2007
('1820249', 'Simon Lucey', 'simon lucey')
('2640279', 'Ahmed Bilal Ashraf', 'ahmed bilal ashraf')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
17d01f34dfe2136b404e8d7f59cebfb467b72b26Riemannian Similarity Learning
Bioinformatics Institute, A*STAR, Singapore
School of Computing, National University of Singapore, Singapore
('39466179', 'Li Cheng', 'li cheng')chengli@bii.a-star.edu.sg
176f26a6a8e04567ea71677b99e9818f8a8819d0MEG: Multi-Expert Gender classification from
face images in a demographics-balanced dataset
('1763890', 'Maria De Marsico', 'maria de marsico')
('1795333', 'Michele Nappi', 'michele nappi')
('1772512', 'Daniel Riccio', 'daniel riccio')
1Universidad de Las Palmas de Gran Canaria, Spain. Email: mcastrillon@siani.es
2Sapienza University of Rome, Italy. Email: demarsico@di.uniroma1.it
3University of Salerno, Fisciano (SA), Italy. Email: mnappi@unisa.it
4University of Naples Federico II, Italy, Email: daniel.riccio@unina.it
17cf838720f7892dbe567129dcf3f7a982e0b56eGlobal-Local Face Upsampling Network
Mitsubishi Electric Research Labs (MERL), Cambridge, MA, USA
('2577513', 'Oncel Tuzel', 'oncel tuzel')
('2066068', 'Yuichi Taguchi', 'yuichi taguchi')
('2387467', 'John R. Hershey', 'john r. hershey')
17035089959a14fe644ab1d3b160586c67327db2
17370f848801871deeed22af152489e39b6e1454UNDERSAMPLED FACE RECOGNITION WITH ONE-PASS DICTIONARY LEARNING
Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
('2017922', 'Chia-Po Wei', 'chia-po wei')
('2733735', 'Yu-Chiang Frank Wang', 'yu-chiang frank wang')
{cpwei, ycwang}@citi.sinica.edu.tw
178a82e3a0541fa75c6a11350be5bded133a59fdTechset Composition Ltd, Salisbury
Doc:
{IEE}BMT/Articles/Pagination/BMT20140045.3d
www.ietdl.org
Received on 15th July 2014
Revised on 17th September 2014
Accepted on 23rd September 2014
doi: 10.1049/iet-bmt.2014.0045
ISSN 2047-4938
BioHDD: a dataset for studying biometric
identification on heavily degraded data
IT – Instituto de Telecomunicações, University of Beira Interior, Covilhã, Portugal
Remote Sensing Unit Optics, Optometry and Vision Sciences Group, University of Beira Interior
Covilhã, Portugal
('1712429', 'Hugo Proença', 'hugo proença')E-mail: gmelfe@ubi.pt
17479e015a2dcf15d40190e06419a135b66da4e0Predicting First Impressions with Deep Learning
University of Notre Dame
Harvard University 3Perceptive Automata, Inc
('7215627', 'Mel McCurrie', 'mel mccurrie')
('51174355', 'Fernando Beletti', 'fernando beletti')
('51176594', 'Lucas Parzianello', 'lucas parzianello')
('51176974', 'Allen Westendorp', 'allen westendorp')
('2613438', 'Walter J. Scheirer', 'walter j. scheirer')
17fa1c2a24ba8f731c8b21f1244463bc4b465681Published as a conference paper at ICLR 2016
DEEP MULTI-SCALE VIDEO PREDICTION BEYOND
MEAN SQUARE ERROR
New York University
2Facebook Artificial Intelligence Research
('2341378', 'Camille Couprie', 'camille couprie')mathieu@cs.nyu.edu, {coupriec,yann}@fb.com
17579791ead67262fcfb62ed8765e115fb5eca6fReal-Time Fashion-guided Clothing Semantic Parsing: a Lightweight Multi-Scale
Inception Neural Network and Benchmark
1School of Data and Computer Science
Beijing University of Posts and Telecommunications, Beijing, P.R. China
Sun Yat-Sen University, Guangzhou, P.R. China
2 PRMCT Lab
('3079146', 'Yuhang He', 'yuhang he')
177d1e7bbea4318d379f46d8d17720ecef3086acJMLR: Workshop and Conference Proceedings 44 (2015) 60-71
NIPS 2015
The 1st International Workshop “Feature Extraction: Modern Questions and Challenges”
Learning Multi-channel Deep Feature Representations for
Face Recognition
Wayne State University, Detroit, MI 48202, USA
University of Illinois at Urbana Champaign, Urbana
IL 61801, USA
Editor: Afshin Rostamizadeh
('2410994', 'Xue-wen Chen', 'xue-wen chen')
('2708905', 'Melih S. Aslan', 'melih s. aslan')
('1982110', 'Kunlei Zhang', 'kunlei zhang')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
xuewen.chen@wayne.edu
melih.aslan@wayne.edu
kunlei.zhang@wayne.edu
t-huang1@illinois.edu
17a995680482183f3463d2e01dd4c113ebb31608IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. X, NO. Y, MONTH Z
Structured Label Inference for
Visual Understanding
('3079079', 'Nelson Nauata', 'nelson nauata')
('2804000', 'Hexiang Hu', 'hexiang hu')
('2057809', 'Guang-Tong Zhou', 'guang-tong zhou')
('47640964', 'Zhiwei Deng', 'zhiwei deng')
('2928799', 'Zicheng Liao', 'zicheng liao')
('10771328', 'Greg Mori', 'greg mori')
17aa78bd4331ef490f24bdd4d4cd21d22a18c09c
170a5f5da9ac9187f1c88f21a88d35db38b4111aOnline Real-time Multiple Spatiotemporal Action Localisation and Prediction
Philip Torr2
Oxford Brookes University
Oxford University
Figure 1: Online spatiotemporal action localisation in a test ‘fencing’ video from UCF-101 [39]. (a) to (c): A 3D volumetric view of
the video showing detection boxes and selected frames. At any given time, a certain portion (%) of the entire video is observed by the
system, and the detection boxes are linked up to incrementally build online space-time action tubes in real-time. Note that the proposed
method is able to detect multiple co-occurring action instances (3 action instances are shown in different colours). Note also that one of
the fencers moves out of the image boundaries between frames 114 and 145, to which our model responds by trimming action tube 01
at frame 114, and initiating a new tube (03) at frame 146.
('1931660', 'Gurkirt Singh', 'gurkirt singh')
('3017538', 'Suman Saha', 'suman saha')
('3019396', 'Michael Sapienza', 'michael sapienza')
('1754181', 'Fabio Cuzzolin', 'fabio cuzzolin')
{gurkirt.singh-2015,suman.saha-2014,fabio.cuzzolin}@brookes.ac.uk
{michael.sapienza,philip.torr}@eng.ox.ac.uk
17c0d99171efc957b88c31a465c59485ab033234
1742ffea0e1051b37f22773613f10f69d2e4ed2c
1791f790b99471fc48b7e9ec361dc505955ea8b1
17a8d1b1b4c23a630b051f35e47663fc04dcf043Differential Angular Imaging for Material Recognition
Rutgers University, Piscataway, NJ
Drexel University, Philadelphia, PA
('48181328', 'Jia Xue', 'jia xue'){jia.xue,zhang.hang}@rutgers.edu, kdana@ece.rutgers.edu, kon@drexel.edu
171d8a39b9e3d21231004f7008397d5056ff23afSimultaneous Facial Landmark Detection, Pose and Deformation Estimation
under Facial Occlusion
ECSE Department
Institute of Automation
ECSE Department
Rensselaer Polytechnic Institute
Chinese Academy of Sciences
Rensselaer Polytechnic Institute
('1746738', 'Yue Wu', 'yue wu')
('2864523', 'Chao Gou', 'chao gou')
('1726583', 'Qiang Ji', 'qiang ji')
wuyuesophia@gmail.com
gouchao2012@ic.ac.cn
jiq@rpi.edu
17045163860fc7c38a0f7d575f3e44aaa5fa40d7Boosting VLAD with Supervised Dictionary
Learning and High-Order Statistics
Southwest Jiaotong University, Chengdu, China
The Chinese University of Hong Kong
Shenzhen Key Lab of CVPR, Shenzhen Institutes of Advanced Technology, CAS
Hong Kong, China
Hengyang Normal University, Hengyang, China
Shenzhen, China
('1766837', 'Xiaojiang Peng', 'xiaojiang peng')
('33345248', 'Limin Wang', 'limin wang')
('33427555', 'Yu Qiao', 'yu qiao')
('37040717', 'Qiang Peng', 'qiang peng')
174930cac7174257515a189cd3ecfdd80ee7dd54Multi-view Face Detection Using Deep Convolutional
Neural Networks
Yahoo
Mohammad Saberian
Yahoo
Yahoo
('2114438', 'Sachin Sudhakar Farfade', 'sachin sudhakar farfade')
('33642044', 'Li-Jia Li', 'li-jia li')
fsachin@yahoo-inc.com
saberian@yahoo-inc.com
lijiali.vision@gmail.com
17fad2cc826d2223e882c9fda0715fcd5475acf3
17e563af203d469c456bb975f3f88a741e43fb71Naming TV Characters by Watching and Analyzing Dialogs
Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
('3408009', 'Monica-Laura Haurilet', 'monica-laura haurilet')
('2103464', 'Makarand Tapaswi', 'makarand tapaswi')
('2256981', 'Ziad Al-Halah', 'ziad al-halah')
('1742325', 'Rainer Stiefelhagen', 'rainer stiefelhagen')
{haurilet, tapaswi, ziad.al-halah, rainer.stiefelhagen}@kit.edu
171389529df11cc5a8b1fbbe659813f8c3be024dManifold Estimation in View-based Feature
Space for Face Synthesis across Poses
Center for Visualization and Virtual Environments
University of Kentucky, USA
('2257812', 'Xinyu Huang', 'xinyu huang')
('2943451', 'Jizhou Gao', 'jizhou gao')
('1772171', 'Sen-Ching S. Cheung', 'sen-ching s. cheung')
('38958903', 'Ruigang Yang', 'ruigang yang')
17d5e5c9a9ee4cf85dfbb9d9322968a6329c3735Study on Parameter Selection Using SampleBoost
Computer Science and Engineering Department,
University of North Texas, Denton, Texas, USA
('1898814', 'Mohamed Abouelenien', 'mohamed abouelenien')
('1982703', 'Xiaohui Yuan', 'xiaohui yuan')
{mohamed, xiaohui.yuan}@unt.edu
1750db78b7394b8fb6f6f949d68f7c24d28d934fDetecting Facial Retouching Using Supervised
Deep Learning
Bowyer, Fellow, IEEE
('5014060', 'Aparna Bharati', 'aparna bharati')
('39129417', 'Richa Singh', 'richa singh')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
17cf6195fd2dfa42670dc7ada476e67b381b8f69†Image Processing Laboratory, Department of Image Engineering
Graduate School of Advanced Imaging Science, Multimedia, and Film
Chung-Ang University, Seoul, Korea
Korea Electronics Technology Institute, 203-103 B/D 192, Yakdae-Dong
Wonmi-Gu Puchon-Si, Kyunggi-Do 420-140, Korea
‡Imaging, Robotics, and Intelligent Systems Laboratory
Department of Electrical and Computer Engineering
The University of Tennessee, Knoxville
AUTOMATIC FACE REGION TRACKING FOR HIGHLY ACCURATE FACE
RECOGNITION IN UNCONSTRAINED ENVIRONMENTS
('2243148', 'Young-Ouk Kim', 'young-ouk kim')
('1684329', 'Joonki Paik', 'joonki paik')
('39533703', 'Jingu Heo', 'jingu heo')
173657da03e3249f4e47457d360ab83b3cefbe63HKU-Face: A Large Scale Dataset for
Deep Face Recognition
Final Report
3035140108
COMP4801 Final Year Project
Project Code: 17007
('3347561', 'Haicheng Wang', 'haicheng wang')
174f46eccb5852c1f979d8c386e3805f7942baceThe Shape-Time Random Field for Semantic Video Labeling
School of Computer Science
University of Massachusetts, Amherst MA, USA
('2177037', 'Andrew Kae', 'andrew kae'){akae,marlin,elm}@cs.umass.edu
17670b60dcfb5cbf8fdae0b266e18cf995f6014cLongitudinal Face Modeling via
Temporal Deep Restricted Boltzmann Machines
Computer Science and Software Engineering, Concordia University, Montréal, Québec, Canada
2 CyLab Biometrics Center and the Department of Electrical and Computer Engineering,
Carnegie Mellon University, Pittsburgh, PA, USA
('1876581', 'Chi Nhan Duong', 'chi nhan duong')
('1769788', 'Khoa Luu', 'khoa luu')
('2687827', 'Kha Gia Quach', 'kha gia quach')
('1699922', 'Tien D. Bui', 'tien d. bui')
1{c duon, k q, bui}@encs.concordia.ca, 2kluu@andrew.cmu.edu
17027a05c1414c9a06a1c5046899abf382a1142dArticulated Motion Discovery using Pairs of Trajectories
University of Edinburgh
2Google Research
('2059950', 'Luca Del Pero', 'luca del pero')
('2262946', 'Susanna Ricco', 'susanna ricco')
('1694199', 'Rahul Sukthankar', 'rahul sukthankar')
('1749692', 'Vittorio Ferrari', 'vittorio ferrari')
ldelper@inf.ed.ac.uk
ricco@google.com
sukthankar@google.com
ferrari@inf.ed.ac.uk
17ded725602b4329b1c494bfa41527482bf83a6fCompact Convolutional Neural Network Cascade for Face Detection
Kalinovskii I.A.
Spitsyn V.G.
Tomsk Polytechnic University
Tomsk Polytechnic University
Tomsk, Russia
Tomsk, Russia
kua_21@mail.ru
spvg@tpu.ru
177bc509dd0c7b8d388bb47403f28d6228c14b5cDeep Learning Face Representation from Predicting 10,000 Classes
The Chinese University of Hong Kong
The Chinese University of Hong Kong
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
('1681656', 'Yi Sun', 'yi sun')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
sy011@ie.cuhk.edu.hk
xgwang@ee.cuhk.edu.hk
xtang@ie.cuhk.edu.hk
7ba0bf9323c2d79300f1a433ff8b4fe0a00ad889
7bbaa09c9e318da4370a83b126bcdb214e7f8428FaaSter, Better, Cheaper: The Prospect of
Serverless Scientific Computing and HPC
Zurich University of Applied Sciences, School of Engineering
Service Prototyping Lab (blog.zhaw.ch/icclab/), 8401 Winterthur, Switzerland
ISISTAN Research Institute - CONICET - UNICEN
Campus Universitario, Paraje Arroyo Seco, Tandil (7000), Buenos Aires, Argentina
ITIC Research Institute, National University of Cuyo
Padre Jorge Contreras 1300, M5502JMA Mendoza, Argentina
('1765470', 'Josef Spillner', 'josef spillner')
('2891834', 'Cristian Mateos', 'cristian mateos')
('34889755', 'David A. Monge', 'david a. monge')
josef.spillner@zhaw.ch
cristian.mateos@isistan.unicen.edu.ar
dmonge@uncu.edu.ar
7b63ed54345d8c06523f6b03c41a09b5c8f227e2Facial Expression Recognition Based on
Combination of Spatio-temporal and Spectral
Features in Local Facial Regions
Department of Electrical Engineering,
Najafabad Branch, Islamic Azad University
Isfahan, Iran.
('9337964', 'Nakisa Abounasr', 'nakisa abounasr')n_abounasr@sel.iaun.ac.ir
7bf0a1aa1d0228a51d24c0c3a83eceb937a6ae25UNIVERSITY OF CALIFORNIA, SAN DIEGO
Video-based Car Surveillance: License Plate, Make, and Model Recognition
A thesis submitted in partial satisfaction of the
requirements for the degree Masters of Science
in Computer Science
by
Louka Dlagnekov
Committee in charge:
Professor Serge J. Belongie, Chairperson
2005
('3520515', 'David A. Meyer', 'david a. meyer')
('1765887', 'David J. Kriegman', 'david j. kriegman')
7b9961094d3e664fc76b12211f06e12c47a7e77dBridging Biometrics and Forensics
EECS, Syracuse University, Syracuse, NY, USA
('38495931', 'Yanjun Yan', 'yanjun yan')
('2598035', 'Lisa Ann Osadciw', 'lisa ann osadciw')
{yayan, laosadci}@syr.edu
7bfe085c10761f5b0cc7f907bdafe1ff577223e0
7b43326477795a772c08aee750d3e433f00f20beComputational Methods for Behavior Analysis
Thesis by
In Partial Fulfillment of the Requirements for the
degree of
Doctor of Philosophy
CALIFORNIA INSTITUTE OF TECHNOLOGY
Pasadena, California
2017
Defended September 16, 2016
('2948199', 'Eyrun Eyjolfsdottir', 'eyrun eyjolfsdottir')
7b9b3794f79f87ca8a048d86954e0a72a5f97758DOI 10.1515/jisys-2013-0016 Journal of Intelligent Systems 2013; 22(4): 365–415
Passing an Enhanced Turing Test –
Interacting with Lifelike Computer
Representations of Specific Individuals 
('1708812', 'Avelino J. Gonzalez', 'avelino j. gonzalez')
('1745342', 'Jason Leigh', 'jason leigh')
('1727179', 'Ronald F. DeMara', 'ronald f. demara')
('7777088', 'Steven Jones', 'steven jones')
('1761244', 'Sangyoon Lee', 'sangyoon lee')
('1917523', 'Carlos Leon-Barth', 'carlos leon-barth')
('3191606', 'Miguel Elvir', 'miguel elvir')
('33294824', 'James Hollister', 'james hollister')
('2680448', 'Steven Kobosko', 'steven kobosko')
7bce4f4e85a3bfcd6bfb3b173b2769b064fce0edA Psychologically-Inspired Match-Score Fusion Model
for Video-Based Facial Expression Recognition
VISLab, EBUII-216, University of California Riverside
Riverside, California, USA, 92521-0425
('1707159', 'Bir Bhanu', 'bir bhanu')
('1803478', 'Songfan Yang', 'songfan yang')
{acruz, bhanu, syang}@ee.ucr.edu
7b0f1fc93fb24630eb598330e13f7b839fb46cceLearning to Find Eye Region Landmarks for Remote Gaze
Estimation in Unconstrained Settings
ETH Zurich
MPI for Informatics
MPI for Informatics
ETH Zurich
('20466488', 'Seonwook Park', 'seonwook park')
('2520795', 'Xucong Zhang', 'xucong zhang')
('3194727', 'Andreas Bulling', 'andreas bulling')
('2531379', 'Otmar Hilliges', 'otmar hilliges')
spark@inf.ethz.ch
xczhang@mpi-inf.mpg.de
bulling@mpi-inf.mpg.de
otmarh@inf.ethz.ch
7be60f8c34a16f30735518d240a01972f3530e00Facial Expression Recognition with Temporal Modeling of Shapes

The University of Texas at Austin
('18692590', 'Suyog Jain', 'suyog jain')
('1713065', 'Changbo Hu', 'changbo hu')
suyog@cs.utexas.edu, changbo.hu@gmail.com, aggarwaljk@mail.utexas.edu
7bdcd85efd1e3ce14b7934ff642b76f017419751
Learning Discriminant Face Descriptor
('1718623', 'Zhen Lei', 'zhen lei')
('34679741', 'Stan Z. Li', 'stan z. li')
7b3b7769c3ccbdf7c7e2c73db13a4d32bf93d21fOn the Design and Evaluation of Robust Head Pose for
Visual User Interfaces: Algorithms, Databases, and
Comparisons
Laboratory of Intelligent and
Safe Automobiles
UCSD - La Jolla, CA, USA
Laboratory of Intelligent and
Safe Automobiles
UCSD - La Jolla, CA, USA
Laboratory of Intelligent and
Safe Automobiles
UCSD - La Jolla, CA, USA
Laboratory of Intelligent and
Safe Automobiles
UCSD - La Jolla, CA, USA
Mohan Trivedi
Laboratory of Intelligent and
Safe Automobiles
UCSD - La Jolla, CA, USA
('1841835', 'Sujitha Martin', 'sujitha martin')
('1947383', 'Ashish Tawari', 'ashish tawari')
('1780529', 'Erik Murphy-Chutorian', 'erik murphy-chutorian')
('3205274', 'Shinko Y. Cheng', 'shinko y. cheng')
scmartin@ucsd.edu
atawari@ucsd.edu
erikmc@google.com
sycheng@hrl.com
mtrivedi@ucsd.edu
8fe38962c24300129391f6d7ac24d7783e0fddd0Center for Research in Computer Vision, University of Central Florida('33209161', 'Amir Mazaheri', 'amir mazaheri')
('1745480', 'Mubarak Shah', 'mubarak shah')
amirmazaheri@knights.ucf.edu
shah@crcv.ucf.edu
8f6d05b8f9860c33c7b1a5d704694ed628db66c7Non-linear dimensionality reduction and sparse
representation models for facial analysis
To cite this version:
Medical Imaging. INSA de Lyon, 2014. English. .
HAL Id: tel-01127217
https://tel.archives-ouvertes.fr/tel-01127217
Submitted on 7 Mar 2015
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
('35061362', 'Yuyao Zhang', 'yuyao zhang')
('35061362', 'Yuyao Zhang', 'yuyao zhang')
8f772d9ce324b2ef5857d6e0b2a420bc93961196
Facial Landmark Point Localization using
Coarse-to-Fine Deep Recurrent Neural Network
('2748312', 'Shahar Mahpod', 'shahar mahpod')
('3001038', 'Rig Das', 'rig das')
('1767715', 'Emanuele Maiorana', 'emanuele maiorana')
('1926432', 'Yosi Keller', 'yosi keller')
('1682433', 'Patrizio Campisi', 'patrizio campisi')
8f3e120b030e6c1d035cb7bd9c22f6cc75782025Bayesian Networks and the Imprecise Dirichlet
Model applied to Recognition Problems
Dalle Molle Institute for Arti cial Intelligence
Galleria 2, Manno-Lugano, Switzerland
Rensselaer Polytechnic Institute
110 Eighth St., Troy, NY, USA
('1726583', 'Qiang Ji', 'qiang ji')cassio@idsia.ch, jiq@rpi.edu
8fb611aca3bd8a3a0527ac0f38561a5a9a5b8483
8fda2f6b85c7e34d3e23927e501a4b4f7fc15b2aFeature Selection with Annealing for Big Data
Learning
('2455529', 'Adrian Barbu', 'adrian barbu')
('34680388', 'Yiyuan She', 'yiyuan she')
('2139735', 'Liangjing Ding', 'liangjing ding')
('3019469', 'Gary Gramajo', 'gary gramajo')
8fed5ea3b69ea441a8b02f61473eafee25fb2374Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)
Two-Dimensional PCA with F-Norm Minimization
State Key Laboratory of ISN, Xidian University
State Key Laboratory of ISN, Xidian University
Xi’an China
Xi’an China
('38469552', 'Quanxue Gao', 'quanxue gao')
('40326660', 'Qianqian Wang', 'qianqian wang')
8fa3478aaf8e1f94e849d7ffbd12146946badabaAttributes for Classifier Feedback
Indraprastha Institute of Information Technology (Delhi, India
Toyota Technological Institute (Chicago, US
('2076800', 'Amar Parkash', 'amar parkash')
('1713589', 'Devi Parikh', 'devi parikh')
8f3da45ff0c3e1777c3a7830f79c10f5896bcc21Situation Recognition with Graph Neural Networks
The Chinese University of Hong Kong, 2University of Toronto, 3Youtu Lab, Tencent
Uber Advanced Technologies Group, 5Vector Institute
('8139953', 'Ruiyu Li', 'ruiyu li')
('2103464', 'Makarand Tapaswi', 'makarand tapaswi')
('2246396', 'Renjie Liao', 'renjie liao')
('1729056', 'Jiaya Jia', 'jiaya jia')
('2422559', 'Raquel Urtasun', 'raquel urtasun')
('37895334', 'Sanja Fidler', 'sanja fidler')
ryli@cse.cuhk.edu.hk, {makarand,rjliao,urtasun,fidler}@cs.toronto.edu, leojia9@gmail.com
8ff8c64288a2f7e4e8bf8fda865820b04ab3dbe8Age Estimation Using Expectation of Label Distribution Learning ∗
National Key Laboratory for Novel Software Technology, Nanjing University, China
MOE Key Laboratory of Computer Network and Information Integration, Southeast University, China
('2226422', 'Bin-Bin Gao', 'bin-bin gao')
('7678704', 'Hong-Yu Zhou', 'hong-yu zhou')
('1808816', 'Jianxin Wu', 'jianxin wu')
('1735299', 'Xin Geng', 'xin geng')
{gaobb,zhouhy,wujx}@lamda.nju.edu.cn, xgeng@seu.edu.cn
8f9c37f351a91ed416baa8b6cdb4022b231b9085Generative Adversarial Style Transfer Networks for Face Aging
Sveinn Palsson
D-ITET, ETH Zurich
Eirikur Agustsson
D-ITET, ETH Zurich
spalsson@ethz.ch
aeirikur@ethz.ch
8f8c0243816f16a21dea1c20b5c81bc223088594
8f08b2101d43b1c0829678d6a824f0f045d57da5Supplementary Material for: Active Pictorial Structures
Imperial College London
180 Queens Gate, SW7 2AZ, London, U.K.
In the following sections, we provide additional material for the paper “Active Pictorial Structures”. Section 1 explains in
more detail the differences between the proposed Active Pictorial Structures (APS) and Pictorial Structures (PS). Section 2
presents the proofs about the structure of the precision matrices of the Gaussian Markov Random Field (GMRF) (Eqs. 10
and 12 of the main paper). Section 3 gives an analysis about the forward Gauss-Newton optimization of APS and shows that
the inverse technique with fixed Jacobian and Hessian, which is used in the main paper, is much faster. Finally, Sec. 4 shows
additional experimental results and conducts new experiments on different objects (human eyes and cars). An open-source
implementation of APS is available within the Menpo Project [1] in http://www.menpo.org/.
1. Differences between Active Pictorial Structures and Pictorial Structures
As explained in the main paper, the proposed model is partially motivated by PS [4, 8]. In the original formulation of PS,
the cost function to be optimized has the form
$$
\arg\min_{s}\; \sum_{i=1}^{n} m_i(\ell_i) \;+ \sum_{i,j:(v_i,v_j)\in E} d_{ij}(\ell_i, \ell_j)
= \arg\min_{s}\; \sum_{i=1}^{n} [A(\ell_i) - \mu_i^a]^T (\Sigma_i^a)^{-1} [A(\ell_i) - \mu_i^a]
\;+ \sum_{i,j:(v_i,v_j)\in E} [\ell_i - \ell_j - \mu_{ij}^d]^T (\Sigma_{ij}^d)^{-1} [\ell_i - \ell_j - \mu_{ij}^d]
\tag{1}
$$
where $s = [\ell_1^T, \ldots, \ell_n^T]^T$ is the vector of landmark coordinates ($\ell_i = [x_i, y_i]^T$, $\forall i = 1, \ldots, n$), $A(\ell_i)$ is a feature vector extracted from the image location $\ell_i$ and we have assumed a tree $G = (V, E)$. $\{\mu_i^a, \Sigma_i^a\}$ and $\{\mu_{ij}^d, \Sigma_{ij}^d\}$ denote the mean and covariances of the appearance and deformation, respectively. In Eq. 1, $m_i(\ell_i)$ is a function measuring the degree of mismatch when part $v_i$ is placed at location $\ell_i$ in the image. Moreover, $d_{ij}(\ell_i, \ell_j)$ denotes a function measuring the degree of deformation of the model when part $v_i$ is placed at location $\ell_i$ and part $v_j$ is placed at location $\ell_j$. The authors show an inference algorithm based on distance transform [3] that can find a global minimum of Eq. 1 without any initialization. However, this algorithm imposes two important restrictions: (1) the appearance of each part is independent of the rest of them and (2) $G$ must always be acyclic (a tree). Additionally, the computation of $m_i(\ell_i)$ for all parts ($i = 1, \ldots, n$) and all possible image locations (response maps) has a high computational cost, which makes the algorithm very slow. Finally, in [8], the authors only use a diagonal covariance for the relative locations (deformation) of each edge of the graph, which restricts the flexibility of the model.
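To make the two terms of Eq. 1 concrete, both reduce to Mahalanobis distances under the Gaussian appearance and deformation models. A minimal sketch evaluating the objective for one fixed landmark configuration (input conventions are hypothetical: precomputed feature vectors standing in for A(ℓ_i), per-part appearance statistics, and per-edge deformation statistics keyed by edge):

```python
import numpy as np

def ps_cost(landmarks, features, mu_a, prec_a, edges, mu_d, prec_d):
    """Evaluate the Pictorial Structures cost of Eq. 1 for one fixed
    landmark configuration (just the objective, no inference)."""
    cost = 0.0
    # Appearance term: Mahalanobis distance of each part's feature vector.
    for i, f in enumerate(features):
        r = f - mu_a[i]
        cost += r @ prec_a[i] @ r
    # Deformation term: relative displacement along each graph edge.
    for (i, j) in edges:
        d = landmarks[i] - landmarks[j] - mu_d[(i, j)]
        cost += d @ prec_d[(i, j)] @ d
    return float(cost)
```

The distance-transform inference mentioned above avoids evaluating this cost exhaustively, but the objective itself is exactly this sum of quadratic forms.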
In the proposed APS, we aim to minimize the cost function (Eq. 19 of the main paper)
$$
\arg\min_{p}\; \|A(S(\bar{s}, p)) - \bar{a}\|^2_{Q^a} + \|S(\bar{s}, p) - \bar{s}\|^2_{Q^d}
= \arg\min_{p}\; [A(S(\bar{s}, p)) - \bar{a}]^T Q^a [A(S(\bar{s}, p)) - \bar{a}]
+ [S(\bar{s}, p) - \bar{s}]^T Q^d [S(\bar{s}, p) - \bar{s}]
\tag{2}
$$
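Section 3's argument that the inverse technique with fixed Jacobian and Hessian is much faster rests on caching the pieces of a standard weighted least-squares Gauss-Newton update. A generic sketch of that single update (illustrative names, not the authors' implementation):

```python
import numpy as np

def gauss_newton_step(r, J, Q):
    """One Gauss-Newton update for a cost of the form r(p)^T Q r(p):
    delta_p = -(J^T Q J)^{-1} J^T Q r. With a fixed Jacobian J (the
    inverse setting of Sec. 3), H = J^T Q J can be prefactorized once
    instead of being rebuilt at every iteration."""
    H = J.T @ Q @ J          # Gauss-Newton approximation of the Hessian
    g = J.T @ Q @ r          # gradient of the quadratic cost at p
    return -np.linalg.solve(H, g)
```

For a residual that is exactly linear in the parameters, a single step reaches the minimizer; the forward variant pays the cost of recomputing `H` and its factorization at every iteration.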
There are two main differences between APS and PS: (1) we employ a statistical shape model and optimize with respect
to its parameters and (2) we use the efficient Gauss-Newton optimization technique. However, these differences introduce
some important advantages, as also mentioned in the main paper. The proposed formulation allows the definition of a graph (not only a tree) between the object's parts. This means that we can assume dependencies between any pair of landmarks for both
('2788012', 'Epameinondas Antonakos', 'epameinondas antonakos')
('2575567', 'Joan Alabort-i-Medina', 'joan alabort-i-medina')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
{e.antonakos, ja310, s.zafeiriou}@imperial.ac.uk
8fbec9105d346cd23d48536eb20c80b7c2bbbe30The Effectiveness of Face Detection Algorithms in Unconstrained Crowd Scenes
Department of Computer Science and Engineering
University of Notre Dame
Notre Dame, IN 46656
('27937356', 'Jeremiah R. Barr', 'jeremiah r. barr')
('1799014', 'Kevin W. Bowyer', 'kevin w. bowyer')
('1704876', 'Patrick J. Flynn', 'patrick j. flynn')
jbarr1,kwb,flynn@nd.edu
8f3e3f0f97844d3bfd9e9ec566ac7a54f6931b09Electronic Letters on Computer Vision and Image Analysis 14(2):24-44; 2015
A Survey on Human Emotion Recognition Approaches,
Databases and Applications
Francis Xavier Engineering College, Tirunelveli, Tamilnadu, India
P.S.R Engineering College, Sivakasi, Tamilnadu, India
Received 7th Aug 2015; accepted 30th Nov 2015
8f8a5be9dc16d73664285a29993af7dc6a598c83IJCSNS International Journal of Computer Science and Network Security, VOL.11 No.1, January 2011
71
Neural Network based Face Recognition with Gabor Filters
Jahangirnagar University, Savar, Dhaka 1342, Bangladesh
('5463951', 'Amina Khatun', 'amina khatun')
('38674112', 'Al-Amin Bhuiyan', 'al-amin bhuiyan')
8f5ce25e6e1047e1bf5b782d045e1dac29ca747eA Novel Discriminant Non-negative Matrix
Factorization Algorithm with Applications to
Facial Image Characterization Problems
†Aristotle University of Thessaloniki
Department of Informatics
Box 451
54124 Thessaloniki, Greece
Address for correspondence:
Aristotle University of Thessaloniki
54124 Thessaloniki
GREECE
Tel. ++ 30 231 099 63 04
Fax ++ 30 231 099 63 04
April 18, 2007
DRAFT
('1754270', 'Irene Kotsia', 'irene kotsia')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
email: {ekotsia, dralbert, pitas}@aiia.csd.auth.gr
8f89aed13cb3555b56fccd715753f9ea72f27f05Attended End-to-end Architecture for Age
Estimation from Facial Expression Videos
('1678473', 'Wenjie Pei', 'wenjie pei')
8f92cccacf2c84f5d69db3597a7c2670d93be781FACIAL EXPRESSION SYNTHESIS THROUGH FACIAL EXPRESSIONS
STATISTICAL ANALYSIS
Aristotle University of Thessaloniki
Department of Informatics
Box 451, 54124 Thessaloniki, Greece
('2764130', 'Stelios Krinidis', 'stelios krinidis')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
email: pitas@zeus.csd.auth.gr, stelios.krinidis@mycosmos.gr
8f6263e4d3775757e804796e104631c7a2bb8679Characterizing Visual Representations within Convolutional Neural Networks:
Toward a Quantitative Approach
Center for Brain Science, Harvard University, Cambridge, MA 02138 USA
Center for Brain Science, Harvard University, Cambridge, MA 02138 USA
('1739108', 'Chuan-Yung Tsai', 'chuan-yung tsai')
('2042941', 'David D. Cox', 'david d. cox')
CHUANYUNGTSAI@FAS.HARVARD.EDU
DAVIDCOX@FAS.HARVARD.EDU
8f9f599c05a844206b1bd4947d0524234940803d
8f60c343f76913c509ce623467bf086935bcadacJoint 3D Face Reconstruction and Dense
Alignment with Position Map Regression
Network
Shanghai Jiao Tong University, CloudWalk Technology
Research Center for Intelligent Security Technology, CIGIT
('9196752', 'Yao Feng', 'yao feng')
('1917608', 'Fan Wu', 'fan wu')
('3492237', 'Xiaohu Shao', 'xiaohu shao')
('1706354', 'Yanfeng Wang', 'yanfeng wang')
('39851640', 'Xi Zhou', 'xi zhou')
fengyao@sjtu.edu.cn, wufan@cloudwalk.cn, shaoxiaohu@cigit.ac.cn
wangyanfeng@sjtu.edu.cn, zhouxi@cloudwalk.cn
8fd9c22b00bd8c0bcdbd182e17694046f245335f  
Recognizing Facial Expressions in Videos
('8502461', 'Lin Su', 'lin su')
('14362431', 'Matthew Balazsi', 'matthew balazsi')
8f5facdc0a2a79283864aad03edc702e2a400346

ISSN: 2277-3754
ISO 9001:2008 Certified
International Journal of Engineering and Innovative Technology (IJEIT)
Volume 4, Issue 7, January 2015
Human Age Estimation Framework using
Bio-Inspired Features for Facial Image
Santhosh Kumar G, Dr. Suresh H. N.
Research scholor, BIT, under VTU, Belgaum India
Bangalore Institute of Technology
Bangalore–04, Karnataka
8a3c5507237957d013a0fe0f082cab7f757af6eeFacial Landmark Detection by
Deep Multi-task Learning
The Chinese University of Hong Kong
('3152448', 'Zhanpeng Zhang', 'zhanpeng zhang')
('1693209', 'Ping Luo', 'ping luo')
('1717179', 'Chen Change Loy', 'chen change loy')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
8af411697e73f6cfe691fe502d4bfb42510b4835Dynamic Local Ternary Pattern for Face Recognition and
Verification
Institute of Information Technology
University of Dhaka, Bangladesh
Department of Industrial and Management Engineering
Hankuk University of Foreign Studies, South Korea
M. Abdullah-Al-Wadud
('39036762', 'Mohammad Ibrahim', 'mohammad ibrahim')
('31210416', 'Humayun Kayesh', 'humayun kayesh')
('13193999', 'Shah', 'shah')
('2233124', 'Mohammad Shoyaib', 'mohammad shoyaib')
ibrahim iit@yahoo.com, iftekhar.efat@gmail.com, hkayesh@gmail.com, khaled@univdhaka.edu, shoyaib@du.ac.bd
wadud@hufs.ac.kr
8acdc4be8274e5d189fb67b841c25debf5223840Gultepe and Makrehchi
Hum. Cent. Comput. Inf. Sci. (2018) 8:25
https://doi.org/10.1186/s13673-018-0148-3
RESEARCH
Improving clustering performance
using independent component analysis
and unsupervised feature learning
Open Access
*Correspondence:
Department of Electrical
and Computer Engineering,
University of Ontario Institute
of Technology, 2000 Simcoe
St N, Oshawa, ON L1H 7K4,
Canada
('2729102', 'Eren Gultepe', 'eren gultepe')
('3183840', 'Masoud Makrehchi', 'masoud makrehchi')
eren.gultepe@uoit.net
8a1ed5e23231e86216c9bdd62419c3b05f1e0b4dFacial Keypoint Detection
Stanford University
March 13, 2016
('29909347', 'Shayne Longpre', 'shayne longpre')
('9928926', 'Ajay Sohmshetty', 'ajay sohmshetty')
slongpre@stanford.edu, ajay14@stanford.edu
8a54f8fcaeeede72641d4b3701bab1fe3c2f730aWhat do you think of my picture? Investigating factors
of influence in profile images context perception
Heynderickx
To cite this version:
think of my picture? Investigating factors of influence in profile images context perception. Human Vision and Electronic Imaging XX, Mar 2015, San Francisco, United States. Proc. SPIE 9394, Human Vision and Electronic Imaging XX. DOI: 10.1117/12.2082817.
HAL Id: hal-01149535
https://hal.archives-ouvertes.fr/hal-01149535
Submitted on 7 May 2015
('34678433', 'Filippo Mazza', 'filippo mazza')
('40130265', 'Matthieu Perreira Da Silva', 'matthieu perreira da silva')
('7591543', 'Patrick Le Callet', 'patrick le callet')
('34678433', 'Filippo Mazza', 'filippo mazza')
('40130265', 'Matthieu Perreira Da Silva', 'matthieu perreira da silva')
('7591543', 'Patrick Le Callet', 'patrick le callet')
('1728396', 'Ingrid Heynderickx', 'ingrid heynderickx')
8a8861ad6caedc3993e31d46e7de6c251a8cda22StreetStyle: Exploring world-wide clothing styles from millions of photos
Cornell University
Figure 1: Extracting and measuring clothing style from Internet photos at scale. (a) We apply deep learning methods to learn to extract
fashion attributes from images and create a visual embedding of clothing style. We use this embedding to analyze millions of Instagram photos
of people sampled worldwide, in order to study spatio-temporal trends in clothing around the globe. (b) Further, using our embedding, we
can cluster images to produce a global set of representative styles, from which we can (c) use temporal and geo-spatial statistics to generate
concise visual depictions of what makes clothing unique in each city versus the rest.
('40353974', 'Kevin Matzen', 'kevin matzen')
('1791337', 'Kavita Bala', 'kavita bala')
('1830653', 'Noah Snavely', 'noah snavely')
8aae23847e1beb4a6d51881750ce36822ca7ed0bComparison Between Geometry-Based and Gabor-Wavelets-Based
Facial Expression Recognition Using Multi-Layer Perceptron
ATR Human Information Processing Research Laboratories
ATR Interpreting Telecommunications Research Laboratories
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan
INRIA, 2004 route des Lucioles, BP 93, F-06902 Sophia-Antipolis Cedex, France
('1809184', 'Zhengyou Zhang', 'zhengyou zhang')
('34801422', 'Shigeru Akamatsu', 'shigeru akamatsu')
('36206997', 'Michael Schuster', 'michael schuster')
e-mail: zzhang@sophia.inria.fr, zzhang@hip.atr.co.jp
8a866bc0d925dfd8bb10769b8b87d7d0ff01774dWikiArt Emotions: An Annotated Dataset of Emotions Evoked by Art
National Research Council Canada
('2886725', 'Svetlana Kiritchenko', 'svetlana kiritchenko'){saif.mohammad,svetlana.kiritchenko}@nrc-cnrc.gc.ca
8a40b6c75dd6392ee0d3af73cdfc46f59337efa9
8a3bb63925ac2cdf7f9ecf43f71d65e210416e17ShearFace: Efficient Extraction of Anisotropic
Features for Face Recognition
1Research Groups on Intelligent Machines,
University of Sfax
Sfax 3038, Tunisia
('2791150', 'Mohamed Anouar Borgi', 'mohamed anouar borgi')
('8847309', 'Demetrio Labate', 'demetrio labate')
{anoir.borgi@ieee.org; dlabate@math.uh.edu}
8a0159919ee4e1a9f4cbfb652a1be212bf0554fdUniversity of Surrey
Faculty of Engineering and Physical Sciences
Department of Computer Science
PhD Thesis
Application of Power Laws to
Biometrics, Forensics and
Network Traffic Analysis
by
Supervisor: Prof. A.T.S. Ho
Co-supervisors: Dr. N. Poh, Dr. S. Li
November, 2016
('2909991', 'Aamo Iorliam', 'aamo iorliam')
8ad0d8cf4bcb5c7eccf09f23c8b7d25439c4ae2bPredicting the Future with Transformational
States
University of Pennsylvania, 2Ryerson University
('2689633', 'Andrew Jaegle', 'andrew jaegle')
('40805511', 'Oleh Rybkin', 'oleh rybkin')
('3150825', 'Konstantinos G. Derpanis', 'konstantinos g. derpanis')
('1751586', 'Kostas Daniilidis', 'kostas daniilidis')
ajaegle@upenn.edu, oleh@cis.upenn.edu,
kosta@scs.ryerson.ca, kostas@cis.upenn.edu
8adb2fcab20dab5232099becbd640e9c4b6a905aBeyond Euclidean Eigenspaces:
Bayesian Matching for Visual Recognition
Mitsubishi Electric Research Laboratory
MIT Media Laboratory
 Broadway
 Ames St.
Cambridge, MA  , USA
Cambridge, MA  , USA
('1780935', 'Baback Moghaddam', 'baback moghaddam')
('1682773', 'Alex Pentland', 'alex pentland')
baback@merl.com
sandy@media.mit.edu
8a0d10a7909b252d0e11bf32a7f9edd0c9a8030bAnimals on the Web
University of California, Berkeley
University of Illinois, Urbana-Champaign
Computer Science Division
Department of Computer Science
('1685538', 'Tamara L. Berg', 'tamara l. berg')
('1744452', 'David A. Forsyth', 'david a. forsyth')
millert@cs.berkeley.edu
daf@cs.uiuc.edu
8a91ad8c46ca8f4310a442d99b98c80fb8f7625f2592
2D Segmentation Using a Robust Active
Shape Model With the EM Algorithm
('38769654', 'Carlos Santiago', 'carlos santiago')
('3259175', 'Jacinto C. Nascimento', 'jacinto c. nascimento')
('1744810', 'Jorge S. Marques', 'jorge s. marques')
8aed6ec62cfccb4dba0c19ee000e6334ec585d70Localizing and Visualizing Relative Attributes ('2299381', 'Fanyi Xiao', 'fanyi xiao')
('1883898', 'Yong Jae Lee', 'yong jae lee')
8a336e9a4c42384d4c505c53fb8628a040f2468eWang and Luo EURASIP Journal on Bioinformatics
and Systems Biology (2016) 2016:13
DOI 10.1186/s13637-016-0048-7
R ES EAR CH
Detecting Visually Observable Disease
Symptoms from Faces
Open Access
('2207567', 'Kuan Wang', 'kuan wang')
('33642939', 'Jiebo Luo', 'jiebo luo')
7e600faee0ba11467d3f7aed57258b0db0448a72
7ed3b79248d92b255450c7becd32b9e5c834a31eL1-regularized Logistic Regression Stacking and Transductive CRF Smoothing
for Action Recognition in Video
University of Florence
Lorenzo Seidenari
University of Florence
Andrew D. Bagdanov
University of Florence
University of Florence
('2602265', 'Svebor Karaman', 'svebor karaman')
('8196487', 'Alberto Del Bimbo', 'alberto del bimbo')
svebor.karaman@unifi.it
lorenzo.seidenari@unifi.it
bagdanov@dsi.unifi.it
alberto.delbimbo@unifi.it
7e8016bef2c180238f00eecc6a50eac473f3f138TECHNISCHE UNIVERSITÄT MÜNCHEN
Lehrstuhl für Mensch-Maschine-Kommunikation
Immersive Interactive Data Mining and Machine
Learning Algorithms for Big Data Visualization
Complete reprint of the dissertation approved by the Faculty of Electrical Engineering and
Information Technology of the Technische Universität München for the award of the academic
degree of Doktor-Ingenieur (Dr.-Ing.).
Chairman:
Univ.-Prof. Dr. sc.techn. Andreas Herkersdorf
Examiners of the dissertation:
1. Univ.-Prof. Dr.-Ing. habil. Gerhard Rigoll
2. Univ.-Prof. Dr.-Ing. habil. Dirk Wollherr
3. Prof. Dr. Mihai Datcu
The dissertation was submitted to the Technische Universität München on 13.08.2015 and
accepted by the Faculty of Electrical Engineering and Information Technology on 16.02.2016.
('2133342', 'Mohammadreza Babaee', 'mohammadreza babaee')
7ed2c84fdfc7d658968221d78e745dfd1def6332
Chapter 1
Evaluation of linear combination of views for object recognition
on real and synthetic datasets
Department of computer science,
University College London
Malet Place, London, WC1E 6BT
In this work, we present a method for model-based recognition of 3d objects from
a small number of 2d intensity images taken from nearby, but otherwise arbitrary
viewpoints. Our method works by linearly combining images from two (or more)
viewpoints of a 3d object to synthesise novel views of the object. The object is
recognised in a target image by matching to such a synthesised, novel view. All
that is required is the recovery of the linear combination parameters, and since
we are working directly with pixel intensities, we suggest searching the parameter
space using a global, evolutionary optimisation algorithm combined with a local
search method in order efficiently to recover the optimal parameters and thus
recognise the object in the scene. We have experimented with both synthetic
data and real-image, public databases.
1.1. Introduction
Object recognition is one of the most important and basic problems in computer
vision and, for this reason, it has been studied extensively resulting in a plethora
of publications and a variety of different approachesa aiming to solve this problem.
Nevertheless accurate, robust and efficient solutions remain elusive because of the
inherent difficulties when dealing in particular with 3d objects that may be seen
from a variety of viewpoints. Variations in geometry, photometry and viewing angle,
noise, occlusions and incomplete data are some of the problems with which object
recognition systems are faced.
In this paper, we will address a particular kind of extrinsic variations: variations of the image due to changes in the viewpoint from which the object is seen.
Traditionally, methods that aimed to solve the recognition problem for objects with
varying pose relied on an explicit 3d model of the object, generating 2d projections
from that model and comparing them with the scene image. Such was the work
aFor a comprehensive review of object recognition methods and deformable templates in particular,
see Refs. 1–4.
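The core idea above, synthesizing a novel view as a linear combination of basis views and scoring it against the target by pixel intensities, can be sketched in a few lines. A minimal illustration under simplifying assumptions (pixel-aligned basis views, two scalar coefficients, sum-of-squared-differences matching, and a brute-force grid standing in for the evolutionary plus local search; helper names are hypothetical):

```python
import numpy as np

def synthesize_view(img1, img2, a, b):
    """Novel view as a pixel-wise linear combination of two basis views."""
    return a * img1 + b * img2

def match_score(candidate, target):
    """Sum of squared pixel differences; lower means a better match."""
    return float(np.sum((candidate - target) ** 2))

def recognize(img1, img2, target, coeff_grid):
    """Brute-force stand-in for the parameter search: return the best
    (a, b) pair and its matching score against the target image."""
    best = min(((match_score(synthesize_view(img1, img2, a, b), target), (a, b))
                for a in coeff_grid for b in coeff_grid), key=lambda t: t[0])
    return best[1], best[0]
```

The chapter's actual method recovers the full linear-combination parameters (which also encode geometry) via a global evolutionary optimiser refined by local search; the grid here only conveys the search-and-score structure.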
('1797883', 'Vasileios Zografos', 'vasileios zografos')
('31557997', 'Bernard F. Buxton', 'bernard f. buxton')
{v.zografos,b.buxton}@cs.ucl.ac.uk
7eaa97be59019f0d36aa7dac27407b004cad5e93Sampling Generative Networks
School of Design
Victoria University of Wellington
Wellington, New Zealand
('40603980', 'Tom White', 'tom white')tom.white@vuw.ac.nz
7eb895e7de883d113b75eda54389460c61d63f67Can you tell a face from a HEVC bitstream?
School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
('3393216', 'Saeed Ranjbar Alvar', 'saeed ranjbar alvar')
('3320198', 'Hyomin Choi', 'hyomin choi')
Email: {saeedr,chyomin, ibajic}@sfu.ca
7e467e686f9468b826133275484e0a1ec0f5bde6Efficient On-the-fly Category Retrieval
using ConvNets and GPUs
Visual Geometry Group, University of Oxford
('34838386', 'Karen Simonyan', 'karen simonyan')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
{ken,karen,az}@robots.ox.ac.uk
7e3367b9b97f291835cfd0385f45c75ff84f4dc5Improved Local Binary Pattern Based Action Unit Detection Using
Morphological and Bilateral Filters
1Signal Processing Laboratory (LTS5)
´Ecole Polytechnique F´ed´erale de Lausanne,
Switzerland
2nViso SA
Lausanne, Switzerland
('2916630', 'Matteo Sorci', 'matteo sorci')
('1710257', 'Jean-Philippe Thiran', 'jean-philippe thiran')
{anil.yuce;jean-philippe.thiran}@epfl.ch
matteo.sorci@nviso.ch
7ef0cc4f3f7566f96f168123bac1e07053a939b2Triangular Similarity Metric Learning: a Siamese
Architecture Approach
To cite this version:
Computer Science [cs]. UNIVERSITÉ DE LYON, 2016. English. <tel-01314392>
HAL Id: tel-01314392
https://hal.archives-ouvertes.fr/tel-01314392
Submitted on 11 May 2016
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
('37848497', 'Lilei Zheng', 'lilei zheng')
7e00fb79576fe213853aeea39a6bc51df9fdca16Online Multi-Face Detection and Tracking
using Detector Confidence and Structured SVMs
Eindhoven University of Technology, The Netherlands
2TNO Embedded Systems Innovation, Eindhoven, The Netherlands
('3199035', 'Francesco Comaschi', 'francesco comaschi')
('1679431', 'Sander Stuijk', 'sander stuijk')
('1708289', 'Twan Basten', 'twan basten')
('1684335', 'Henk Corporaal', 'henk corporaal')
{f.comaschi, s.stuijk, a.a.basten, h.corporaal}@tue.nl
7e2cfbfd43045fbd6aabd9a45090a5716fc4e179Global Norm-Aware Pooling for Pose-Robust Face Recognition at Low False Positive Rate
Global Norm-Aware Pooling for Pose-Robust Face Recognition at Low False
Positive Rate
a School of Computer and Information Technology, Beijing Jiaotong University, Beijing
China
b Research Institute, Watchdata Inc., Beijing, China
c DeepInSight, China
('39326372', 'Sheng Chen', 'sheng chen')
('3007274', 'Jia Guo', 'jia guo')
('1681842', 'Yang Liu', 'yang liu')
('46757550', 'Xiang Gao', 'xiang gao')
('2765914', 'Zhen Han', 'zhen han')
{shengchen, zhan}@bjtu.edu.cn
{yang.liu.yj, xiang.gao}@watchdata.com
guojia@gmail.com
7ee53d931668fbed1021839db4210a06e4f33190What if we do not have multiple videos of the same action? —
Video Action Localization Using Web Images
Center for Research in Computer Vision (CRCV), University of Central Florida (UCF
('3195774', 'Waqas Sultani', 'waqas sultani')
('1745480', 'Mubarak Shah', 'mubarak shah')
waqassultani@knights.ucf.edu, shah@crcv.ucf.edu
7e18b5f5b678aebc8df6246716bf63ea5d8d714eOriginal research
published: 15 January 2018
doi: 10.3389/fpsyt.2017.00309
increased loss aversion in
Unmedicated Patients with
Obsessive–compulsive Disorder
1 Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, United States, 2 Fishberg Department of
Neuroscience, Icahn School of Medicine at Mount Sinai, Friedman Brain Institute, New York, NY, United States
of Psychology, University of Michigan, Ann Arbor, MI, United States, University of Michigan, Ann
Arbor, MI, United States
introduction: Obsessive–compulsive disorder (OCD) patients show abnormalities in
decision-making and, clinically, appear to show heightened sensitivity to potential nega-
tive outcomes. Despite the importance of these cognitive processes in OCD, few studies
have examined the disorder within an economic decision-making framework. Here, we
investigated loss aversion, a key construct in the prospect theory that describes the
tendency for individuals to be more sensitive to potential losses than gains when making
decisions.
Methods: Across two study sites, groups of unmedicated OCD patients (n = 14), medi-
cated OCD patients (n = 29), and healthy controls (n = 34) accepted or rejected a series
of 50/50 gambles containing varying loss/gain values. Loss aversion was calculated
as the ratio of the likelihood of rejecting a gamble with increasing potential losses to
the likelihood of accepting a gamble with increasing potential gains. Decision times to
accept or reject were also examined and correlated with loss aversion.
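The loss-aversion ratio described above is conventionally estimated by fitting a logistic model of gamble acceptance and taking the ratio of the loss and gain coefficients. A minimal sketch of that calculation on synthetic choices (the function name, the simulated decision-maker, and all parameter values are illustrative assumptions, not data or code from this study):

```python
import numpy as np

def loss_aversion(gains, losses, accepted, iters=50):
    """Fit P(accept) = sigmoid(b0 + bg*gain + bl*loss) by Newton's method
    and return lambda = -bl/bg, the loss-aversion ratio."""
    X = np.column_stack([np.ones(len(gains)), gains, losses]).astype(float)
    y = np.asarray(accepted, dtype=float)
    beta = np.zeros(3)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
        grad = X.T @ (y - p)                               # score of the log-likelihood
        H = (X * (p * (1 - p))[:, None]).T @ X + 1e-6 * np.eye(3)  # ridge for stability
        beta = beta + np.linalg.solve(H, grad)
    b0, bg, bl = beta
    return -bl / bg   # bl < 0 when losses (entered as magnitudes) deter acceptance

# Synthetic chooser with true lambda = 2: accepts when gain - 2*loss + noise > 0
rng = np.random.default_rng(0)
gains = rng.integers(10, 41, size=2000).astype(float)
losses = rng.integers(5, 21, size=2000).astype(float)
accepted = gains - 2.0 * losses + rng.normal(0, 3, size=2000) > 0

lam = loss_aversion(gains, losses, accepted)
print(round(lam, 2))   # close to 2 for this simulated decision-maker
```

Because lambda is a ratio of coefficients, it is insensitive to the overall scale of the decision noise, which is what makes it comparable across participants.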
results: Unmedicated OCD patients exhibited significantly more loss aversion com-
pared to medicated OCD or controls, an effect that was replicated across both sites
and remained significant even after controlling for OCD symptom severity, trait anxiety,
and sex. Post hoc analyses further indicated that unmedicated patients’ increased
likelihood to reject a gamble as its loss value increased could not be explained solely by
greater risk aversion among patients. Unmedicated patients were also slower to accept
than reject gambles, effects that were not found in the other two groups. Loss aversion
was correlated with decision times in unmedicated patients but not in the other two
groups.
Discussion: These data identify abnormalities of decision-making in a subgroup
of OCD patients not taking psychotropic medication. The findings help elucidate
the cognitive mechanisms of the disorder and suggest that future treatments could
aim to target abnormalities of loss/gain processing during decision-making in this
population.
Keywords: decision-making, prospect theory, choice behavior, reward, obsessive–compulsive disorder
Edited by:
Qinghua He,
Southwest University, China
Reviewed by:
Qiang Wang,
Beijing Normal University, China
Michael Grady Wheaton,
Columbia University, United States
*Correspondence:
Specialty section:
This article was submitted
to Psychopathology,
a section of the journal
Frontiers in Psychiatry
Received: 08 December 2017
Accepted: 26 December 2017
Published: 15 January 2018
Citation:
Sip KE, Gonzalez R, Taylor SF and
Stern ER (2018) Increased Loss
Aversion in Unmedicated Patients
with Obsessive–Compulsive Disorder.
Front. Psychiatry 8:309.
doi: 10.3389/fpsyt.2017.00309
Frontiers in Psychiatry | www.frontiersin.org
January 2018 | Volume 8 | Article 309
('3592712', 'Kamila E. Sip', 'kamila e. sip')
('31801083', 'Richard Gonzalez', 'richard gonzalez')
('2085281', 'Stephan F. Taylor', 'stephan f. taylor')
('2025121', 'Emily R. Stern', 'emily r. stern')
('2025121', 'Emily R. Stern', 'emily r. stern')
emily.stern@mssm.edu,
emily.stern@nyumc.org
7e9df45ece7843fe050033c81014cc30b3a8903aAUDIO-VISUAL INTENT-TO-SPEAK DETECTION FOR HUMAN-COMPUTER
INTERACTION
Institut Eurecom
, route des Crêtes, BP
  Sophia-Antipolis Cedex, FRANCE
IBM T.J. Watson Research Center
Yorktown Heights, NY  , USA
('3163391', 'Philippe de Cuetos', 'philippe de cuetos')
('2264160', 'Chalapathy Neti', 'chalapathy neti')
('33666044', 'Andrew W. Senior', 'andrew w. senior')
decuetos@eurecom.fr
cneti,aws@us.ibm.com
7ebd323ddfe3b6de8368c4682db6d0db7b70df62Proceedings of the International Conference on Computer and Information Science and Technology
Ottawa, Ontario, Canada, May 11 – 12, 2015
Paper No. 111
Location-based Face Recognition Using Smart Mobile Device
Sensors
Department of Computer Science
University of Victoria, Victoria, Canada
('2019933', 'Nina Taherimakhsousi', 'nina taherimakhsousi')
('1747880', 'Hausi A. Müller', 'hausi a. müller')
ninata@uvic.ca; hausi@uvic.ca
7eb85bcb372261bad707c05e496a09609e27fdb3A Compute-efficient Algorithm for Robust Eyebrow Detection
Nanyang Technological University, 2University of California San Diego
('36375772', 'Supriya Sathyanarayana', 'supriya sathyanarayana')
('1710219', 'Ravi Kumar Satzoda', 'ravi kumar satzoda')
('1924458', 'Suchitra Sathyanarayana', 'suchitra sathyanarayana')
supriya001@e.ntu.edu.sg, rsatzoda@eng.ucsd.edu, ssathyanarayana@ucsd.edu, astsrikan@ntu.edu.sg
7ed6ff077422f156932fde320e6b3bd66f8ffbcbState of 3D Face Biometrics for Homeland Security Applications
Chaudhari4
('2925401', 'Anshuman Razdan', 'anshuman razdan')
('1693971', 'Gerald Farin', 'gerald farin')
7ebb153704706e457ab57b432793d2b6e5d12592ZHONG, ARANDJELOVI ´C, ZISSERMAN: FACES IN PLACES
Faces In Places: compound query retrieval
Relja Arandjelovi´c2
1 Visual Geometry Group
Department of Engineering Science
University of Oxford, UK
2 WILLOW project
Departement d’Informatique de l’École
Normale Supérieure
ENS/INRIA/CNRS UMR 8548
('6730372', 'Yujie Zhong', 'yujie zhong')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
yujie@robots.ox.ac.uk
relja.arandjelovic@inria.fr
az@robots.ox.ac.uk
7ec7163ec1bc237c4c2f2841c386f2dbfd0cc922ORIGINAL RESEARCH
published: 20 June 2018
doi: 10.3389/fpsyg.2018.00971
Skiing and Thinking About It:
Moment-to-Moment and
Retrospective Analysis of Emotions
in an Extreme Sport
and Tove Irene Dahl
UiT The Arctic University of Norway, Tromsø, Norway
Happiness is typically reported as an important reason for participating in challenging
activities like extreme sport. While in the middle of the activity, however, participants
do not seem particularly happy. So where does the happiness come from? The
article proposes some answers from a study of facially expressed emotions measured
moment-by-moment during a backcountry skiing event. Self-reported emotions were
also assessed immediately after the skiing. Participants expressed lower levels of
happiness while skiing, compared to when stopping for a break. Moment-to-moment
and self-reported measures of emotions were largely unrelated. These findings are
explained with reference to the Functional Wellbeing Approach (Vittersø, 2013), which
argues that some moment-to-moment feelings are non-evaluative in the sense of being
generated directly by the difficulty of an activity. By contrast, retrospective emotional
feelings are more complex as they include an evaluation of the overall goals and values
associated with the activity as a whole.
Keywords: emotions, facial expression, moment-to-moment, functional wellbeing approach, extreme sport,
backcountry skiing
INTRODUCTION
We engage in recreational activities in order to feel good. This pursuit is not restricted to
leisure activities like sunbathing at the beach or enjoying a fine meal with friends and family.
Mountaineers, BASE jumpers, and other extreme athletes also claim that the importance of their
favorite activities is the experience of positive feelings (Brymer, 2005; Willig, 2008; Brown and
Fraser, 2009; Hetland and Vittersø, 2012). But what exactly is it that feels so good about these
vigorous and exhausting activities, often referred to as extreme sport? To explore this question,
we developed a new way of measuring emotions in real time during the activity. We equipped
the participants with a camera that captured their facially expressed emotion while skiing. These
films were then analyzed with software for automatic coding of facial expressions and compared
the participants self-reported emotions assessed in retrospect. This approach enabled us to explore
long standing questions as to how such positive experiences are created. Are they a result of a series
of online positive feelings? Or is it the impact of a few central features like intensity peaks, rapid
emotional changes, and happy endings that create them? Is it the experience of flow? Or is it the
feeling of mastery that kicks in only after the activity has been successfully accomplished?
Edited by:
Eric Brymer,
Leeds Beckett University
United Kingdom
Reviewed by:
Michael Banissy,
Goldsmiths, University of London
United Kingdom
Ralf Christopher Buckley,
Griffith University, Australia
*Correspondence:
Specialty section:
This article was submitted to
Movement Science and Sport
Psychology,
a section of the journal
Frontiers in Psychology
Received: 26 September 2017
Accepted: 25 May 2018
Published: 20 June 2018
Citation:
Hetland A, Vittersø J, Wie SOB,
Kjelstrup E, Mittner M and Dahl TI
(2018) Skiing and Thinking About It:
Moment-to-Moment
and Retrospective Analysis
of Emotions in an Extreme Sport.
Front. Psychol. 9:971.
doi: 10.3389/fpsyg.2018.00971
Frontiers in Psychology | www.frontiersin.org
June 2018 | Volume 9 | Article 971
('50814786', 'Audun Hetland', 'audun hetland')
('2956586', 'Joar Vittersø', 'joar vittersø')
('50823709', 'Simen Oscar Bø Wie', 'simen oscar bø wie')
('50829546', 'Eirik Kjelstrup', 'eirik kjelstrup')
('4281140', 'Matthias Mittner', 'matthias mittner')
('50814786', 'Audun Hetland', 'audun hetland')
audun.hetland@uit.no
7e0c75ce731131e613544e1a85ae0f2c28ee4c1fImperial College London
Department of Computing
Regression-based Estimation of Pain and
Facial Expression Intensity
June, 2015
Submitted in part fulfilment of the requirements for the degree of PhD in Computing and
the Diploma of Imperial College London. This thesis is entirely my own work, and, except
where otherwise indicated, describes my own research.
('3291812', 'Sebastian Kaltwang', 'sebastian kaltwang')
('1694605', 'Maja Pantic', 'maja pantic')
7ef44b7c2b5533d00001ae81f9293bdb592f1146No d’ordre : 227-2013
Année 2013
THÈSE DE L'UNIVERSITÉ DE LYON
Délivrée par
L'UNIVERSITÉ CLAUDE BERNARD - LYON 1
École Doctorale Informatique et Mathématiques
P H D T H E S I S
Détection des émotions à partir de vidéos dans un
environnement non contrôlé
Detection of emotions from video in non-controlled environment
Soutenue publiquement (Public defense) le 14/11/2013
Composition du jury (Dissertation committee):
Rapporteurs
Mr. Renaud SEGUIER
Mr. Jean-Claude MARTIN
Examinateurs
Mr. Thomas MOESLUND
Mr. Patrick LAMBERT
Mr. Samir GARBAYA
Directeur
Mme. Saida BOUAKAZ
Co-encadrant
Mr. Alexandre MEYER
Mr. Hubert KONIK
Professor, Supélec, CNRS UMR 6164, Rennes, France
Professor, LIMSI-CNRS, Université Paris-Sud, France
Professor, Department of Architecture, Design and Media Technology,
Aalborg University, Denmark
Professor, LISTIC - Polytech Annecy-Chambéry, France
Associate Professor, Le2i, ENSAM, Chalon-sur-Saône, France
Professor, LIRIS-CNRS, Université Claude Bernard Lyon 1, France
Associate Professor, LIRIS, Université Claude Bernard Lyon 1, France
Associate Professor, LaHC, Université Jean Monnet, Saint-Étienne, France
('1943666', 'Rizwan Ahmed Khan', 'rizwan ahmed khan')
7e1ea2679a110241ed0dd38ff45cd4dfeb7a8e83Extensions of Hierarchical Slow Feature
Analysis for Efficient Classification and
Regression on High-Dimensional Data
Dissertation
Submitted to the Faculty of Electrical
Engineering and Information Technology
at the
Ruhr University Bochum
for the
Degree of Doktor-Ingenieur
by
Alberto Nicolás Escalante Bañuelos
Bochum, Germany, January, 2017
7e507370124a2ac66fb7a228d75be032ddd083ccThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TAFFC.2017.2708106, IEEE
Transactions on Affective Computing
Dynamic Pose-Robust Facial Expression
Recognition by Multi-View Pairwise Conditional
Random Forests
1 Sorbonne Universit´es, UPMC Univ Paris 06
CNRS, UMR 7222, F-75005, Paris, France
('3190846', 'Arnaud Dapogny', 'arnaud dapogny')
('2521061', 'Kevin Bailly', 'kevin bailly')
1056347fc5e8cd86c875a2747b5f84fd570ba232
10550ee13855bd7403946032354b0cd92a10d0aaAccelerating Neuromorphic Vision Algorithms
for Recognition
Ahmed Al Maashri
Vijaykrishnan Narayanan
Microsystems Design Lab, The Pennsylvania State University
†IBM System and Technology Group
School of Electrical, Computer and Energy Engineering, Arizona State University
('1723845', 'Michael DeBole', 'michael debole')
('36156473', 'Matthew Cotter', 'matthew cotter')
('2916636', 'Nandhini Chandramoorthy', 'nandhini chandramoorthy')
('37095722', 'Yang Xiao', 'yang xiao')
('1685028', 'Chaitali Chakrabarti', 'chaitali chakrabarti')
{maashri, mjcotter, nic5090, yux106, vijay}@cse.psu.edu
mvdebole@us.ibm.com
chaitali@asu.edu
10e12d11cb98ffa5ae82343f8904cfe321ae8004A New Simplex Sparse Learning Model to Measure
Data Similarity for Clustering
University of Texas at Arlington
Arlington, Texas 76019, USA
('39122448', 'Jin Huang', 'jin huang')
('1688370', 'Feiping Nie', 'feiping nie')
('1748032', 'Heng Huang', 'heng huang')
huangjinsuzhou@gmail.com, feipingnie@gmail.com, heng@uta.edu
10e7dd3bbbfbc25661213155e0de1a9f043461a2Cross Euclidean-to-Riemannian Metric Learning
with Application to Face Recognition from Video
('7945869', 'Zhiwu Huang', 'zhiwu huang')
('3373117', 'Ruiping Wang', 'ruiping wang')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1681236', 'Luc Van Gool', 'luc van gool')
('1710220', 'Xilin Chen', 'xilin chen')
100105d6c97b23059f7aa70589ead2f61969fbc3Frontal to Profile Face Verification in the Wild
Center for Automation Research, University of Maryland, College Park, MD 20740, USA
The State University of New Jersey
Piscataway, NJ 08854, USA.
('2500202', 'Soumyadip Sengupta', 'soumyadip sengupta')
('36407236', 'Jun-Cheng Chen', 'jun-cheng chen')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
('9215658', 'Rama Chellappa', 'rama chellappa')
('34734622', 'David W. Jacobs', 'david w. jacobs')
100da509d4fa74afc6e86a49352751d365fceee5Multiclass Recognition and Part Localization with Humans in the Loop
†Department of Computer Science and Engineering
University of California, San Diego
Serge Belongie†
‡Department of Electrical Engineering
California Institute of Technology
('2367820', 'Catherine Wah', 'catherine wah')
('3251767', 'Steve Branson', 'steve branson')
('1690922', 'Pietro Perona', 'pietro perona')
{cwah,sbranson,sjb}@cs.ucsd.edu
perona@caltech.edu
10ab1b48b2a55ec9e2920a5397febd84906a7769
10af69f11301679b6fbb23855bf10f6af1f3d2e6Beyond Gaussian Pyramid: Multi-skip Feature Stacking for Action Recognition
School of Computer Science, Carnegie Mellon University
('46329993', 'Ming Lin', 'ming lin')
('2314980', 'Xuanchong Li', 'xuanchong li')
('7661726', 'Alexander G. Hauptmann', 'alexander g. hauptmann')
('1681921', 'Bhiksha Raj', 'bhiksha raj')
lanzhzh, minglin, xcli, alex, bhiksha@cs.cmu.edu
10ce3a4724557d47df8f768670bfdd5cd5738f95Fisher Light-Fields for Face Recognition
Across Pose and Illumination
Ralph Gross, Iain Matthews and Simon Baker
The Robotics Institute, Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh, PA 15213
Abstract. In many face recognition tasks the pose and illumination
conditions of the probe and gallery images are different. In other cases
multiple gallery or probe images may be available, each captured from
a different pose and under a different illumination. We propose a face
recognition algorithm which can use any number of gallery images per
subject captured at arbitrary poses and under arbitrary illumination,
and any number of probe images, again captured at arbitrary poses and
under arbitrary illumination. The algorithm operates by estimating the
Fisher light-field of the subject's head from the input gallery or probe
images. Matching between the probe and gallery is then performed using
the Fisher light-fields.
Introduction
In many face recognition scenarios the pose of the probe and gallery images are
different. The gallery contains the images used during training of the algorithm.
The algorithms are tested with the images in the probe set. For example, the
gallery image might be a frontal "mug shot" and the probe image might be a 3/4
view captured from a camera in the corner of the room. The number of gallery
and probe images can also vary. For example, the gallery may consist of a pair of
images for each subject, a frontal mug shot and full profile view (like the images
typically captured by police departments). The probe may be a similar pair of
images, a single 3/4 view, or even a collection of views from random poses.
Face recognition across pose, i.e. face recognition where the gallery and probe
images do not have the same pose, has received very little attention. Algorithms
have been proposed which can recognize faces [1] or more general objects [2]
at a variety of poses. However, most of these algorithms use multiple gallery images
at every pose. Algorithms have been proposed which do generalize across pose,
for example [3], but this algorithm computes 3D head models using a gallery
containing a large number of images per subject captured using controlled
illumination variation. It cannot be used with arbitrary gallery and probe poses.
After pose variation, the next most significant factor affecting the appearance
of faces is illumination. A number of algorithms have been developed for
face recognition across illumination, but they typically only deal with frontal
faces [4, 5]. Only a few approaches have been proposed to handle both pose and
illumination variations at the same time. For example, [3] computes a 3D head
{rgross,iainm,simonb}@cs.cmu.edu
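The abstract above hinges on one estimation step: the coefficients of a linear light-field model are fit by least squares from whichever pixels the available gallery or probe images happen to cover, and recognition then compares coefficient vectors. A toy numpy sketch of that idea under stated assumptions (a random orthonormal basis stands in for the PCA/LDA light-field basis; all dimensions and variable names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
D, K, N = 600, 5, 20                          # light-field pixels, basis size, subjects
B = np.linalg.qr(rng.normal(size=(D, K)))[0]  # orthonormal basis (stand-in for eigen/Fisher basis)
subjects = rng.normal(size=(N, K))            # per-subject coefficient vectors ("identities")

def estimate_coeffs(pixel_idx, pixel_vals):
    """Least-squares fit of basis coefficients from only the observed
    subset of light-field pixels (the core estimation step)."""
    A = B[pixel_idx]                          # basis rows at the observed pixels
    coeffs, *_ = np.linalg.lstsq(A, pixel_vals, rcond=None)
    return coeffs

# Gallery: each subject observed at one random pixel subset ("one pose")
gallery = []
for c in subjects:
    idx = rng.choice(D, size=120, replace=False)
    gallery.append(estimate_coeffs(idx, B[idx] @ c + 0.01 * rng.normal(size=120)))
gallery = np.array(gallery)

# Probe: subject 7 seen at a different pixel subset ("another pose")
idx = rng.choice(D, size=120, replace=False)
probe = estimate_coeffs(idx, B[idx] @ subjects[7] + 0.01 * rng.normal(size=120))

match = int(np.argmin(np.linalg.norm(gallery - probe, axis=1)))
print(match)  # 7: matched despite disjoint pixel observations
```

Because the fit only uses the basis rows at observed pixels, any number of images at arbitrary poses contribute equations to the same small linear system, which is what lets gallery and probe poses differ freely.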
100428708e4884300e4c1ac1f84cbb16e7644ccfREGULARIZED SHEARLET NETWORK FOR FACE RECOGNITION USING SINGLE
SAMPLE PER PERSON
Research Groups on Intelligent Machines, University of Sfax, Sfax 3038, Tunisia
University of Houston, Houston, TX 77204, USA
('2791150', 'Mohamed Anouar Borgi', 'mohamed anouar borgi')
('8847309', 'Demetrio Labate', 'demetrio labate')
('3410172', 'Chokri Ben Amar', 'chokri ben amar')
{anoir.borgi@ieee.org; dlabate@math.uh.edu ; maher.elarbi@gmail.com ; chokri.benamar@ieee.org }
102e374347698fe5404e1d83f441630b1abf62d9Facial Image Analysis for Fully-Automatic
Prediction of Difficult Endotracheal Intubation
('40564153', 'Patrick Schoettker', 'patrick schoettker')
('2916630', 'Matteo Sorci', 'matteo sorci')
('1697965', 'Hua Gao', 'hua gao')
('2612411', 'Christophe Perruchoud', 'christophe perruchoud')
('1710257', 'Jean-Philippe Thiran', 'jean-philippe thiran')
10f17534dba06af1ddab96c4188a9c98a020a459People-LDA: Anchoring Topics to People using Face Recognition
Erik Learned-Miller
University of Massachusetts Amherst
Amherst MA 01003
http://vis-www.cs.umass.edu/(cid:152)vidit/peopleLDA
('2246870', 'Vidit Jain', 'vidit jain')
('1735747', 'Andrew McCallum', 'andrew mccallum')
10e0e6f1ec00b20bc78a5453a00c792f1334b016Pose-Selective Max Pooling for Measuring Similarity
1Dept. of Computer Science
2Dept. of Electrical & Computer Engineering
Johns Hopkins University, 3400 N. Charles St, Baltimore, MD 21218, USA
('40031188', 'Xiang Xiang', 'xiang xiang')xxiang@cs.jhu.edu
102b968d836177f9c436141e382915a4f8549276Affective Multimodal Human-Computer Interaction
Faculty of EEMCS, Delft University of Technology, The Netherlands
Faculty of Science, University of Amsterdam, The Netherlands
Psychology and Psychiatry, University of Pittsburgh, USA
Beckman Institute, University of Illinois at Urbana-Champaign, USA
('1694605', 'Maja Pantic', 'maja pantic')
('1703601', 'Nicu Sebe', 'nicu sebe')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
mpantic@ieee.org, nicu@science.uva.nl, jeffcohn@pitt.edu, huang@ifp.uiuc.edu
100641ed8a5472536dde53c1f50fa2dd2d4e9be9Visual Attributes for Enhanced Human-Machine Communication* ('1713589', 'Devi Parikh', 'devi parikh')
10195a163ab6348eef37213a46f60a3d87f289c5
10e704c82616fb5d9c48e0e68ee86d4f83789d96
101569eeef2cecc576578bd6500f1c2dcc0274e2Multiaccuracy: Black-Box Post-Processing for Fairness in
Classification
James Zou
('40102677', 'Michael P. Kim', 'michael p. kim')
('27316199', 'Amirata Ghorbani', 'amirata ghorbani')
mpk@cs.stanford.edu
amiratag@stanford.edu
jamesz@stanford.edu
106732a010b1baf13c61d0994552aee8336f8c85Expanded Parts Model for Semantic Description
of Humans in Still Images
('2515597', 'Gaurav Sharma', 'gaurav sharma')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
10e70a34d56258d10f468f8252a7762950830d2b
102b27922e9bd56667303f986404f0e1243b68abWang et al. Appl Inform (2017) 4:13
DOI 10.1186/s40535-017-0042-5
RESEARCH
Multiscale recurrent regression networks
for face alignment
Open Access
*Correspondence:
3 State Key Lab of Intelligent
Technologies and Systems,
Beijing 100084, People’s
Republic of China
Full list of author information
is available at the end of the
article
('27660491', 'Caixun Wang', 'caixun wang')
('22192520', 'Haomiao Sun', 'haomiao sun')
('1697700', 'Jiwen Lu', 'jiwen lu')
('2632601', 'Jianjiang Feng', 'jianjiang feng')
('25060740', 'Jie Zhou', 'jie zhou')
lujiwen@tsinghua.edu.cn
10fcbf30723033a5046db791fec2d3d286e34daaOn-Line Cursive Handwriting Recognition: A Survey of Methods
and Performances
*Faculty of Computer Science & Information Systems, Universiti Teknologi Malaysia (UTM) , 81310
Skudai, Johor, Malaysia.
('1731121', 'Dzulkifli Mohamad', 'dzulkifli mohamad')
('1921146', 'M. Othman', 'm. othman')
1dzul@fsksm.utm.my, faisal@gmm.fsksm.utm.my, razib@fsksm.utm.my
101d4cfbd6f8a7a10bd33505e2b183183f1d8770The 2013 SESAME Multimedia Event Detection and
Recounting System
SRI International (SRI)
University of Amsterdam (UvA
University of Southern California
(USC)
Cees G.M. Snoek
Remi Trichet
('1764443', 'Robert C. Bolles', 'robert c. bolles')
('40560201', 'J. Brian Burns', 'j. brian burns')
('48804780', 'James A. Herson', 'james a. herson')
('31693932', 'Gregory K. Myers', 'gregory k. myers')
('2594026', 'Stephanie Pancoast', 'stephanie pancoast')
('1746492', 'Julien van Hout', 'julien van hout')
('49966591', 'Julie Wong', 'julie wong')
('3000952', 'AmirHossein Habibian', 'amirhossein habibian')
('1769315', 'Dennis C. Koelma', 'dennis c. koelma')
('3245057', 'Zhenyang Li', 'zhenyang li')
('2690389', 'Masoud Mazloom', 'masoud mazloom')
('37806314', 'Silvia-Laura Pintea', 'silvia-laura pintea')
('1964898', 'Sung Chun Lee', 'sung chun lee')
('1858100', 'Pramod Sharma', 'pramod sharma')
('40559421', 'Chen Sun', 'chen sun')
108b2581e07c6b7ca235717c749d45a1fa15bb24Using Stereo Matching with General Epipolar
Geometry for 2D Face Recognition
across Pose
('38171682', 'Carlos D. Castillo', 'carlos d. castillo')
('34734622', 'David W. Jacobs', 'david w. jacobs')
106092fafb53e36077eba88f06feecd07b9e78e7Attend and Interact: Higher-Order Object Interactions for Video Understanding
Georgia Institute of Technology, 2NEC Laboratories America, 3Georgia Tech Research Institute
('7437104', 'Chih-Yao Ma', 'chih-yao ma')
('2293919', 'Asim Kadav', 'asim kadav')
('50162780', 'Iain Melvin', 'iain melvin')
('1746245', 'Zsolt Kira', 'zsolt kira')
('1775043', 'Hans Peter Graf', 'hans peter graf')
103c8eaca2a2176babab2cc6e9b25d48870d6928FINDING RELEVANT SEMANTIC CONTENT FOR GROUNDED LANGUAGE LEARNING
PANNING FOR GOLD:
The University of Texas at Austin
Department of Computer Science
Austin, TX 78712, USA
('47514115', 'David L. Chen', 'david l. chen')
('1797655', 'Raymond J. Mooney', 'raymond j. mooney')
dlcc@cs.utexas.edu and mooney@cs.utexas.edu
10d334a98c1e2a9e96c6c3713aadd42a557abb8bScene Text Recognition using Part-based Tree-structured Character Detection
State Key Laboratory of Management and Control for Complex Systems, CASIA, Beijing, China
('1959339', 'Cunzhao Shi', 'cunzhao shi')
('1683416', 'Chunheng Wang', 'chunheng wang')
('2658590', 'Baihua Xiao', 'baihua xiao')
('1698138', 'Yang Zhang', 'yang zhang')
('39001252', 'Song Gao', 'song gao')
('34539206', 'Zhong Zhang', 'zhong zhang')
{cunzhao.shi,chunheng.wang,baihua.xiao,yang.zhang,song.gao,zhong.zhang}@ia.ac.cn
10f66f6550d74b817a3fdcef7fdeba13ccdba51cBenchmarking Face Alignment
Institute for Anthropomatics
Karlsruhe Institute of Technology
Karlsruhe, Germany
('1697965', 'Hua Gao', 'hua gao')Email: {gao, ekenel}@kit.edu
107fc60a6c7d58a6e2d8572ad8c19cc321a9ef53Hollywood in Homes: Crowdsourcing Data
Collection for Activity Understanding
Carnegie Mellon University
2Inria
University of Washington
The Allen Institute for AI
http://allenai.org/plato/charades/
('34280810', 'Gunnar A. Sigurdsson', 'gunnar a. sigurdsson')
('39849136', 'Xiaolong Wang', 'xiaolong wang')
('2270286', 'Ali Farhadi', 'ali farhadi')
('1785596', 'Ivan Laptev', 'ivan laptev')
('1737809', 'Abhinav Gupta', 'abhinav gupta')
1048c753e9488daa2441c50577fe5fdba5aa5d7cRecognising faces in unseen modes: a tensor based approach
Curtin University of Technology
GPO Box U1987, Perth, WA 6845, Australia.
('2867032', 'Santu Rana', 'santu rana')
('1713220', 'Wanquan Liu', 'wanquan liu')
('1679953', 'Mihai Lazarescu', 'mihai lazarescu')
('1679520', 'Svetha Venkatesh', 'svetha venkatesh')
{santu.rana, wanquan, m.lazarescu, svetha}@cs.curtin.edu.au
10ca2e03ff995023a701e6d8d128455c6e8db030Modeling Stylized Character Expressions
via Deep Learning
University of Washington
Seattle, WA, USA
2 Zillow Group, Seattle, WA, USA
3 Gage Academy of Art, Seattle, WA, USA
('2494850', 'Deepali Aneja', 'deepali aneja')
('2952700', 'Alex Colburn', 'alex colburn')
('9610752', 'Gary Faigin', 'gary faigin')
('3349536', 'Barbara Mones', 'barbara mones')
{deepalia,shapiro,mones}@cs.washington.edu
alexco@cs.washington.edu
gary@gageacademy.org
1921e0a97904bdf61e17a165ab159443414308edBielefeld University
Faculty of Technology
Applied Informatics
Bachelor Thesis
Retrieval of Web Images for
Computer Vision Research
September 28, 2009
Author:
malinke@techfak.uni-bielefeld.de
Supervisors:
Dipl.-Inform. Marco Kortkamp
PD Dr.-Ing. Sven Wachsmuth
19841b721bfe31899e238982a22257287b9be66aPublished as a conference paper at ICLR 2018
SKIP RNN: LEARNING TO SKIP STATE UPDATES IN
RECURRENT NEURAL NETWORKS
†Barcelona Supercomputing Center, ‡Google Inc,
Universitat Polit`ecnica de Catalunya, Columbia University
('2447185', 'Brendan Jou', 'brendan jou')
('1711068', 'Jordi Torres', 'jordi torres')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
{victor.campos, jordi.torres}@bsc.es, bjou@google.com,
xavier.giro@upc.edu, shih.fu.chang@columbia.edu
1922ad4978ab92ce0d23acc4c7441a8812f157e5Face Alignment by Coarse-to-Fine Shape Searching
The Chinese University of Hong Kong
2SenseTime Group
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
('2226254', 'Shizhan Zhu', 'shizhan zhu')
('40475617', 'Cheng Li', 'cheng li')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
zs014@ie.cuhk.edu.hk, chengli@sensetime.com, ccloy@ie.cuhk.edu.hk, xtang@ie.cuhk.edu.hk
19e62a56b6772bbd37dfc6b8f948e260dbb474f5Cross-Domain Metric Learning Based on Information Theory
1. State Key Laboratory of Computer Science
2. Science and Technology on Integrated Information System Laboratory
Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
University of Science and Technology of China
('39483391', 'Hao Wang', 'hao wang')
('40451597', 'Wei Wang', 'wei wang')
('1783918', 'Chen Zhang', 'chen zhang')
('34532334', 'Fanjiang Xu', 'fanjiang xu')
weiwangpenny@gmail.com
192723085945c1d44bdd47e516c716169c06b7c0This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation
Vision and Attention Theory Based Sampling
for Continuous Facial Emotion Recognition
Ninad S. Thakoor, Member, IEEE
('1693314', 'Albert C. Cruz', 'albert c. cruz')
('1707159', 'Bir Bhanu', 'bir bhanu')
19fb5e5207b4a964e5ab50d421e2549ce472baa8International Conference on Computer Systems and Technologies - CompSysTech’14
Online Emotional Facial Expression Dictionary
Léon Rothkrantz
1989a1f9ce18d8c2a0cee3196fe6fa363aab80c2ROBUST ONLINE FACE TRACKING-BY-DETECTION
2TNO Embedded Systems Innovation, Eindhoven, The Netherlands
Eindhoven University of Technology, The Netherlands
('3199035', 'Francesco Comaschi', 'francesco comaschi')
('1679431', 'Sander Stuijk', 'sander stuijk')
('1708289', 'Twan Basten', 'twan basten')
('1684335', 'Henk Corporaal', 'henk corporaal')
{f.comaschi, s.stuijk, a.a.basten, h.corporaal}@tue.nl
1962e4c9f60864b96c49d85eb897141486e9f6d1Neural Comput & Applic (2011) 20:565–573
DOI 10.1007/s00521-011-0577-7
O R I G I N A L A R T I C L E
Locality preserving embedding for face and handwriting digital
recognition
Received: 3 December 2008 / Accepted: 11 March 2011 / Published online: 1 April 2011
Ó Springer-Verlag London Limited 2011
('5383601', 'Zhihui Lai', 'zhihui lai')
193debca0be1c38dabc42dc772513e6653fd91d8Mnemonic Descent Method:
A recurrent process applied for end-to-end face alignment
Imperial College London, UK
Goldsmiths, University of London, UK
Center for Machine Vision and Signal Analysis, University of Oulu, Finland
('2814229', 'George Trigeorgis', 'george trigeorgis')
('2796644', 'Patrick Snape', 'patrick snape')
('2788012', 'Epameinondas Antonakos', 'epameinondas antonakos')
('1752913', 'Mihalis A. Nicolaou', 'mihalis a. nicolaou')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
*{g.trigeorgis, p.snape, e.antonakos, s.zafeiriou}@imperial.ac.uk, †m.nicolaou@gold.ac.uk
191674c64f89c1b5cba19732869aa48c38698c84International Journal of Advanced Technology in Engineering and Science www.ijates.com
Volume No.03, Issue No. 03, March 2015 ISSN (online): 2348 – 7550
FACE IMAGE RETRIEVAL USING ATTRIBUTE -
ENHANCED SPARSE CODEWORDS
E. Sakthivel1, M. Ashok Kumar2
PG scholar, Communication Systems, Adhiyamaan College of Engineeing, Hosur, (India
Electronics And Communication Engg., Adhiyamaan College of Engg., Hosur, (India
190d8bd39c50b37b27b17ac1213e6dde105b21b8This document is downloaded from DR-NTU, Nanyang Technological
University Library, Singapore
Title
Mining weakly labeled web facial images for search-
based face annotation
Author(s) Wang, Dayong; Hoi, Steven C. H.; He, Ying; Zhu, Jianke
Citation
Wang, D., Hoi, S. C. H., He, Y., & Zhu, J. (2014). Mining
weakly labeled web facial images for search-based face
annotation. IEEE Transactions on Knowledge and Data
Engineering, 26(1), 166-179.
Date
2014
URL
http://hdl.handle.net/10220/18955
Rights
© 2014 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other
uses, in any current or future media, including
reprinting/republishing this material for advertising or
promotional purposes, creating new collective works, for
resale or redistribution to servers or lists, or reuse of any
copyrighted component of this work in other works.
Published version of this article is available at [DOI:
http://dx.doi.org/10.1109/TKDE.2012.240].
19af008599fb17bbd9b12288c44f310881df951cDiscriminative Local Sparse Representations for
Robust Face Recognition
('1719561', 'Yi Chen', 'yi chen')
('35210356', 'Umamahesh Srinivas', 'umamahesh srinivas')
('1694440', 'Thong T. Do', 'thong t. do')
('3346079', 'Vishal Monga', 'vishal monga')
('1709073', 'Trac D. Tran', 'trac d. tran')
19296e129c70b332a8c0a67af8990f2f4d4f44d1Metric Learning Approaches for Face Identification
Is that you?
M. Guillaumin, J. Verbeek and C. Schmid
LEAR team, INRIA Rhône-Alpes, France
Supplementary Material
19666b9eefcbf764df7c1f5b6938031bcf777191Group Component Analysis for Multi-block Data:
Common and Individual Feature Extraction
('1764724', 'Guoxu Zhou', 'guoxu zhou')
('1747156', 'Andrzej Cichocki', 'andrzej cichocki')
('38741479', 'Yu Zhang', 'yu zhang')
198b6beb53e0e61357825d57938719f614685f75Vaulted Verification: A Scheme for Revocable Face
Recognition
University of Colorado, Colorado Springs
('3035230', 'Michael Wilber', 'michael wilber')mwilber@uccs.edu
1921795408345751791b44b379f51b7dd54ebfa2From Face Recognition to Models of Identity:
A Bayesian Approach to Learning about
Unknown Identities from Unsupervised Data
Imperial College London, UK
2 Microsoft Research, Cambridge, UK
('2388416', 'Sebastian Nowozin', 'sebastian nowozin')dc315@imperial.ac.uk
Sebastian.Nowozin@microsoft.com
190b3caa2e1a229aa68fd6b1a360afba6f50fde4
19e0cc41b9f89492b6b8c2a8a58d01b8242ce00bW. ZHANG ET AL.: IMPROVING HFR WITH CGAN
Improving Heterogeneous Face Recognition
with Conditional Adversarial Networks
1 Laboratory LIRIS
Ecole Centrale de Lyon
Ecully, France
2 Computer Vision Lab
Stony Brook University
Stony Brook, NY, USA
('2553752', 'Wuming Zhang', 'wuming zhang')
('2496409', 'Zhixin Shu', 'zhixin shu')
('1686020', 'Dimitris Samaras', 'dimitris samaras')
('34086868', 'Liming Chen', 'liming chen')
wuming.zhang@ec-lyon.fr
zhshu@cs.stonybrook.edu
samaras@cs.stonybrook.edu
liming.chen@ec-lyon.fr
19e7bdf8310f9038e1a9cf412b8dd2c77ff64c54Facial Action Coding Using Multiple Visual Cues and a Hierarchy of Particle
Filters
Computer Vision and Robotics Research Laboratory
University of California, San Diego
('32049271', 'Joel C. McCall', 'joel c. mccall')
('1713989', 'Mohan M. Trivedi', 'mohan m. trivedi')
jmccall@ucsd.edu mtrivedi@ucsd.edu
1938d85feafdaa8a65cb9c379c9a81a0b0dcd3c4Monogenic Binary Coding: An Efficient Local Feature
Extraction Approach to Face Recognition
The Hong Kong Polytechnic University, Hong Kong, China
('5828998', 'Meng Yang', 'meng yang')
('36685537', 'Lei Zhang', 'lei zhang')
('1738911', 'Simon C. K. Shiu', 'simon c. k. shiu')
('1698371', 'David Zhang', 'david zhang')
195d331c958f2da3431f37a344559f9bce09c0f7Parsing Occluded People by Flexible Compositions
University of California, Los Angeles
Figure 1: An illustration of the flexible compositions. Each connected sub-
tree of the full graph (include the full graph itself) is a flexible composition.
The flexible compositions that do not have certain parts are suitable for the
people with those parts occluded.
Figure 2: The absence of body-part evidence can help to predict occlusion. However, absence of evidence is not evidence of absence, and it can fail in some challenging scenes. The local image measurements near the occlusion boundary (i.e., around the right elbow and left shoulder) can reliably provide evidence of occlusion.
This paper presents an approach to parsing humans when there is signifi-
cant occlusion. We model humans using a graphical model which has a tree
structure building on recent work [1, 6] and exploit the connectivity prior
that, even in presence of occlusion, the visible nodes form a connected sub-
tree of the graphical model. We call each connected subtree a flexible com-
position of object parts. This involves a novel method for learning occlusion
cues. During inference we need to search over a mixture of different flexible
models. By exploiting part sharing, we show that this inference can be done
extremely efficiently requiring only twice as many computations as search-
ing for the entire object (i.e., not modeling occlusion). We evaluate our
model on the standard benchmarked “We Are Family” Stickmen dataset [2]
and obtain significant performance improvements over the best alternative
algorithms.
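The mixture described above has one component per connected subtree, so its size grows quickly with the number of parts, which is why the shared-computation inference matters. As a rough illustration (the six-part tree below is a toy example, not the paper's actual body model), the number of flexible compositions of a tree-structured model can be counted with a simple dynamic program: a subtree containing node v is built by independently deciding, for each child, whether to attach one of that child's rooted subtrees.

```python
# Toy sketch: counting the "flexible compositions" (connected subtrees)
# of a tree-structured part model via dynamic programming.
# The part names and tree shape are illustrative, not the paper's model.

def count_rooted(tree, v):
    """Connected subtrees of subtree(v) that contain v itself."""
    n = 1
    for c in tree.get(v, []):
        n *= 1 + count_rooted(tree, c)  # omit child, or attach one of its subtrees
    return n

def count_compositions(tree, root):
    """All connected subtrees: each has a unique topmost node."""
    total, stack = 0, [root]
    while stack:
        v = stack.pop()
        total += count_rooted(tree, v)
        stack.extend(tree.get(v, []))
    return total

# A hypothetical six-part "upper body" tree.
parts = {
    "torso": ["head", "l_arm", "r_arm"],
    "l_arm": ["l_hand"],
    "r_arm": ["r_hand"],
}
```

Even this small tree yields 25 compositions; scoring each mixture component independently would repeat most of the part evaluations, which is exactly the redundancy that part sharing removes.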
Parsing humans into parts is an important visual task with many applica-
tions such as activity recognition. A common approach is to formulate this
task in terms of graphical models where the graph nodes and edges repre-
sent human parts and their spatial relationships respectively. This approach
is becoming successful on benchmarked datasets [1, 6]. But in many real
world situations many human parts are occluded. Standard methods are par-
tially robust to occlusion by, for example, using a latent variable to indicate
whether a part is present and paying a penalty if the part is not detected, but
are not designed to deal with significant occlusion. One of these models [1]
will be used in this paper as a base model, and we will compare to it.
In this paper, we observe that part occlusions often occur in regular pat-
terns. The visible parts of a human tend to consist of a subset of connected
parts even when there is significant occlusion (see Figures 1 and 2). In the
terminology of graphical models, the visible (non-occluded) nodes form a
connected subtree of the full graphical model (following current models, for
simplicity, we assume that the graphical model is treelike). This connectiv-
ity prior is not always valid (i.e., the visible parts of humans may form two
or more connected subsets), but our analysis suggests it’s often true. In any
case, we will restrict ourselves to it in this paper, since verifying that some
isolated pieces of body parts belong to the same person is still very difficult
for vision methods, especially in challenging scenes where multiple people
occlude one another (see Figure 2).
To formulate our approach we build on the base model [1], which is the
state of the art on several benchmarked datasets [3, 4, 5], but is not designed
for dealing with significant occlusion. We explicitly model occlusions us-
ing the connectivity prior above. This means that we have a mixture of
models where the number of components equals the number of all the pos-
sible connected subtrees of the graph, which we call flexible compositions,
('34420250', 'Xianjie Chen', 'xianjie chen')
199c2df5f2847f685796c2523221c6436f022464Self Quotient Image for Face Recognition
Institute of Automation, Chinese Academy of Sciences; 2Microsoft Research Asia; 3Media School
Bournemouth University
('29948255', 'Haitao Wang', 'haitao wang')
('34679741', 'Stan Z. Li', 'stan z. li')
('1744302', 'Yangsheng Wang', 'yangsheng wang')
19c0069f075b5b2d8ac48ad28a7409179bd08b86Modifying the Memorability of Face Photographs
Massachusetts Institute of Technology
Computer Science and Artificial Intelligence Laboratory
('2556428', 'Aditya Khosla', 'aditya khosla')
('2553201', 'Wilma A. Bainbridge', 'wilma a. bainbridge')
('1690178', 'Antonio Torralba', 'antonio torralba')
('31735139', 'Aude Oliva', 'aude oliva')
{khosla, wilma, torralba, oliva}@csail.mit.edu
19c0c7835dba1a319b59359adaa738f0410263e8
Natural Image Statistics and
Low-Complexity Feature Selection
('30125215', 'Manuela Vasconcelos', 'manuela vasconcelos')
('1699559', 'Nuno Vasconcelos', 'nuno vasconcelos')
19808134b780b342e21f54b60095b181dfc7a600
19d583bf8c5533d1261ccdc068fdc3ef53b9ffb9FaceNet: A Unified Embedding for Face Recognition and Clustering
Google Inc.
Google Inc.
Google Inc.
('3302320', 'Florian Schroff', 'florian schroff')
('2741985', 'Dmitry Kalenichenko', 'dmitry kalenichenko')
('2276542', 'James Philbin', 'james philbin')
fschroff@google.com
dkalenichenko@google.com
jphilbin@google.com
1910f5f7ac81d4fcc30284e88dee3537887acdf3 Volume 6, Issue 5, May 2016 ISSN: 2277 128X
International Journal of Advanced Research in
Computer Science and Software Engineering
Research Paper
Available online at: www.ijarcsse.com
Semantic Based Hypergraph Reranking Model for Web
Image Search
1, 2, 3, 4 B. E. Dept of CSE, 5 Asst. Prof. Dept of CSE
Dr.D.Y.Patil College of Engineering, Pune, Maharashtra, India
19a9f658ea14701502d169dc086651b1d9b2a8eaStructural Models for Face Detection
Center for Biometrics and Security Research & National Laboratory of Pattern Recognition
Institute of Automation, Chinese Academy of Sciences, China
('1721677', 'Junjie Yan', 'junjie yan')
('2520795', 'Xucong Zhang', 'xucong zhang')
('1718623', 'Zhen Lei', 'zhen lei')
('1716143', 'Dong Yi', 'dong yi')
('34679741', 'Stan Z. Li', 'stan z. li')
{jjyan,xczhang,zlei,dyi,szli}@nlpr.ia.ac.cn
197c64c36e8a9d624a05ee98b740d87f94b4040cRegularized Greedy Column Subset Selection
aDepartment of Computer Systems, Universidad Polit´ecnica de Madrid
bDepartment of Applied Mathematics, Universidad Polit´ecnica de Madrid
('1858768', 'Alberto Mozo', 'alberto mozo')*bruno.ordozgoiti@upm.es
19d4855f064f0d53cb851e9342025bd8503922e2Learning SURF Cascade for Fast and Accurate Object Detection
Intel Labs China
('35423937', 'Jianguo Li', 'jianguo li')
('2470865', 'Yimin Zhang', 'yimin zhang')
19d3b02185ad36fb0b792f2a15a027c58ac91e8eIm2Text: Describing Images Using 1 Million
Captioned Photographs
Tamara L Berg
Stony Brook University
Stony Brook, NY 11794
('2004053', 'Vicente Ordonez', 'vicente ordonez')
('2170826', 'Girish Kulkarni', 'girish kulkarni')
{vordonezroma or tlberg}@cs.stonybrook.edu
193ec7bb21321fcf43bbe42233aed06dbdecbc5cUC Santa Barbara
UC Santa Barbara Previously Published Works
Title
Automatic 3D facial expression analysis in videos
Permalink
https://escholarship.org/uc/item/3g44f7k8
Authors
Chang, Y
Vieira, M
Turk, M
et al.
Publication Date
2005-01-01
Peer reviewed
eScholarship.org
Powered by the California Digital Library
University of California
19da9f3532c2e525bf92668198b8afec14f9efeaChallenge: Face verification across age progression using real-world data
Video and Image Modeling and Synthesis Lab, Department of Computer Science,
University of Delaware, Newark, DE. USA
1. Overview
Analysis of face images has been the topic of in-depth research with wide spread applications. Face recognition, verifi-
cation, age progression studies are some of the topics under study. In order to facilitate comparison and benchmarking of
different approaches, various datasets have been released. For the specific topics of face verification with age progression,
aging pattern extraction and age estimation, only two public datasets are currently available. The FGNET and MORPH
datasets contain a large number of subjects, but only a few images are available for each subject. We present a new dataset,
VADANA, which complements them by providing a large number of high quality digital images for each subject within and
across ages (depth vs. breadth). It provides the largest number of intra-personal pairs, essential for better training and testing.
The images also offer a natural range of pose, expression and illumination variation. We demonstrate the difference and
difficulty of VADANA by testing with state-of-the-art algorithms. Our findings from experiments show how VADANA can
aid further research on different types of verification algorithms.
The following sections provide details for the proposed challenge. The dataset details, the need and motivation for its
creation, comparison to existing benchmarks and the experiments performed on the same have been provided in the attached
paper.
2. Problem definition and challenges
There are various problems in facial image analysis, such as face detection (finding faces in a given image), face recogni-
tion (matching new image to a known dataset), face verification (determine if a given unknown pair of face images belong to
same person) and many others. In this work, we focus on face verification specifically in the case of age progression.
Problem definition: The input is a pair of facial images. At minimum, the region from the top of the forehead to the chin is covered; in general, the images also include the top of the head and part of the neck. The identity
of the person(s) in the images is not known a priori. The system must determine if the two images belong to the same person
(intra-personal pair or intra-pair) or to different persons (extra-personal pair or extra-pair). The two images are taken across a
time period such that the age gap between the pair may range from 0 to 9 years. Also, the pose, expression and illumination
is uncontrolled for both images.
Training setup: During the training phase, the system is provided with pairs of images (both intra-pairs and extra-pairs).
The age of the subject in a given image and thus the age gap between a pair is provided during training. A classifier is trained
using the features from the images.
Testing setup: During the testing phase, the input is a pair of images. The subjects in these pairs are different from those
in the training, i.e., the training and testing subjects are non-overlapping. There is no explicit age (or age-gap) information
provided at this stage. The system must classify the pair as either intra-personal or extra-personal.
Applications: The above problem definition closely resembles various real-world application scenarios such as passport
verification, security and surveillance matching in videos/image captured over a period of time, clustering of people in large
datasets where identities are unknown and many others.
Challenges: The challenges stem from various aspects of the above problem definition: (1) The subject identities are not
known, the system must therefore only rely on the information from the pair of images to determine the final classification.
(2) The images are taken at different times, ranging from a gap of few months to up to 9 years (as in the case of passport
verification). The effects due to aging thus contribute to shape and appearance changes even for an intra-pair (same person).
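The training and testing protocol above can be sketched in a few lines. The code below is an illustrative toy, not the benchmark's method: the feature vectors are made up, and a naive Euclidean-distance threshold stands in for whatever learned classifier a real verification system would use on the training pairs.

```python
# Toy sketch of pair verification: classify an image pair as
# intra-personal (same person) or extra-personal by thresholding a
# distance between feature vectors. All data here is illustrative.
import math

def distance(x, y):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def fit_threshold(pairs, labels):
    """Midpoint between mean intra-pair and mean extra-pair distance."""
    intra = [distance(x, y) for (x, y), l in zip(pairs, labels) if l == 1]
    extra = [distance(x, y) for (x, y), l in zip(pairs, labels) if l == 0]
    return (sum(intra) / len(intra) + sum(extra) / len(extra)) / 2

def verify(pair, threshold):
    """1 = intra-personal pair, 0 = extra-personal pair."""
    return 1 if distance(*pair) < threshold else 0

# Hypothetical training pairs (feature vectors) with labels.
train_pairs = [([0.0, 0.0], [0.1, 0.0]), ([5.0, 5.0], [5.0, 5.1]),
               ([0.0, 0.0], [5.0, 5.0]), ([1.0, 0.0], [6.0, 6.0])]
train_labels = [1, 1, 0, 0]
threshold = fit_threshold(train_pairs, train_labels)
```

At test time, `verify` is applied to pairs of unseen subjects, mirroring the non-overlapping train/test split described above; aging, pose, and illumination changes make the intra-pair distances larger and the problem harder than this toy suggests.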
('1692539', 'Gowri Somanath', 'gowri somanath')
('1708413', 'Chandra Kambhamettu', 'chandra kambhamettu')
somanath,chandra@cis.udel.edu
19868a469dc25ee0db00947e06c804b88ea94fd0SP-SVM: Large Margin Classifier for Data on Multiple Manifolds
Purdue University, West Lafayette, IN. 47907, USA
College of Information and Control Engineering, China University of Petroleum, Qingdao 266580, China
Santa Clara University, Santa Clara, CA. 95053, USA
School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN. 47907, USA
('39274045', 'Bin Shen', 'bin shen')
('1678435', 'Bao-Di Liu', 'bao-di liu')
('34913796', 'Qifan Wang', 'qifan wang')
('3047254', 'Yi Fang', 'yi fang')
('1741931', 'Jan P. Allebach', 'jan p. allebach')
bshen@purdue.edu, thu.liubaodi@gmail.com, wang868@purdue.edu, yfang@scu.edu, allebach@ecn.purdue.edu
192235f5a9e4c9d6a28ec0d333e36f294b32f764Reconfiguring the Imaging Pipeline for Computer Vision
Cornell University
Carnegie Mellon University
Cornell University
('2328520', 'Mark Buckler', 'mark buckler')
('39131476', 'Suren Jayasuriya', 'suren jayasuriya')
('2138184', 'Adrian Sampson', 'adrian sampson')
19878141fbb3117d411599b1a74a44fc3daf296dEye-State Action Unit Detection by Gabor Wavelets
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA
University of Pittsburgh, Pittsburgh, PA
http://www.cs.cmu.edu/face
('40383812', 'Ying-li Tian', 'ying-li tian')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
Email: fyltian, tkg@cs.cmu.edu
jeffcohn@pitt.edu
19f076998ba757602c8fec04ce6a4ca674de0e25Turk J Elec Eng & Comp Sci
(2016) 24: 219–233
© TÜBİTAK
doi:10.3906/elk-1304-139
Fast and de-noise support vector machine training method based on fuzzy
clustering method for large real world datasets
Islamic Azad University, Gonabad, Iran
Received: 15.04.2013 • Accepted/Published Online: 30.10.2013 • Final Version: 01.01.2016
('9437627', 'Omid Naghash ALMASI', 'omid naghash almasi')
('4945660', 'Modjtaba ROUHANI', 'modjtaba rouhani')
19eb486dcfa1963c6404a9f146c378fc7ae3a1df
4c6daffd092d02574efbf746d086e6dc0d3b1e91
4cb8a691a15e050756640c0a35880cdd418e2b87Class-based matching of object parts
Department of Computer Science and Applied Mathematics
Weizmann Institute of Science
Rehovot, ISRAEL 76100
('1938475', 'Evgeniy Bart', 'evgeniy bart')
('1743045', 'Shimon Ullman', 'shimon ullman')
{evgeniy.bart, shimon.ullman}@weizmann.ac.il
4cc681239c8fda3fb04ba7ac6a1b9d85b68af31dMining Spatial and Spatio-Temporal ROIs for Action Recognition
Jiang Wang2 Alan Yuille1,3
University of California, Los Angeles
Baidu Research, USA 3John Hopkins University
('5964529', 'Xiaochen Lian', 'xiaochen lian'){lianxiaochen@,yuille@stat.}ucla.edu
{chenzhuoyuan,yangyi05,wangjiang03}@baidu.com
4c6e1840451e1f86af3ef1cb551259cb259493baHAND POSTURE DATASET CREATION FOR GESTURE
RECOGNITION
Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria
Campus Universitario de Tafira, 35017 Gran Canaria, Spain
Departamento de E.I.O. y Computacion
38271 Universidad de La Laguna, Spain
Keywords:
Image understanding, Gesture recognition, Hand dataset.
('1706692', 'Luis Anton-Canalis', 'luis anton-canalis')
('1797958', 'Elena Sanchez-Nielsen', 'elena sanchez-nielsen')
lanton@iusiani.ulpgc.es
enielsen@ull.es
4c87aafa779747828054cffee3125fcea332364dView-Constrained Latent Variable Model
for Multi-view Facial Expression Classification
Imperial College London, UK
EEMCS, University of Twente, The Netherlands
('2308430', 'Stefanos Eleftheriadis', 'stefanos eleftheriadis')
('1729713', 'Ognjen Rudovic', 'ognjen rudovic')
('1694605', 'Maja Pantic', 'maja pantic')
{s.eleftheriadis,o.rudovic,m.pantic}@imperial.ac.uk
4c29e1f31660ba33e46d7e4ffdebb9b8c6bd5adc
4cdae53cebaeeebc3d07cf6cd36fecb2946f3e56Photorealistic Facial Texture Inference Using Deep Neural Networks
*Pinscreen
University of Southern California
USC Institute for Creative Technologies
Figure 1: We present an inference framework based on deep neural networks for synthesizing photorealistic facial texture
along with 3D geometry from a single unconstrained image. We can successfully digitize historic figures that are no longer
available for scanning and produce high-fidelity facial texture maps with mesoscopic skin details.
('2059597', 'Shunsuke Saito', 'shunsuke saito')
('1792471', 'Lingyu Wei', 'lingyu wei')
('1808579', 'Liwen Hu', 'liwen hu')
('1897417', 'Koki Nagano', 'koki nagano')
('1706574', 'Hao Li', 'hao li')
4c8e5fc0877d066516bb63e6c31eb1b8b5f967ebMODI, KOVASHKA: CONFIDENCE AND DIVERSITY FOR ACTIVE SELECTION
Confidence and Diversity for Active
Selection of Feedback in Image Retrieval
Department of Computer Science
University of Pittsburgh
Pittsburgh, PA, USA
('1770205', 'Adriana Kovashka', 'adriana kovashka')bhavin_modi@hotmail.com
kovashka@cs.pitt.edu
4c8ef4f98c6c8d340b011cfa0bb65a9377107970Sentiment Recognition in Egocentric
Photostreams
Intelligent Systems Group, University of Groningen, The Netherlands
University of Barcelona, Spain
3 Computer Vision Center, Barcelona, Spain
('1742086', 'Nicola Strisciuglio', 'nicola strisciuglio')
('1730388', 'Nicolai Petkov', 'nicolai petkov')
('1724155', 'Petia Radeva', 'petia radeva')
e.talavera.martinez@rug.nl,
4c822785c29ceaf67a0de9c699716c94fefbd37dA Key Volume Mining Deep Framework for Action Recognition
2 SenseTime Group Limited
Tsinghua University
Shenzhen Institutes of Advanced Technology, CAS, China
Figure 1. Key volumes detected by our key volume mining deep framework. A volume is a spatial-temporal video clip. The top row shows that key volumes are very sparse across the whole video, and the second row shows that key volumes may come from different modalities (different motion patterns here). Note that frames are sampled at a fixed time interval.
('2121584', 'Wangjiang Zhu', 'wangjiang zhu')
('1748341', 'Jie Hu', 'jie hu')
('1687740', 'Gang Sun', 'gang sun')
('2032273', 'Xudong Cao', 'xudong cao')
('40612284', 'Yu Qiao', 'yu qiao')
4c815f367213cc0fb8c61773cd04a5ca8be2c959
ICASSP 2010
4ccf64fc1c9ca71d6aefdf912caf8fea048fb211Light-weight Head Pose Invariant Gaze Tracking
University of Maryland
NVIDIA
NVIDIA
('48467498', 'Rajeev Ranjan', 'rajeev ranjan')
('24817039', 'Shalini De Mello', 'shalini de mello')
('1690538', 'Jan Kautz', 'jan kautz')
rranjan1@umiacs.umd.edu
shalinig@nvidia.com
jkautz@nvidia.com
4cdb6144d56098b819076a8572a664a2c2d27f72Face Synthesis for Eyeglass-Robust Face
Recognition
CBSRandNLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China
University of Chinese Academy of Sciences, Beijing, China
('46220439', 'Jianzhu Guo', 'jianzhu guo')
('8362374', 'Xiangyu Zhu', 'xiangyu zhu')
('1718623', 'Zhen Lei', 'zhen lei')
('34679741', 'Stan Z. Li', 'stan z. li')
{jianzhu.guo,xiangyu.zhu,zlei,szli}@nlpr.ia.ac.cn
4c4e49033737467e28aa2bb32f6c21000deda2efImproving Landmark Localization with Semi-Supervised Learning
MILA-University of Montreal, 2NVIDIA, 3Ecole Polytechnique of Montreal, 4CIFAR, 5Facebook AI Research
('25056820', 'Sina Honari', 'sina honari')
('2824500', 'Pavlo Molchanov', 'pavlo molchanov')
('2342481', 'Stephen Tyree', 'stephen tyree')
('1707326', 'Pascal Vincent', 'pascal vincent')
('1690538', 'Jan Kautz', 'jan kautz')
1{honaris, vincentp}@iro.umontreal.ca,
2{pmolchanov, styree, jkautz}@nvidia.com, 3christopher.pal@polymtl.ca
4c6233765b5f83333f6c675d3389bbbf503805e3Real-time High Performance Deformable Model for Face Detection in the Wild
Center for Biometrics and Security Research & National Laboratory of Pattern Recognition
Institute of Automation, Chinese Academy of Sciences, China
('1721677', 'Junjie Yan', 'junjie yan')
('2520795', 'Xucong Zhang', 'xucong zhang')
('1718623', 'Zhen Lei', 'zhen lei')
('34679741', 'Stan Z. Li', 'stan z. li')
{jjyan,xczhang,zlei,szli}@nlpr.ia.ac.cn
4c078c2919c7bdc26ca2238fa1a79e0331898b56Unconstrained Facial Landmark Localization with Backbone-Branches
Fully-Convolutional Networks
Sun Yat-Sen University
Guangzhou Higher Education Mega Center, Guangzhou 510006, PR China
('1742286', 'Zhujin Liang', 'zhujin liang')
('2442939', 'Shengyong Ding', 'shengyong ding')
('1737218', 'Liang Lin', 'liang lin')
alfredtofu@gmail.com, marcding@163.com, linliang@ieee.org
4cfa8755fe23a8a0b19909fa4dec54ce6c1bd2f7EFFICIENT LIKELIHOOD BAYESIAN CONSTRAINED LOCAL MODEL
The Hong Kong Polytechnic University
Hong Kong Applied Science and Technology Research Institute Company Limited, Hong Kong, China
('2116302', 'Hailiang Li', 'hailiang li')
('1703078', 'Kin-Man Lam', 'kin-man lam')
('3280193', 'Man-Yau Chiu', 'man-yau chiu')
('2233216', 'Kangheng Wu', 'kangheng wu')
('1982263', 'Zhibin Lei', 'zhibin lei')
harley.li@connect.polyu.hk,{harleyli, edmondchiu, khwu, lei}@astri.org, enkmlam@polyu.edu.hk
4cac9eda716a0addb73bd7ffea2a5fb0e6ec2367Representing Videos based on Scene Layouts
for Recognizing Agent-in-Place Actions
University of Maryland, College Park
2Comcast Applied AI Research
3DeepMind
4Adobe Research
('2180291', 'Ruichi Yu', 'ruichi yu')
('3254319', 'Hongcheng Wang', 'hongcheng wang')
('7674316', 'Jingxiao Zheng', 'jingxiao zheng')
{richyu, jxzheng, lsd}@umiacs.umd.edu
anglili@google.com morariu@adobe.com
4c4236b62302957052f1bbfbd34dbf71ac1650ecSEMI-SUPERVISED FACE RECOGNITION WITH LDA SELF-TRAINING
Multimedia Communications Department, EURECOM
2229 Route des Crêtes , BP 193, F-06560 Sophia-Antipolis Cedex, France
('37560971', 'Xuran Zhao', 'xuran zhao')
('1709849', 'Jean-Luc Dugelay', 'jean-luc dugelay')
{zhaox, evans, dugelay}@eurecom.fr
4cd0da974af9356027a31b8485a34a24b57b8b90Binarized Convolutional Landmark Localizers for Human Pose Estimation and
Face Alignment with Limited Resources
Computer Vision Laboratory, The University of Nottingham
Nottingham, United Kingdom
('3458121', 'Adrian Bulat', 'adrian bulat')
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
{adrian.bulat, yorgos.tzimiropoulos}@nottingham.ac.uk
4c170a0dcc8de75587dae21ca508dab2f9343974FaceTracer: A Search Engine for
Large Collections of Images with Faces
Columbia University
('40631426', 'Neeraj Kumar', 'neeraj kumar')
4c81c76f799c48c33bb63b9369d013f51eaf5adaMulti-modal Score Fusion and Decision Trees for Explainable Automatic Job
Candidate Screening from Video CVs
Namık Kemal University, Tekirdağ, Turkey
Boğaziçi University, Istanbul, Turkey
('38007788', 'Heysem Kaya', 'heysem kaya')
('1764521', 'Albert Ali Salah', 'albert ali salah')
hkaya@nku.edu.tr, furkan.gurpinar@boun.edu.tr,salah@boun.edu.tr
4c1ce6bced30f5114f135cacf1a37b69bb709ea1Gaze Direction Estimation by Component Separation for
Recognition of Eye Accessing Cues
Ruxandra Vrânceanu
Image Processing and Analysis Laboratory
University Politehnica of Bucharest, Romania, Splaiul Independenței
('2760434', 'Corneliu Florea', 'corneliu florea')
('2143956', 'Laura Florea', 'laura florea')
('2905899', 'Constantin Vertan', 'constantin vertan')
rvranceanu@imag.pub.ro
corneliu.florea@upb.ro
laura.florea@upb.ro
constantin.vertan@upb.ro
4c5b38ac5d60ab0272145a5a4d50872c7b89fe1bFacial Expression Recognition with Emotion-Based
Feature Fusion
The Hong Kong Polytechnic University, Hong Kong, SAR, 2University of Technology Sydney, Australia
('13671251', 'Cigdem Turan', 'cigdem turan')
('1703078', 'Kin-Man Lam', 'kin-man lam')
('1706670', 'Xiangjian He', 'xiangjian he')
E-mail: cigdem.turan@connect.polyu.hk, enkmlam@polyu.edu.hk, Xiangjian.He@uts.edu.au
4c523db33c56759255b2c58c024eb6112542014ePatch-based Within-Object Classification∗
University College London
MRC Laboratory For Molecular Cell Biology, University College London
('1904148', 'Jania Aghajanian', 'jania aghajanian')
('1734784', 'Jonathan Warrell', 'jonathan warrell')
('1695363', 'Peng Li', 'peng li')
('32948556', 'Jennifer L. Rohn', 'jennifer l. rohn')
('31046411', 'Buzz Baum', 'buzz baum')
1{j.aghajanian, j.warrell, s.prince, p.li}@cs.ucl.ac.uk 2{j.rohn, b.baum}@ucl.ac.uk
261c3e30bae8b8bdc83541ffa9331b52fcf015e6PATEL, SMITH: SFS+3DMM FOR FACE RECOGNITION
Shape-from-shading driven 3D Morphable
Models for Illumination Insensitive Face
Recognition
William A.P. Smith
Department of Computer Science,
The University of York
('37519514', 'Ankur Patel', 'ankur patel')ankur@cs.york.ac.uk
wsmith@cs.york.ac.uk
26f03693c50eb50a42c9117f107af488865f3dc1Eigenhill vs. Eigenface and Eigenedge
Istanbul Technical University
Department of Computer Science
('1858702', 'Alper Yilmaz', 'alper yilmaz')
('1766445', 'Muhittin Gökmen', 'muhittin gökmen')
yilmaz@cs.ucf.edu
gokmen@cs.itu.edu.tr
2661f38aaa0ceb424c70a6258f7695c28b97238aIEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 42, NO. 4, AUGUST 2012
1027
Multilayer Architectures for Facial
Action Unit Recognition
('4072965', 'Tingfan Wu', 'tingfan wu')
('2593137', 'Nicholas J. Butko', 'nicholas j. butko')
('12114845', 'Paul Ruvolo', 'paul ruvolo')
('1775637', 'Jacob Whitehill', 'jacob whitehill')
('1741200', 'Javier R. Movellan', 'javier r. movellan')
2609079d682998da2bc4315b55a29bafe4df414eON RANK AGGREGATION FOR FACE RECOGNITION FROM VIDEOS
IIIT-Delhi, India
('2559473', 'Himanshu S. Bhatt', 'himanshu s. bhatt')
('39129417', 'Richa Singh', 'richa singh')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
264a84f4d27cd4bca94270620907cffcb889075cDeep Motion Features for Visual Tracking
Computer Vision Laboratory, Linköping University, Sweden
('8161428', 'Susanna Gladh', 'susanna gladh')
('2488938', 'Martin Danelljan', 'martin danelljan')
('2358803', 'Fahad Shahbaz Khan', 'fahad shahbaz khan')
('2228323', 'Michael Felsberg', 'michael felsberg')
26d407b911d1234e8e3601e586b49316f0818c95[POSTER] Feasibility of Corneal Imaging for Handheld Augmented Reality
Coburg University
('37101400', 'Daniel Schneider', 'daniel schneider')
('2708269', 'Jens Grubert', 'jens grubert')
26a44feb7a64db7986473ca801c251aa88748477Journal of Machine Learning Research 1 ()
Submitted ; Published
Unsupervised Learning of Gaussian Mixture Models with a
Uniform Background Component
Department of Statistics
Florida State University
Tallahassee, FL 32306-4330, USA
Department of Statistics
Florida State University
Tallahassee, FL 32306-4330, USA
Editor:
('2761870', 'Sida Liu', 'sida liu')
('2455529', 'Adrian Barbu', 'adrian barbu')
sida.liu@stat.fsu.edu
abarbu@stat.fsu.edu
264f7ab36ff2e23a1514577a6404229d7fe1242bFacial Expression Recognition by De-expression Residue Learning
Department of Computer Science
State University of New York at Binghamton, USA
('2671017', 'Huiyuan Yang', 'huiyuan yang')
('8072251', 'Lijun Yin', 'lijun yin')
{hyang51, uciftci}@binghamton.edu; lijun@cs.binghamton.edu
26a72e9dd444d2861298d9df9df9f7d147186bcdDOI 10.1007/s00138-016-0768-4
ORIGINAL PAPER
Collecting and annotating the large continuous action dataset
Received: 18 June 2015 / Revised: 18 April 2016 / Accepted: 22 April 2016 / Published online: 21 May 2016
© The Author(s) 2016. This article is published with open access at Springerlink.com
('2089428', 'Daniel Paul Barrett', 'daniel paul barrett')
266766818dbc5a4ca1161ae2bc14c9e269ddc490Article
Boosting a Low-Cost Smart Home Environment with
Usage and Access Control Rules
Institute of Information Science and Technologies of CNR (CNR-ISTI)-Italy, 56124 Pisa, Italy
Received: 27 April 2018; Accepted: 31 May 2018; Published: 8 June 2018
('1773887', 'Paolo Barsocchi', 'paolo barsocchi')
('38567341', 'Antonello Calabrò', 'antonello calabrò')
('1693901', 'Erina Ferro', 'erina ferro')
('2209975', 'Claudio Gennaro', 'claudio gennaro')
('1709783', 'Eda Marchetti', 'eda marchetti')
('2508924', 'Claudio Vairo', 'claudio vairo')
antonello.calabro@isti.cnr.it (A.C.); erina.ferro@isti.cnr.it (E.F.); claudio.gennaro@isti.cnr.it (C.G.);
eda.marchetti@isti.cnr.it (E.M.); claudio.vairo@isti.cnr.it (C.V.)
* Correspondence: paolo.barsocchi@isti.cnr.it; Tel.: +39-050-315-2965
265af79627a3d7ccf64e9fe51c10e5268fee2aae
A Mixture of Transformed Hidden Markov
Models for Elastic Motion Estimation
('1932096', 'Huijun Di', 'huijun di')
('3265275', 'Linmi Tao', 'linmi tao')
('1797002', 'Guangyou Xu', 'guangyou xu')
267c6e8af71bab68547d17966adfaab3b4711e6b
26af867977f90342c9648ccf7e30f94470d40a73IJIRST –International Journal for Innovative Research in Science & Technology| Volume 3 | Issue 04 | September 2016
ISSN (online): 2349-6010
Joint Gender and Face Recognition System for
RGB-D Images with Texture and DCT Features
PG Student
Department of Computer Science & Information Systems
Federal Institute of Science and Technology, Mookkannoor
PO, Angamaly, Ernakulam, Kerala 683577, India
Prasad J. C.
Associate Professor
Department of Computer Science & Engineering
Federal Institute of Science and Technology, Mookkannoor
PO, Angamaly, Ernakulam, Kerala 683577, India
26a89701f4d41806ce8dbc8ca00d901b68442d45
26c884829897b3035702800937d4d15fef7010e4IEICE TRANS. INF. & SYST., VOL.Exx–??, NO.xx XXXX 200x
PAPER
Facial Expression Recognition by Supervised Independent
Component Analysis using MAP Estimation
SUMMARY Permutation ambiguity of the classical Inde-
pendent Component Analysis (ICA) may cause problems in fea-
ture extraction for pattern classification. Especially when only a
small subset of components is derived from data, these compo-
nents may not be most distinctive for classification, because ICA
is an unsupervised method. We include a selective prior for de-
mixing coefficients into the classical ICA to alleviate the problem.
Since the prior is constructed upon the classification information
from the training data, we refer to the proposed ICA model with
a selective prior as a supervised ICA (sICA). We formulated the
learning rule for sICA by taking a Maximum a Posteriori (MAP)
scheme and further derived a fixed point algorithm for learning
the de-mixing matrix. We investigate the performance of sICA
in facial expression recognition from the aspects of both correct
rate of recognition and robustness even with few independent
components.
key words: facial expression recognition, supervised independent component analysis, fixed-point algorithm
1. Introduction
Various methods have been proposed for auto-
matic recognition of facial expression in the past several
decades, which could be roughly classified into three
categories: 1) Appearance-based methods, represented
by eigenfaces, fisherfaces and other methods using
machine-learning techniques, such as neural networks
and Support Vector Machine (SVM); 2) Model-based
methods, including graph matching, optical-flow-based
method and others; and 3) Hybrids of appearance based
and model-based methods, such as Active Appearance
Model (AAM). Detailed review of these methods could
be found in two surveys in Refs.[1][2]. Appearance-
based methods are superior to model-based methods
in system complexity and performance reproducibil-
ity. Further, appearance-based methods allow efficient
characterization of a low-dimensional subspace within
the overall space of raw image measurements, which
deepens our understanding of facial expressions from
their manifolds in subspace and provides a statistical
framework for the theoretical analysis of system
performance. ICA, a powerful technique for blind source
separation, was applied to facial expression recognition
by Bartlett et al. for feature extraction.[3] They argued
that facial expression consists of those features standing
for minor, non-rigid, local variations of faces [3]. Structural
information for those local variations is related
to higher-order statistics, which could be well extracted
by ICA [5]. The efficiency of ICA in extracting features
for facial expression recognition has been verified by
many previous works [4][6][7].
Manuscript received January 1, 200x.
Manuscript revised January 1, 200x.
Final manuscript received January 1, 200x.
The author is with the School of Information Science,
Japan Advanced Institute of Science and Technology.
The major purpose of the present work is to im-
prove the performance of ICA in facial expression recog-
nition. In the classical ICA, the derived independent
components are in random order, i.e., permutation am-
biguity, where the original order provides no informa-
tion on the significance of components in discrimina-
tion.[8] As a result, the derived independent compo-
nents may not be most distinctive for the classification
task, especially when only a small subset of compo-
nents is derived. Feature selection must be performed
along with the feature extraction. The selection can
be applied before, during or after ICA. In Ref.[4], Best
Individual Feature (BIF) selection was adopted, where
features were chosen according to some defined criteria
individually. Methods by means of Sequential Forward
Selection (SFS) and Sequential Floating Forward Se-
lection (SFFS) were also proposed. [9] Since the selec-
tion is performed after ICA, the features are limited to
those chosen from the set of independent components
obtained. To create a candidate set with enough repre-
sentative features for discrimination, a large number of
independent components should be learned, which may
be expensive in computational cost. It is meaningful
to search for a way to affect the selection of features
before or during ICA. GEMC [10] makes a selection
before ICA by heuristically replacing PCA with a dis-
criminant analysis as the pre-processing to ICA, which
still lacks a mathematical explanation. ICA in a local
facial residue space is also proposed for face recognition,
which can be regarded as using the pre-specified residue
space to limit the selection of independent components
before applying ICA. [11]
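As context for the permutation-ambiguity and feature-selection discussion above, here is a minimal sketch of the classical fixed-point ICA algorithm (FastICA with a tanh nonlinearity) that sICA builds on. The sICA selective MAP prior itself is not reproduced; the function name and implementation details are illustrative assumptions, not the paper's code.

```python
import numpy as np

def fastica_components(X, n_components, n_iter=200, seed=0):
    """Classical fixed-point ICA (FastICA with a tanh nonlinearity).

    X is an (n_samples, n_features) data matrix. Returns a de-mixing
    matrix W of shape (n_components, n_features). Note the permutation
    ambiguity discussed in the text: the row order of W carries no
    information about how discriminative each component is.
    """
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                       # center
    # Whitening via SVD: Z has identity sample covariance.
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    K = (Vt[:n_components] / s[:n_components, None]) * np.sqrt(len(X))
    Z = Xc @ K.T
    W = rng.standard_normal((n_components, n_components))
    for _ in range(n_iter):
        WZ = Z @ W.T
        g, g_prime = np.tanh(WZ), 1.0 - np.tanh(WZ) ** 2
        # Fixed-point update: E[g(Wz) z^T] - diag(E[g'(Wz)]) W
        W_new = (g.T @ Z) / len(Z) - np.diag(g_prime.mean(axis=0)) @ W
        u, _, vt = np.linalg.svd(W_new)           # symmetric decorrelation
        W = u @ vt
    return W @ K                                  # de-mixing in input space
```

Scoring the resulting components by class separability and keeping the best ones corresponds to the post-hoc BIF/SFS selection criticized above; sICA instead biases the learning of W itself through the MAP prior.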
We propose an approach to implement the feature
selection during the learning of independent compo-
nents. A constraint ICA has been proposed for the
analysis of EEG signals, where all components should
be sparse and close to a supplied reference signal by
including a correlation term. [12] In our case, we try to
design a method to let those components with higher
degrees of class separation emerge easier than others.
The classical ICA in Ref.[13] was shown to be deriv-
able under the scheme of Maximum Log-Likelihood
('1753878', 'Fan Chen', 'fan chen')
('1791753', 'Kazunori Kotani', 'kazunori kotani')
266ed43dcea2e7db9f968b164ca08897539ca8ddBeyond Principal Components: Deep Boltzmann Machines for Face Modeling
Concordia University, Computer Science and Software Engineering, Montr eal, Qu ebec, Canada
Carnegie Mellon University, CyLab Biometrics Center, Pittsburgh, PA, USA
('1876581', 'Chi Nhan Duong', 'chi nhan duong')
('1769788', 'Khoa Luu', 'khoa luu')
('2687827', 'Kha Gia Quach', 'kha gia quach')
('1699922', 'Tien D. Bui', 'tien d. bui')
1 {c duon, k q, bui}@encs.concordia.ca, 2 kluu@andrew.cmu.edu
26ad6ceb07a1dc265d405e47a36570cb69b2ace6RESEARCH AND EXPLORATORY
DEVELOPMENT DEPARTMENT
REDD-2015-384
Neural Correlates of Cross-Cultural Adaptation:
How to Improve the Training and Selection for
Military Personnel Involved in Cross-Cultural Interactions
Operating Under Grant #N00014-12-1-0629/113056
September, 2015
Prepared for:
Office of Naval Research
('20444535', 'Jonathon Kopecky', 'jonathon kopecky')
('29125372', 'Alice Jackson', 'alice jackson')
2642810e6c74d900f653f9a800c0e6a14ca2e1c7Projection Bank: From High-dimensional Data to Medium-length Binary Codes
Department of Computer Science and Digital Technologies
Northumbria University, Newcastle upon Tyne, NE1 8ST, UK
('40017778', 'Li Liu', 'li liu')
('9452165', 'Mengyang Yu', 'mengyang yu')
('40799321', 'Ling Shao', 'ling shao')
li2.liu@northumbria.ac.uk, m.y.yu@ieee.org, ling.shao@ieee.org
26437fb289cd7caeb3834361f0cc933a022677662012 International Conference on Management and Education Innovation
IPEDR vol.37 (2012) © (2012) IACSIT Press, Singapore
Innovative Assessment Technologies: Comparing ‘Face-to-Face’ and
Game-Based Development of Thinking Skills in Classroom Settings
University of Szeged, 2 Eötvös Loránd University
('39201903', 'Gyöngyvér Molnár', 'gyöngyvér molnár')
('32197908', 'András Lőrincz', 'andrás lőrincz')
26e570049aaedcfa420fc8c7b761bc70a195657cJ Sign Process Syst
DOI 10.1007/s11265-017-1276-0
Hybrid Facial Regions Extraction for Micro-expression
Recognition System
Received: 2 February 2016 / Revised: 20 October 2016 / Accepted: 10 August 2017
© Springer Science+Business Media, LLC 2017
('39888137', 'Sze-Teng Liong', 'sze-teng liong')
('2339975', 'John See', 'john see')
('37809010', 'Su-Wei Tan', 'su-wei tan')
2654ef92491cebeef0997fd4b599ac903e48d07aFacial Expression Recognition from Near-Infrared Video Sequences
1. Machine Vision Group, Infotech Oulu and Department of Electrical and Information
Engineering,
P. O. Box 4500 FI-90014 University of Oulu, Finland
Institute of Automation, Chinese Academy of Sciences
P. O. Box 95 Zhongguancun Donglu, Beijing 100080, China
('2021982', 'Matti Taini', 'matti taini')
('1757287', 'Guoying Zhao', 'guoying zhao')
('34679741', 'Stan Z. Li', 'stan z. li')
('1714724', 'Matti Pietikäinen', 'matti pietikäinen')
E-mail: {matti.taini,gyzhao,mkp}@ee.oulu.fi
E-mail: szli@cbsr.ia.ac.cn
2679e4f84c5e773cae31cef158eb358af475e22fAdaptive Deep Metric Learning for Identity-Aware Facial Expression Recognition
Carnegie Mellon University, Pittsburgh, PA
Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Science
The Hong Kong Polytechnic University, Hong Kong, China
University of Chinese Academy of Sciences, Beijing, China
('1790207', 'Xiaofeng Liu', 'xiaofeng liu')
('1748883', 'Jane You', 'jane you')
('37774211', 'Ping Jia', 'ping jia')
liuxiaofeng@cmu.edu, kumar@ece.cmu.edu, csyjia@comp.polyu.edu.hk, jiap@ciomp.ac.cn
21ef129c063bad970b309a24a6a18cbcdfb3aff5Individual and Inter-related Action Unit Detection in Videos for Affect Recognition
Thesis No. 6837 (2016), École Polytechnique Fédérale de Lausanne, Switzerland
For the degree of Docteur ès Sciences, accepted on the proposal of the jury:
Dr J.-M. Vesin (jury president); Prof. J.-Ph. Thiran and Prof. D. Sander (thesis directors);
Prof. M. F. Valstar, Prof. H. K. Ekenel, and Dr S. Marcel (examiners)
Presented on 19 February 2016 at the Faculty of Engineering, Signal Processing Laboratory 5 (LTS5),
Doctoral Program in Electrical Engineering, 2016
By Anıl Yüce
218b2c5c9d011eb4432be4728b54e39f366354c1Enhancing Training Collections for Image
Annotation: An Instance-Weighted Mixture
Modeling Approach
('1793498', 'Neela Sawant', 'neela sawant')
('40116905', 'Jia Li', 'jia li')
217a21d60bb777d15cd9328970cab563d70b5d23Hidden Factor Analysis for Age Invariant Face Recognition
1Shenzhen Key Lab of Computer Vision and Pattern Recognition
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
Toyota Technological Institute at Chicago
The Chinese University of Hong Kong
4Media Lab, Huawei Technologies Co. Ltd., China
('2856494', 'Dihong Gong', 'dihong gong')
('1911510', 'Zhifeng Li', 'zhifeng li')
('1807606', 'Dahua Lin', 'dahua lin')
('7137861', 'Jianzhuang Liu', 'jianzhuang liu')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
dh.gong@siat.ac.cn
zhifeng.li@siat.ac.cn
dhlin@ttic.edu
liu.jianzhuang@huawei.com
xtang@ie.cuhk.edu.hk
21e828071249d25e2edaca0596e27dcd63237346
21a2f67b21905ff6e0afa762937427e92dc5aa0bHindawi
Computational Intelligence and Neuroscience
Volume 2017, Article ID 8710492, 13 pages
https://doi.org/10.1155/2017/8710492
Research Article
Extra Facial Landmark Localization via
Global Shape Reconstruction
School of Automation Engineering, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave
West Hi-Tech Zone, Chengdu 611731, China
Received 4 January 2017; Revised 26 March 2017; Accepted 4 April 2017; Published 23 April 2017
Academic Editor: Elio Masciari
Localizing facial landmarks is a popular topic in the field of face analysis. However, problems arising in practical applications, such
as handling pose variations and partial occlusions while maintaining moderate training-model size and computational efficiency,
still challenge current solutions. In this paper, we present a global shape reconstruction method for locating extra facial landmarks
beyond those used in the training phase. In the proposed method, the reduced configuration of facial landmarks
is first decomposed into corresponding sparse coefficients. Then explicit face shape correlations are exploited to regress between
sparse coefficients of different facial landmark configurations. Finally, extra facial landmarks are reconstructed by combining the
pretrained shape dictionary and the approximation of the sparse coefficients. By applying the proposed method, both the training
time and the model size of a class of methods that stack local evidences as an appearance descriptor can be scaled down with
only a minor compromise in detection accuracy. Extensive experiments prove that the proposed method is feasible and is able to
reconstruct extra facial landmarks even under very asymmetrical face poses.
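The pipeline in the abstract (decompose the reduced landmark configuration into coefficients over a pretrained shape dictionary, then recombine with the full dictionary to recover extra landmarks) can be sketched as follows. A ridge-regularized least-squares fit stands in for the paper's actual sparse coding and coefficient regression steps; the function name and dictionary setup are illustrative assumptions.

```python
import numpy as np

def reconstruct_extra_landmarks(D_full, observed_idx, observed_xy, lam=1e-3):
    """Reconstruct a full landmark set from a reduced configuration.

    D_full: (2L, K) shape dictionary over all L landmarks (x/y stacked).
    observed_idx: indices into the 2L coordinates that were detected.
    observed_xy: observed coordinate values at those indices.
    A ridge least-squares fit is used here in place of true sparse
    coding (illustrative only).
    """
    D_obs = D_full[observed_idx]          # dictionary rows for observed coords
    K = D_full.shape[1]
    # Coefficients c minimizing ||D_obs c - observed_xy||^2 + lam ||c||^2
    c = np.linalg.solve(D_obs.T @ D_obs + lam * np.eye(K),
                        D_obs.T @ observed_xy)
    return D_full @ c                     # full shape, including extra landmarks
```

When the observed subset constrains the dictionary coefficients well, the unobserved (extra) coordinates are recovered as the remaining rows of the reconstruction.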
1. Introduction
Facial landmark localization is the first and a crucial step for
many face analysis tasks such as face recognition [1], cartoon
facial animation [2, 3], and facial expression understanding
[4, 5]. Most facial landmarks are located along the dominant
contours around facial features like eyebrows, nose, and
mouth. Therefore facial landmarks on a face image jointly
describe a face shape which lies in the shape space [6].
For the last ten years remarkable progress has been
made in the field of facial
landmark localization [7, 8].
Among a large number of proposed methods, the most
popular solution is to treat the facial landmark localiza-
tion problem as a holistic shape regression process and
to learn a general regression model from labeled training
images [9, 10]. Following this shape regression idea, various
methods try to model a regression function that directly
maps the appearance of images to landmark coordinates
without the need of computing a parametric model. All
facial landmarks in a shape are iterated collectively and the
relationship between facial landmarks is flexibly embedded
into the iteration process. On the other hand, to generate
enough description of face images, multiscale local feature
descriptors are typically adopted in most shape regression
methods. For example, cascaded pose regression (CPR) [7]
was first proposed to estimate general object poses with pose-
indexed features and then extended for the problem of face
alignment in explicit shape regression (ESR) [11] method.
ESR combines two-level boosting regression, shape-indexed
features, and correlation-based feature selection. As another
example, supervised descent method (SDM) [12] and its
extensions also have shown an impressive performance in the
field of facial landmark localization. These kinds of methods
stack shape-indexed high dimension feature descriptors and
train regression functions from a supervised gradient descent
view.
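The shape-regression idea shared by CPR, ESR, and SDM, repeatedly mapping shape-indexed features to shape increments with learned linear regressors, can be sketched under simplifying assumptions: identity features replace the SIFT/HOG shape-indexed descriptors used in practice, and all names are illustrative.

```python
import numpy as np

def train_cascade(shapes_true, shapes_init, feature_fn, n_stages=4, lam=1e-2):
    """Train an SDM-style cascade of linear regressors.

    Each stage fits a ridge regression from shape-indexed features to
    the remaining shape increment, then applies it to the training set
    before fitting the next stage.
    """
    stages, S = [], shapes_init.copy()
    for _ in range(n_stages):
        F = feature_fn(S)                           # (n, d) features at current shapes
        F1 = np.c_[F, np.ones(len(F))]              # append bias term
        delta = shapes_true - S                     # remaining shape increments
        R = np.linalg.solve(F1.T @ F1 + lam * np.eye(F1.shape[1]),
                            F1.T @ delta)
        S = S + F1 @ R                              # descend toward true shapes
        stages.append(R)
    return stages

def apply_cascade(stages, shape_init, feature_fn):
    """Run the learned descent stages on a single initial shape."""
    s = shape_init.copy()
    for R in stages:
        F1 = np.c_[feature_fn(s[None]), np.ones(1)]
        s = s + (F1 @ R)[0]
    return s
```

The relationship between landmarks is never modeled explicitly; as the text notes, it is embedded implicitly in the iteration, since every stage regresses the whole shape vector at once.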
However, facial landmark localization still meets great
challenges in practical applications, such as handling pose
variations and partial occlusion while maintaining moderate
training model size and computational efficiency. In SDM
and its improved methods, the dimension of regression
('9684590', 'Shuqiu Tan', 'shuqiu tan')
('2915473', 'Dongyi Chen', 'dongyi chen')
('9486108', 'Chenggang Guo', 'chenggang guo')
('2122143', 'Zhiqi Huang', 'zhiqi huang')
('9684590', 'Shuqiu Tan', 'shuqiu tan')
Correspondence should be addressed to Dongyi Chen; dychen@uestc.edu.cn
2162654cb02bcd10794ae7e7d610c011ce0fb51b4697
978-1-4799-5751-4/14/$31.00 ©2014 IEEE
1http://www.skype.com/
2http://www.google.com/hangouts/
tification, sparse coding
21258aa3c48437a2831191b71cd069c05fb84cf7A Robust and Efficient Doubly Regularized
Metric Learning Approach
Siemens Corporate Research, Princeton, NJ, 08540
CISE, University of Florida, Gainesville, FL
('35582088', 'Meizhu Liu', 'meizhu liu')
('1733005', 'Baba C. Vemuri', 'baba c. vemuri')
21f3c5b173503185c1e02a3eb4e76e13d7e9c5bcmassachusetts institute of technology — artificial intelligence laboratory
Rotation Invariant Real-time
Face Detection and
Recognition System
AI Memo 2001-010
CBCL Memo 197
May 31, 2001
© 2001 massachusetts institute of technology, cambridge, ma 02139 usa — www.ai.mit.edu
('35541734', 'Purdy Ho', 'purdy ho')
21bd9374c211749104232db33f0f71eab4df35d5Integrating Facial Makeup Detection Into
Multimodal Biometric User Verification System
CuteSafe Technology Inc.
Gebze, Kocaeli, Turkey
Eurecom Digital Security Department
06410 Biot, France
('39935459', 'Ekberjan Derman', 'ekberjan derman')
('3179061', 'Chiara Galdi', 'chiara galdi')
('1709849', 'Jean-Luc Dugelay', 'jean-luc dugelay')
ekberjan.derman@cutesafe.com
{chiara.galdi, jean-luc.dugelay}@eurecom.fr
214db8a5872f7be48cdb8876e0233efecdcb6061Semantic-aware Co-indexing for Image Retrieval
NEC Laboratories America, Inc
2Dept. of CS, Univ. of Texas at San Antonio
Cupertino, CA 95014
San Antonio, TX 78249
('1776581', 'Shiliang Zhang', 'shiliang zhang')
('2909406', 'Ming Yang', 'ming yang')
('3991189', 'Xiaoyu Wang', 'xiaoyu wang')
('1695082', 'Yuanqing Lin', 'yuanqing lin')
('1713616', 'Qi Tian', 'qi tian')
{myang,xwang,ylin}@nec-labs.com
slzhang.jdl@gmail.com qitian@cs.utsa.edu
21104bcf07ef0269ab133471a3200b9bf94b2948Beyond Comparing Image Pairs: Setwise Active Learning for Relative Attributes
University of Texas at Austin
('2548555', 'Lucy Liang', 'lucy liang')
('1794409', 'Kristen Grauman', 'kristen grauman')
214ac8196d8061981bef271b37a279526aab5024Face Recognition Using Smoothed High-Dimensional
Representation
Center for Machine Vision Research, PO Box 4500,
FI-90014 University of Oulu, Finland
('32683737', 'Juha Ylioinas', 'juha ylioinas')
('1776374', 'Juho Kannala', 'juho kannala')
('1751372', 'Abdenour Hadid', 'abdenour hadid')
213a579af9e4f57f071b884aa872651372b661fdInt J Comput Vis
DOI 10.1007/s11263-013-0672-6
Automatic and Efficient Human Pose Estimation for Sign
Language Videos
Received: 4 February 2013 / Accepted: 29 October 2013
© Springer Science+Business Media New York 2013
('36326860', 'James Charles', 'james charles')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
21626caa46cbf2ae9e43dbc0c8e789b3dbb420f1978-1-4673-2533-2/12/$26.00 ©2012 IEEE
1437
ICIP 2012
217de4ff802d4904d3f90d2e24a29371307942fePOOF: Part-Based One-vs-One Features for Fine-Grained Categorization, Face
Verification, and Attribute Estimation
Columbia University
Columbia University
('1778562', 'Thomas Berg', 'thomas berg')
('1767767', 'Peter N. Belhumeur', 'peter n. belhumeur')
tberg@cs.columbia.edu
belhumeur@cs.columbia.edu
2135a3d9f4b8f5771fa5fc7c7794abf8c2840c44Lessons from Collecting a Million Biometric Samples
University of Notre Dame
Notre Dame, IN 46556, USA
National Institute of Standards and Technology
Gaithersburg, MD 20899, USA
('1704876', 'Patrick J. Flynn', 'patrick j. flynn')
('1799014', 'Kevin W. Bowyer', 'kevin w. bowyer')
('32028519', 'P. Jonathon Phillips', 'p. jonathon phillips')
flynn@cse.nd.edu
kwb@cse.nd.edu
jonathon@nist.gov
210b98394c3be96e7fd75d3eb11a391da1b3a6caSpatiotemporal Derivative Pattern: A Dynamic
Texture Descriptor for Video Matching
Saeed Mian3
Tafresh University, Tafresh, Iran
Electrical Eng. Dep., Central Tehran Branch, Islamic Azad University, Tehran, Iran
Computer Science and Software Engineering, The University of Western Australia
WA 6009, Australia
('3046235', 'Farshid Hajati', 'farshid hajati')
('2014145', 'Mohammad Tavakolian', 'mohammad tavakolian')
('2997971', 'Soheila Gheisari', 'soheila gheisari')
{hajati,m_tavakolian}@tafreshu.ac.ir
s.gheisari@iauctb.ac.ir
ajmal.mian@uwa.edu.au
21765df4c0224afcc25eb780bef654cbe6f0bc3aMulti-Channel Correlation Filters
National University of Singapore
National University of Singapore
Singapore
Singapore
CSIRO
Australia
('2860592', 'Hamed Kiani Galoogahi', 'hamed kiani galoogahi')
('1715286', 'Terence Sim', 'terence sim')
('1820249', 'Simon Lucey', 'simon lucey')
hkiani@comp.nus.edu.sg
tsim@comp.nus.edu.sg
simon.lucey@csiro.au
21b16df93f0fab4864816f35ccb3207778a51952Recognition of Static Gestures applied to Brazilian Sign Language (Libras)
Math Institute
Department of Technology, Department of Exact Sciences
Federal University of Bahia (UFBA
State University of Feira de Santana (UEFS
Salvador, Brazil
Feira de Santana, Brazil
('2009399', 'Igor L. O. Bastos', 'igor l. o. bastos')
('3057269', 'Michele F. Angelo', 'michele f. angelo')
('2563043', 'Angelo C. Loula', 'angelo c. loula')
igorcrexito@gmail.com
mfangelo@uefs.ecomp.br, angelocl@gmail.com
212608e00fc1e8912ff845ee7a4a67f88ba938fcCoupled Deep Learning for Heterogeneous Face Recognition
Center for Research on Intelligent Perception and Computing (CRIPAC),
National Laboratory of Pattern Recognition (NLPR),
Institute of Automation, Chinese Academy of Sciences, Beijing, P. R. China
('2225749', 'Xiang Wu', 'xiang wu')
('3051419', 'Lingxiao Song', 'lingxiao song')
('1705643', 'Ran He', 'ran he')
('1688870', 'Tieniu Tan', 'tieniu tan')
alfredxiangwu@gmail.com, {lingxiao.song, rhe, tnt}@nlpr.ia.ac.cn
4d49c6cff198cccb21f4fa35fd75cbe99cfcbf27Topological Principal Component Analysis for
face encoding and recognition
Juan J. Villanueva
Computer Vision Center and Departament d'Informàtica, Edifici O, Universitat
Autònoma de Barcelona, Cerdanyola, Spain
('38034605', 'Albert Pujol', 'albert pujol')
('2997661', 'Felipe Lumbreras', 'felipe lumbreras')
4d625677469be99e0a765a750f88cfb85c522cceUnderstanding Hand-Object Manipulation
with Grasp Types and Object Attributes
Institute of Industrial Science
The University of Tokyo, Japan
Robotics Institute
Carnegie Mellon University, USA
Institute of Industrial Science
The University of Tokyo, Japan
('3172280', 'Minjie Cai', 'minjie cai')
('37991449', 'Kris M. Kitani', 'kris m. kitani')
('9467266', 'Yoichi Sato', 'yoichi sato')
cai-mj@iis.u-tokyo.ac.jp
kkitani@cs.cmu.edu
ysato@iis.u-tokyo.ac.jp
4da735d2ed0deeb0cae4a9d4394449275e316df2Gothenburg, Sweden, June 19-22, 2016
978-1-5090-1820-8/16/$31.00 ©2016 IEEE
1410
4d15254f6f31356963cc70319ce416d28d8924a3Quo vadis Face Recognition?
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
Department of Psychology
University of Pittsburgh
Pittsburgh, PA 15260
('33731953', 'Ralph Gross', 'ralph gross')
('1838212', 'Jianbo Shi', 'jianbo shi')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
{rgross,jshi}@cs.cmu.edu
jeffcohn@pitt.edu
4d530a4629671939d9ded1f294b0183b56a513efInternational Journal of Machine Learning and Computing, Vol. 2, No. 4, August 2012
Facial Expression Classification Method Based on Pseudo
Zernike Moment and Radial Basis Function Network
('2009230', 'Tran Binh Long', 'tran binh long')
('2710459', 'Le Hoang Thai', 'le hoang thai')
('1971778', 'Tran Hanh', 'tran hanh')
4d2975445007405f8cdcd74b7fd1dd547066f9b8Image and Video Processing
for Affective Applications
('1694605', 'Maja Pantic', 'maja pantic')
4df889b10a13021928007ef32dc3f38548e5ee56
4d6462fb78db88afff44561d06dd52227190689cFace-to-Face Social Activity Detection Using
Data Collected with a Wearable Device
1 Computer Vision Center, Campus UAB, Edifici O, Bellaterra, Barcelona, Spain
Dep. of Applied Mathematics and Analysis, University of Barcelona, Spain
http://www.cvc.uab.es, http://www.maia.ub.es
('7629833', 'Pierluigi Casale', 'pierluigi casale')
('9783922', 'Oriol Pujol', 'oriol pujol')
('1724155', 'Petia Radeva', 'petia radeva')
pierluigi@cvc.uab.es
4d423acc78273b75134e2afd1777ba6d3a398973
4db9e5f19366fe5d6a98ca43c1d113dac823a14dCombining Crowdsourcing and Face Recognition to Identify Civil War Soldiers
Are 1,000 Features Worth A Picture?
Department of Computer Science and Center for Human-Computer Interaction
Virginia Tech, Arlington, VA, USA
('32698591', 'Vikram Mohanty', 'vikram mohanty')
('51219402', 'David Thames', 'david thames')
('2427623', 'Kurt Luther', 'kurt luther')
4dd6d511a8bbc4d9965d22d79ae6714ba48c8e41
4de757faa69c1632066391158648f8611889d862International Journal of Advanced Engineering Research and Science (IJAERS) [Vol-3, Issue-3, March 2016]
ISSN: 2349-6495
Review of Face Recognition Technology Using
Feature Fusion Vector
S.R.C.E.M, Banmore, RGPV, University, Bhopal, Madhya Pradesh, India
4dd71a097e6b3cd379d8c802460667ee0cbc8463Real-time Multi-view Facial Landmark Detector
Learned by the Structured Output SVM
1 Center for Machine Perception, Department of Cybernetics, Faculty of Electrical Engineering, Czech
Technical University in Prague, Technická 2, 166 27 Prague 6, Czech Republic
National Institute of Informatics, Tokyo, Japan
('39492787', 'Diego Thomas', 'diego thomas')
('1691286', 'Akihiro Sugimoto', 'akihiro sugimoto')
4db0968270f4e7b3fa73e41c50d13d48e20687beFashion Forward: Forecasting Visual Style in Fashion
Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
The University of Texas at Austin, 78701 Austin, USA
('2256981', 'Ziad Al-Halah', 'ziad al-halah')
('1742325', 'Rainer Stiefelhagen', 'rainer stiefelhagen')
('1794409', 'Kristen Grauman', 'kristen grauman')
{ziad.al-halah, rainer.stiefelhagen}@kit.edu, grauman@cs.utexas.edu
4d9c02567e7b9e065108eb83ea3f03fcff880462Towards Facial Expression Recognition in the Wild: A New Database and Deep
Recognition System
School of Electronics and Information, Northwestern Polytechnical University, China
('3411701', 'Xianlin Peng', 'xianlin peng')
('1917901', 'Zhaoqiang Xia', 'zhaoqiang xia')
('2871379', 'Lei Li', 'lei li')
('4729239', 'Xiaoyi Feng', 'xiaoyi feng')
pengxl515@163.com, zxia@nwpu.edu.cn, li lei 08@163.com, fengxiao@nwpu.edu.cn
4d7e1eb5d1afecb4e238ba05d4f7f487dff96c11978-1-5090-4117-6/17/$31.00 ©2017 IEEE
2352
ICASSP 2017
4d90bab42806d082e3d8729067122a35bbc15e8d
4d3c4c3fe8742821242368e87cd72da0bd7d3783Hybrid Deep Learning for Face Verification
The Chinese University of Hong Kong
The Chinese University of Hong Kong
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
('1681656', 'Yi Sun', 'yi sun')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
sy011@ie.cuhk.edu.hk
xgwang@ee.cuhk.edu.hk
xtang@ie.cuhk.edu.hk
4d01d78544ae0de3075304ff0efa51a077c903b7International Journal of Computer Applications (0975 – 8887)
Volume 77– No.13, September 2013
ART Network based Face Recognition with Gabor Filters
Dept. of Computer Science & Engineering
Dept. of Computer Science & Engineering
Jahangirnagar University
Savar, Dhaka – 1342, Bangladesh.
('5380965', 'Md. Mozammel Haque', 'md. mozammel haque')
('39604645', 'Md. Al-amin Bhuiyan', 'md. al-amin bhuiyan')
4dd2be07b4f0393995b57196f8fc79d666b3aec53572
2014 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP)
978-1-4799-2893-4/14/$31.00 ©2014 IEEE
EXPRESSION RECOGNITION
Dept. of Electronic Engineering
Yeungnam University
Gyeongsan, Korea
1. INTRODUCTION
('9215658', 'Rama Chellappa', 'rama chellappa')
('1685841', 'Chan-Su Lee', 'chan-su lee')
4d8ce7669d0346f63b20393ffaa438493e7adfecSimilarity Features for Facial Event Analysis
Rutgers University, Piscataway NJ 08854, USA
2 National Laboratory of Pattern Recognition, Chinese Academy of Sciences
Beijing, 100080, China
('39606160', 'Peng Yang', 'peng yang')
('1734954', 'Qingshan Liu', 'qingshan liu')
peyang@cs.rutgers.edu
4d6ad0c7b3cf74adb0507dc886993e603c863e8cHuman Activity Recognition Based on Wearable
Sensor Data: A Standardization of the
State-of-the-Art
Smart Surveillance Interest Group, Computer Science Department
Universidade Federal de Minas Gerais, Brazil
('2954974', 'Antonio C. Nazare', 'antonio c. nazare')
Email: {arturjordao, antonio.nazare, jessicasena, william}@dcc.ufmg.br
4d16337cc0431cd43043dfef839ce5f0717c3483A Scalable and Privacy-Aware IoT Service for Live Video Analytics
Carnegie Mellon University
Carnegie Mellon University
Intel Labs
Norman Sadeh
Carnegie Mellon University
Carnegie Mellon University
Carnegie Mellon University
('3196473', 'Junjue Wang', 'junjue wang')
('1773498', 'Brandon Amos', 'brandon amos')
('1802347', 'Padmanabhan Pillai', 'padmanabhan pillai')
('1732721', 'Anupam Das', 'anupam das')
('1747303', 'Mahadev Satyanarayanan', 'mahadev satyanarayanan')
junjuew@cs.cmu.edu
bamos@cs.cmu.edu
padmanabhan.s.pillai@intel.com
sadeh@cs.cmu.edu
anupamd@cs.cmu.edu
satya@cs.cmu.edu
4d0b3921345ae373a4e04f068867181647d57d7dLearning attributes from human gaze
Department of Computer Science
University of Pittsburgh
IEEE 2017 Winter
Conference on
Applications of
Computer Vision
('1916866', 'Nils Murrugarra-Llerena', 'nils murrugarra-llerena')
('1770205', 'Adriana Kovashka', 'adriana kovashka')
4dca3d6341e1d991c902492952e726dc2a443d1cLearning towards Minimum Hyperspherical Energy
Georgia Institute of Technology 2Emory University
South China University of Technology 4NVIDIA 5Google Brain 6Ant Financial
('36326884', 'Weiyang Liu', 'weiyang liu')
('10035476', 'Rongmei Lin', 'rongmei lin')
('46270580', 'Zhen Liu', 'zhen liu')
('47968201', 'Lixin Liu', 'lixin liu')
('1751019', 'Zhiding Yu', 'zhiding yu')
('47175326', 'Bo Dai', 'bo dai')
('1779453', 'Le Song', 'le song')
4d0ef449de476631a8d107c8ec225628a67c87f9© 2010 IEEE. Personal use of this material is permitted. Permission from IEEE
must be obtained for all other uses, in any current or future media, including
reprinting/republishing this material for advertising or promotional purposes,
creating new collective works, for resale or redistribution to servers or lists, or
reuse of any copyrighted component of this work in other works.
Pre-print of article that appeared at BTAS 2010.
The published article can be accessed from:
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5634517
4d47261b2f52c361c09f7ab96fcb3f5c22cafb9fDeep multi-frame face super-resolution
Evgeniya Ustinova, Victor Lempitsky
October 17, 2017
4df3143922bcdf7db78eb91e6b5359d6ada004d2Behav Res (2015) 47:1122–1135
DOI 10.3758/s13428-014-0532-5
The Chicago face database: A free stimulus set of faces
and norming data
Published online: 13 January 2015
Psychonomic Society, Inc
('2428798', 'Joshua Correll', 'joshua correll')
75879ab7a77318bbe506cb9df309d99205862f6cAnalysis Of Emotion Recognition From Facial
Expressions Using Spatial And Transform Domain
Methods
('2855399', 'P. Suja', 'p. suja')
('2510426', 'Shikha Tripathi', 'shikha tripathi')
7574f999d2325803f88c4915ba8f304cccc232d1Transfer Learning For Cross-Dataset Recognition: A Survey
This paper summarises and analyses cross-dataset recognition transfer learning techniques, with an
emphasis on which kinds of methods can be used when the available source and target data are presented
in different forms for boosting the target task. This paper for the first time summarises several transferring
criteria in detail at the concept level; these criteria are the key bases guiding what kind of knowledge to transfer
between datasets. In addition, a taxonomy of cross-dataset scenarios and problems is proposed according to the
properties of data that define how different datasets diverge, and recent advances on
each specific problem under different scenarios are reviewed. Moreover, some real-world applications and corresponding
commonly used benchmarks of cross-dataset recognition are reviewed. Lastly, several future directions are
identified.
Additional Key Words and Phrases: Cross-dataset, transfer learning, domain adaptation
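As a concrete instance of the domain-divergence (distribution shift) scenario this survey covers, the sketch below applies CORAL (correlation alignment), one standard shallow domain adaptation baseline; it is a well-known technique chosen for illustration, not a method proposed in this excerpt.

```python
import numpy as np

def coral_align(Xs, Xt, eps=1e-6):
    """CORAL: whiten source features, then re-color them with the
    target covariance, so the second-order statistics of the two
    domains match before a classifier trained on the source is
    applied to the target.
    """
    def cov(X):
        Xc = X - X.mean(axis=0)
        return Xc.T @ Xc / (len(X) - 1) + eps * np.eye(X.shape[1])

    def mat_pow(C, p):
        # Matrix power of a symmetric PSD matrix via eigendecomposition.
        w, V = np.linalg.eigh(C)
        return (V * np.power(np.clip(w, eps, None), p)) @ V.T

    Cs, Ct = cov(Xs), cov(Xt)
    # Whiten with Cs^{-1/2}, re-color with Ct^{1/2}; keep the source mean.
    return (Xs - Xs.mean(axis=0)) @ mat_pow(Cs, -0.5) @ mat_pow(Ct, 0.5) + Xs.mean(axis=0)
```

After alignment, the source features have (approximately) the target covariance, which addresses the second-order part of the distribution shift described above; label-space (task) divergence requires different tools.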
1. INTRODUCTION
It has been explored how humans transfer learning in one context to another
similar context [Woodworth and Thorndike 1901; Perkins et al. 1992] in the fields of
Psychology and Education. For example, learning to drive a car helps a person later
learn more quickly to drive a truck, and learning mathematics prepares students to
study physics. Machine learning algorithms are mostly inspired by human brains.
However, most of them require a huge amount of training examples to learn a new
model from scratch and fail to apply knowledge learned from previous domains or
tasks. This may be because a basic assumption of statistical learning theory is
that the training and test data are drawn from the same distribution and belong to
the same task. Intuitively, learning from scratch is neither realistic nor practical, because
it violates how humans learn things. In addition, manually labelling a large amount
of data for a new domain or task is labour-intensive, especially for the modern "data-
hungry" and "data-driven" learning techniques (i.e., deep learning). However, the big
data era provides a huge amount of available data collected for other domains and tasks.
Hence, how to use the previously available data smartly for the current task with
scarce data will be beneficial for real-world applications.
To reuse previous knowledge for current tasks, the differences between old data and new data need to be taken into account. Take object recognition as an example. As claimed by Torralba and Efros [2011], despite the great efforts of object dataset creators, the datasets appear to have strong built-in bias caused by various factors, such as selection bias, capture bias, category or label bias, and negative set bias. This suggests that no matter how big a dataset is, it cannot cover the complexity of the real visual world. Hence, dataset bias needs to be considered before reusing data from previous datasets. Pan and Yang [2010] summarise that the differences between datasets can be caused by domain divergence (i.e., distribution shift or feature space difference), task divergence (i.e., conditional distribution shift or label space difference), or both. For example, in visual recognition, the distributions of the previous and current data can be discrepant due to different environments, lighting, backgrounds, sensor types, resolutions, view angles, and post-processing. These external factors may cause distribution divergence or even feature space divergence between domains. On the other hand, task divergence between current and previous data is also ubiquitous. For example, it is highly possible that an animal species that we want to recognize has not been seen
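A common way to quantify the domain divergence described above is a two-sample statistic such as the Maximum Mean Discrepancy (MMD). The following is a minimal sketch with toy Gaussian data and a linear kernel (all names and data hypothetical, not drawn from the survey), showing the statistic separating a biased dataset from an unbiased one:

```python
import numpy as np

def mmd_linear(X, Y):
    """Linear-kernel squared Maximum Mean Discrepancy: the squared
    distance between the empirical feature means of two samples."""
    delta = X.mean(axis=0) - Y.mean(axis=0)
    return float(delta @ delta)

rng = np.random.default_rng(0)
source  = rng.normal(0.0, 1.0, size=(500, 10))  # "previous" dataset
same    = rng.normal(0.0, 1.0, size=(500, 10))  # same distribution, no bias
shifted = rng.normal(1.0, 1.0, size=(500, 10))  # capture/selection bias (mean shift)

# the biased dataset sits much farther from the source in mean embedding
print(mmd_linear(source, same) < mmd_linear(source, shifted))  # True
```

Richer kernels (e.g., Gaussian) extend the same idea to detect divergence beyond a mean shift.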
ACM Journal Name, Vol. V, No. N, Article A, Publication date: January YYYY.
('47539715', 'Jing Zhang', 'jing zhang')
('40508657', 'Wanqing Li', 'wanqing li')
('1719314', 'Philip Ogunbona', 'philip ogunbona')
75fcbb01bc7e53e9de89cb1857a527f97ea532ceDetection of Facial Landmarks from Neutral, Happy,
and Disgust Facial Images
Research Group for Emotions, Sociality, and Computing
Tampere Unit for Computer-Human Interaction
Department of Computer Sciences
University of Tampere
FIN-33014 Tampere, Finland
('2396729', 'Ioulia Guizatdinova', 'ioulia guizatdinova')
('1718377', 'Veikko Surakka', 'veikko surakka')
ig74400@cs.uta.fi
Veikko.Surakka@uta.fi
757e4cb981e807d83539d9982ad325331cb59b16Demographics versus Biometric Automatic
Interoperability
Sapienza University of Rome, Italy
Biometric and Image Processing Lab, University of Salerno, Italy
George Mason University, Fairfax Virginia, USA
('1763890', 'Maria De Marsico', 'maria de marsico')
('1795333', 'Michele Nappi', 'michele nappi')
('1772512', 'Daniel Riccio', 'daniel riccio')
('1781577', 'Harry Wechsler', 'harry wechsler')
demarsico@di.uniroma1.it
{mnappi,driccio}@unisa.it
wechsler@cs.gmu.edu
75e9a141b85d902224f849ea61ab135ae98e7bfb
75503aff70a61ff4810e85838a214be484a674baImproved Facial Expression Recognition via Uni-Hyperplane Classification
S.W. Chew∗, S. Lucey†, P. Lucey‡, S. Sridharan∗, and J.F. Cohn‡
75fd9acf5e5b7ed17c658cc84090c4659e5de01dProject-Out Cascaded Regression with an application to Face Alignment
School of Computer Science, University of Nottingham
Contributions. Cascaded regression approaches [2] have recently been shown to achieve state-of-the-art performance for many computer vision tasks. Beyond its connection to boosting, cascaded regression has been interpreted as a learning-based counterpart of iterative optimization methods like Newton's method. However, in prior work [1], [4], the connection to optimization theory is limited to learning a mapping from image features to problem parameters.
In this paper, we consider the problem of facial deformable model fitting using cascaded regression and make the following contributions: (a) We propose regression to learn a sequence of averaged Jacobian and Hessian matrices from data, and from them descent directions, in a fashion inspired
by Gauss-Newton optimization. (b) We show that the optimization problem at hand has structure and devise a learning strategy for a cascaded regression approach that takes the problem structure into account. By doing so, the proposed method learns and employs a sequence of averaged Jacobians and descent directions in a subspace orthogonal to the facial appearance variation; hence, we call it Project-Out Cascaded Regression (PO-CR). (c) Based on the principles of PO-CR, we built a face alignment system that produces remarkably accurate results on the challenging iBUG data set, outperforming previously proposed systems by a large margin. Code for our system is available from http://www.cs.nott.ac.uk/~yzt/.
Shape and appearance models. We use parametric shape and appearance
models. An instance of the shape model is given by s(p) = s0 + Sp. An
instance of the appearance model is given by A(c) = A0 + Ac.
Face alignment via Gauss-Newton optimization. In this section, we for-
mulate and solve the non-linear least squares optimization problem for face
alignment using Gauss-Newton optimization. This will provide the basis for
learning and fitting in PO-CR in the next section. In particular, to localize
the landmarks in a new image, we would like to find p and c such that [3]
argmin_{p,c} ||I(s(p)) − A(c)||².  (1)

An update for p and c can be found by solving the following problem

argmin_{∆p,∆c} ||I(s(p)) + J_I ∆p − A0 − Ac − A∆c||².  (2)
By exploiting the problem structure, the calculation of the optimal ∆c at each iteration is not necessary. We end up with the following problem [3]

argmin_{∆p} ||I(s(p)) + J_I ∆p − A0||²_P,  (3)

where P = E − AA^T is a projection operator that projects out the facial appearance variation from the image Jacobian J_I. The solution to the above problem is readily given by

∆p = −H_P^{−1} J_P^T (I(s(p)) − A0).  (4)
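The projected-out update of Eqs. (3) and (4) can be illustrated on synthetic quantities (random stand-ins for the appearance basis A, the Jacobian J_I and the residual; a sketch, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
F, n, m = 200, 6, 4                    # feature dim, shape params, appearance comps
A, _ = np.linalg.qr(rng.normal(size=(F, m)))  # orthonormal appearance basis (A^T A = I)
J = rng.normal(size=(F, n))            # stand-in for the image Jacobian J_I
r = rng.normal(size=F)                 # stand-in for the residual I(s(p)) - A0

P = np.eye(F) - A @ A.T                # projection operator P = E - A A^T
J_P = P @ J                            # Jacobian with appearance projected out
H_P = J_P.T @ J_P                      # projected-out Gauss-Newton Hessian
dp = -np.linalg.solve(H_P, J_P.T @ r)  # Eq. (4)

# optimality check: the projected residual after the update is
# orthogonal to the columns of the projected-out Jacobian
print(np.linalg.norm(J_P.T @ P @ (r + J @ dp)))  # ~0 up to numerical error
```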
Face alignment via Project-Out Cascaded Regression. Based on Eqs. (3) and (4), the key idea in PO-CR is to compute, from a set of training examples, a sequence of averaged Jacobians Ĵ(k) from which the facial appearance variation is projected out, and from them descent directions:

Step I. Starting from the ground truth shape parameters p*_i for each training image I_i, i = 1, ..., H, we generate a set of K perturbed shape parameters for iteration 1, p_{i,j}(1), j = 1, ..., K, that capture the statistics of the face detection initialization process. Using the set ∆p_{i,j}(1) = p*_i − p_{i,j}(1), PO-CR learns the averaged projected-out Jacobian Ĵ_P(1) = P Ĵ(1) for iteration 1 by solving the following weighted least squares problem

argmin_{Ĵ_P(1)} Σ_{i=1}^{H} Σ_{j=1}^{K} ||I(s(p_{i,j}(1))) + J(1) ∆p_{i,j}(1) − A0||²_P.  (5)

Step II. Having computed Ĵ_P(1), we compute Ĥ_P(1) = Ĵ_P(1)^T Ĵ_P(1).

Step III. The descent directions R(1) for iteration 1 are given by

R(1) = Ĥ_P(1)^{−1} Ĵ_P(1)^T.  (6)
Step IV. For each training sample, a new estimate for its shape parameters (to be used at the next iteration) is obtained from

p_{i,j}(2) = p_{i,j}(1) + R(1) (I(s(p_{i,j}(1))) − A0).  (7)
Finally, Steps I-IV are sequentially repeated until convergence and the whole
process produces a set of L regressor matrices R(l), l = 1, . . . ,L.
During testing, we extract image features I(s(p(k))) and then compute an update for the shape parameters from

∆p(k) = R(k) (I(s(p(k))) − A0).  (8)
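Steps I-IV and the test-time update of Eq. (8) can be mimicked on synthetic data. Here a hypothetical linearized appearance model stands in for the image residual I(s(p)) − A0, so the sketch only illustrates the cascade mechanics, not the paper's system:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_params, n_feat = 400, 4, 30
Jmat = rng.normal(size=(n_feat, n_params))     # hypothetical "image Jacobian"

def residual(p, p_true):
    # first-order appearance model: I(s(p)) - A0 ~ J (p - p*), plus pixel noise
    return (p - p_true) @ Jmat.T + 0.01 * rng.normal(size=(p.shape[0], n_feat))

p_star = rng.normal(size=(N, n_params))            # ground-truth shape parameters p*_i
p = p_star + 0.5 * rng.normal(size=(N, n_params))  # perturbed initializations (Step I)
err_init = np.linalg.norm(p - p_star, axis=1).mean()

cascade = []
for level in range(3):                             # a cascade of L = 3 regressors
    X = residual(p, p_star)                        # training-time image residuals
    R, *_ = np.linalg.lstsq(X, p_star - p, rcond=None)  # averaged descent directions
    cascade.append(R)
    p = p + X @ R                                  # Eq. (7): refined estimates

err_final = np.linalg.norm(p - p_star, axis=1).mean()
print(err_final < 0.2 * err_init)                  # the cascade shrinks the error
```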
Results. We conducted a large number of experiments on the LFPW, Helen, AFW and iBUG data sets. In the following figure, we show fitting results from the challenging iBUG data set.
Figure 1: Application of PO-CR to the alignment of the iBUG data set.
[1] T.F. Cootes, G.J. Edwards, and C.J. Taylor. Active appearance models.
TPAMI, 23(6):681–685, 2001.
[2] Piotr Dollár, Peter Welinder, and Pietro Perona. Cascaded pose regres-
sion. In CVPR, 2010.
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
75308067ddd3c53721430d7984295838c81d4106Article
Rapid Facial Reactions
in Response to Facial
Expressions of Emotion
Displayed by Real Versus
Virtual Faces
i-Perception
2018 Vol. 9(4), 1–18
! The Author(s) 2018
DOI: 10.1177/2041669518786527
journals.sagepub.com/home/ipe
LIMSI, CNRS, University of Paris-Sud, Orsay, France
('28174013', 'Jean-Claude Martin', 'jean-claude martin')
75cd81d2513b7e41ac971be08bbb25c63c37029a
75bf3b6109d7a685236c8589f8ead7d769ea863fModel Selection with Nonlinear Embedding for Unsupervised Domain Adaptation
Center for Cognitive Ubiquitous Computing, Arizona State University, Tempe, AZ, USA
('3151995', 'Hemanth Venkateswara', 'hemanth venkateswara')
('2471253', 'Shayok Chakraborty', 'shayok chakraborty')
('1743991', 'Sethuraman Panchanathan', 'sethuraman panchanathan')
{hemanthv, shayok.chakraborty, troy.mcdaniel, panch}@asu.edu
759cf57215fcfdd8f59c97d14e7f3f62fafa2b30Real-time Distracted Driver Posture Classification
Department of Computer Science and Engineering, School of Sciences and Engineering
The American University in Cairo, New Cairo 11835, Egypt
('3434212', 'Yehya Abouelnaga', 'yehya abouelnaga')
('2150605', 'Hesham M. Eraqi', 'hesham m. eraqi')
('2233511', 'Mohamed N. Moustafa', 'mohamed n. moustafa')
{devyhia,heraqi,m.moustafa}@aucegypt.edu
751970d4fb6f61d1b94ca82682984fd03c74f127Array-based Electromyographic Silent Speech Interface
Cognitive Systems Lab, Karlsruhe Institute of Technology, Karlsruhe, Germany
Keywords:
EMG, EMG-based Speech Recognition, Silent Speech Interface, Electrode Array
('1723149', 'Michael Wand', 'michael wand')
('2289793', 'Christopher Schulte', 'christopher schulte')
('1684236', 'Matthias Janke', 'matthias janke')
('1713194', 'Tanja Schultz', 'tanja schultz')
{michael.wand, matthias.janke, tanja.schultz}@kit.edu, christopher.schulte@student.kit.edu
759a3b3821d9f0e08e0b0a62c8b693230afc3f8dAttribute and Simile Classifiers for Face Verification
Columbia University
('40631426', 'Neeraj Kumar', 'neeraj kumar')
('39668247', 'Alexander C. Berg', 'alexander c. berg')
('1767767', 'Peter N. Belhumeur', 'peter n. belhumeur')
('1750470', 'Shree K. Nayar', 'shree k. nayar')
75ebe1e0ae9d42732e31948e2e9c03d680235c39“Hello! My name is... Buffy” – Automatic
Naming of Characters in TV Video
University of Oxford
('3056091', 'Mark Everingham', 'mark everingham')
('1782755', 'Josef Sivic', 'josef sivic')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
{me,josef,az}@robots.ox.ac.uk
75e5ba7621935b57b2be7bf4a10cad66a9c445b9
75859ac30f5444f0d9acfeff618444ae280d661dMultibiometric Cryptosystems based on Feature
Level Fusion
('2743820', 'Abhishek Nagar', 'abhishek nagar')
('34633765', 'Karthik Nandakumar', 'karthik nandakumar')
('6680444', 'Anil K. Jain', 'anil k. jain')
758d7e1be64cc668c59ef33ba8882c8597406e53IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
AffectNet: A Database for Facial Expression,
Valence, and Arousal Computing in the Wild
('2314025', 'Ali Mollahosseini', 'ali mollahosseini')
('3093835', 'Mohammad H. Mahoor', 'mohammad h. mahoor')
7553fba5c7f73098524fbb58ca534a65f08e91e7Available Online at www.ijcsmc.com
International Journal of Computer Science and Mobile Computing
A Monthly Journal of Computer Science and Information Technology
ISSN 2320–088X
IJCSMC, Vol. 3, Issue. 6, June 2014, pg.816 – 824
RESEARCH ARTICLE
A Practical Approach for Determination
of Human Gender & Age

India
India
('1802780', 'Harpreet Kaur', 'harpreet kaur')
('1802780', 'Harpreet Kaur', 'harpreet kaur')
('38968310', 'Ahsan Hussain', 'ahsan hussain')
1 hkaur_bhatia23@yahoo.com, 2 ahsanhbaba@gmail.com
751b26e7791b29e4e53ab915bfd263f96f531f56Mood Meter: Counting Smiles in the Wild
Mohammed (Ehsan) Hoque *
Media Lab
Massachusetts Institute of Technology
Cambridge, MA, USA
('2806721', 'Will Drevo', 'will drevo')
('1719389', 'Rosalind W. Picard', 'rosalind w. picard')
('15977480', 'Javier Hernandez', 'javier hernandez')
{javierhr, mehoque, drevo, picard}@mit.edu
75da1df4ed319926c544eefe17ec8d720feef8c0FDDB: A Benchmark for Face Detection in Unconstrained Settings
University of Massachusetts Amherst
University of Massachusetts Amherst
Amherst MA 01003
Amherst MA 01003
('1714536', 'Erik Learned-Miller', 'erik learned-miller')
('2246870', 'Vidit Jain', 'vidit jain')
elm@cs.umass.edu
vidit@cs.umass.edu
75259a613285bdb339556ae30897cb7e628209faUnsupervised Domain Adaptation for Zero-Shot Learning
Queen Mary University of London, London E1 4NS, UK
('2999293', 'Elyor Kodirov', 'elyor kodirov')
('1700927', 'Tao Xiang', 'tao xiang')
('2073354', 'Shaogang Gong', 'shaogang gong')
{e.kodirov, t.xiang, z.fu, s.gong}@qmul.ac.uk
754f7f3e9a44506b814bf9dc06e44fecde599878Quantized Densely Connected U-Nets for
Efficient Landmark Localization
('2986505', 'Zhiqiang Tang', 'zhiqiang tang')
('4340744', 'Xi Peng', 'xi peng')
('1947101', 'Shijie Geng', 'shijie geng')
('3008832', 'Lingfei Wu', 'lingfei wu')
('1753384', 'Shaoting Zhang', 'shaoting zhang')
1Rutgers University, {zt53, sg1309, dnm}@rutgers.edu
2Binghamton University, xpeng@binghamton.edu
3IBM T. J. Watson, lwu@email.wm.edu
4SenseTime, zhangshaoting@sensetime.com
75249ebb85b74e8932496272f38af274fbcfd696Face Identification in Large Galleries
Smart Surveillance Interest Group, Department of Computer Science
Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
('1679142', 'William Robson Schwartz', 'william robson schwartz')
rafaelvareto@dcc.ufmg.br, filipe.oc87@gmail.com, william@dcc.ufmg.br
75d2ecbbcc934563dff6b39821605dc6f2d5ffccCapturing Subtle Facial Motions in 3D Face Tracking
Beckman Institute
University of Illinois at Urbana-Champaign
Urbana, IL 61801
('1735018', 'Zhen Wen', 'zhen wen')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
{zhenwen, huang}@ifp.uiuc.edu
81a142c751bf0b23315fb6717bc467aa4fdfbc92978-1-5090-4117-6/17/$31.00 ©2017 IEEE
1767
ICASSP 2017
81bfe562e42f2eab3ae117c46c2e07b3d142dadeA Hajj And Umrah Location Classification System For Video
Crowded Scenes
Adnan A. Gutub†
Center of Research Excellence in Hajj and Umrah, Umm Al-Qura University, Makkah, KSA
College of Computers and Information Systems, Umm Al-Qura University, Makkah, KSA
('2872536', 'Hossam M. Zawbaa', 'hossam m. zawbaa')
('1977955', 'Salah A. Aly', 'salah a. aly')
81695fbbbea2972d7ab1bfb1f3a6a0dbd3475c0fUNIVERSITY OF TARTU
FACULTY OF SCIENCE AND TECHNOLOGY 
Institute of Computer Science
Computer Science
Comparison of Face Recognition
Neural Networks 
Bachelor's thesis (6 ECST)
Supervisor: Tambet Matiisen
Tartu 2016
8147ee02ec5ff3a585dddcd000974896cb2edc53Angular Embedding:
A Robust Quadratic Criterion
Stella X. Yu, Member,
IEEE
8199803f476c12c7f6c0124d55d156b5d91314b6The iNaturalist Species Classification and Detection Dataset
1Caltech
2Google
3Cornell Tech
4iNaturalist
('2996914', 'Grant Van Horn', 'grant van horn')
('13412044', 'Alex Shepard', 'alex shepard')
('1690922', 'Pietro Perona', 'pietro perona')
('50172592', 'Serge Belongie', 'serge belongie')
816bd8a7f91824097f098e4f3e0f4b69f481689dLatent Semantic Analysis of Facial Action Codes
for Automatic Facial Expression Recognition
D-ITET/BIWI
ETH Zurich
Zurich, Switzerland
IDIAP Research Institute
Martigny, Switzerland
IDIAP Research Institute
Martigny, Switzerland
('8745904', 'Beat Fasel', 'beat fasel')
('1824057', 'Florent Monay', 'florent monay')
('1698682', 'Daniel Gatica-Perez', 'daniel gatica-perez')
bfasel@vision.ee.ethz.ch
monay@idiap.ch
gatica@idiap.ch
81706277ed180a92d2eeb94ac0560f7dc591ee13International Journal of Computer Applications (0975 – 8887)
Volume 55– No.15, October 2012
Emotion based Contextual Semantic Relevance
Feedback in Multimedia Information Retrieval
Department of Computer Engineering, Indian
Institute of Technology, Banaras Hindu
University, Varanasi, 221005, India
Anil K. Tripathi
Department of Computer Engineering, Indian
Institute of Technology, Banaras Hindu
University, Varanasi, 221005, India
('41132883', 'Karm Veer Singh', 'karm veer singh')
81831ed8e5b304e9d28d2d8524d952b12b4cbf55
81b2a541d6c42679e946a5281b4b9dc603bc171cUniversit¨at Ulm | 89069 Ulm | Deutschland
Fakult¨at f¨ur Ingenieurwissenschaften und Informatik
Institut f¨ur Neuroinformatik
Direktor: Prof. Dr. G¨unther Palm
Semi-Supervised Learning with Committees:
Exploiting Unlabeled Data Using Ensemble
Learning Algorithms
Dissertation zur Erlangung des Doktorgrades
Doktor der Naturwissenschaften (Dr. rer. nat.)
der Fakult¨at f¨ur Ingenieurwissenschaften und Informatik
der Universit¨at Ulm
vorgelegt von
aus Kairo, ¨Agypten
Ulm, Deutschland
2010
('1799097', 'Mohamed Farouk Abdel Hady', 'mohamed farouk abdel hady')
81e11e33fc5785090e2d459da3ac3d3db5e43f65International Journal of Advances in Engineering & Technology, March 2012.
©IJAET ISSN: 2231-1963
A NOVEL FACE RECOGNITION APPROACH USING A
MULTIMODAL FEATURE VECTOR
Central Mechanical Engineering Research Institute, Durgapur, West Bengal, India
National Institute of Technology, Durgapur, West Bengal, India
('9155672', 'Jhilik Bhattacharya', 'jhilik bhattacharya')
('40301536', 'Nattami Sekhar', 'nattami sekhar')
('1872045', 'Somajyoti Majumder', 'somajyoti majumder')
('33606010', 'Gautam Sanyal', 'gautam sanyal')
81e366ed1834a8d01c4457eccae4d57d169cb932Pose-Configurable Generic Tracking of Elongated Objects
Multimedia Systems Department
Gdansk University of Technology
Departement Electronique et Physique
Institut Mines-Telecom / Telecom SudParis
('2120042', 'Daniel Wesierski', 'daniel wesierski')
('2603633', 'Patrick Horain', 'patrick horain')
daniel.wesierski@pg.gda.pl
patrick.horain@telecom-sudaris.eu
8164ebc07f51c9e0db4902980b5ac3f5a8d8d48cShuffle-Then-Assemble: Learning
Object-Agnostic Visual Relationship Features
School of Computer Science and Engineering,
Nanyang Technological University
('47008946', 'Xu Yang', 'xu yang')
('5462268', 'Hanwang Zhang', 'hanwang zhang')
('1688642', 'Jianfei Cai', 'jianfei cai')
s170018@e.ntu.edu.sg,{hanwangzhang,asjfcai}@ntu.edu.sg
81fc86e86980a32c47410f0ba7b17665048141ecSegment-based Methods for Facial Attribute
Detection from Partial Faces
Department of Electrical and Computer Engineering and the Center for Automation Research,
UMIACS, University of Maryland, College Park, MD
('3152615', 'Upal Mahbub', 'upal mahbub')
{umahbub, ssarkar2, rama}@umiacs.umd.edu
8160b3b5f07deaa104769a2abb7017e9c031f1c1683
Exploiting Discriminant Information in Nonnegative
Matrix Factorization With Application
to Frontal Face Verification
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1737071', 'Anastasios Tefas', 'anastasios tefas')
('2336758', 'Ioan Buciu', 'ioan buciu')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
814d091c973ff6033a83d4e44ab3b6a88cc1cb66Behav Res (2016) 48:567–576
DOI 10.3758/s13428-015-0601-4
The EU-Emotion Stimulus Set: A validation study
Published online: 30 September 2015
Psychonomic Society, Inc
('2625704', 'Delia Pigat', 'delia pigat')
('2391819', 'Shahar Tal', 'shahar tal')
('2100443', 'Ofer Golan', 'ofer golan')
('1884685', 'Simon Baron-Cohen', 'simon baron-cohen')
('3343472', 'Daniel Lundqvist', 'daniel lundqvist')
816eff5e92a6326a8ab50c4c50450a6d02047b5efLRR: Fast Low-Rank Representation Using
Frobenius Norm
Low Rank Representation (LRR) seeks the lowest-rank representation of a given data set, which can be formulated as a rank minimization problem. Since the rank operator is non-convex and discontinuous, most recent works use the nuclear norm as a convex relaxation. This letter theoretically shows that, under some conditions, the Frobenius-norm-based optimization problem has a unique solution that is also a solution of the original LRR optimization problem. In other words, it is feasible to use the Frobenius norm as a surrogate of the non-convex matrix rank function. This replacement largely reduces the time cost of obtaining the lowest-rank solution. Experimental results show that our method (i.e., fast Low Rank Representation, fLRR) performs well in terms of accuracy and computation speed in image clustering and motion segmentation, compared with the nuclear-norm-based LRR algorithm.
Introduction: Given a data set X ∈ R^{m×n} (m < n) composed of column vectors, let A be a data set composed of vectors with the same dimension as those in X. Both X and A can be considered as matrices. A linear representation of X with respect to A is a matrix Z that satisfies the equation X = AZ. The data set A is called a dictionary. In general, this linear matrix equation has infinitely many solutions, and any solution can be considered a representation of X associated with the dictionary A. To obtain a unique Z and explore the latent structure of the given data set, various assumptions can be enforced on Z.
Liu et al. recently proposed Low Rank Representation (LRR) [1] by assuming that data are approximately sampled from a union of low-rank subspaces. Mathematically, LRR aims at solving

min rank(Z)  s.t. X = AZ,  (1)

where rank(Z) is the number of nonzero singular values of the matrix Z. Clearly, (1) is non-convex and discontinuous; its convex relaxation is

min ||Z||_*  s.t. X = AZ,  (2)

where ||Z||_* is the nuclear norm. Problem (2) is convex and continuous.
Considering possible corruptions, the objective function of LRR becomes

min ||Z||_* + λ||E||_p  s.t. X = AZ + E,  (3)

where || · ||_p can be the ℓ1-norm for describing sparse corruption or the ℓ2,1-norm for characterizing sample-specific corruption.

The above nuclear-norm-based optimization problems are generally solved using the Augmented Lagrange Multiplier algorithm (ALM) [2], which requires repeatedly performing Singular Value Decomposition (SVD) on Z. Hence, this optimization program is inefficient.
Beyond the nuclear norm, do other norms exist that can be used as surrogates for the rank-minimization problem in LRR? Can we develop a fast algorithm to calculate LRR? This letter addresses these questions by theoretically showing the equivalence between the solutions of a Frobenius-norm-based problem and the original LRR problem. We further develop fast Low Rank Representation (fLRR) based on the theoretical results.
Theoretical Analysis: In the following analyses, Theorems 1 and 3 prove that the Frobenius-norm-based problem is a surrogate for the rank-minimization problem of LRR in the cases of clean data and corrupted data, respectively. Theorem 2 shows that our Frobenius-norm-based method produces a block-diagonal Z under some conditions. This property is helpful for subspace clustering.
Let A ∈ R^{m×n} be a matrix with rank r. The full SVD and skinny SVD of A are A = UΣV^T and A = U_r Σ_r V_r^T, where U and V are two orthogonal matrices of size m×m and n×n, respectively. In addition, Σ is an m×n rectangular diagonal matrix whose diagonal elements are nonnegative real numbers. Σ_r is an r×r diagonal matrix with the singular values located on the diagonal in decreasing order; U_r and V_r consist of the first r columns of U and V, respectively. Clearly, U_r and V_r are column orthogonal matrices, i.e., U_r^T U_r = I_r and V_r^T V_r = I_r, where I_r denotes the identity matrix of size r×r. The pseudoinverse of A is defined by A† = V_r Σ_r^{−1} U_r^T.
Given a matrix M ∈ R^{m×n}, the Frobenius norm of M is defined by ||M||_F = sqrt(trace(M^T M)) = sqrt(Σ_{i=1}^{min{m,n}} σ_i²), where σ_i is a singular value of M. Clearly, ||M||_F = 0 if and only if M = 0.
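The two expressions for the Frobenius norm are easy to check numerically (a quick sketch with a random matrix, not part of the letter):

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.normal(size=(5, 8))

fro_trace = np.sqrt(np.trace(M.T @ M))                             # sqrt(trace(M^T M))
fro_sv = np.sqrt((np.linalg.svd(M, compute_uv=False) ** 2).sum())  # sqrt(sum sigma_i^2)

# both agree with NumPy's built-in Frobenius norm
print(np.isclose(fro_trace, fro_sv) and
      np.isclose(fro_trace, np.linalg.norm(M, 'fro')))  # True
```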
Lemma 1: Suppose P is a column orthogonal matrix, i.e., P^T P = I. Then, ||P M||_F = ||M||_F.
Lemma 2: For matrices M and N with the same number of columns, it holds that

|| [M; N] ||²_F = ||M||²_F + ||N||²_F.  (4)

The proofs of the above two lemmas are trivial.
Theorem 1: Suppose that X ∈ span{A}. The Frobenius norm minimization problem

min ||Z||_F  s.t. X = AZ,  (5)

has a unique solution Z* = A†X, which is also the lowest-rank solution of LRR in terms of (1).
Proof: Let the full and skinny SVDs of A be A = UΣV^T and A = U_r Σ_r V_r^T, respectively. Then, the pseudoinverse of A is A† = V_r Σ_r^{−1} U_r^T. Define V_c by V^T = [V_r^T; V_c^T], so that V_c^T V_r = 0. Moreover, it can easily be checked that Z* satisfies X = AZ* owing to X ∈ span{A}.

To prove that Z* is the unique solution of the optimization problem (5), two steps are required. First, we prove that, for any solution Z of X = AZ, it must hold that ||Z||_F ≥ ||Z*||_F. Using Lemma 1, we have

||Z||_F = || [V_r^T; V_c^T] [Z* + (Z − Z*)] ||_F = || [V_r^T Z* + V_r^T (Z − Z*); V_c^T Z* + V_c^T (Z − Z*)] ||_F.  (6)

As A(Z − Z*) = 0, i.e., U_r Σ_r V_r^T (Z − Z*) = 0, it follows that V_r^T (Z − Z*) = 0. Denote B = Σ_r^{−1} U_r^T X; then Z* = V_r B. Because V_c^T V_r = 0, we have V_c^T Z* = V_c^T V_r B = 0. Then,

||Z||_F = || [B; V_c^T (Z − Z*)] ||_F.

By Lemma 2,

||Z||²_F = ||B||²_F + ||V_c^T (Z − Z*)||²_F,  (7)

so ||Z||_F ≥ ||B||_F. By Lemma 1,

||B||_F = ||V_r B||_F = ||Z*||_F,  (8)

thus, ||Z||_F ≥ ||Z*||_F for any solution Z of X = AZ.
In the second step, we prove that if there exists another solution Z of (5), then Z = Z* must hold. Clearly, Z being a solution of (5) implies that X = AZ and ||Z||_F = ||Z*||_F. From (7) and (8),

||Z||²_F = ||Z*||²_F + ||V_c^T (Z − Z*)||²_F.  (9)

Since ||Z||_F = ||Z*||_F, it must hold that ||V_c^T (Z − Z*)||_F = 0, and so V_c^T (Z − Z*) = 0. Together with V_r^T (Z − Z*) = 0, this gives V^T (Z − Z*) = 0. Because V is an orthogonal matrix, it must hold that Z = Z*. The above proves that Z* is the unique solution of the optimization problem (5).

Next, we prove that Z* is also a solution of the LRR optimization problem (1). Clearly, for any solution Z of X = AZ, it holds that rank(Z) ≥ rank(AZ) = rank(X). On the other hand, rank(Z*) = rank(A†X) ≤ rank(X). Thus, rank(Z*) = rank(X). This shows that Z* is the lowest-rank solution of the LRR optimization problem (1). The proof is complete. □
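Theorem 1 gives fLRR its closed form, Z* = A†X. A small NumPy sketch (synthetic data with assumed shapes) checks feasibility, the minimal Frobenius norm against a perturbed feasible solution, and the rank property:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 8, 12
A = rng.normal(size=(m, n))           # dictionary (full row rank with probability 1)
X = A @ rng.normal(size=(n, 6))       # ensures X lies in span{A}

Z_star = np.linalg.pinv(A) @ X        # closed-form fLRR solution Z* = A† X
print(np.allclose(A @ Z_star, X))     # True: feasibility, X = A Z*

# any other feasible solution Z* + N (with A N = 0) has a larger Frobenius norm
_, _, Vt = np.linalg.svd(A)
null_dir = Vt[-1]                     # a basis vector of the null space of A
Z_other = Z_star + np.outer(null_dir, np.ones(6))
print(np.allclose(A @ Z_other, X))    # True: still feasible
print(np.linalg.norm(Z_other, 'fro') > np.linalg.norm(Z_star, 'fro'))  # True

# lowest rank: rank(Z*) equals rank(X)
print(np.linalg.matrix_rank(Z_star) == np.linalg.matrix_rank(X))       # True
```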
In the following, Theorem 2 shows that the optimal Z of (5) is block-diagonal if the data are sampled from a set of independent subspaces {S_1, S_2, ..., S_k}, where the dimensionality of S_i is r_i and i ∈ {1, 2, ..., k}. Note that {S_1, S_2, ..., S_k} are independent if and only if S_i ∩ Σ_{j≠i} S_j = {0}. Suppose that X = [X_1, X_2, ..., X_k] and A = [A_1, A_2, ..., A_k], where A_i and X_i contain m_i and n_i data points
ELECTRONICS LETTERS 12th December 2011 Vol. 00 No. 00
('2235162', 'Haixian Zhang', 'haixian zhang')
('4340744', 'Xi Peng', 'xi peng')
8149c30a86e1a7db4b11965fe209fe0b75446a8cSemi-Supervised Multiple Instance Learning based
Domain Adaptation for Object Detection
Siemens Corporate Research
Siemens Corporate Research
Siemens Corporate Research
Amit Kale
Bangalore
Bangalore
{chhaya.methani,
Bangalore
rahul.thota,
('2970569', 'Chhaya Methani', 'chhaya methani')
('31516659', 'Rahul Thota', 'rahul thota')
kale.amit}@siemens.com
81da427270c100241c07143885ba3051ec4a2ecbLearning the Synthesizability of Dynamic Texture Samples∗
State Key Lab. LIESMARS, Wuhan University, China
2Computer Vision Lab., ETH Zurich, Switzerland
February 6, 2018
('1706687', 'Feng Yang', 'feng yang')
('39943835', 'Gui-Song Xia', 'gui-song xia')
('1778526', 'Dengxin Dai', 'dengxin dai')
('1733213', 'Liangpei Zhang', 'liangpei zhang')
{guisong.xia, fengyang, zlp62}@whu.edu.cn
dai@vision.ee.ethz.ch
861c650f403834163a2c27467a50713ceca37a3eProbabilistic Elastic Part Model for Unsupervised Face Detector Adaptation
Stevens Institute of Technology
Hoboken, NJ 07030
Adobe Systems Inc.
San Jose, CA 95110
('3131569', 'Haoxiang Li', 'haoxiang li')
('1745420', 'Gang Hua', 'gang hua')
('1721019', 'Jonathan Brandt', 'jonathan brandt')
('1706007', 'Jianchao Yang', 'jianchao yang')
{hli18, ghua}@stevens.edu
{zlin, jbrandt, jiayang}@adobe.com
86614c2d2f6ebcb9c600d4aef85fd6bf6eab6663Benchmarks for Cloud Robotics
Arjun Singh
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2016-142
http://www.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-142.html
August 12, 2016
86b69b3718b9350c9d2008880ce88cd035828432Improving Face Image Extraction by Using Deep Learning Technique
National Library of Medicine, NIH, Bethesda, MD
('1726787', 'Zhiyun Xue', 'zhiyun xue')
('1721328', 'Sameer Antani', 'sameer antani')
('1691151', 'L. Rodney Long', 'l. rodney long')
('1705831', 'Dina Demner-Fushman', 'dina demner-fushman')
('1692057', 'George R. Thoma', 'george r. thoma')
86904aee566716d9bef508aa9f0255dc18be3960Learning Anonymized Representations with
Adversarial Neural Networks
('1743922', 'Pablo Piantanida', 'pablo piantanida')
('1751762', 'Yoshua Bengio', 'yoshua bengio')
('1694313', 'Pierre Duhamel', 'pierre duhamel')
86f191616423efab8c0d352d986126a964983219Visual to Sound: Generating Natural Sound for Videos in the Wild
University of North Carolina at Chapel Hill, 2Adobe Research
('49455017', 'Yipin Zhou', 'yipin zhou')
('8056043', 'Zhaowen Wang', 'zhaowen wang')
('2442612', 'Chen Fang', 'chen fang')
('30190128', 'Trung Bui', 'trung bui')
('1685538', 'Tamara L. Berg', 'tamara l. berg')
867e709a298024a3c9777145e037e239385c0129 INTERNATIONAL JOURNAL
OF PROFESSIONAL ENGINEERING STUDIES Volume VIII /Issue 2 / FEB 2017
ANALYTICAL REPRESENTATION OF UNDERSAMPLED FACE
RECOGNITION APPROACH BASED ON DICTIONARY LEARNING
AND SPARSE REPRESENTATION
(M.Tech)1, Assistant Professor2, Assistant Professor3, HOD of CSE Department4
('32628937', 'Murala Sandeep', 'murala sandeep')
('1702980', 'Ranga Reddy', 'ranga reddy')
869a2fbe42d3fdf40ed8b768edbf54137be7ac71Relative Attributes for Enhanced Human-Machine Communication
Toyota Technological Institute, Chicago
Indraprastha Institute of Information Technology, Delhi
University of Texas, Austin
('1713589', 'Devi Parikh', 'devi parikh')
('1770205', 'Adriana Kovashka', 'adriana kovashka')
('2076800', 'Amar Parkash', 'amar parkash')
('1794409', 'Kristen Grauman', 'kristen grauman')
86c5478f21c4a9f9de71b5ffa90f2a483ba5c497Kernel Selection using Multiple Kernel Learning and Domain
Adaptation in Reproducing Kernel Hilbert Space, for Face
Recognition under Surveillance Scenario
Indian Institute of Technology, Madras, Chennai 600036, INDIA
Face Recognition (FR) has been of interest to researchers over the past few decades due to the passive nature of this biometric authentication. Despite the high accuracy achieved by face recognition algorithms under controlled conditions, achieving the same performance for face images obtained in surveillance scenarios remains a major hurdle. Some attempts have been made to super-resolve the low-resolution face images and improve their contrast, without a considerable degree of success. The technique proposed in this paper tries to cope with the very low resolution and low contrast face images obtained from surveillance cameras, for FR under surveillance conditions. For Support Vector Machine classification, the selection of an appropriate kernel has been a widely discussed issue in the research community. In this paper, we propose a novel kernel selection technique termed MFKL (Multi-Feature Kernel Learning) to obtain the best feature-kernel pairing. Our proposed technique performs effective kernel selection by the Multiple Kernel Learning (MKL) method, choosing the optimal kernel to be used along with an unsupervised domain adaptation method in the Reproducing Kernel Hilbert Space (RKHS), as a solution to the problem. Rigorous experimentation has been performed on three real-world surveillance face datasets: FR SURV [33], SCface [20] and ChokePoint [44]. Results are reported using Rank-1 Recognition Accuracy, ROC and CMC measures. Our proposed method outperforms all other recent state-of-the-art techniques by a considerable margin.
Index Terms—Kernel Selection, Surveillance, Multiple Kernel Learning, Domain Adaptation, RKHS, Hallucination
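As background for the kernel-selection setting above: MKL combines base kernels as a weighted sum, and any nonnegative combination of valid Gram matrices is again a valid (positive semi-definite) kernel, which is what makes searching over feature-kernel pairings well posed. A minimal sketch with toy data and assumed weights (not the paper's MFKL procedure):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(20, 5))                 # toy feature vectors

# two base Gram matrices: linear and Gaussian (RBF) kernels
K_lin = X @ X.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_rbf = np.exp(-0.5 * sq)

beta = np.array([0.3, 0.7])                  # assumed MKL weights (nonnegative)
K = beta[0] * K_lin + beta[1] * K_rbf        # combined kernel

# positive semi-definiteness: all eigenvalues of the combination are >= 0
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-8)                # True
```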
I. INTRODUCTION
Face Recognition (FR) is a classical problem which is far from being solved. Face recognition has the clear advantage of being natural and passive, unlike other biometric techniques that require co-operative subjects. Most face recognition algorithms perform well under a controlled environment. A face recognition system trained at a certain resolution, illumination and pose recognizes faces under similar conditions with very high accuracy. On the contrary, if the face of the same subject is presented with a considerable change in environmental conditions, such a face recognition system fails to achieve the desired level of accuracy. Hence, we aim to find a solution to face recognition in unconstrained environments.
Face images obtained by an outdoor panoramic surveillance
camera, are often confronted with severe degradations (e.g.,
low-resolution, blur, low-contrast, interlacing and noise). This
significantly limits the performance of face recognition sys-
tems used for binding “security with surveillance” applica-
tions. Here, images used for training are usually available be-
forehand which are taken under a well controlled environment
in an indoor setup (laboratory, control room), whereas the
images used for testing are captured when a subject comes
under a surveillance scene. With ever increasing demands
to combine “security with surveillance” in an integrated and
automated framework, it is necessary to analyze samples of
face images of subjects acquired by a surveillance camera
from a long distance. Hence the subject must be accurately
recognized from a low resolution, blurred and degraded (low
contrast, aliasing, noise) face image, as obtained from the
surveillance camera. These face images are difficult to match
because they are often captured under non-ideal conditions.
Thus, face recognition in surveillance scenario is an impor-
tant and emerging research area which motivates the work
presented in this paper.
The performance of most classifiers degrades when both the
resolution and contrast of face templates used for recognition
are low. There have been many advancements in this area
during the past decade, where attempts have been made to
deal with this problem under an unconstrained environment.
For surveillance applications, a face recognition system must
recognize a face in an unconstrained environment without the
notice of the subject. Degradation of faces is quite evident in
the surveillance scenario due to low-resolution and camera-
blur. Variations in the illumination conditions of the faces
not only reduce the recognition accuracy but occasionally
degrade the performance of face detection, which is the first
step of face recognition. The work presented in this paper deals
with such issues involved in FR under surveillance conditions.
In the work presented in this paper, the face samples from
both gallery and probe are initially passed through a robust
face detector, the Chehra face tracker, to find a tightly cropped
face image. A domain adaptation (DA) based algorithm,
formulated using an eigen-domain transformation, is designed to
bridge the gap between the features obtained from the gallery
and the probe samples. A novel Multiple kernel Learning
(MKL) based learning method, termed MFKL (Multi-Feature
Kernel Learning), is then used to obtain an optimal combi-
nation (pairing) of the feature and the kernel for FR. The
novelty of the work presented in this paper is the optimal
pairing of feature and kernel to provide best performance with
DA based learning for FR. Results of performance analysis on
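The feature-kernel pairing idea described above builds on multiple kernel learning, where candidate base kernels are combined and the best combination is chosen by held-out performance. The following is a minimal, hypothetical sketch using synthetic features and a 1-NN classifier in the kernel-induced metric; it is not the paper's MFKL objective, its face features, or its DA step:

```python
import numpy as np

def linear_kernel(X, Y):
    return X @ Y.T

def rbf_kernel(X, Y, gamma=0.5):
    # ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
    d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def combined_kernel(X, Y, weights, kernels):
    # MKL-style convex combination: k(x, y) = sum_j beta_j * k_j(x, y)
    return sum(b * k(X, Y) for b, k in zip(weights, kernels))

rng = np.random.default_rng(0)
# synthetic "gallery" (train) and "probe" (validation) features, two subjects
X_tr = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(4, 1, (20, 5))])
y_tr = np.array([0] * 20 + [1] * 20)
X_va = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(4, 1, (10, 5))])
y_va = np.array([0] * 10 + [1] * 10)

kernels = [linear_kernel, rbf_kernel]

def accuracy(weights):
    K_xz = combined_kernel(X_va, X_tr, weights, kernels)
    k_xx = np.diag(combined_kernel(X_va, X_va, weights, kernels))
    k_zz = np.diag(combined_kernel(X_tr, X_tr, weights, kernels))
    # kernel-induced squared distance: k(x,x) + k(z,z) - 2 k(x,z)
    d2 = k_xx[:, None] + k_zz[None, :] - 2 * K_xz
    pred = y_tr[d2.argmin(axis=1)]  # 1-NN in the induced feature space
    return float((pred == y_va).mean())

# pick the best kernel weighting on held-out data
candidates = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]
best = max(candidates, key=accuracy)
```

In the paper's setting the weights would be learned jointly with the classifier under an MKL objective rather than grid-searched; the sketch only illustrates selecting a feature-kernel pairing by validation performance.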
('2643208', 'Samik Banerjee', 'samik banerjee')
('1680398', 'Sukhendu Das', 'sukhendu das')
86c053c162c08bc3fe093cc10398b9e64367a100Cascade of Forests for Face Alignment ('2966679', 'Heng Yang', 'heng yang')
('2876552', 'Changqing Zou', 'changqing zou')
('1744405', 'Ioannis Patras', 'ioannis patras')
86b985b285c0982046650e8d9cf09565a939e4f9
861802ac19653a7831b314cd751fd8e89494ab12Time-of-Flight and Depth Imaging. Sensors, Algorithms
and Applications: Dagstuhl Seminar 2012 and GCPR
Workshop on Imaging New Modalities (Lecture ... Vision,
Pattern Recognition, and Graphics)
Publisher: Springer; 2013 edition
(November 8, 2013)
Language: English
Pages: 320
ISBN: 978-3642449635
Size: 20.46 MB
Format: PDF / ePub / Kindle
Cameras for 3D depth imaging, using
either time-of-flight (ToF) or
structured light sensors, have received
a lot of attention recently and have
been improved considerably over the
last few years. The present
techniques...
('1727057', 'Marcin Grzegorzek', 'marcin grzegorzek')
('1680185', 'Christian Theobalt', 'christian theobalt')
('39897382', 'Reinhard Koch', 'reinhard koch')
('1758212', 'Andreas Kolb', 'andreas kolb')
86ed5b9121c02bcf26900913f2b5ea58ba23508fActions ∼ Transformations
Carnegie Mellon University
University of Washington
The Allen Institute for AI
('39849136', 'Xiaolong Wang', 'xiaolong wang')
('2270286', 'Ali Farhadi', 'ali farhadi')
('1737809', 'Abhinav Gupta', 'abhinav gupta')
861b12f405c464b3ffa2af7408bff0698c6c9bf0International Journal on Recent and Innovation Trends in Computing and Communication ISSN: 2321-8169
Volume: 3 Issue: 5
3337 - 3342
An Effective Technique for Removal of Facial Duplication by SBFA
Computer Department,
GHRCEM,
Pune, India
Computer Department,
GHRCEM,
Pune, India
('2947776', 'Ayesha Butalia', 'ayesha butalia')
deepikapatil941@gmail.com
ayeshabutalia@gmail.com
86b6afc667bb14ff4d69e7a5e8bb2454a6bbd2cdYUE et al.: ATTENTIONAL ALIGNMENT NETWORK
Attentional Alignment Network
Beihang University, Beijing, China
2 The Key Laboratory of Advanced Technologies for Near Space Information Systems,
Ministry of Industry and Information Technology of China
University of Texas at Arlington, TX, USA
Shanghai Jiao Tong University, Shanghai, China
('35310815', 'Lei Yue', 'lei yue')
('6050999', 'Xin Miao', 'xin miao')
('3127895', 'Pengbo Wang', 'pengbo wang')
('1740430', 'Baochang Zhang', 'baochang zhang')
('34798935', 'Xiantong Zhen', 'xiantong zhen')
('40916581', 'Xianbin Cao', 'xianbin cao')
yuelei@buaa.edu.cn
xin.miao@mavs.uta.edu
wangpengbo_vincent@sjtu.edu.cn
bczhang@buaa.edu.cn
zhenxt@buaa.edu.cn
xbcao@buaa.edu.cn
862d17895fe822f7111e737cbcdd042ba04377e8Semi-Latent GAN: Learning to generate and modify facial images from
attributes
The school of Data Science, Fudan University
† Disney Research,
('11740128', 'Weidong Yin', 'weidong yin')
('35782003', 'Yanwei Fu', 'yanwei fu')
('14517812', 'Leonid Sigal', 'leonid sigal')
('1713721', 'Xiangyang Xue', 'xiangyang xue')
yanweifu@fudan.edu.cn
86d0127e1fd04c3d8ea78401c838af621647dc95Facial Attribute Prediction
College of Information and Engineering, Hunan University, Changsha, China
School of Computer Science, National University of Defense Technology, Changsha, China
University of Texas at San Antonio, USA
('48664471', 'Mingxing Duan', 'mingxing duan')
('50842217', 'Qi Tian', 'qi tian')
duanmingxing16@nudt.edu.cn, lkl@hnu.edu.cn, qi.tian@utsa.edu
86e1bdbfd13b9ed137e4c4b8b459a3980eb257f6The Kinetics Human Action Video Dataset
João Carreira
Paul Natsev
('21028601', 'Will Kay', 'will kay')
('34838386', 'Karen Simonyan', 'karen simonyan')
('11809518', 'Brian Zhang', 'brian zhang')
('38961760', 'Chloe Hillier', 'chloe hillier')
('2259154', 'Sudheendra Vijayanarasimhan', 'sudheendra vijayanarasimhan')
('39045746', 'Fabio Viola', 'fabio viola')
('1691808', 'Tim Green', 'tim green')
('2830305', 'Trevor Back', 'trevor back')
('2573615', 'Mustafa Suleyman', 'mustafa suleyman')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
wkay@google.com
joaoluis@google.com
simonyan@google.com
brianzhang@google.com
chillier@google.com
svnaras@google.com
fviola@google.com
tfgg@google.com
back@google.com
natsev@google.com
mustafasul@google.com
zisserman@google.com
86b6de59f17187f6c238853810e01596d37f63cd(IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 7, No. 3, 2016
Competitive Representation Based Classification
Using Facial Noise Detection
Chongqing Key Laboratory of Computational Intelligence,
College of Computer Science and Technology,
Chongqing University of Posts and Telecommunications,
Chongqing, China
('1779859', 'Tao Liu', 'tao liu')
('32611393', 'Ying Liu', 'ying liu')
('38837555', 'Cong Li', 'cong li')
('40032263', 'Chao Li', 'chao li')
86b105c3619a433b6f9632adcf9b253ff98aee87
1-4244-0367-7/06/$20.00 ©2006 IEEE
1013
ICME 2006
86f3552b822f6af56cb5079cc31616b4035ccc4eTowards Miss Universe Automatic Prediction: The Evening Gown Competition
University of Queensland, Brisbane, Australia
Data61, CSIRO, Australia
('1850202', 'Johanna Carvajal', 'johanna carvajal')
('2331880', 'Arnold Wiliem', 'arnold wiliem')
('1781182', 'Conrad Sanderson', 'conrad sanderson')
86a8b3d0f753cb49ac3250fa14d277983e30a4b7Exploiting Unlabeled Ages for Aging Pattern Analysis on A Large Database
West Virginia University, Morgantown, WV
('1720735', 'Chao Zhang', 'chao zhang')
('1822413', 'Guodong Guo', 'guodong guo')
cazhang@mix.wvu.edu, guodong.guo@mail.wvu.edu
860588fafcc80c823e66429fadd7e816721da42aUnsupervised Discovery of Object Landmarks as Structural Representations
University of Michigan, Ann Arbor
2Google Brain
('1692992', 'Yuting Zhang', 'yuting zhang')
('1857914', 'Yijie Guo', 'yijie guo')
('50442731', 'Yixin Jin', 'yixin jin')
('49513553', 'Yijun Luo', 'yijun luo')
('46915665', 'Zhiyuan He', 'zhiyuan he')
('1697141', 'Honglak Lee', 'honglak lee')
{yutingzh, guoyijie, jinyixin, lyjtour, zhiyuan, honglak}@umich.edu
honglak@google.com
86b51bd0c80eecd6acce9fc538f284b2ded5bcdd
8699268ee81a7472a0807c1d3b1db0d0ab05f40d
86374bb8d309ad4dbde65c21c6fda6586ae4147aDetect-and-Track: Efficient Pose Estimation in Videos
The Robotics Institute, Carnegie Mellon University
Dartmouth College
2Facebook
https://rohitgirdhar.github.io/DetectAndTrack
('3102850', 'Rohit Girdhar', 'rohit girdhar')
('2082991', 'Georgia Gkioxari', 'georgia gkioxari')
('1732879', 'Lorenzo Torresani', 'lorenzo torresani')
('2210374', 'Manohar Paluri', 'manohar paluri')
869583b700ecf33a9987447aee9444abfe23f343
72282287f25c5419dc6fd9e89ec9d86d660dc0b5A Rotation Invariant Latent Factor Model for
Moveme Discovery from Static Poses
California Institute of Technology, Pasadena, CA, USA
('3339867', 'Matteo Ruggero Ronchi', 'matteo ruggero ronchi')
('14834454', 'Joon Sik Kim', 'joon sik kim')
('1740159', 'Yisong Yue', 'yisong yue')
{mronchi, jkim5, yyue}@caltech.edu
72a87f509817b3369f2accd7024b2e4b30a1f588Fault diagnosis of a railway device using semi-supervised
independent factor analysis with mixing constraints
To cite this version:
using semi-supervised independent factor analysis with mixing constraints. Pattern Analysis and
Applications, Springer Verlag, 2012, 15 (3), pp.313-326.
HAL Id: hal-00750589
https://hal.archives-ouvertes.fr/hal-00750589
Submitted on 11 Nov 2012
('3202810', 'Etienne Côme', 'etienne côme')
('1707103', 'Latifa Oukhellou', 'latifa oukhellou')
('1710347', 'Thierry Denoeux', 'thierry denoeux')
('2688359', 'Patrice Aknin', 'patrice aknin')
72a00953f3f60a792de019a948174bf680cd6c9fStat Comput (2007) 17:57–70
DOI 10.1007/s11222-006-9004-9
Understanding the role of facial asymmetry in human face
identification
Received: May 2005 / Accepted: September 2006 / Published online: 30 January 2007
© Springer Science + Business Media, LLC 2007
('2046854', 'Sinjini Mitra', 'sinjini mitra')
726b8aba2095eef076922351e9d3a724bb71cb51
721b109970bf5f1862767a1bec3f9a79e815f79a
727ecf8c839c9b5f7b6c7afffe219e8b270e7e15LEVERAGING GEO-REFERENCED DIGITAL PHOTOGRAPHS
A DISSERTATION
SUBMITTED TO THE DEPARTMENT OF COMPUTER SCIENCE
AND THE COMMITTEE ON GRADUATE STUDIES
OF STANFORD UNIVERSITY
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
July 2005
('1687465', 'Mor Naaman', 'mor naaman')
72a5e181ee8f71b0b153369963ff9bfec1c6b5b0Expression recognition in videos using a weighted
component-based feature descriptor
1. Machine Vision Group, Department of Electrical and Information Engineering,
University of Oulu, Finland
Research Center for Learning Science, Southeast University, China
http://www.ee.oulu.fi/mvg
('18780812', 'Xiaohua Huang', 'xiaohua huang')
('1757287', 'Guoying Zhao', 'guoying zhao')
('40608983', 'Wenming Zheng', 'wenming zheng')
{huang.xiaohua,gyzhao,mkp}@ee.oulu.fi
wenming_zheng@seu.edu.cn
72ecaff8b57023f9fbf8b5b2588f3c7019010ca7Facial Keypoints Detection ('27744156', 'Shenghao Shi', 'shenghao shi')
72591a75469321074b072daff80477d8911c3af3Group Component Analysis for Multi-block Data:
Common and Individual Feature Extraction
('1764724', 'Guoxu Zhou', 'guoxu zhou')
('1747156', 'Andrzej Cichocki', 'andrzej cichocki')
('38741479', 'Yu Zhang', 'yu zhang')
7224d58a7e1f02b84994b60dc3b84d9fe6941ff5When Face Recognition Meets with Deep Learning: an Evaluation of
Convolutional Neural Networks for Face Recognition
Centre for Vision, Speech and Signal Processing, University of Surrey, UK
Electronic Engineering and Computer Science, Queen Mary University of London, UK
Center for Biometrics and Security Research & National Laboratory of Pattern Recognition, Chinese Academy of Sciences, China♠
('38819702', 'Guosheng Hu', 'guosheng hu')
('2653152', 'Yongxin Yang', 'yongxin yang')
('1716143', 'Dong Yi', 'dong yi')
('1748684', 'Josef Kittler', 'josef kittler')
('34679741', 'Stan Z. Li', 'stan z. li')
{g.hu,j.kittler,w.christmas}@surrey.ac.uk,{yongxin.yang,t.hospedales}@qmul.ac.uk, {szli,dyi}@cbsr.ia.ac.cn
729dbe38538fbf2664bc79847601f00593474b05
729a9d35bc291cc7117b924219bef89a864ce62cRecognizing Material Properties from Images ('40116153', 'Gabriel Schwartz', 'gabriel schwartz')
('1708819', 'Ko Nishino', 'ko nishino')
72e10a2a7a65db7ecdc7d9bd3b95a4160fab4114Face Alignment using Cascade Gaussian Process Regression Trees
Korea Advanced institute of Science and Technology
Face alignment is a task to locate fiducial facial landmark points, such as eye
corners, nose tip, mouth corners, and chin, in a face image. Shape regression
has become an accurate, robust, and fast framework for face alignment [2,
4, 5]. In shape regression, the face shape $s = (x_1, y_1, \cdots, x_p, y_p)^\top$, that is, a
concatenation of $p$ facial landmark coordinates $\{(x_i, y_i)\}_{i=1}^{p}$, is initialized
and iteratively updated through cascade regression trees (CRT) as shown
in Figure 1. Each tree estimates the shape increment from the current shape
estimate, and the final shape estimate is given by adding the cumulated sum of the
outputs of the trees to the initial estimate as follows:
$$\hat{s}^T = \hat{s}^0 + \sum_{t=1}^{T} f^t(x^t; \theta^t), \quad (1)$$
where $T$ is the number of stages, $t$ is an index that denotes the stage, $\hat{s}^t$ is a
shape estimate, $x^t$ is a feature vector extracted from an input image
$I$, and $f^t(\cdot;\cdot)$ is a tree parameterized by $\theta^t$. Starting from the rough
initial shape estimate $\hat{s}^0$, each stage iteratively updates the shape estimate
by $\hat{s}^t = \hat{s}^{t-1} + f^t(x^t; \theta^t)$.
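The additive cascade of Eq. (1) can be sketched as follows; the stage regressors and "shape-indexed features" here are toy stand-ins, not learned trees:

```python
import numpy as np

def run_cascade(s0, image_features, stages):
    """Iteratively refine a shape estimate: s_t = s_{t-1} + f_t(x_t)."""
    s = s0.copy()
    for f in stages:
        x = image_features(s)  # shape-indexed features at the current estimate
        s = s + f(x)           # additive update from this stage's regressor
    return s

# toy example: each "stage" moves the estimate halfway toward a fixed target
target = np.array([1.0, 2.0, 3.0, 4.0])  # ground-truth shape (two landmarks)
features = lambda s: target - s          # stand-in for shape-indexed features
stages = [lambda x: 0.5 * x] * 8         # eight identical toy regressors
s_hat = run_cascade(np.zeros(4), features, stages)
```

After eight stages the residual has shrunk by a factor of $0.5^8$, mirroring how each real stage reduces the regression residual left by the previous ones.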
The two key elements of CRT-based shape regression that impact the
prediction performance are gradient boosting [3] for learning the CRT and
the shape-indexed features [2] on which the trees are based. In gradient boosting,
each stage iteratively fits the training data in a greedy stage-wise manner by
reducing the regression residuals that are defined as the differences between
the ground truth shapes and shape estimates. The shape-indexed features
are extracted from the pixel coordinates referenced by the shape estimate.
The shape-indexed features are extremely cheap to compute and are robust
against geometric variations.
Instead of using gradient boosting, we propose cascade Gaussian pro-
cess regression trees (cGPRT) that can be incorporated as a learning method
for a CRT prediction framework. The cGPRT is constructed by combining
Gaussian process regression trees (GPRT) in a cascade stage-wise manner.
Given training samples $S = (s_1, \cdots, s_N)^\top$ and $X^t = (x_1, \cdots, x_N)^\top$, GPRT
models the relationship between inputs and outputs by a regression function
$f(x)$ drawn from a Gaussian process with independent additive noise $\varepsilon_i$,
$i = 1, \cdots, N$:
$$s_i = f(x_i) + \varepsilon_i, \qquad f(x) \sim \mathcal{GP}(0, k(x, x')), \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma_n^2). \quad (2\text{--}4)$$
A kernel $k(x, x')$ in GPRT is defined by a set of $M$ trees:
$$k(x, x') = \sigma_k^2 \sum_{m=1}^{M} \kappa_m(x, x'), \quad (5)$$
$$\kappa_m(x, x') = \begin{cases} 1 & \text{if } \tau_m(x) = \tau_m(x') \\ 0 & \text{otherwise}, \end{cases} \quad (6)$$
where $\sigma_k^2$ is the scaling parameter that represents the kernel power, and $\tau$ is
a split function that takes an input $x$ and computes the leaf index $b \in \{1, \cdots, B\}$.
Given an input $x_*$, the distribution over its predictive variable $f_*$ is given as
$$\bar{f}_* = \sum_{i=1}^{N} \alpha_i k(x_i, x_*), \quad (7)$$
where $\alpha = (\alpha_1, \cdots, \alpha_N)^\top$ is given by $K_s^{-1} S$. Here, $K_s$ is given by $K + \sigma_n^2 I_N$,
and $K$ is a covariance matrix of which $K(i, j)$ is computed from the $i$-th and
$j$-th row vectors of $X$. Computation of Equation (7) is in $O(N)$; however, this
can be made more efficient as follows:
$$\bar{f}_* = \sum_{m=1}^{M} \bar{\alpha}_{m, \tau_m(x_*)}, \quad (8)$$
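Equation (8) works because the tree kernel only counts leaf agreements, so the $O(N)$ sum collapses to one table lookup per tree: precompute $\bar{\alpha}_{m,b} = \sigma_k^2 \sum_{i:\, \tau_m(x_i)=b} \alpha_i$ for each tree $m$ and leaf $b$. A small sketch with hypothetical axis-aligned stumps standing in for learned trees:

```python
import numpy as np

def leaf_tables(X_train, alpha, trees, sigma2_k):
    """Precompute alpha_bar[m][b] = sigma_k^2 * sum of alpha_i over the
    training points that tree m routes to leaf b (Eq. (8) lookup tables)."""
    tables = []
    for tau in trees:
        leaves = tau(X_train)                       # leaf index per training point
        table = np.zeros(leaves.max() + 1)
        np.add.at(table, leaves, sigma2_k * alpha)  # accumulate alphas per leaf
        tables.append(table)
    return tables

def predict_fast(x, trees, tables):
    # O(M) form of the prediction: one leaf lookup per tree
    return sum(t[tau(x[None, :])[0]] for tau, t in zip(trees, tables))

def predict_naive(x, X_train, alpha, trees, sigma2_k):
    # O(N) form: sum_i alpha_i * k(x_i, x) with the leaf-agreement kernel
    k = sigma2_k * sum((tau(X_train) == tau(x[None, :])[0]).astype(float)
                       for tau in trees)
    return float(alpha @ k)

# toy "trees": axis-aligned stumps on the first three feature dimensions
rng = np.random.default_rng(1)
X_train = rng.normal(size=(30, 4))
alpha = rng.normal(size=30)
trees = [lambda Z, j=j: (Z[:, j] > 0).astype(int) for j in range(3)]
tables = leaf_tables(X_train, alpha, trees, sigma2_k=0.7)
x_star = rng.normal(size=4)
```

Both forms compute the same value; the lookup form is what makes cGPRT inference fast at test time.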
('2350325', 'Donghoon Lee', 'donghoon lee')
('2857402', 'Hyunsin Park', 'hyunsin park')
72160aae43cd9b2c3aae5574acc0d00ea0993b9eBoosting Facial Expression Recognition in a Noisy Environment
Using LDSP-Local Distinctive Star Pattern
1 Department of Computer Science and Engineering
Stamford University Bangladesh, Dhaka-1209, Bangladesh
2 Department of Computer Science and Engineering
Stamford University Bangladesh, Dhaka-1209, Bangladesh
3 Department of Computer Science and Engineering
Stamford University Bangladesh, Dhaka-1209, Bangladesh
('7484236', 'Mohammad Shahidul Islam', 'mohammad shahidul islam')
('7497618', 'Tarin Kazi', 'tarin kazi')
72cbbdee4f6eeee8b7dd22cea6092c532271009fAdversarial Occlusion-aware Face Detection
1National Laboratory of Pattern Recognition, CASIA
2Center for Research on Intelligent Perception and Computing, CASIA
University of Chinese Academy of Sciences, Beijing 100190, China
('3065234', 'Yujia Chen', 'yujia chen')
('3051419', 'Lingxiao Song', 'lingxiao song')
('1705643', 'Ran He', 'ran he')
721d9c387ed382988fce6fa864446fed5fb23173
72c0c8deb9ea6f59fde4f5043bff67366b86bd66Age progression in Human Faces : A Survey ('34713849', 'Narayanan Ramanathan', 'narayanan ramanathan')
('9215658', 'Rama Chellappa', 'rama chellappa')
721e5ba3383b05a78ef1dfe85bf38efa7e2d611dBULAT, TZIMIROPOULOS: CONVOLUTIONAL AGGREGATION OF LOCAL EVIDENCE
Convolutional aggregation of local evidence
for large pose face alignment
Computer Vision Laboratory
University of Nottingham
Nottingham, UK
('3458121', 'Adrian Bulat', 'adrian bulat')
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
adrian.bulat@nottingham.ac.uk
yorgos.tzimiropoulos@nottingham.ac.uk
72f4aaf7e2e3f215cd8762ce283988220f182a5bTurk J Elec Eng & Comp Sci, Vol.18, No.4, 2010, © TÜBİTAK
doi:10.3906/elk-0906-48
Active illumination and appearance model for face
alignment
Institute of Informatics, Istanbul Technical University, Istanbul, 34469, TURKEY
Istanbul Technical University, Istanbul, 34469, TURKEY
DTU Informatics, Technical University of Denmark, DK-2800 Kgs. Lyngby, DENMARK
('2061450', 'Fatih KAHRAMAN', 'fatih kahraman')
('1762901', 'Sune DARKNER', 'sune darkner')
('2134834', 'Rasmus LARSEN', 'rasmus larsen')
e-mail: kahraman@be.itu.edu.tr
e-mail: gokmen@itu.edu.tr
e-mail: {sda, rl}@imm.dtu.dk
72a55554b816b66a865a1ec1b4a5b17b5d3ba784Real-Time Face Identification
via CNN
and Boosted Hashing Forest
State Research Institute of Aviation Systems (GosNIIAS), Moscow, Russia
IEEE Computer Society Workshop on Biometrics
In conjunction with CVPR 2016, June 26, 2016
('2966131', 'Yury Vizilter', 'yury vizilter')
('5669812', 'Vladimir Gorbatsevich', 'vladimir gorbatsevich')
('34296728', 'Andrey Vorotnikov', 'andrey vorotnikov')
('7729536', 'Nikita Kostromov', 'nikita kostromov')
viz@gosniias.ru, gvs@gosniias.ru, vorotnikov@gosniias.ru, nikita-kostromov@yandex.ru
72450d7e5cbe79b05839c30a4f0284af5aa80053Natural Facial Expression Recognition Using Dynamic
and Static Schemes
1 Computer Vision Center, 08193 Bellaterra, Barcelona, Spain
2 IKERBASQUE, Basque Foundation for Science
University of the Basque Country, San Sebastian, Spain
('3262395', 'Bogdan Raducanu', 'bogdan raducanu')
('1803584', 'Fadi Dornaika', 'fadi dornaika')
bogdan@cvc.uab.es
fadi dornaika@ehu.es
72bf9c5787d7ff56a1697a3389f11d14654b4fcfRobust Face Recognition Using
Symmetric Shape-from-Shading
W. Zhao
Rama Chellappa
Center for Automation Research and
Electrical and Computer Engineering Department
University of Maryland
College Park, MD
The support of the Office of Naval Research under Grant N- -- is gratefully acknowledged. DRAFT
Email: {wyzhao,rama}@cfar.umd.edu
725c3605c2d26d113637097358cd4c08c19ff9e1Deep Reasoning with Knowledge Graph for Social Relationship Understanding
School of Data and Computer Science, Sun Yat-sen University, China
2 SenseTime Research, China
('29988001', 'Zhouxia Wang', 'zhouxia wang')
('1765674', 'Tianshui Chen', 'tianshui chen')
('12254824', 'Weihao Yu', 'weihao yu')
('47413456', 'Hui Cheng', 'hui cheng')
('1737218', 'Liang Lin', 'liang lin')
zhouzi1212,tianshuichen,jimmy.sj.ren,weihaoyu6@gmail.com,
chengh9@mail.sysu.edu.cn, linliang@ieee.org
445461a34adc4bcdccac2e3c374f5921c93750f8Emotional Expression Classification using Time-Series Kernels∗ ('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
('1733113', 'Takeo Kanade', 'takeo kanade')
1Eötvös Loránd University, Budapest, Hungary, {andras.lorincz,szzoli}@elte.hu
2Carnegie Mellon University, Pittsburgh, PA, laszlo.jeni@ieee.org,tk@cs.cmu.edu
3University of Pittsburgh, Pittsburgh, PA, jeffcohn@cs.cmu.edu
4414a328466db1e8ab9651bf4e0f9f1fe1a163e41164
© EURASIP, 2010 ISSN 2076-1465
18th European Signal Processing Conference (EUSIPCO-2010)
INTRODUCTION
442f09ddb5bb7ba4e824c0795e37cad754967208
443acd268126c777bc7194e185bec0984c3d1ae7Retrieving Relative Soft Biometrics
for Semantic Identification
School of Electronics and Computer Science,
University of Southampton, United Kingdom
('3408521', 'Daniel Martinho-Corbishley', 'daniel martinho-corbishley')
('1727698', 'Mark S. Nixon', 'mark s. nixon')
('3000521', 'John N. Carter', 'john n. carter')
{dmc,msn,jnc}@ecs.soton.ac.uk
44f23600671473c3ddb65a308ca97657bc92e527Convolutional Two-Stream Network Fusion for Video Action Recognition
Graz University of Technology
Graz University of Technology
University of Oxford
('2322150', 'Christoph Feichtenhofer', 'christoph feichtenhofer')
('1718587', 'Axel Pinz', 'axel pinz')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
feichtenhofer@tugraz.at
axel.pinz@tugraz.at
az@robots.ox.ac.uk
4439746eeb7c7328beba3f3ef47dc67fbb52bcb3An Efficient Face Detection Method Using Adaboost and Facial Parts
Computer, IT and Electronic department
Azad University of Qazvin
Tehran, Iran
('2514753', 'Yasaman Heydarzadeh', 'yasaman heydarzadeh')
('1681854', 'Abolfazl Toroghi Haghighat', 'abolfazl toroghi haghighat')
heydarzadeh@ qiau.ac.ir , haghighat@qiau.ac.ir
446a99fdedd5bb32d4970842b3ce0fc4f5e5fa03A Pose-Adaptive Constrained Local Model For
Accurate Head Pose Tracking
Eikeo
11 rue Leon Jouhaux,
F-75010, Paris, France
Sorbonne Universit´es
UPMC Univ Paris 06
CNRS UMR 7222, ISIR
F-75005, Paris, France
Eikeo
11 rue Leon Jouhaux,
F-75010, Paris, France
('2416620', 'Lucas Zamuner', 'lucas zamuner')
('2521061', 'Kevin Bailly', 'kevin bailly')
('2254216', 'Erwan Bigorgne', 'erwan bigorgne')
lucas.zamuner@eikeo.com
kevin.bailly@upmc.fr
erwan.bigorgne@eikeo.com
4467a1ae8ddf0bc0e970c18a0cdd67eb83c8fd6fLearning features from Improved Dense Trajectories using deep convolutional
networks for Human Activity Recognition
University Drive
Burnaby, BC
Canada V5A 1S6
Sportlogiq Inc.
780 Avenue Brewster,
Montreal QC,
Canada H4C 1A8
('2716937', 'Srikanth Muralidharan', 'srikanth muralidharan')
('2190580', 'Simon Fraser', 'simon fraser')
('15695326', 'Mehrsan Javan', 'mehrsan javan')
('10771328', 'Greg Mori', 'greg mori')
smuralid@sfu.ca
mehrsan@sportlogiq.com
mori@cs.sfu.ca
44b1399e8569a29eed0d22d88767b1891dbcf987This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication.
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Learning Multi-modal Latent Attributes
('1697755', 'Timothy M. Hospedales', 'timothy m. hospedales')
('1700927', 'Tao Xiang', 'tao xiang')
('2073354', 'Shaogang Gong', 'shaogang gong')
44f48a4b1ef94a9104d063e53bf88a69ff0f55f3Automatically Building Face Datasets of New Domains
from Weakly Labeled Data with Pretrained Models
Sun Yat-sen University
('2442939', 'Shengyong Ding', 'shengyong ding')
('4080607', 'Junyu Wu', 'junyu wu')
('1723992', 'Wei Xu', 'wei xu')
('38255852', 'Hongyang Chao', 'hongyang chao')
446dc1413e1cfaee0030dc74a3cee49a47386355Recent Advances in Zero-shot Recognition ('35782003', 'Yanwei Fu', 'yanwei fu')
('1700927', 'Tao Xiang', 'tao xiang')
('1717861', 'Yu-Gang Jiang', 'yu-gang jiang')
('1713721', 'Xiangyang Xue', 'xiangyang xue')
('14517812', 'Leonid Sigal', 'leonid sigal')
('2073354', 'Shaogang Gong', 'shaogang gong')
44a3ec27f92c344a15deb8e5dc3a5b3797505c06A Taxonomy of Part and Attribute Discovery
Techniques
('35208858', 'Subhransu Maji', 'subhransu maji')
44aeda8493ad0d44ca1304756cc0126a2720f07bFace Alive Icons ('1685323', 'Xin Li', 'xin li')
('2304980', 'Chieh-Chih Chang', 'chieh-chih chang')
('1679040', 'Shi-Kuo Chang', 'shi-kuo chang')
1University of Pittsburgh, USA,{flying, chang}@cs.pitt.edu
2Industrial Technology Research Institute, Taiwan, chieh@itri.org.tw
449b1b91029e84dab14b80852e35387a9275870e
44078d0daed8b13114cffb15b368acc467f96351
44d23df380af207f5ac5b41459c722c87283e1ebHuman Attribute Recognition by Deep
Hierarchical Contexts
The Chinese University of Hong Kong
('47002704', 'Yining Li', 'yining li')
('2000034', 'Chen Huang', 'chen huang')
('1717179', 'Chen Change Loy', 'chen change loy')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
{ly015,chuang,ccloy,xtang}@ie.cuhk.edu.hk
44c9b5c55ca27a4313daf3760a3f24a440ce17adRevisiting hand-crafted feature for action recognition:
a set of improved dense trajectories
Hiroshima University, Japan
ENSICAEN, France
Hiroshima University, Japan
('2223849', 'Kenji Matsui', 'kenji matsui')
('1744862', 'Toru Tamaki', 'toru tamaki')
('30171131', 'Gwladys Auffret', 'gwladys auffret')
('1688940', 'Bisser Raytchev', 'bisser raytchev')
('1686272', 'Kazufumi Kaneda', 'kazufumi kaneda')
44dd150b9020b2253107b4a4af3644f0a51718a3An Analysis of the Sensitivity of Active Shape
Models to Initialization when Applied to Automatic
Facial Landmarking
('2363348', 'Keshav Seshadri', 'keshav seshadri')
('1794486', 'Marios Savvides', 'marios savvides')
447d8893a4bdc29fa1214e53499ffe67b28a6db5('35734434', 'Maxime BERTHE', 'maxime berthe')
44f65e3304bdde4be04823fd7ca770c1c05c2cefSIViP
DOI 10.1007/s11760-009-0125-4
ORIGINAL PAPER
On the use of phase of the Fourier transform for face recognition
under variations in illumination
Received: 17 November 2008 / Revised: 20 February 2009 / Accepted: 7 July 2009
© Springer-Verlag London Limited 2009
('2627097', 'Anil Kumar Sao', 'anil kumar sao')
44fbbaea6271e47ace47c27701ed05e15da8f7cfKret et al.: Effect of Pupil Size on Trust
Research Article
Pupil Mimicry Correlates With Trust in
In-Group Partners With Dilating Pupils
1 –10
© The Author(s) 2015
Reprints and permissions:
sagepub.com/journalsPermissions.nav
DOI: 10.1177/0956797615588306
pss.sagepub.com
M. E. Kret1,2, A. H. Fischer1,2, and C. K. W. De Dreu1,2,3
University of Amsterdam; 2Amsterdam Brain and Cognition Center, University of
Amsterdam; and 3Center for Experimental Economics and Political Decision Making, University of Amsterdam
44eb4d128b60485377e74ffb5facc0bf4ddeb022
448ed201f6fceaa6533d88b0b29da3f36235e131
441bf5f7fe7d1a3939d8b200eca9b4bb619449a9Head Pose Estimation in the Wild using Approximate View Manifolds
University of Florida
Gainesville, FL, USA
University of Florida
Gainesville, FL, USA
('30900274', 'Kalaivani Sundararajan', 'kalaivani sundararajan')
('2171076', 'Damon L. Woodard', 'damon l. woodard')
kalaivani.s@ufl.edu
dwoodard@ufl.edu
447a5e1caf847952d2bb526ab2fb75898466d1bcUnder review as a conference paper at ICLR 2018
LEARNING NON-LINEAR TRANSFORM WITH DISCRIMINATIVE
AND MINIMUM INFORMATION LOSS PRIORS
Anonymous authors
Paper under double-blind review
449808b7aa9ee6b13ad1a21d9f058efaa400639aRecovering 3D Facial Shape via Coupled 2D/3D Space Learning
1Key Lab of Intelligent Information Processing of CAS,
Institute of Computing Technology, CAS, Beijing 100190, China
Graduate University of CAS, 100190, Beijing, China
System Research Center, NOKIA Research Center, Beijing, 100176, China
Institute of Digital Media, Peking University, Beijing, 100871, China
('3079475', 'Annan Li', 'annan li')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
('1695600', 'Xiujuan Chai', 'xiujuan chai')
('1698902', 'Wen Gao', 'wen gao')
{anli,sgshan,xlchen,wgao}@jdl.ac.cn
ext-xiujuan.chai@nokia.com
2a7bca56e2539c8cf1ae4e9da521879b7951872dExploiting Unrelated Tasks in Multi-Task Learning
Anonymous Author 1
Unknown Institution 1
Anonymous Author 2
Unknown Institution 2
Anonymous Author 3
Unknown Institution 3
2a65d7d5336b377b7f5a98855767dd48fa516c0fFast Supervised LDA for Discovering Micro-Events in
Large-Scale Video Datasets
Multimedia Understanding Group
Aristotle University of Thessaloniki, Greece
('3493855', 'Angelos Katharopoulos', 'angelos katharopoulos')
('3493472', 'Despoina Paschalidou', 'despoina paschalidou')
('1789830', 'Christos Diou', 'christos diou')
('1708199', 'Anastasios Delopoulos', 'anastasios delopoulos')
{katharas, pdespoin}@auth.gr; diou@mug.ee.auth.gr; adelo@eng.auth.gr
2af2b74c3462ccff3a6881ff7cf4f321b3242fa9Chen ZN, Ngo CW, Zhang W et al. Name-face association in Web videos: A large-scale dataset, baselines, and open issues.
1468-z
Name-Face Association in Web Videos: A Large-Scale Dataset,
Baselines, and Open Issues
Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
City University of Hong Kong, Hong Kong, China
Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
School of Computer Science, Fudan University, Shanghai 200433, China
Received February 24, 2014; revised July 3, 2014.
('1751681', 'Chong-Wah Ngo', 'chong-wah ngo')
('40538946', 'Wei Zhang', 'wei zhang')
('1778024', 'Juan Cao', 'juan cao')
('1717861', 'Yu-Gang Jiang', 'yu-gang jiang')
E-mail: zhineng.chen@ia.ac.cn; cscwngo@cityu.edu.hk; wzhang34-c@my.cityu.edu.hk; caojuan@ict.ac.cn; ygj@fudan.edu.cn
2aaa6969c03f435b3ea8431574a91a0843bd320b
2af620e17d0ed67d9ccbca624250989ce372e255Meta-Class Features for Large-Scale Object Categorization on a Budget
Dartmouth College
Hanover, NH, U.S.A.
('34338883', 'Alessandro Bergamo', 'alessandro bergamo')
('1732879', 'Lorenzo Torresani', 'lorenzo torresani')
{aleb, lorenzo}@cs.dartmouth.edu
2a35d20b2c0a045ea84723f328321c18be6f555cBoost Picking: A Universal Method
on Converting Supervised Classification to Semi-supervised Classification
Beijing Institute of Technology, Beijing 100081 CHINA
North China University of Technology, Beijing 100144 CHINA
Beijing Institute of Technology, Beijing 100081 CHINA
Beijing Institute of Technology, Beijing 100081 CHINA
('1742846', 'Fuqiang Liu', 'fuqiang liu')
('33179404', 'Fukun Bi', 'fukun bi')
('3148439', 'Yiding Yang', 'yiding yang')
('36522003', 'Liang Chen', 'liang chen')
2ad7cef781f98fd66101fa4a78e012369d064830
2ad29b2921aba7738c51d9025b342a0ec770c6ea
2a9b398d358cf04dc608a298d36d305659e8f607Facial Action Unit Recognition with Sparse Representation
University of Denver, Denver, CO
University of Pittsburgh, Pittsburgh, PA
('3093835', 'Mohammad H. Mahoor', 'mohammad h. mahoor')
('5510802', 'Mu Zhou', 'mu zhou')
('1837267', 'Kevin L. Veon', 'kevin l. veon')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
Emails: mmahoor@du.edu, mu.zhou09fall@gmail.com, kevin.veon@du.edu, seyedmohammad.mavadati@du.edu, and jeffcohn@pitt.edu
2a0efb1c17fbe78470acf01e4601a75735a805ccIllumination-Insensitive Face Recognition Using
Symmetric Shape-from-Shading
Wen Yi Zhao
Rama Chellappa
Center for Automation Research
University of Maryland, College Park, MD
Email: {wyzhao,rama}@cfar.umd.edu
2a6bba2e81d5fb3c0fd0e6b757cf50ba7bf8e924
2ac21d663c25d11cda48381fb204a37a47d2a574Interpreting Hand-Over-Face Gestures
University of Cambridge
('2022940', 'Marwa Mahmoud', 'marwa mahmoud')
('39626495', 'Peter Robinson', 'peter robinson')
2a4153655ad1169d482e22c468d67f3bc2c49f12Face Alignment Across Large Poses: A 3D Solution
1 Center for Biometrics and Security Research & National Laboratory of Pattern Recognition,
Institute of Automation, Chinese Academy of Sciences
Michigan State University
('8362374', 'Xiangyu Zhu', 'xiangyu zhu')
('1718623', 'Zhen Lei', 'zhen lei')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
('1704812', 'Hailin Shi', 'hailin shi')
('34679741', 'Stan Z. Li', 'stan z. li')
{xiangyu.zhu,zlei,hailin.shi,szli}@nlpr.ia.ac.cn
liuxm@msu.edu
2aa2b312da1554a7f3e48f71f2fce7ade6d5bf40Estimating Sheep Pain Level Using Facial Action Unit Detection
Computer Laboratory, University of Cambridge, Cambridge, UK
('9871228', 'Yiting Lu', 'yiting lu')
('2022940', 'Marwa Mahmoud', 'marwa mahmoud')
('39626495', 'Peter Robinson', 'peter robinson')
2aec012bb6dcaacd9d7a1e45bc5204fac7b63b3cRobust Registration and Geometry Estimation from Unstructured
Facial Scans
('19214361', 'Maxim Bazik', 'maxim bazik')
2ae139b247057c02cda352f6661f46f7feb38e45Combining Modality Specific Deep Neural Networks for
Emotion Recognition in Video
1École Polytechnique de Montréal, Université de Montréal, Montréal, Canada
2Laboratoire d’Informatique des Systèmes Adaptatifs, Université de Montréal, Montréal, Canada
('3127597', 'Samira Ebrahimi Kahou', 'samira ebrahimi kahou')
('2900675', 'Xavier Bouthillier', 'xavier bouthillier')
('2558801', 'Pierre Froumenty', 'pierre froumenty')
('1710604', 'Roland Memisevic', 'roland memisevic')
('1724875', 'Pascal Vincent', 'pascal vincent')
('1751762', 'Yoshua Bengio', 'yoshua bengio')
{samira.ebrahimi-kahou, christopher.pal, pierre.froumenty}@polymtl.ca
{bouthilx, gulcehrc, memisevr, vincentp, courvila, bengioy}@iro.umontreal.ca
2a3e19d7c54cba3805115497c69069dd5a91da65Looking at Hands in Autonomous Vehicles:
A ConvNet Approach using Part Affinity Fields
LISA: Laboratory for Intelligent & Safe Automobiles
University of California San Diego
('2812409', 'Kevan Yuen', 'kevan yuen')
('1713989', 'Mohan M. Trivedi', 'mohan m. trivedi')
kcyuen@eng.ucsd.edu, mtrivedi@eng.ucsd.edu
2af19b5ff2ca428fa42ef4b85ddbb576b5d9a5ccMulti-Region Probabilistic Histograms
for Robust and Scalable Identity Inference
NICTA, PO Box 6020, St Lucia, QLD 4067, Australia
University of Queensland, School of ITEE, QLD 4072, Australia
('1781182', 'Conrad Sanderson', 'conrad sanderson')
('2270092', 'Brian C. Lovell', 'brian c. lovell')
2a14b6d9f688714dc60876816c4b7cf763c029a9Combining Multiple Sources of Knowledge in Deep CNNs for Action Recognition
University of North Carolina at Chapel Hill
('2155311', 'Eunbyung Park', 'eunbyung park')
('1682965', 'Xufeng Han', 'xufeng han')
('1685538', 'Tamara L. Berg', 'tamara l. berg')
('39668247', 'Alexander C. Berg', 'alexander c. berg')
{eunbyung,xufeng,tlberg,aberg}@cs.unc.edu
2a88541448be2eb1b953ac2c0c54da240b47dd8aDiscrete Graph Hashing
IBM T. J. Watson Research Center
Columbia University
Google Research
('39059457', 'Wei Liu', 'wei liu')
('2794322', 'Sanjiv Kumar', 'sanjiv kumar')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
weiliu@us.ibm.com
cm3052@columbia.edu
sfchang@ee.columbia.edu
sanjivk@google.com
2a5903bdb3fdfb4d51f70b77f16852df3b8e5f83
The Effect of Computer-Generated Descriptions
on Photo-Sharing Experiences of People With
Visual Impairments
Like sighted people, visually impaired people want to share photographs on social networking services, but
find it difficult to identify and select photos from their albums. We aimed to address this problem by
incorporating state-of-the-art computer-generated descriptions into Facebook’s photo-sharing feature. We
interviewed 12 visually impaired participants to understand their photo-sharing experiences and designed a
photo description feature for the Facebook mobile application. We evaluated this feature with six
participants in a seven-day diary study. We found that participants used the descriptions to recall and
organize their photos, but they hesitated to upload photos without a sighted person’s input. In addition to
basic information about photo content, participants wanted to know more details about salient objects and
people, and whether the photos reflected their personal aesthetic. We discuss these findings from the lens of
self-disclosure and self-presentation theories and propose new computer vision research directions that will
better support visual content sharing by visually impaired people.
CCS Concepts: • Information interfaces and presentations → Multimedia and information systems; •
Social and professional topics → People with disabilities
KEYWORDS
Visual impairments; computer-generated descriptions; SNSs; photo sharing; self-disclosure; self-presentation
ACM Reference format:
The Effect of Computer-Generated Descriptions On Photo-Sharing Experiences of People With Visual
Impairments. Proc. ACM Hum.-Comput. Interact. 1, CSCW. 121 (November 2017), 22 pages.
DOI: 10.1145/3134756
1 INTRODUCTION
Sharing memories and experiences via photos is a common way to engage with others on social networking
services (SNSs) [39,46,51]. For instance, Facebook users upload more than 350 million photos a day [24],
and on Twitter, which initially supported only text in tweets, more than 28.4% of tweets now contain
images [39]. Visually impaired people (both blind and low vision) have a strong presence on SNSs and are
interested in sharing photos [50]. They take photos for the same reasons that sighted people do: sharing
daily moments with their sighted friends and family [30,32]. A prior study showed that visually impaired
people shared a relatively large number of photos on Facebook, only slightly fewer than their sighted
counterparts [50].

PACM on Human-Computer Interaction, Vol. 1, No. 2, Article 121. Publication date: November 2017
('2582568', 'Yuhang Zhao', 'yuhang zhao')
('1968133', 'Shaomei Wu', 'shaomei wu')
('39685591', 'Lindsay Reynolds', 'lindsay reynolds')
('3283573', 'Shiri Azenkot', 'shiri azenkot')
2a02355c1155f2d2e0cf7a8e197e0d0075437b19
2a171f8d14b6b8735001a11c217af9587d095848Learning Social Relation Traits from Face Images
The Chinese University of Hong Kong
('3152448', 'Zhanpeng Zhang', 'zhanpeng zhang')
('1693209', 'Ping Luo', 'ping luo')
('1717179', 'Chen Change Loy', 'chen change loy')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
zz013@ie.cuhk.edu.hk, pluo@ie.cuhk.edu.hk, ccloy@ie.cuhk.edu.hk, xtang@ie.cuhk.edu.hk
2aea27352406a2066ddae5fad6f3f13afdc90be9
2a0623ae989f2236f5e1fe3db25ab708f5d029553D Face Modelling for 2D+3D Face Recognition
J.R. Tena Rodríguez
Submitted for the Degree of
Doctor of Philosophy
from the
University of Surrey
Centre for Vision, Speech and Signal Processing
School of Electronics and Physical Sciences
University of Surrey
Guildford, Surrey GU2 7XH, U.K.
November 2007
© J.R. Tena Rodríguez 2007
2ad0ee93d029e790ebb50574f403a09854b65b7eAcquiring Linear Subspaces for Face
Recognition under Variable Lighting
David Kriegman, Senior Member, IEEE
('2457452', 'Kuang-chih Lee', 'kuang-chih lee')
('1788818', 'Jeffrey Ho', 'jeffrey ho')
2afdda6fb85732d830cea242c1ff84497cd5f3cbFace Image Retrieval by Using Haar Features
Institute of Information Science, Academia Sinica, Taipei, Taiwan
Graduate Institute of Networking and Multimedia, National Taiwan University, Taipei, Taiwan
Tamkang University, Taipei, Taiwan
('2609751', 'Bau-Cheng Shen', 'bau-cheng shen')
('1720473', 'Chu-Song Chen', 'chu-song chen')
('1679560', 'Hui-Huang Hsu', 'hui-huang hsu')
{bcshen, song}@iis.sinica.edu.tw, h_hsu@mail.tku.edu.tw
2ab034e1f54c37bfc8ae93f7320160748310dc73Siamese Capsule Networks
James O’ Neill
Department of Computer Science
University of Liverpool
Liverpool, L69 3BX
james.o-neill@liverpool.ac.uk
2ff9618ea521df3c916abc88e7c85220d9f0ff06Facial Tic Detection Using Computer Vision
Christopher D. Leveille
March 20, 2014
('40579411', 'Aaron Cass', 'aaron cass')
2fda461869f84a9298a0e93ef280f79b9fb76f94OpenFace: an open source facial behavior analysis toolkit
Tadas Baltrušaitis
('39626495', 'Peter Robinson', 'peter robinson')
('1767184', 'Louis-Philippe Morency', 'louis-philippe morency')
Tadas.Baltrusaitis@cl.cam.ac.uk
Peter.Robinson@cl.cam.ac.uk
morency@cs.cmu.edu
2ff9ffedfc59422a8c7dac418a02d1415eec92f1Face Verification Using Boosted Cross-Image Features
University of Central Florida
University of California, Berkeley
Orlando, FL
Berkeley, CA
University of Central Florida
Orlando, FL
('1720307', 'Dong Zhang', 'dong zhang')
('2405613', 'Omar Oreifej', 'omar oreifej')
('1745480', 'Mubarak Shah', 'mubarak shah')
dzhang@cs.ucf.edu
oreifej@eecs.berkeley.edu
shah@crcv.ucf.edu
2fdce3228d384456ea9faff108b9c6d0cf39e7c7
2ffcd35d9b8867a42be23978079f5f24be8d3e35
ISSN XXXX XXXX © 2018 IJESC
Research Article Volume 8 Issue No.6
Satellite based Image Processing using Data mining
E.Malleshwari1, S.Nirmal Kumar2, J.Dhinesh3
Professor1, Assistant Professor2, PG Scholar3
Department of Information Technology1, 2, Master of Computer Applications3
Vel Tech High Tech Dr Rangarajan Dr Sakunthala Engineering College, Avadi, Chennai, India
2f7e9b45255c9029d2ae97bbb004d6072e70fa79
cvpaper.challenge in 2015
A review of CVPR2015 and DeepSurvey
Nakamura
('1730200', 'Hirokatsu Kataoka', 'hirokatsu kataoka')
('29998543', 'Hironori Hoshino', 'hironori hoshino')
('3407486', 'Takaaki Imanari', 'takaaki imanari')
2f53b97f0de2194d588bc7fb920b89cd7bcf7663Facial Expression Recognition Using Sparse
Gaussian Conditional Random Field
School of Electrical and Computer Engineering
School of Electrical and Computer Engineering
Shiraz University
Shiraz, Iran
Shiraz University
Shiraz, Iran
('37514045', 'Mohammadamin Abbasnejad', 'mohammadamin abbasnejad')
('2229932', 'Mohammad Ali Masnadi-Shirazi', 'mohammad ali masnadi-shirazi')
Email: amin.abbasnejad@gmail.com
Email: mmasnadi@shirazu.ac.ir
2f16baddac6af536451b3216b02d3480fc361ef4Web-Scale Training for Face
Identification
1 Facebook AI Research
Tel Aviv University
('2909406', 'Ming Yang', 'ming yang')
('2188620', 'Yaniv Taigman', 'yaniv taigman')
2f489bd9bfb61a7d7165a2f05c03377a00072477JIA, YANG: STRUCTURED SEMI-SUPERVISED FOREST
Structured Semi-supervised Forest for
Facial Landmarks Localization with Face
Mask Reasoning
1 Department of Computer Science
The Univ. of Hong Kong, HK
2 School of EECS
Queen Mary Univ. of London, UK
Angran Lin1
('34760532', 'Xuhui Jia', 'xuhui jia')
('2966679', 'Heng Yang', 'heng yang')
('40392393', 'Kwok-Ping Chan', 'kwok-ping chan')
('1744405', 'Ioannis Patras', 'ioannis patras')
xhjia@cs.hku.hk
heng.yang@qmul.ac.uk
arlin@cs.hku.hk
kpchan@cs.hku.hk
i.patras@qmul.ac.uk
2f2aa67c5d6dbfaf218c104184a8c807e8b29286Video Analytics for Surveillance Camera Networks
(Invited Paper)
Interactive and Digital Media Institute
National University of Singapore, Singapore
('1986874', 'Lekha Chaisorn', 'lekha chaisorn')
('3026404', 'Yongkang Wong', 'yongkang wong')
2f16459e2e24dc91b3b4cac7c6294387d4a0eacf
2f59f28a1ca3130d413e8e8b59fb30d50ac020e2Children Gender Recognition Under Unconstrained
Conditions Based on Contextual Information
Joint Research Centre, European Commission, Ispra, Italy
('3309307', 'Riccardo Satta', 'riccardo satta')
('1907426', 'Javier Galbally', 'javier galbally')
('2730666', 'Laurent Beslay', 'laurent beslay')
Email: {riccardo.satta,javier.galbally,laurent.beslay}@jrc.ec.europa.eu
2f78e471d2ec66057b7b718fab8bfd8e5183d8f4SOFTWARE ENGINEERING
VOLUME: 14 | NUMBER: 5 | 2016 | DECEMBER
An Investigation of a New Social Networks
Contact Suggestion Based on Face Recognition
Algorithm
1Modeling Evolutionary Algorithms Simulation and Artificial Intelligence, Faculty of Electrical & Electronics
Engineering, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Ho Chi Minh City, Vietnam
2Department of Computer Science, Faculty of Electrical Engineering and Computer Science,
VSB Technical University of Ostrava, 17. listopadu 15, 708 33 Ostrava, Czech Republic
DOI: 10.15598/aeee.v14i5.1116
('1681072', 'Ivan ZELINKA', 'ivan zelinka')
('1856530', 'Petr SALOUN', 'petr saloun')
('2053234', 'Jakub STONAWSKI', 'jakub stonawski')
('2356663', 'Adam ONDREJKA', 'adam ondrejka')
ivan.zelinka@tdt.edu.vn, petr.saloun@vsb.cz, stonawski.jakub@gmail.com, adam.ondrejka@gmail.com
2fc43c2c3f7ad1ca7a1ce32c5a9a98432725fb9aHierarchical Video Generation from Orthogonal
Information: Optical Flow and Texture
The University of Tokyo
The University of Tokyo
The University of Tokyo
The University of Tokyo / RIKEN
('8197937', 'Katsunori Ohnishi', 'katsunori ohnishi')
('48333400', 'Shohei Yamamoto', 'shohei yamamoto')
('3250559', 'Yoshitaka Ushiku', 'yoshitaka ushiku')
('1790553', 'Tatsuya Harada', 'tatsuya harada')
ohnishi@mi.t.u-tokyo.ac.jp
yamamoto@mi.t.u-tokyo.ac.jp
ushiku@mi.t.u-tokyo.ac.jp
harada@mi.t.u-tokyo.ac.jp
2f88d3189723669f957d83ad542ac5c2341c37a5Attribute-correlated local regions for deep relative attributes learning
Fen Zhang, Xiangwei Kong, Ze Jia, "Attribute-correlated local regions for deep relative attributes learning," J. Electron. Imaging 27(4), 043021 (2018), doi: 10.1117/1.JEI.27.4.043021.
2fda164863a06a92d3a910b96eef927269aeb730Names and Faces in the News
Computer Science Division
U.C. Berkeley
Berkeley, CA 94720
('1685538', 'Tamara L. Berg', 'tamara l. berg')
('39668247', 'Alexander C. Berg', 'alexander c. berg')
('34497462', 'Jaety Edwards', 'jaety edwards')
('1965929', 'Michael Maire', 'michael maire')
('6714943', 'Ryan White', 'ryan white')
daf@cs.berkeley.edu
2fa057a20a2b4a4f344988fee0a49fce85b0dc33
2f8ef26bfecaaa102a55b752860dbb92f1a11dc6A Graph Based Approach to Speaker Retrieval in Talk
Show Videos with Transcript-Based Supervision
('1859487', 'Yina Han', 'yina han')
('1774346', 'Guizhong Liu', 'guizhong liu')
('1692389', 'Hichem Sahbi', 'hichem sahbi')
('1693574', 'Gérard Chollet', 'gérard chollet')
2f17f6c460e02bd105dcbf14c9b73f34c5fb59bdArticle
Robust Face Recognition Using the Deep C2D-CNN
Model Based on Decision-Level Fusion
School of Electronic and Information, Yangtze University, Jingzhou 434023, China
National Demonstration Center for Experimental Electrical and Electronic Education, Yangtze University
Jingzhou 434023, China
† These authors contributed equally to this work.
Received: 20 May 2018; Accepted: 25 June 2018; Published: 28 June 2018
('1723081', 'Jing Li', 'jing li')
('48216473', 'Tao Qiu', 'tao qiu')
('41208300', 'Chang Wen', 'chang wen')
('36203475', 'Kai Xie', 'kai xie')
201501479@yangtzeu.edu.cn (J.L.); 500646@yangtzeu.edu.cn (K.X.); wenfangqing@yangtzeu.edu.cn (F-Q.W.)
School of Computer Science, Yangtze University, Jingzhou 434023, China; 201603441@yangtzeu.edu.cn
* Correspondence: 400100@yangtzeu.edu.cn; Tel.: +86-136-9731-5482
2f184c6e2c31d23ef083c881de36b9b9b6997ce9Polichotomies on Imbalanced Domains
by One-per-Class Compensated Reconstruction Rule
Integrated Research Centre, Università Campus Bio-Medico of Rome, Rome, Italy
('1720099', 'Paolo Soda', 'paolo soda'){r.dambrosio,p.soda}@unicampus.it
2f9c173ccd8c1e6b88d7fb95d6679838bc9ca51d
2f8183b549ec51b67f7dad717f0db6bf342c9d02
2f13dd8c82f8efb25057de1517746373e05b04c4EVALUATION OF STATE-OF-THE-ART ALGORITHMS FOR REMOTE FACE
RECOGNITION
University of Maryland, College Park, MD 20742, USA
('38811046', 'Jie Ni', 'jie ni')
('9215658', 'Rama Chellappa', 'rama chellappa')
2fa1fc116731b2b5bb97f06d2ac494cb2b2fe475A novel approach to personal photo album representation
and management
Università di Palermo - Dipartimento di Ingegneria Informatica
Viale delle Scienze, 90128, Palermo, Italy
('1762753', 'Edoardo Ardizzone', 'edoardo ardizzone')
('9127836', 'Marco La Cascia', 'marco la cascia')
('1698741', 'Filippo Vella', 'filippo vella')
2f2406551c693d616a840719ae1e6ea448e2f5d3Age Estimation from Face Images:
Human vs. Machine Performance
Pattern Recognition & Image Processing Laboratory
Michigan State University
('34393045', 'Hu Han', 'hu han')
('40653304', 'Charles Otto', 'charles otto')
('6680444', 'Anil K. Jain', 'anil k. jain')
2f882ceaaf110046e63123b495212d7d4e99f33dHigh Frequency Component Compensation based Super-resolution
Algorithm for Face Video Enhancement
CVRR Lab, UC San Diego, La Jolla, CA 92093, USA
('1807917', 'Junwen Wu', 'junwen wu')
2f95340b01cfa48b867f336185e89acfedfa4d92Face Expression Recognition with a 2-Channel
Convolutional Neural Network

Vogt-Kölln-Straße 30, 22527 Hamburg, Germany
http://www.informatik.uni-hamburg.de/WTM/
('2283866', 'Dennis Hamester', 'dennis hamester')
('1736513', 'Stefan Wermter', 'stefan wermter')
{hamester,barros,wermter}@informatik.uni-hamburg.de
2f7fc778e3dec2300b4081ba2a1e52f669094fcdAction Representation Using Classifier Decision Boundaries
Fatih Porikli
1Data61/CSIRO,
2Australian Centre for Robotic Vision
The Australian National University, Canberra, Australia
('36541522', 'Jue Wang', 'jue wang')
('2691929', 'Anoop Cherian', 'anoop cherian')
('2377076', 'Stephen Gould', 'stephen gould')
firstname.lastname@anu.edu.au
2fea258320c50f36408032c05c54ba455d575809
2f0e5a4b0ef89dd2cf55a4ef65b5c78101c8bfa1Facial Expression Recognition Using a Hybrid CNN–SIFT Aggregator
Mundher Ahmed Al-Shabi
Tee Connie
Faculty of Information Science and Technology (FIST)
Multimedia University
Melaka, Malaysia
('1700590', 'Wooi Ping Cheah', 'wooi ping cheah')
2faa09413162b0a7629db93fbb27eda5aeac54caNISTIR 7674
Quantifying How Lighting and Focus
Affect Face Recognition Performance
Phillips, P. J.
Beveridge, J. R.
Draper, B.
Bolme, D.
Givens, G. H.
Lui, Y. M.
2f5e057e35a97278a9d824545d7196c301072ebfCapturing long-tail distributions of object subcategories
University of California, Irvine
Google Inc.
University of California, Irvine
('32542103', 'Xiangxin Zhu', 'xiangxin zhu')
('1838674', 'Dragomir Anguelov', 'dragomir anguelov')
('1770537', 'Deva Ramanan', 'deva ramanan')
xzhu@ics.uci.edu
dragomir@google.com
dramanan@ics.uci.edu
2f04ba0f74df046b0080ca78e56898bd4847898bAggregate Channel Features for Multi-view Face Detection
Center for Biometrics and Security Research & National Laboratory of Pattern Recognition
Institute of Automation, Chinese Academy of Sciences, China
('1716231', 'Bin Yang', 'bin yang')
('1721677', 'Junjie Yan', 'junjie yan')
('1718623', 'Zhen Lei', 'zhen lei')
('34679741', 'Stan Z. Li', 'stan z. li')
{jjyan,zlei,szli}@nlpr.ia.ac.cn
yb.derek@gmail.com
433bb1eaa3751519c2e5f17f47f8532322abbe6d
4300fa1221beb9dc81a496cd2f645c990a7ede53
43010792bf5cdb536a95fba16b8841c534ded316Towards General Motion-Based Face Recognition
School of Computing, National University of Singapore, Singapore
('2268503', 'Ning Ye', 'ning ye')
('1715286', 'Terence Sim', 'terence sim')
{yening,tsim}@comp.nus.edu.sg
43bb20ccfda7b111850743a80a5929792cb031f0PhD Dissertation
International Doctorate School in Information and
Communication Technologies
DISI - University of Trento
Discrimination of Computer Generated
versus Natural Human Faces
Advisor:
Prof. Giulia Boato
Università degli Studi di Trento
Co-Advisor:
Prof. Francesco G. B. De Natale
Università degli Studi di Trento
February 2014
('2598811', 'Duc-Tien Dang-Nguyen', 'duc-tien dang-nguyen')
438c4b320b9a94a939af21061b4502f4a86960e3Reconstruction-Based Disentanglement for Pose-invariant Face Recognition
Rutgers, The State University of New Jersey
University of California, San Diego
‡ NEC Laboratories America
('4340744', 'Xi Peng', 'xi peng')
('39960064', 'Xiang Yu', 'xiang yu')
('1729571', 'Kihyuk Sohn', 'kihyuk sohn')
('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')
{xipeng.cs, dnm}@rutgers.edu, {xiangyu,ksohn,manu}@nec-labs.com
439ac8edfa1e7cbc65474cab544a5b8c4c65d5dbSIViP (2011) 5:401–413
DOI 10.1007/s11760-011-0244-6
ORIGINAL PAPER
Face authentication with undercontrolled pose and illumination
Received: 15 September 2010 / Revised: 14 December 2010 / Accepted: 17 February 2011 / Published online: 7 August 2011
© Springer-Verlag London Limited 2011
('1763890', 'Maria De Marsico', 'maria de marsico')
43f6953804964037ff91a4f45d5b5d2f8edfe4d5Multi-Feature Fusion in Advanced Robotics Applications
Institut für Informatik
Technische Universität München
D-85748 Garching, Germany
('1725709', 'Zahid Riaz', 'zahid riaz')
('1685773', 'Christoph Mayer', 'christoph mayer')
('1746229', 'Michael Beetz', 'michael beetz')
('1699132', 'Bernd Radig', 'bernd radig')
{riaz,mayerc,beetz,radig}@in.tum.de
439ec47725ae4a3660e509d32828599a495559bfFacial Expressions Tracking and Recognition: Database Protocols for Systems Validation
and Evaluation
43e99b76ca8e31765d4571d609679a689afdc99eLearning Dense Facial Correspondences in Unconstrained Images
University of Southern California
2Adobe Research
3Pinscreen
USC Institute for Creative Technologies
('9965153', 'Ronald Yu', 'ronald yu')
('2059597', 'Shunsuke Saito', 'shunsuke saito')
('3131569', 'Haoxiang Li', 'haoxiang li')
('39686979', 'Duygu Ceylan', 'duygu ceylan')
('1706574', 'Hao Li', 'hao li')
4377b03bbee1f2cf99950019a8d4111f8de9c34aSelective Encoding for Recognizing Unreliably Localized Faces
Institute for Advanced Computer Studies
University of Maryland, College Park, MD
('40592297', 'Ang Li', 'ang li')
('2852035', 'Vlad I. Morariu', 'vlad i. morariu')
('1693428', 'Larry S. Davis', 'larry s. davis')
{angli, morariu, lsd}@umiacs.umd.edu
43a03cbe8b704f31046a5aba05153eb3d6de4142Towards Robust Face Recognition from Video
Image Science and Machine Vision Group
Oak Ridge National Laboratory
Oak Ridge, TN 37831-6010
('3211433', 'Jeffery R. Price', 'jeffery r. price')
('2743462', 'Timothy F. Gee', 'timothy f. gee')
{pricejr, geetf}@ornl.gov
434bf475addfb580707208618f99c8be0c55cf95UNDER CONSIDERATION FOR PUBLICATION IN PATTERN RECOGNITION LETTERS
DeXpression: Deep Convolutional Neural
Network for Expression Recognition
German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
University of Kaiserslautern, Gottlieb-Daimler-Str., Kaiserslautern 67663, Germany
('20651567', 'Peter Burkert', 'peter burkert')
('3026604', 'Felix Trier', 'felix trier')
('6149779', 'Muhammad Zeshan Afzal', 'muhammad zeshan afzal')
('1703343', 'Andreas Dengel', 'andreas dengel')
('1743758', 'Marcus Liwicki', 'marcus liwicki')
p_burkert11@cs.uni-kl.de, f_trier10@cs.uni-kl.de, afzal@iupr.com, andreas.dengel@dfki.de,
liwicki@dfki.uni-kl.de
43836d69f00275ba2f3d135f0ca9cf88d1209a87Ozaki et al. IPSJ Transactions on Computer Vision and
Applications (2017) 9:20
DOI 10.1186/s41074-017-0030-7
IPSJ Transactions on Computer
Vision and Applications
RESEARCH PAPER
Open Access
Effective hyperparameter optimization
using Nelder-Mead method in deep learning
('2167404', 'Yoshihiko Ozaki', 'yoshihiko ozaki')
('30735847', 'Masaki Yano', 'masaki yano')
('1703823', 'Masaki Onishi', 'masaki onishi')
4307e8f33f9e6c07c8fc2aeafc30b22836649d8cSupervised Earth Mover’s Distance Learning
and its Computer Vision Applications
Stanford University, CA, United States
('1716453', 'Fan Wang', 'fan wang')
('1744254', 'Leonidas J. Guibas', 'leonidas j. guibas')
435642641312364e45f4989fac0901b205c49d53Face Model Compression
by Distilling Knowledge from Neurons
The Chinese University of Hong Kong
The Chinese University of Hong Kong
Shenzhen Key Lab of Comp. Vis. and Pat. Rec., Shenzhen Institutes of Advanced Technology, CAS, China
('1693209', 'Ping Luo', 'ping luo')
('2042558', 'Zhenyao Zhu', 'zhenyao zhu')
('3243969', 'Ziwei Liu', 'ziwei liu')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
{pluo,zz012,lz013,xtang}@ie.cuhk.edu.hk, {xgwang}@ee.cuhk.edu.hk
43aa40eaa59244c233f83d81f86e12eba8d74b59
4362368dae29cc66a47114d5ffeaf0534bf0159cUACEE International Journal of Artificial Intelligence and Neural Networks ISSN:- 2250-3749 (online)
Performance Analysis of FDA Based Face
Recognition Using Correlation, ANN and SVM
Department of Computer Engineering
Department of Computer Engineering
Department of Computer Engineering
Anand, INDIA
Anand, INDIA
Anand, INDIA
('9318822', 'Mahesh Goyani', 'mahesh goyani')
('40632096', 'Ronak Paun', 'ronak paun')
('40803051', 'Sardar Patel', 'sardar patel')
e-mail: mgoyani@gmail.com
e-mail: akashdhorajiya@gmail.com
e-mail: ronak_paun@yahoo.com
43e268c118ac25f1f0e984b57bc54f0119ded520
4350bb360797a4ade4faf616ed2ac8e27315968eMITSUBISHI ELECTRIC RESEARCH LABORATORIES
http://www.merl.com
Edge Suppression by Gradient Field
Transformation using Cross-Projection
Tensors
TR2006-058
June 2006
('1717566', 'Ramesh Raskar', 'ramesh raskar')
('9215658', 'Rama Chellappa', 'rama chellappa')
43476cbf2a109f8381b398e7a1ddd794b29a9a16A Practical Transfer Learning Algorithm for Face Verification
David Wipf
('2032273', 'Xudong Cao', 'xudong cao')
('1716835', 'Fang Wen', 'fang wen')
('3168114', 'Genquan Duan', 'genquan duan')
('40055995', 'Jian Sun', 'jian sun')
{xudongca,davidwip,fangwen,genduan,jiansun}@microsoft.com
4353d0dcaf450743e9eddd2aeedee4d01a1be78bLearning Discriminative LBP-Histogram Bins
for Facial Expression Recognition
Philips Research, High Tech Campus 36, Eindhoven 5656 AE, The Netherlands
('10795229', 'Caifeng Shan', 'caifeng shan')
('3006670', 'Tommaso Gritti', 'tommaso gritti')
{caifeng.shan, tommaso.gritti}@philips.com
437a720c6f6fc1959ba95e48e487eb3767b4e508
436d80cc1b52365ed7b2477c0b385b6fbbb51d3b
434d6726229c0f556841fad20391c18316806f73Detecting Visual Relationships with Deep Relational Networks
The Chinese University of Hong Kong
('38222190', 'Bo Dai', 'bo dai')
('2617419', 'Yuqi Zhang', 'yuqi zhang')
('1807606', 'Dahua Lin', 'dahua lin')
db014@ie.cuhk.edu.hk
zy016@ie.cuhk.edu.hk
dhlin@ie.cuhk.edu.hk
43b8b5eeb4869372ef896ca2d1e6010552cdc4d4Large-scale Supervised Hierarchical Feature Learning for Face Recognition
Intel Labs China
('35423937', 'Jianguo Li', 'jianguo li')
('6060281', 'Yurong Chen', 'yurong chen')
43ae4867d058453e9abce760ff0f9427789bab3a
Graph Embedded Nonparametric Mutual
Information For Supervised
Dimensionality Reduction
('2784463', 'Dimitrios Bouzas', 'dimitrios bouzas')
('2965236', 'Nikolaos Arvanitopoulos', 'nikolaos arvanitopoulos')
('1737071', 'Anastasios Tefas', 'anastasios tefas')
435dc062d565ce87c6c20a5f49430eb9a4b573c4to appear.
Lighting Condition Adaptation
for Perceived Age Estimation
NEC Soft, Ltd., Japan
Tokyo Institute of Technology, Japan
NEC Soft, Ltd., Japan
('2163491', 'Kazuya Ueki', 'kazuya ueki')
('1719221', 'Masashi Sugiyama', 'masashi sugiyama')
('1853974', 'Yasuyuki Ihara', 'yasuyuki ihara')
430c4d7ad76e51d83bbd7ec9d3f856043f054915
438b88fe40a6f9b5dcf08e64e27b2719940995e0Building a Classification Cascade for Visual Identification from One Example
Computer Science, U.C. Berkeley
Computer Science, UMass Amherst
Computer Science, U.C. Berkeley
('3236352', 'Andras Ferencz', 'andras ferencz')
('1714536', 'Erik G. Learned-Miller', 'erik g. learned-miller')
('1689212', 'Jitendra Malik', 'jitendra malik')
ferencz@cs.berkeley.edu
elm@cs.umass.edu
malik@cs.berkeley.edu
433a6d6d2a3ed8a6502982dccc992f91d665b9b3Transferring Landmark Annotations for
Cross-Dataset Face Alignment
The Chinese University of Hong Kong
Tsinghua University
('2226254', 'Shizhan Zhu', 'shizhan zhu')
('40475617', 'Cheng Li', 'cheng li')
('1717179', 'Chen Change Loy', 'chen change loy')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
438e7999c937b94f0f6384dbeaa3febff6d283b6Face Detection, Bounding Box Aggregation and Pose Estimation for Robust
Facial Landmark Localisation in the Wild
Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford GU2 7XH, UK
School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China
('2976854', 'Zhen-Hua Feng', 'zhen-hua feng')
('1748684', 'Josef Kittler', 'josef kittler')
{z.feng, j.kittler, m.a.rana, p.huber}@surrey.ac.uk, wu xiaojun@jiangnan.edu.cn
43776d1bfa531e66d5e9826ff5529345b792def7Automatic Critical Event Extraction
and Semantic Interpretation
by Looking-Inside
Laboratory for Intelligent and Safe Automobiles
University of California, San Diego
Sept 17th, 2015
('1841835', 'Sujitha Martin', 'sujitha martin')
('1802326', 'Eshed Ohn-Bar', 'eshed ohn-bar')
('1713989', 'Mohan M. Trivedi', 'mohan m. trivedi')
43fb9efa79178cb6f481387b7c6e9b0ca3761da8Mixture of Parts Revisited: Expressive Part Interactions for Pose Estimation
Anoop R Katti
IIT Madras
Chennai, India
IIT Madras
Chennai, India
('1717115', 'Anurag Mittal', 'anurag mittal')akatti@cse.iitm.ac.in
amittal@cse.iitm.ac.in
432d8cba544bf7b09b0455561fea098177a85db1Published as a conference paper at ICLR 2017
TOWARDS A NEURAL STATISTICIAN
Harrison Edwards
School of Informatics
University of Edinburgh
Edinburgh, UK
Amos Storkey
School of Informatics
University of Edinburgh
Edinburgh, UK
H.L.Edwards@sms.ed.ac.uk
A.Storkey@ed.ac.uk
43ed518e466ff13118385f4e5d039ae4d1c000fbClassification of Occluded Objects using Fast Recurrent
Processing
Ozgur Yilmaz
Turgut Ozal University, Ankara, Turkey
439647914236431c858535a2354988dde042ef4dFace Illumination Normalization on Large and Small Scale Features
School of Mathematics and Computational Science, Sun Yat-sen University, China
School of Information Science and Technology, Sun Yat-sen University, China
3 Guangdong Province Key Laboratory of Information Security, China,
Hong Kong Baptist University
('2002129', 'Xiaohua Xie', 'xiaohua xie')
('3333315', 'Wei-Shi Zheng', 'wei-shi zheng')
('1768574', 'Pong C. Yuen', 'pong c. yuen')
Email: sysuxiexh@gmail.com, wszheng@ieee.org, stsljh@mail.sysu.edu.cn, pcyuen@comp.hkbu.edu.hk
43d7d0d0d0e2d6cf5355e60c4fe5b715f0a1101a
439ca6ded75dffa5ddea203dde5e621dc4a88c3eRobust Real-time Performance-driven 3D Face Tracking
School of Computer Science and Engineering, Nanyang Technological University, Singapore
Rutgers University, USA
('1736042', 'Vladimir Pavlovic', 'vladimir pavlovic')
('1688642', 'Jianfei Cai', 'jianfei cai')
('1775268', 'Tat-Jen Cham', 'tat-jen cham')
{hxp1,vladimir}@cs.rutgers.edu
{asjfcai,astfcham}@ntu.edu.sg
88e090ffc1f75eed720b5afb167523eb2e316f7fAttribute-Based Transfer Learning for Object
Categorization with Zero/One Training Example
University of Maryland, College Park, MD, USA
('3099583', 'Xiaodong Yu', 'xiaodong yu')
('1697493', 'Yiannis Aloimonos', 'yiannis aloimonos')
xdyu@umiacs.umd.edu, yiannis@cs.umd.edu
8877e0b2dc3d2e8538c0cfee86b4e8657499a7c4AUTOMATIC FACIAL EXPRESSION RECOGNITION FOR AFFECTIVE COMPUTING
BASED ON BAG OF DISTANCES
National Chung Cheng University, Chiayi, Taiwan, R.O.C
E-mail: {hfs95p,wylin}@cs.ccu.edu.tw
National Taichung University of Science and Technology, Taichung, Taiwan, R.O.C
('2240934', 'Fu-Song Hsu', 'fu-song hsu')
('1682393', 'Wei-Yang Lin', 'wei-yang lin')
('2080026', 'Tzu-Wei Tsai', 'tzu-wei tsai')
E-mail: wei@nutc.edu.tw
88c6d4b73bd36e7b5a72f3c61536c8c93f8d2320Image patch modeling in a light field
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2014-81
http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-81.html
May 15, 2014
('2040369', 'Zeyu Li', 'zeyu li')
889bc64c7da8e2a85ae6af320ae10e05c4cd6ce7174
Using Support Vector Machines to Enhance the
Performance of Bayesian Face Recognition
('1911510', 'Zhifeng Li', 'zhifeng li')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
88a898592b4c1dfd707f04f09ca58ec769a257deMobileFace: 3D Face Reconstruction
with Efficient CNN Regression
1 VisionLabs, Amsterdam, The Netherlands
2 Inria, WILLOW, Departement d’Informatique de l’Ecole Normale Superieure, PSL
Research University, ENS/INRIA/CNRS UMR 8548, Paris, France
('51318557', 'Nikolai Chinaev', 'nikolai chinaev')
('2564281', 'Alexander Chigorin', 'alexander chigorin')
('1785596', 'Ivan Laptev', 'ivan laptev')
{n.chinaev, a.chigorin}@visionlabs.ru
ivan.laptev@inria.fr
88f7a3d6f0521803ca59fde45601e94c3a34a403Semantic Aware Video Transcription
Using Random Forest Classifiers
University of Southern California, Institute for Robotics and Intelligent Systems
Los Angeles, CA 90089, USA
('1726241', 'Chen Sun', 'chen sun')
8812aef6bdac056b00525f0642702ecf8d57790bA Unified Features Approach to Human Face Image
Analysis and Interpretation
Department of Informatics,
Technische Universität München
85748 Garching, Germany
('1725709', 'Zahid Riaz', 'zahid riaz')
('2110952', 'Suat Gedikli', 'suat gedikli')
('1699132', 'Bernd Radig', 'bernd radig')
{riaz|gedikli|beetz|radig}@in.tum.de
881066ec43bcf7476479a4146568414e419da804From Traditional to Modern : Domain Adaptation for
Action Classification in Short Social Video Clips
Center for Visual Information Technology, IIIT Hyderabad, India
('2461059', 'Aditya Singh', 'aditya singh')
('3448416', 'Saurabh Saini', 'saurabh saini')
('1962817', 'Rajvi Shah', 'rajvi shah')
8813368c6c14552539137aba2b6f8c55f561b75fTrunk-Branch Ensemble Convolutional Neural
Networks for Video-based Face Recognition
('37990555', 'Changxing Ding', 'changxing ding')
('1692693', 'Dacheng Tao', 'dacheng tao')
88e2574af83db7281c2064e5194c7d5dfa649846Hindawi Publishing Corporation
Computational Intelligence and Neuroscience
Volume 2017, Article ID 4579398, 11 pages
http://dx.doi.org/10.1155/2017/4579398
Research Article
A Robust Shape Reconstruction Method for Facial Feature
Point Detection
School of Automation Engineering, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave
West Hi-Tech Zone, Chengdu 611731, China
Received 24 October 2016; Revised 18 January 2017; Accepted 30 January 2017; Published 19 February 2017
Academic Editor: Ezequiel López-Rubio
Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it remains a challenging task because of the large variability in expression and gesture and the presence of occlusions in real-world photographs. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, is learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model generalize better, we select the best-matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than state-of-the-art methods.
1. Introduction
In most of the literature, facial feature points are also referred to as facial landmarks or facial fiducial points. These points are mainly located around the edges or corners of facial components such as eyebrows, eyes, mouth, nose, and jaw (see Figure 1). Existing databases for method comparison are labeled with different numbers of feature points, varying from a minimum 5-point configuration [1] to a maximal 194-point configuration [2]. Generally, facial feature point detection is a supervised or semisupervised learning process that trains a model on a large number of labeled facial images. It starts from a face detection process and then predicts facial landmarks inside the detected face bounding box. The localized facial feature points can be utilized for various face analysis tasks, for example, face recognition [3], facial animation [4], facial expression detection [5], and head pose tracking [6].
In recent years, regression-based methods have gained increasing attention for robust facial feature point detection. Among these methods, a cascade framework is adopted to recursively estimate the face shape $S$ of an input image, which is the concatenation of the facial feature point coordinates. Beginning with an initial shape $S^{(1)}$, $S$ is updated by inferring a shape increment $\Delta S$ from the previous shape:
$$\Delta S^{(t)} = W^{(t)} \Phi^{(t)}(I, S^{(t)}), \qquad (1)$$
where $\Delta S^{(t)}$ and $W^{(t)}$ are the shape increment and the linear regression matrix after $t$ iterations, respectively. As the input variables of the mapping function $\Phi^{(t)}$, $I$ denotes the image appearance and $S^{(t)}$ denotes the corresponding face shape. The regression proceeds to the next iteration via the additive formula:
$$S^{(t)} = S^{(t-1)} + \Delta S^{(t-1)}. \qquad (2)$$
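As an illustration, the cascaded update of Eqs. (1)-(2) can be sketched as follows. This is a minimal Python sketch under assumed interfaces (the regressor matrices and shape-indexed feature functions are hypothetical arguments), not the authors' implementation:

```python
import numpy as np

def cascaded_shape_regression(image, S_init, regressors, feature_fns):
    """Run the cascade of Eqs. (1)-(2): at each stage t, shape-indexed
    features Phi^(t)(I, S^(t)) are mapped through a learned linear
    regressor W^(t) to a shape increment, which is added to the shape."""
    S = S_init.copy()
    for W_t, phi_t in zip(regressors, feature_fns):
        features = phi_t(image, S)   # Phi^(t)(I, S^(t))
        delta_S = W_t @ features     # Eq. (1): shape increment
        S = S + delta_S              # Eq. (2): additive update
    return S
```

With well-trained stages, each increment shrinks the residual between the current and ground-truth shape, which is why a handful of cascade stages typically suffices.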
In this paper, we propose a sparse reconstruction method that embeds sparse coding in the reconstruction of the shape increment. As a very popular signal coding algorithm, sparse coding has recently been applied successfully in computer vision and machine learning, in areas such as feature selection, clustering analysis, image classification, and face recognition [7–11]. In our method, sparse overcomplete dictionaries are learned to encode various facial poses and local textures, considering the complex nature of imaging
('9684590', 'Shuqiu Tan', 'shuqiu tan')
('2915473', 'Dongyi Chen', 'dongyi chen')
('9486108', 'Chenggang Guo', 'chenggang guo')
('2122143', 'Zhiqi Huang', 'zhiqi huang')
('9684590', 'Shuqiu Tan', 'shuqiu tan')
Correspondence should be addressed to Shuqiu Tan; tanshuqiu123136@hotmail.com and Dongyi Chen; dychen@uestc.edu.cn
88bef50410cea3c749c61ed68808fcff84840c37Sparse Representations of Image Gradient Orientations for Visual Recognition
and Tracking
Imperial College London
EEMCS, University of Twente
180 Queen’s Gate, London SW7 2AZ, U.K.
Drienerlolaan 5, 7522 NB Enschede,
The Netherlands
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1694605', 'Maja Pantic', 'maja pantic')
{gt204,s.zafeiriou,m.pantic}@imperial.ac.uk
PanticM@cs.utwente.nl
883006c0f76cf348a5f8339bfcb649a3e46e2690Weakly Supervised Pain Localization using Multiple Instance Learning ('39707211', 'Karan Sikka', 'karan sikka')
('1735697', 'Abhinav Dhall', 'abhinav dhall')
88850b73449973a34fefe491f8836293fc208580www.ijaret.org Vol. 2, Issue I, Jan. 2014
ISSN 2320-6802
INTERNATIONAL JOURNAL FOR ADVANCE RESEARCH IN
ENGINEERING AND TECHNOLOGY
WINGS TO YOUR THOUGHTS…..
XBeats-An Emotion Based Music Player
1U.G. Student, Dept. of Computer Engineering,
D.J. Sanghvi College of Engineering
Vile Parle (W), Mumbai-400056.
2 U.G. Student, Dept. of Computer Engineering,
D.J. Sanghvi College of Engineering
Vile Parle (W), Mumbai-400056.
3 U.G. Student, Dept. of Computer Engineering,
D.J. Sanghvi College of Engineering
Vile Parle (W), Mumbai-400056.
4 Assistant Professor, Dept. of Computer Engineering,
D.J. Sanghvi College of Engineering
Vile Parle (W), Mumbai-400056.
('40770722', 'Sayali Chavan', 'sayali chavan')
('2122358', 'Dipali Bhatt', 'dipali bhatt')
sayalichavan17@gmail.com
ekta.malkan27@yahoo.in
dipupb1392@gmail.com
prakashparanjape2012@gmail.com
8820d1d3fa73cde623662d92ecf2e3faf1e3f328Continuous Video to Simple Signals for Swimming Stroke Detection with
Convolutional Neural Networks
La Trobe University, Australia
Australian Institute of Sport
('38689120', 'Brandon Victor', 'brandon victor')
('1787185', 'Zhen He', 'zhen he')
('31548192', 'Stuart Morgan', 'stuart morgan')
('2874225', 'Dino Miniutti', 'dino miniutti')
{b.victor,z.he,s.morgan}@latrobe.edu.au
Dino.Miniutti@ausport.gov.au
88f2952535df5859c8f60026f08b71976f8e19ecA neural network framework for face
recognition by elastic bunch graph matching
('37048377', 'Francisco A. Pujol López', 'francisco a. pujol lópez')
('3144590', 'Higinio Mora Mora', 'higinio mora mora')
('2260459', 'José A. Girona Selva', 'josé a. girona selva')
8818b12aa0ff3bf0b20f9caa250395cbea0e8769Fashion Conversation Data on Instagram
∗Graduate School of Culture Technology, KAIST, South Korea
†Department of Communication Studies, UCLA, USA
('3459091', 'Yu-i Ha', 'yu-i ha')
('2399803', 'Sejeong Kwon', 'sejeong kwon')
('1775511', 'Meeyoung Cha', 'meeyoung cha')
('1834047', 'Jungseock Joo', 'jungseock joo')
8862a573a42bbaedd392e9e634c1ccbfd177a01d3D Face Tracking and Texture Fusion in the Wild
Centre for Vision, Speech and Signal Processing
Image Understanding and Interactive Robotics
University of Surrey
Guildford, GU2 7XH, United Kingdom
Contact: http://www.patrikhuber.ch
Reutlingen University
D-72762 Reutlingen, Germany
('39976184', 'Patrik Huber', 'patrik huber')
('1748684', 'Josef Kittler', 'josef kittler')
('16764402', 'Philipp Kopp', 'philipp kopp')
887b7676a4efde616d13f38fcbfe322a791d1413Deep Temporal Appearance-Geometry Network
for Facial Expression Recognition
Korea Advanced Institute of Science and Technology
Electronics and Telecommunications Research Institute
('8271137', 'Injae Lee', 'injae lee')
('1769295', 'Junmo Kim', 'junmo kim')
('1800903', 'Heechul Jung', 'heechul jung')
{heechul, haeng, sunny0414, junmo.kim}@kaist.ac.kr†, {ninja, hyun}@etri.re.kr‡
8878871ec2763f912102eeaff4b5a2febfc22fbe3781
Human Action Recognition in Unconstrained
Videos by Explicit Motion Modeling
('1717861', 'Yu-Gang Jiang', 'yu-gang jiang')
('9227981', 'Qi Dai', 'qi dai')
('39059457', 'Wei Liu', 'wei liu')
('1713721', 'Xiangyang Xue', 'xiangyang xue')
('1751681', 'Chong-Wah Ngo', 'chong-wah ngo')
8855d6161d7e5b35f6c59e15b94db9fa5bbf2912COGNITION IN PREGNANCY AND THE POSTPARTUM PERIOD
8895d6ae9f095a8413f663cc83f5b7634b3dc805BEHL ET AL: INCREMENTAL TUBE CONSTRUCTION FOR HUMAN ACTION DETECTION 1
Incremental Tube Construction for Human
Action Detection
Harkirat Singh Behl1
1 Department of Engineering Science
University of Oxford
Oxford, UK
2 Think Tank Team
Samsung Research America
Mountain View, CA
3 Dept. of Computing and
Communication Technologies
Oxford Brookes University
Oxford, UK
Figure 1: (a) Illustrative results on a video sequence from the LIRIS-HARL dataset [23]. Two people enter a room and put/take an object from a box (frame 150). They then shake hands (frame 175) and start having a discussion (frame 350). In frame 450, another person enters the room, shakes hands, and then joins the discussion. Each action tube instance is numbered and coloured according to its action category. We selected this video to show that our tube construction algorithm can handle very complex situations in which multiple distinct action categories occur in sequence and at concurrent times. (b) Action tubes drawn as viewed from above, compared to (c) the ground truth action tubes.
('3019396', 'Michael Sapienza', 'michael sapienza')
('1931660', 'Gurkirt Singh', 'gurkirt singh')
('49348905', 'Suman Saha', 'suman saha')
('1754181', 'Fabio Cuzzolin', 'fabio cuzzolin')
('1730268', 'Philip H. S. Torr', 'philip h. s. torr')
harkirat@robots.ox.ac.uk
m.sapienza@samsung.com
gurkirt.singh-2015@brookes.ac.uk
suman.saha-2014@brookes.ac.uk
fabio.cuzzolin@brookes.ac.uk
phst@robots.ox.ac.uk
88bee9733e96958444dc9e6bef191baba4fa6efaExtending Face Identification to
Open-Set Face Recognition
Department of Computer Science
Universidade Federal de Minas Gerais
Belo Horizonte, Brazil
('2823797', 'Cassio E. dos Santos', 'cassio e. dos santos')
('1679142', 'William Robson Schwartz', 'william robson schwartz')
{cass,william}@dcc.ufmg.br
88fd4d1d0f4014f2b2e343c83d8c7e46d198cc79978-1-4799-9988-0/16/$31.00 ©2016 IEEE
ICASSP 2016
887745c282edf9af40d38425d5fdc9b3fe139c08FAME:
Face Association through Model Evolution
Bilkent University
06800 Ankara/Turkey
Pinar Duygulu
Bilkent University
06800 Ankara/Turkey
('2540074', 'Eren Golge', 'eren golge')eren.golge@bilkent.edu.tr
pinar.duygulu@gmail.com
9f6d04ce617d24c8001a9a31f11a594bd6fe3510Personality and Individual Differences 52 (2012) 61–66
Contents lists available at SciVerse ScienceDirect
Personality and Individual Differences
journal homepage: www.elsevier.com/locate/paid
Attentional bias towards angry faces in trait-reappraisal
1E1 WC Mackenzie Health Sciences Centre, University of Alberta, Edmonton, AB, Canada T6G 2R
article info
abstract
Article history:
Received 31 May 2011
Received in revised form 26 August 2011
Accepted 31 August 2011
Available online 2 October 2011
Keywords:
Trait emotion regulation
Reappraisal
Attention
Individual differences
Dot-probe
Emotion regulation (ER) strategies differ in when and how they influence emotion experience, expression, and concomitant cognition. However, no study to date has directly compared cognition in individuals who have a clear disposition for either cognitive or behavioural ER strategies. The present study compared selective attention to angry faces in groups of high trait-suppressors (people who hide emotional reactions in response to emotional challenge) and high trait-reappraisers (people who cognitively reinterpret emotional events). Since reappraisers are also low trait-anxious and suppressors are high trait-anxious, high and low anxious control groups, both being low in trait-ER, were also included. Attention to angry faces was assessed using an emotional dot-probe task. Trait-reappraisers and high-anxious individuals both showed attentional biases towards angry faces. Trait-reappraisers' vigilance for angry faces was significantly more pronounced compared to both trait-suppressors and low anxious controls. We suggest that threat prioritization in high trait-reappraisal may allow deeper cognitive processing of threat information without being associated with psychological maladjustment.
© 2011 Elsevier Ltd. All rights reserved.
1. Introduction
An extensive literature suggests that cognition is influenced by the emotional connotation of to-be-processed information. Emotional events, especially negative emotional events, orient, attract and/or capture attention more so than neutral events. Evidence comes from studies using the emotional dot-probe paradigm (MacLeod & Mathews, 1988). This task measures selective attention biases towards or away from emotional relative to neutral stimuli (see Methods for details). Several person variables influence such biases. For example, high trait anxious individuals are more likely than low trait anxious individuals to show an attentional bias towards threatening stimuli (Frewen, Dozois, Joanisse, & Neufeld, 2008). Interestingly, trait anxiety seems to modify the ability to disengage attentional resources from the location of a threatening stimulus more so than the speed of orienting attention toward the stimulus location. For example, Fox, Russo, Bowles, and Dutton (2001) found that high anxious, but not low anxious individuals responded slower to a dot-probe when an angry face, as opposed to a happy or a neutral face, appeared in a different screen location just prior. However, high anxious participants were not faster to respond to the dot-probe when it followed in the same location as the angry faces compared to happy or neutral faces
(attentional orienting). Hence, trait anxiety seems associated with a tendency to dwell on (i.e., difficulty in disengaging attention), rather than to quickly orient toward, threatening stimuli such as angry facial expressions.
doi:10.1016/j.paid.2011.08.030
Although it is relatively well-established that individual differences in trait emotionality (i.e., high versus low trait anxiety) influence attentional processing of emotional information, little is known about how attentional biases may interact with a person's attempt to modulate their emotional responses. Recent findings in emotion regulation (ER) suggest that emotion regulative strategies differ in their consequences for the emotional response and concomitant cognition. To date, most studies of ER have compared cognitive and behavioural forms of ER, with the two most commonly studied ER strategies being cognitive reappraisal and expressive suppression (Gross, 1998; Richards & Gross, 2000). According to Gross (1998), reappraisal involves cognitively changing our appraisal of the emotional meaning of a stimulus in order to render it less emotional, and in so doing, down-regulating our own emotional response. In contrast, suppression involves the behavioural inhibition of overt reactions to emotional experiences (e.g., frowning) without changing the evaluation of the emotional stimulus itself.
1.1. Instructed emotion regulation
To examine the consequences of ER, researchers have traditionally exposed participants to an emotion-eliciting stimulus with an instruction to use a specific ER strategy to down-regulate (or more rarely, up-regulate) the resulting emotion. Because participants are
('6027810', 'Jody E. Arndt', 'jody e. arndt')
('2726268', 'Esther Fujiwara', 'esther fujiwara')
E-mail address: jearndt@ucalgary.ca (J.E. Arndt).
9f499948121abb47b31ca904030243e924585d5fHierarchical Attention Network for Action
Recognition in Videos
Arizona State University
Arizona State University
Yahoo Research
Neil O’Hare
Yahoo Research
Yahoo Research
Arizona State University
('33513248', 'Yilin Wang', 'yilin wang')
('2893721', 'Suhang Wang', 'suhang wang')
('1736632', 'Jiliang Tang', 'jiliang tang')
('1787097', 'Yi Chang', 'yi chang')
('2913552', 'Baoxin Li', 'baoxin li')
ywang370@asu.edu
suhang.wang@asu.edu
jlt@yahoo-inc.com
nohare@yahoo-inc.com
yichang@yahoo-inc.com
baoxin.li@asu.edu
9fc04a13eef99851136eadff52e98eb9caac919dRethinking the Camera Pipeline for Computer Vision
Cornell University
Carnegie Mellon University
Cornell University
('2328520', 'Mark Buckler', 'mark buckler')
('39131476', 'Suren Jayasuriya', 'suren jayasuriya')
('2138184', 'Adrian Sampson', 'adrian sampson')
mab598@cornell.edu
sjayasur@andrew.cmu.edu
asampson@cs.cornell.edu
9f4078773c8ea3f37951bf617dbce1d4b3795839Leveraging Inexpensive Supervision Signals
for Visual Learning
Technical Report Number:
CMU-RI-TR-17-13
a dissertation presented
by
to
The Robotics Institute
in partial fulfillment of the requirements
for the degree of
Master of Science
in the subject of
Robotics
Carnegie Mellon University
Pittsburgh, Pennsylvania
May 2017
All rights reserved.
('3234247', 'Senthil Purushwalkam', 'senthil purushwalkam')
('3234247', 'Senthil Purushwalkam', 'senthil purushwalkam')
9f65319b8a33c8ec11da2f034731d928bf92e29dTAKING ROLL: A PIPELINE FOR FACE RECOGNITION
Dip. di Scienze Teoriche e Applicate
University of Insubria
21100, Varese, Italy
Louisiana State University
2222 Business Education Complex South,
LA, 70803, USA
('39149650', 'I. Gallo', 'i. gallo')
('1876793', 'S. Nawaz', 's. nawaz')
('3457883', 'A. Calefati', 'a. calefati')
('2398301', 'G. Piccoli', 'g. piccoli')
9fa1be81d31fba07a1bde0275b9d35c528f4d0b8Identifying Persons by Pictorial and
Contextual Cues
Nicholas Leonard Piël
Thesis submitted for the degree of Master of Science
Supervisor:
April 2009
('1695527', 'Theo Gevers', 'theo gevers')
9f094341bea610a10346f072bf865cb550a1f1c1Recognition and Volume Estimation of Food Intake using a Mobile Device
Sarnoff Corporation
201 Washington Rd,
Princeton, NJ, 08540
('1981308', 'Manika Puri', 'manika puri'){mpuri, zzhu, qyu, adivakaran, hsawhney}@sarnoff.com
9fdfe1695adac2380f99d3d5cb6879f0ac7f2bfdEURASIP Journal on Applied Signal Processing 2005:13, 2091–2100
© 2005 Hindawi Publishing Corporation
Spatio-Temporal Graphical-Model-Based
Multiple Facial Feature Tracking
Congyong Su
College of Computer Science, Zhejiang University, Hangzhou 310027, China
Li Huang
College of Computer Science, Zhejiang University, Hangzhou 310027, China
Received 1 January 2004; Revised 20 February 2005
It is challenging to track multiple facial features simultaneously when rich expressions are presented on a face. We propose a two-step solution. In the first step, several independent condensation-style particle filters are utilized to track each facial feature in the temporal domain. Particle filters are very effective for visual tracking problems; however, multiple independent trackers ignore the spatial constraints and the natural relationships among facial features. In the second step, we use Bayesian inference (belief propagation) to infer each facial feature's contour in the spatial domain, in which we learn the relationships among contours of facial features beforehand with the help of a large facial expression database. The experimental results show that our algorithm can robustly track multiple facial features simultaneously, even when there are large interframe motions with expression changes.
Keywords and phrases: facial feature tracking, particle filter, belief propagation, graphical model.
1. INTRODUCTION
Multiple facial feature tracking is very important in the computer vision field: it needs to be carried out before video-based facial expression analysis and expression cloning. Multiple facial feature tracking is also very challenging because there are plentiful nonrigid motions in facial features besides rigid motions in faces. Nonrigid facial feature motions are usually very rapid and often form dense clutter by the facial features themselves. Using only a traditional Kalman filter is inadequate because it is based on a Gaussian density and works relatively poorly in clutter, which causes the density for a facial feature's contour to be multimodal and therefore non-Gaussian. Isard and Blake [1] first proposed a face tracker based on particle filters (condensation), which is more effective in clutter than a comparable Kalman filter.
Although particle filters are often very effective for visual tracking problems, they are specialized to temporal problems whose corresponding graphs are simple Markov chains (see Figure 1). There is often structure within each time instant that is ignored by particle filters. For example, in multiple facial feature tracking, the expressions of each facial feature (such as eyes, brows, lips) are closely related; therefore a more complex graph should be formulated.
The contribution of this paper is extending particle filters to track multiple facial features simultaneously. The straightforward approach of tracking each facial feature with one independent particle filter is questionable, because influences and actions among facial features are not taken into account.
In this paper, we propose a spatio-temporal graphical model for multiple facial feature tracking (see Figure 2). Here the graphical model is not a 2D or a 3D facial mesh model. In the spatial domain, the model is shown in Figure 3, where x_i is a hidden random variable and y_i is a noisy local observation. Nonparametric belief propagation is used to infer the facial features' interrelationships in a part-based face model, allowing positions and states of some features in clutter to be recovered. Facial structure is also taken into account, because facial features have spatial position constraints [2]. In the temporal domain, every facial feature forms a Markov chain (see Figure 1).
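The per-feature temporal tracker described above (a condensation-style particle filter, one Markov chain per feature) can be sketched as follows. This is an illustrative 1-D sketch under assumed Gaussian motion and observation models, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(particles, weights, observation,
                      motion_std=1.0, obs_std=1.0):
    """One resample-predict-weight cycle of a condensation-style
    particle filter for a single 1-D feature state. The paper runs one
    such filter per facial feature, then couples the features with
    belief propagation in the spatial domain."""
    n = len(particles)
    # resample particles in proportion to their weights
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # predict: diffuse with the (assumed Gaussian) motion model
    particles = particles + rng.normal(0.0, motion_std, size=n)
    # weight: Gaussian observation likelihood around the measurement
    w = np.exp(-0.5 * ((particles - observation) / obs_std) ** 2)
    weights = w / w.sum()
    return particles, weights
```

Iterating this step over frames concentrates the particle set around the observed feature position, while the multimodal particle representation is what lets the tracker survive the clutter that defeats a Kalman filter.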
After briefly reviewing related work in Section 2, we
introduce the details of our algorithm in Sections 3 and
4. Many convincing experimental results are shown in
Section 5. Conclusions are given in Section 6.
2. RELATED WORK
After the pioneering work of Isard and Blake [1] who
creatively used particle filters for visual tracking, many
Email: su@cs.zju.edu.cn
Email: lihuang@cs.zju.edu.cn
6b333b2c6311e36c2bde920ab5813f8cfcf2b67b
6b3e360b80268fda4e37ff39b7f303e3684e8719FACE RECOGNITION FROM SKETCHES USING ADVANCED
CORRELATION FILTERS USING HYBRID EIGENANALYSIS
FOR FACE SYNTHESIS
Language Technology Institute, Carnegie Mellon Universty
Carnegie Mellon University
Keywords:
Face from sketch synthesis, face recognition, eigenface, advanced correlation filters, OTSDF.
('3036546', 'Yung-hui Li', 'yung-hui li')
('1794486', 'Marios Savvides', 'marios savvides')
6b9aa288ce7740ec5ce9826c66d059ddcfd8dba9
6bcfcc4a0af2bf2729b5bc38f500cfaab2e653f0Facial expression recognition in the wild using improved dense trajectories and
Fisher vector encoding
Computational Science and Engineering Program, Boğaziçi University, Istanbul, Turkey
Boğaziçi University, Istanbul, Turkey
('2471932', 'Sadaf Afshar', 'sadaf afshar')
('1764521', 'Albert Ali Salah', 'albert ali salah')
{sadaf.afshar, salah}@boun.edu.tr
6bca0d1f46b0f7546ad4846e89b6b842d538ee4eFACE RECOGNITION FROM SURVEILLANCE-QUALITY VIDEO
A Dissertation
Submitted to the Graduate School
of the University of Notre Dame
in Partial Fulfillment of the Requirements
for the Degree of
Doctor of Philosophy
by
Patrick J. Flynn, Co-Director
Graduate Program in Computer Science and Engineering
Notre Dame, Indiana
July 2010
('30042752', 'Deborah Thomas', 'deborah thomas')
('1799014', 'Kevin W. Bowyer', 'kevin w. bowyer')
6b089627a4ea24bff193611e68390d1a4c3b3644CROSS-POLLINATION OF NORMALISATION
TECHNIQUES FROM SPEAKER TO FACE
AUTHENTICATION USING GAUSSIAN
MIXTURE MODELS
Idiap-RR-03-2012
JANUARY 2012
Centre du Parc, Rue Marconi 19, P.O. Box 592, CH - 1920 Martigny
('1843477', 'Roy Wallace', 'roy wallace')
('1698382', 'Sébastien Marcel', 'sébastien marcel')
T +41 27 721 77 11 F +41 27 721 77 12 info@idiap.ch www.idiap.ch
6b8d0569fffce5cc221560d459d6aa10c4db2f03Interlinked Convolutional Neural Networks for
Face Parsing
State Key Laboratory of Intelligent Technology and Systems
Tsinghua National Laboratory for Information Science and Technology (TNList)
Department of Computer Science and Technology
Tsinghua University, Beijing 100084, China
('1879713', 'Yisu Zhou', 'yisu zhou')
('1705418', 'Xiaolin Hu', 'xiaolin hu')
('49846744', 'Bo Zhang', 'bo zhang')
6be0ab66c31023762e26d309a4a9d0096f72a7f0Enhance Visual Recognition under Adverse
Conditions via Deep Networks
('1771885', 'Ding Liu', 'ding liu')
('2392101', 'Bowen Cheng', 'bowen cheng')
('2969311', 'Zhangyang Wang', 'zhangyang wang')
('40479011', 'Haichao Zhang', 'haichao zhang')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
6bcee7dba5ed67b3f9926d2ae49f9a54dee64643Assessment of Time Dependency in Face Recognition:
An Initial Study
Dept. of Computer Science and Engineering
University of Notre Dame, Notre Dame, IN 46556, USA
National Institute of Standards and Technology
100 Bureau Dr., Stop 8940, Gaithersburg, MD 20899 USA
('1704876', 'Patrick J. Flynn', 'patrick j. flynn')
('1799014', 'Kevin W. Bowyer', 'kevin w. bowyer')
('32028519', 'P. Jonathon Phillips', 'p. jonathon phillips')
{flynn,kwb}@nd.edu
jonathon@nist.gov
6b18628cc8829c3bf851ea3ee3bcff8543391819Face recognition based on subset selection via metric learning on manifold.
1058. [doi:10.1631/FITEE.1500085]
Face recognition based on subset
selection via metric learning on manifold
Key words: Face recognition, Sparse representation, Manifold structure,
Metric learning, Subset selection
ORCID: http://orcid.org/0000-0001-7441-4749
Front Inform Technol & Electron Eng
('2684160', 'Hong Shao', 'hong shao')
('1752664', 'Shuang Chen', 'shuang chen')
('1941366', 'Wen-cheng Cui', 'wen-cheng cui')
('1752664', 'Shuang Chen', 'shuang chen')
E-mail: chenshuang19891129@gmail.com
6b7f7817b2e5a7e7d409af2254a903fc0d6e02b6April 13, 2009
International Journal of Pattern Recognition and Artificial Intelligence
© World Scientific Publishing Company
Feature extraction through cross - phase congruency for facial
expression analysis
Electronics Department
Faculty of Electrical Engineering and Information Technology
University of Oradea
410087, Universitatii 1,
Romania
http://webhost.uoradea.ro/ibuciu
Electronics and Communications Faculty
Politehnica University of Timisoara
Bd. Vasile Parvan, no.2
300223 Timisoara
Romania
http://hermes.etc.upt.ro
Human face analysis has attracted a large number of researchers from various fields, such as computer vision, image processing, neurophysiology and psychology. One of the particular aspects of human face analysis is encompassed by the facial expression recognition task. A novel method based on phase congruency is developed for extracting the facial features used in the facial expression classification procedure. Considering a set of image samples comprising humans expressing various expressions, this new approach computes the phase congruency map between the samples. The analysis is performed in the frequency space, where the similarity (or dissimilarity) between sample phases is measured to form discriminant features. The experiments were run using samples from two facial expression databases. To assess the method's performance, the technique is compared to state-of-the-art techniques utilized for classifying facial expressions, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), and Gabor jets. The features extracted by the aforementioned techniques are further classified using two classifiers: a distance-based classifier and a Support Vector Machine-based classifier. Experiments reveal superior facial expression recognition performance for the proposed approach with respect to other techniques.
Keywords: feature extraction; phase congruency; facial expression analysis.
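For illustration, the core phase-congruency measure underlying such a map can be sketched in 1-D in the Kovesi style, PC(x) = |Σ_n c_n(x)| / Σ_n |c_n(x)|, where c_n are complex log-Gabor responses at scale n. The filter parameters below are illustrative assumptions (no noise compensation), not the authors' settings:

```python
import numpy as np

def phase_congruency_1d(signal, n_scales=4, min_wavelength=4,
                        mult=2.0, sigma=0.55):
    """1-D phase congruency without noise compensation: the ratio of the
    magnitude of the summed complex filter responses to the summed
    magnitudes. It approaches 1 where phases agree across scales
    (edges), and is low where they cancel."""
    N = len(signal)
    F = np.fft.fft(signal)
    freqs = np.fft.fftfreq(N)
    energy = np.zeros(N, dtype=complex)
    amp_sum = np.zeros(N)
    for n in range(n_scales):
        f0 = 1.0 / (min_wavelength * mult ** n)  # centre frequency of scale n
        with np.errstate(divide="ignore"):
            # log-Gabor transfer function (zero response at DC)
            lg = np.exp(-(np.log(np.abs(freqs) / f0) ** 2)
                        / (2 * np.log(sigma) ** 2))
        lg[freqs <= 0] = 0.0                     # one-sided (analytic) filter
        c = np.fft.ifft(F * lg)                  # complex response at scale n
        energy += c
        amp_sum += np.abs(c)
    return np.abs(energy) / (amp_sum + 1e-8)
```

On a step edge, the responses across scales are phase-aligned, so the measure peaks at the edge location and stays low in smooth regions, which is what makes it a contrast-invariant feature detector.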
1. Feature Extraction for Facial Expression Recognition
Facial expression analysis is a concern of several disciplinary scientific fields, such as computer vision, image processing, neurophysiology and psychology. The large interest in this analysis is motivated by an impressive range of applications. These
('2336758', 'Ioan Buciu', 'ioan buciu')
('2526319', 'Ioan Nafornita', 'ioan nafornita')
ibuciu@uoradea.ro
ioan.nafornita@etc.upt.ro
6b1b43d58faed7b457b1d4e8c16f5f7e7d819239
6bb0425baac448297fbd29a00e9c9b9926ce8870INTERNATIONAL CONFERENCE ON COMMUNICATION, COMPUTER AND POWER (ICCCP’09)
MUSCAT, FEBRUARY 15-18, 2009
Facial Expression Recognition Using Log-Gabor
Filters and Local Binary Pattern Operators
School of Electrical and Computer Engineering, RMIT University, Melbourne, Australia
('1857490', 'Seyed Mehdi Lajevardi', 'seyed mehdi lajevardi')
('1749220', 'Zahir M. Hussain', 'zahir m. hussain')
seyed.lajevardi@rmit.edu.au, zmhussain@ieee.org
6b35b15ceba2f26cf949f23347ec95bbbf7bed64
6b6493551017819a3d1f12bbf922a8a8c8cc2a03Pose Normalization for Local Appearance-Based
Face Recognition
Computer Science Department, Universität Karlsruhe (TH)
Am Fasanengarten 5, Karlsruhe 76131, Germany
http://isl.ira.uka.de/cvhci
('1697965', 'Hua Gao', 'hua gao')
('1742325', 'Rainer Stiefelhagen', 'rainer stiefelhagen')
{hua.gao,ekenel,stiefel}@ira.uka.de
6b17b219bd1a718b5cd63427032d93c603fcf24fCarnegie Mellon University
Language Technologies Institute
School of Computer Science
10-1-2016
Videos from the 2013 Boston Marathon: An Event
Reconstruction Dataset for Synchronization and
Localization
Carnegie Mellon University
Carnegie Mellon University
Carnegie Mellon University
Carnegie Mellon University
Follow this and additional works at: http://repository.cmu.edu/lti
Part of the Computer Sciences Commons
('3175807', 'Jia Chen', 'jia chen')
('1915796', 'Junwei Liang', 'junwei liang')
('2075232', 'Han Lu', 'han lu')
('2927024', 'Shoou-I Yu', 'shoou-i yu')
('7661726', 'Alexander Hauptmann', 'alexander hauptmann')
Research Showcase @ CMU
Carnegie Mellon University, alex@cs.cmu.edu
This Technical Report is brought to you for free and open access by the School of Computer Science at Research Showcase @ CMU. It has been
accepted for inclusion in Language Technologies Institute by an authorized administrator of Research Showcase @ CMU. For more information, please
contact research-showcase@andrew.cmu.edu.
6bb630dfa797168e6627d972560c3d438f71ea99
6b6ff9d55e1df06f8b3e6f257e23557a73b2df96International Journal of Computer Applications (0975 – 8887)
Volume 61– No.17, January 2013
Survey of Threats to the Biometric Authentication
Systems and Solutions
Research Scholar, Mewar
University, Chittorgarh (INDIA)
P. C. Gupta
Kota University, Kota (INDIA)
Khushboo Mantri
M.Tech Student, Arya College of
Engineering, Jaipur (INDIA)
('2875951', 'Sarika Khandelwal', 'sarika khandelwal')
07377c375ac76a34331c660fe87ebd7f9b3d74c4Detailed Human Avatars from Monocular Video
1Computer Graphics Lab, TU Braunschweig, Germany
Max Planck Institute for Informatics, Saarland Informatics Campus, Germany
Figure 1: Our method creates a detailed avatar from a monocular video of a person turning around. Based on the SMPL
model, we first compute a medium-level avatar, then add subject-specific details and finally generate a seamless texture.
('1914886', 'Thiemo Alldieck', 'thiemo alldieck')
('9765909', 'Weipeng Xu', 'weipeng xu')
{alldieck,magnor}@cg.cs.tu-bs.de {wxu,theobalt,gpons}@mpi-inf.mpg.de
0729628db4bb99f1f70dd6cb2353d7b76a9fce47Separating Pose and Expression in Face Images:
A Manifold Learning Approach
University of Pennsylvania
Moore Bldg, 200 South 33rd St, Philadelphia, PA 19104, USA
(Submitted on December 27, 2006)
('1732066', 'Daniel D. Lee', 'daniel d. lee')E-mail: {jhham,ddlee}@seas.upenn.edu
0728f788107122d76dfafa4fb0c45c20dcf523caThe Best of Both Worlds: Combining Data-independent and Data-driven
Approaches for Action Recognition
('1711953', 'Dezhong Yao', 'dezhong yao')
('2735055', 'Ming Lin', 'ming lin')
('2927024', 'Shoou-I Yu', 'shoou-i yu')
{lanzhzh, minglin, iyu, alex@cs.cmu.edu}, dyao@hust.edu.cn
07c90e85ac0f74b977babe245dea0f0abcf177e3Appeared in the 4th International Conference on Audio- and Video-Based
Biometric Person Authentication, pp. 10-18, June 9-11, 2003, Guildford, UK
An Image Preprocessing Algorithm for
Illumination Invariant Face Recognition
The Robotics Institute, Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh, PA 15213
('33731953', 'Ralph Gross', 'ralph gross')
('2407094', 'Vladimir Brajovic', 'vladimir brajovic')
{rgross,brajovic}@cs.cmu.edu
07ea3dd22d1ecc013b6649c9846d67f2bf697008HUMAN-CENTRIC VIDEO UNDERSTANDING WITH WEAK
SUPERVISION
A DISSERTATION
SUBMITTED TO THE DEPARTMENT OF COMPUTER SCIENCE
AND THE COMMITTEE ON GRADUATE STUDIES
OF STANFORD UNIVERSITY
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
June 2016
('34066479', 'Vignesh Ramanathan', 'vignesh ramanathan')
071099a4c3eed464388c8d1bff7b0538c7322422FACIAL EXPRESSION RECOGNITION IN THE WILD USING RICH DEEP FEATURES
Microsoft Advanced Technology labs, Microsoft Technology and Research, Cairo, Egypt
('34828041', 'Abubakrelsedik Karali', 'abubakrelsedik karali')
('2376438', 'Ahmad Bassiouny', 'ahmad bassiouny')
('3144122', 'Motaz El-Saban', 'motaz el-saban')
07fcbae86f7a3ad3ea1cf95178459ee9eaf77cb1Large Scale Unconstrained Open Set Face Database
University of Colorado at Colorado Springs
2Securics Inc, Colorado Springs
('27469806', 'Archana Sapkota', 'archana sapkota')
('1760117', 'Terrance E. Boult', 'terrance e. boult')
asapkota@vast.uccs.edu
tboult@vast.uccs.edu
076d3fc800d882445c11b9af466c3af7d2afc64fFACE ATTRIBUTE CLASSIFICATION USING ATTRIBUTE-AWARE CORRELATION MAP
AND GATED CONVOLUTIONAL NEURAL NETWORKS
Korea Advanced institute of Science and Technology
Department of Electrical Engineering
291 Daehak-ro, Yuseong-gu, Daejeon, Korea
('3315036', 'Sunghun Kang', 'sunghun kang')
('2350325', 'Donghoon Lee', 'donghoon lee')
071af21377cc76d5c05100a745fb13cb2e40500f
070ab604c3ced2c23cce2259043446c5ee342fd6An Active Illumination and Appearance (AIA) Model for Face Alignment
Fatih Kahraman, Muhittin Gokmen
Istanbul Technical University
Computer Science Dept., Turkey
Informatics and Mathematical Modelling, Denmark
Sune Darkner, Rasmus Larsen
Technical University of Denmark
{fkahraman, gokmen}@itu.edu.tr
{sda,rl}@imm.dtu.dk
071135dfb342bff884ddb9a4d8af0e70055c22a1Temporal 3D ConvNets: New Architecture and Transfer Learning for Video Classification
ESAT-PSI, KU Leuven, 2University of Bonn, 3CV:HCI, KIT, Karlsruhe, 4Sensifai
('3310120', 'Ali Diba', 'ali diba')
('3169187', 'Mohsen Fayyaz', 'mohsen fayyaz')
('38035876', 'Vivek Sharma', 'vivek sharma')
('31493847', 'Amir Hossein Karami', 'amir hossein karami')
('2713759', 'Mohammad Mahdi Arzani', 'mohammad mahdi arzani')
('9456273', 'Rahman Yousefzadeh', 'rahman yousefzadeh')
('1681236', 'Luc Van Gool', 'luc van gool')
{firstname.lastname}@esat.kuleuven.be, {lastname}@sensifai.com,
fayyaz@iai.uni-bonn.de, vivek.sharma@kit.edu
0754e769eb613fd3968b6e267a301728f52358beTowards a Watson That Sees: Language-Guided Action Recognition for
Robots
('7607499', 'Yezhou Yang', 'yezhou yang')
('1697493', 'Yiannis Aloimonos', 'yiannis aloimonos')
07c83f544d0604e6bab5d741b0bf9a3621d133daLearning Spatio-Temporal Features with 3D Residual Networks
for Action Recognition
National Institute of Advanced Industrial Science and Technology (AIST)
Tsukuba, Ibaraki, Japan
('2199251', 'Kensho Hara', 'kensho hara')
('1730200', 'Hirokatsu Kataoka', 'hirokatsu kataoka')
('1732705', 'Yutaka Satoh', 'yutaka satoh')
{kensho.hara, hirokatsu.kataoka, yu.satou}@aist.go.jp
0773c320713dae62848fceac5a0ac346ba224ecaDigital Facial Augmentation for Interactive
Entertainment
Centre for Intelligent Machines
McGill University
Montreal, Quebec, Canada
('2726121', 'Naoto Hieda', 'naoto hieda')
('2242019', 'Jeremy R. Cooperstock', 'jeremy r. cooperstock')
Email: {nhieda, jer}@cim.mcgill.ca
070de852bc6eb275d7ca3a9cdde8f6be8795d1a3A FACS Valid 3D Dynamic Action Unit Database with Applications to 3D
Dynamic Morphable Facial Modeling
Department of Computer Science
School of Humanities and Social Sciences
University of Bath
Jacobs University
Centre for Vision, Speech and Signal Processing
University of Surrey
('1792288', 'Darren Cosker', 'darren cosker')
('2035177', 'Eva Krumhuber', 'eva krumhuber')
('1695085', 'Adrian Hilton', 'adrian hilton')
dpc@cs.bath.ac.uk
e.krumhuber@jacobs-university.de
a.hilton@surrey.ac.uk
07a472ea4b5a28b93678a2dcf89028b086e481a2Head Dynamic Analysis: A Multi-view
Framework
University of California, San Diego, USA
('1947383', 'Ashish Tawari', 'ashish tawari'){atawari,mtrivedi}@ucsd.edu
0717b47ab84b848de37dbefd81cf8bf512b544acInternational Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622
International Conference on Humming Bird ( 01st March 2014)
RESEARCH ARTICLE
OPEN ACCESS
Robust Face Recognition and Tagging in Visual Surveillance
System
('21008397', 'Kavitha MS', 'kavitha ms')
('39546266', 'Siva Pradeepa', 'siva pradeepa')
('21008397', 'Kavitha MS', 'kavitha ms')
('39546266', 'Siva Pradeepa', 'siva pradeepa')
e-mail:kavithams999@gmail.com
07fa153b8e6196ee6ef6efd8b743de8485a07453Action Prediction from Videos via Memorizing Hard-to-Predict Samples
Northeastern University, Boston, MA, USA
College of Engineering, Northeastern University, Boston, MA, USA
College of Computer and Information Science, Northeastern University, Boston, MA, USA
('48901920', 'Yu Kong', 'yu kong')
('9355577', 'Shangqian Gao', 'shangqian gao')
('47935056', 'Bin Sun', 'bin sun')
('1708679', 'Yun Fu', 'yun fu')
{yukong,yunfu}@ece.neu.edu, {gao.sh,sun.bi}@husky.neu.edu
0708059e3bedbea1cbfae1c8cd6b7259d4b56b5bGraph-regularized Multi-class Support Vector
Machines for Face and Action Recognition
Tampere University of Technology, Tampere, Finland
('9219875', 'Moncef Gabbouj', 'moncef gabbouj')Email: {alexandros.iosifidis,moncef.gabbouj}@tut.fi
074af31bd9caa61fea3c4216731420bd7c08b96aFace Verification Using Sparse Representations
Institute for Advanced Computer Studies, University of Maryland, College Park, MD
TNLIST, Tsinghua University, Beijing, 100084, China
('2723427', 'Huimin Guo', 'huimin guo')
('3373117', 'Ruiping Wang', 'ruiping wang')
('3826759', 'Jonghyun Choi', 'jonghyun choi')
('1693428', 'Larry S. Davis', 'larry s. davis')
{hmguo, jhchoi, lsd}@umiacs.umd.edu, rpwang@tsinghua.edu.cn
0750a816858b601c0dbf4cfb68066ae7e788f05dCosFace: Large Margin Cosine Loss for Deep Face Recognition
Tencent AI Lab
('39049654', 'Hao Wang', 'hao wang')
('1996677', 'Yitong Wang', 'yitong wang')
('48741267', 'Zheng Zhou', 'zheng zhou')
('3478009', 'Xing Ji', 'xing ji')
('2856494', 'Dihong Gong', 'dihong gong')
('2263912', 'Jingchao Zhou', 'jingchao zhou')
('1911510', 'Zhifeng Li', 'zhifeng li')
('46641573', 'Wei Liu', 'wei liu')
{hawelwang,yitongwang,encorezhou,denisji,sagazhou,michaelzfli}@tencent.com
gongdihong@gmail.com wliu@ee.columbia.edu
078d507703fc0ac4bf8ca758be101e75ea286c80 ISSN: 2321-8169
International Journal on Recent and Innovation Trends in Computing and Communication
Volume: 3 Issue: 8
5287 - 5296
Large- Scale Content Based Face Image Retrieval using Attribute Enhanced
Sparse Codewords.
Chaitra R,
M.Tech Digital Communication Engineering
Acharya Institute of Technology
Bangalore
0716e1ad868f5f446b1c367721418ffadfcf0519Interactively Guiding Semi-Supervised
Clustering via Attribute-Based Explanations
Virginia Tech, Blacksburg, VA, USA
('9276834', 'Shrenik Lad', 'shrenik lad')
('1713589', 'Devi Parikh', 'devi parikh')
073eaa49ccde15b62425cda1d9feab0fea03a842
07f31bef7a7035792e3791473b3c58d03928abbfLessons from Collecting a Million
Biometric Samples
University of Notre Dame
National Institute of Standards and Technology
('1704876', 'Patrick J. Flynn', 'patrick j. flynn')
('1799014', 'Kevin W. Bowyer', 'kevin w. bowyer')
('32028519', 'P. Jonathon Phillips', 'p. jonathon phillips')
0726a45eb129eed88915aa5a86df2af16a09bcc1Introspective Perception: Learning to Predict Failures in Vision Systems ('2739544', 'Shreyansh Daftry', 'shreyansh daftry')
('3308210', 'Sam Zeng', 'sam zeng')
('1756566', 'J. Andrew Bagnell', 'j. andrew bagnell')
('1709305', 'Martial Hebert', 'martial hebert')
07de8371ad4901356145722aa29abaeafd0986b9April 13, 2017
DRAFT
Towards Usable Multimedia Event Detection
February, 2017
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Thesis Committee:
Alexander G. Hauptmann (Chair)
Submitted in partial fulfillment of the requirements
for the degree of Doctor of Philosophy.
('34692532', 'Zhenzhong Lan', 'zhenzhong lan')
('1880336', 'Bhiksha Raj Ramakrishnan', 'bhiksha raj ramakrishnan')
('1767184', 'Louis-Philippe Morency', 'louis-philippe morency')
('14517812', 'Leonid Sigal', 'leonid sigal')
('34692532', 'Zhenzhong Lan', 'zhenzhong lan')
07e639abf1621ceff27c9e3f548fadfa2052c912RESEARCH ARTICLE
5-HTTLPR Expression Outside the Skin: An
Experimental Test of the Emotional
Reactivity Hypothesis in Children
Utrecht Centre for Child and Adolescent Studies, Utrecht University, Utrecht, The Netherlands
Research Institute of Child Development and Education, University of Amsterdam, Utrecht, The
Netherlands, Utrecht University, Utrecht, The Netherlands
Current Address: Research Institute of Child Development and Education, University of Amsterdam
Amsterdam,The Netherlands
('4594074', 'Joyce Weeland', 'joyce weeland')
('6811600', 'Meike Slagt', 'meike slagt')
('5859538', 'Eddie Brummelman', 'eddie brummelman')
('3935697', 'Walter Matthys', 'walter matthys')
('4441681', 'Geertjan Overbeek', 'geertjan overbeek')
* j.weeland@uva.nl
07da958db2e561cc7c24e334b543d49084dd1809Dictionary Learning Based Dimensionality
Reduction for Classification
Karin Schnass and Pierre Vandergheynst
Signal Processing Institute
Swiss Federal Institute of Technology
Lausanne, Switzerland
EPFL-STI-ITS-LTS2
CH-1015 Lausanne
Tel: +41 21 693 2657
Fax: +41 21 693 7600
EDICS: SPC-CODC
{karin.schnass, pierre.vandergheynst}@epfl.ch
0742d051caebf8a5d452c03c5d55dfb02f84baabReal-Time Geometric Motion Blur for a Deforming Polygonal Mesh
Nathan Jones
Formerly: Texas A&M University
Currently: The Software Group
nathan.jones@tylertechnologies.com
07d986b1005593eda1aeb3b1d24078db864f8f6aInternational Journal of Industrial Electronics and Electrical Engineering, ISSN: 2347-6982
Volume-3, Issue-11, Nov.-2015
FACIAL EXPRESSION RECOGNITION USING LOCAL FACIAL
FEATURES
National University of Kaohsiung, 811 Kaohsiung, Taiwan
National University of Kaohsiung, 811 Kaohsiung, Taiwan
National Sun Yat Sen University, 804 Kaohsiung, Taiwan
E-mail: abc3329797@gmail.com, {cclai, johnw, stpan}@nuk.edu.tw, leesj@mail.ee.nsysu.edu.tw
38d56ddcea01ce99902dd75ad162213cbe4eaab7Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)
389334e9a0d84bc54bcd5b94b4ce4c5d9d6a2f26FACIAL PARAMETER EXTRACTION SYSTEM BASED ON ACTIVE CONTOURS
Universitat Politècnica de Catalunya, Barcelona, Spain
('1767549', 'Montse Pardàs', 'montse pardàs')
('1820469', 'Marcos Losada', 'marcos losada')
38f7f3c72e582e116f6f079ec9ae738894785b96IJARCCE
ISSN (Online) 2278-1021
ISSN (Print) 2319 5940
International Journal of Advanced Research in Computer and Communication Engineering
Vol. 4, Issue 11, November 2015
A New Technique for Face Matching after
Plastic Surgery in Forensics
Student, Amal Jyothi College of Engineering, Kanjirappally, India
Amal Jyothi College of Engineering, Kanjirappally, India
I. INTRODUCTION
Facial recognition is one of the most important tasks that
forensic examiners perform during their investigations. This
work focuses on analysing the effect of plastic surgery on
face recognition algorithms. It is imperative for subsequent
facial recognition systems to be capable of addressing this
significant issue, and accordingly there is a need for more
research in this important area.
('32764403', 'Anju Joseph', 'anju joseph')
('16501589', 'Nilu Tressa Thomas', 'nilu tressa thomas')
('40864737', 'Neethu C. Sekhar', 'neethu c. sekhar')
380dd0ddd5d69adc52defc095570d1c22952f5cc
38679355d4cfea3a791005f211aa16e76b2eaa8dTitle
Evolutionary cross-domain discriminative Hessian Eigenmaps
Author(s)
Si, S; Tao, D; Chan, KP
Citation
1086
Issued Date
2010
URL
http://hdl.handle.net/10722/127357
Rights
This work is licensed under a Creative Commons Attribution-
NonCommercial-NoDerivatives 4.0 International License.; ©2010
IEEE. Personal use of this material is permitted. However,
permission to reprint/republish this material for advertising or
promotional purposes or for creating new collective works for
resale or redistribution to servers or lists, or to reuse any
copyrighted component of this work in other works must be
obtained from the IEEE.
3802c97f925cb03bac91d9db13d8b777dfd29dccNon-Parametric Bayesian Constrained Local Models
Institute of Systems and Robotics, University of Coimbra, Portugal
('39458914', 'Pedro Martins', 'pedro martins')
('2117944', 'Rui Caseiro', 'rui caseiro')
('1678231', 'Jorge Batista', 'jorge batista')
{pedromartins,ruicaseiro,batista}@isr.uc.pt
38a2661b6b995a3c4d69e7d5160b7596f89ce0e6Randomized Intraclass-Distance Minimizing Binary Codes for Face Recognition
Colorado State University
Fort Collins, CO 80523
National Institute of Standards and Technology
('40370804', 'Hao Zhang', 'hao zhang')
('1757322', 'J. Ross Beveridge', 'j. ross beveridge')
('32028519', 'P. Jonathon Phillips', 'p. jonathon phillips')
{zhangh, ross, qmo, draper}@cs.colostate.edu
jonathon.phillips@nist.gov
38682c7b19831e5d4f58e9bce9716f9c2c29c4e7International Journal of Computer Trends and Technology (IJCTT) – Volume 18 Number 5 – Dec 2014
Movie Character Identification Using Graph Matching
Algorithm
M.Tech Scholar, Dept of CSE, QISCET, ONGOLE, Dist: Prakasam, AP, India.
Associate Professor, Department of CSE, QISCET, ONGOLE, Dist: Prakasam, AP, India
38787338ba659f0bfbeba11ec5b7748ffdbb1c3dEVALUATION OF THE DISCRIMINATION POWER OF FEATURES EXTRACTED
FROM 2-D AND 3-D FACIAL IMAGES FOR FACIAL EXPRESSION ANALYSIS
University of Piraeus
Karaoli & Dimitriou 80, Piraeus 185 34
GREECE
('2828175', 'Ioanna-Ourania Stathopoulou', 'ioanna-ourania stathopoulou')
('1802584', 'George A. Tsihrintzis', 'george a. tsihrintzis')
phone: + 30 210 4142322, fax: + 30 210 4142264, email: {iostath, geoatsi}@unipi.gr
3803b91e784922a2dacd6a18f61b3100629df932Temporal Multimodal Fusion
for Video Emotion Classification in the Wild
Orange Labs
Cesson-Sévigné, France
Orange Labs
Cesson-Sévigné, France
Normandie Univ., UNICAEN,
ENSICAEN, CNRS
Caen, France
('26339425', 'Valentin Vielzeuf', 'valentin vielzeuf')
('2642628', 'Stéphane Pateux', 'stéphane pateux')
('1801809', 'Frédéric Jurie', 'frédéric jurie')
valentin.vielzeuf@orange.com
stephane.pateux@orange.com
frederic.jurie@unicaen.fr
38eea307445a39ee7902c1ecf8cea7e3dcb7c0e7Noname manuscript No.
(will be inserted by the editor)
Multi-distance Support Matrix Machine
Received: date / Accepted: date
('34679353', 'Yunfei Ye', 'yunfei ye')
('49405675', 'Dong Han', 'dong han')
38c901a58244be9a2644d486f9a1284dc0edbf8aMulti-Camera Action Dataset for Cross-Camera Action Recognition
Benchmarking
School of Electronic Information Engineering, Tianjin University, China
Interactive and Digital Media Institute, National University of Singapore, Singapore
School of Computing, National University of Singapore, Singapore
('1803305', 'Wenhui Li', 'wenhui li')
('3026404', 'Yongkang Wong', 'yongkang wong')
('1678662', 'Yang Li', 'yang li')
385750bcf95036c808d63db0e0b14768463ff4c6
3852968082a16db8be19b4cb04fb44820ae823d4Unsupervised Learning of Long-Term Motion Dynamics for Videos
Stanford University
('3378742', 'Zelun Luo', 'zelun luo')
('3378457', 'Boya Peng', 'boya peng')
('38485317', 'De-An Huang', 'de-an huang')
('3304525', 'Alexandre Alahi', 'alexandre alahi')
('3216322', 'Li Fei-Fei', 'li fei-fei')
{zelunluo,boya,dahuang,alahi,feifeili}@cs.stanford.edu
38cc2f1c13420170c7adac30f9dfac69b297fb76Rochester Institute of Technology
RIT Scholar Works
Theses
7-1-2009
Thesis/Dissertation Collections
Recognition of human activities and expressions in
video sequences using shape context descriptor
Follow this and additional works at: http://scholarworks.rit.edu/theses
Recommended Citation
Kholgade, Natasha Prashant, "Recognition of human activities and expressions in video sequences using shape context descriptor"
Thesis. Rochester Institute of Technology. Accessed from
This Thesis is brought to you for free and open access by the Thesis/Dissertation Collections at RIT Scholar Works. It has been accepted for inclusion
('2201569', 'Natasha Prashant Kholgade', 'natasha prashant kholgade')in Theses by an authorized administrator of RIT Scholar Works. For more information, please contact ritscholarworks@rit.edu.
38cbb500823057613494bacd0078aa0e57b30af82017 IEEE Conference on Computer Vision and Pattern Recognition Workshops
Deep Face Deblurring
Imperial College London
Imperial College London
('34586458', 'Grigorios G. Chrysos', 'grigorios g. chrysos')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
g.chrysos@imperial.ac.uk
s.zafeiriou@imperial.ac.uk
384f972c81c52fe36849600728865ea50a0c46701
Multi-Fold Gabor, PCA and ICA Filter
Convolution Descriptor for Face Recognition
('1801904', 'Andrew Beng Jin Teoh', 'andrew beng jin teoh')
('3326176', 'Cong Jie Ng', 'cong jie ng')
38f1fac3ed0fd054e009515e7bbc72cdd4cf801aFinding Person Relations in Image Data of the
Internet Archive
Eric M¨uller-Budack1,2[0000−0002−6802−1241],
1 Leibniz Information Centre for Science and Technology (TIB), Hannover, Germany
L3S Research Center, Leibniz Universit at Hannover, Germany
('51008013', 'Kader Pustu-Iren', 'kader pustu-iren')
('50983345', 'Sebastian Diering', 'sebastian diering')
('1738703', 'Ralph Ewerth', 'ralph ewerth')
38f06a75eb0519ae1d4582a86ef4730cc8fb8d7fShrinkage Expansion Adaptive Metric Learning
1 School of Information and Communications Engineering,
Dalian University of Technology, China
School of Computer Science and Technology, Harbin Institute of Technology, China
Hong Kong Polytechnic University, Hong Kong
('2769011', 'Qilong Wang', 'qilong wang')
('1724520', 'Wangmeng Zuo', 'wangmeng zuo')
('36685537', 'Lei Zhang', 'lei zhang')
('40426020', 'Peihua Li', 'peihua li')
{csqlwang,cswmzuo}@gmail.com, cslzhang@comp.polyu.edu.hk,
peihuali@dlut.edu.cn
380d5138cadccc9b5b91c707ba0a9220b0f39271Deep Imbalanced Learning for Face Recognition
and Attribute Prediction
('2000034', 'Chen Huang', 'chen huang')
('47002704', 'Yining Li', 'yining li')
('1717179', 'Chen Change Loy', 'chen change loy')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
384945abd53f6a6af51faf254ba8ef0f0fb3f338Visual Recognition with Humans in the Loop
University of California, San Diego
California Institute of Technology
('3251767', 'Steve Branson', 'steve branson')
('2367820', 'Catherine Wah', 'catherine wah')
('2490700', 'Boris Babenko', 'boris babenko')
('1690922', 'Pietro Perona', 'pietro perona')
{sbranson,cwah,gschroff,bbabenko,sjb}@cs.ucsd.edu
{welinder,perona}@caltech.edu
38215c283ce4bf2c8edd597ab21410f99dc9b094The SEMAINE Database: Annotated Multimodal Records of
Emotionally Colored Conversations between a Person and a Limited
Agent
McKeown, G., Valstar, M., Cowie, R., Pantic, M., & Schröder, M. (2012). The SEMAINE Database: Annotated
Multimodal Records of Emotionally Colored Conversations between a Person and a Limited Agent. IEEE
Transactions on Affective Computing, 3(1), 5-17. DOI: 10.1109/T-AFFC.2011.20
Published in:
Document Version:
Peer reviewed version
Queen's University Belfast - Research Portal
Link to publication record in Queen's University Belfast Research Portal
General rights
Copyright for the publications made accessible via the Queen's University Belfast Research Portal is retained by the author(s) and / or other
copyright owners and it is a condition of accessing these publications that users recognise and abide by the legal requirements associated
with these rights.
Take down policy
The Research Portal is Queen's institutional repository that provides access to Queen's research output. Every effort has been made to
ensure that content in the Research Portal does not infringe any person's rights, or applicable UK laws. If you discover content in the
Research Portal that you believe breaches copyright or violates any law, please contact openaccess@qub.ac.uk.
Download date: 05 Nov 2018
38861d0d3a0292c1f54153b303b0d791cbba1d50
38d8ff137ff753f04689e6b76119a44588e143f3When 3D-Aided 2D Face Recognition Meets Deep Learning:
An extended UR2D for Pose-Invariant Face Recognition
Computational Biomedicine Lab
University of Houston
4800 Calhoun Rd. Houston, TX, USA
('5084124', 'Xiang Xu', 'xiang xu')
('39634395', 'Pengfei Dou', 'pengfei dou')
('26401746', 'Ha A. Le', 'ha a. le')
('1706204', 'Ioannis A. Kakadiaris', 'ioannis a. kakadiaris')
3896c62af5b65d7ba9e52f87505841341bb3e8dfFace Recognition from Still Images and Video
Department of Electrical and Computer Engineering
Center for Automation Research
University of Maryland, College Park
Related concepts: Biometric identification, verification.
Definition: Face recognition is concerned with identifying or verifying one or more persons from still
images or video sequences using a stored database of faces.
Background: The earliest work on face recognition started as early as the 1950s in psychology and the
1960s in engineering, but research on automatic face recognition practically started in the 1970s after the
seminal work of Kanade [1] and Kelly [2].
Application: Face recognition has a wide range of applications in many different areas, ranging from
law enforcement and surveillance and information security to human-computer interaction, virtual reality and
computer entertainment.
1 Introduction
Face recognition with its wide range of commercial and law enforcement applications has been one of the
most active areas of research in the field of computer vision and pattern recognition. Personal identification
systems based on faces have the advantage that facial images can be obtained from a distance without requir-
ing cooperation of the subject, as compared to other biometrics such as fingerprint, iris, etc. Face recognition
is concerned with identifying or verifying one or more persons from still images or video sequences using
a stored database of faces. Depending on the particular application, there can be different scenarios, rang-
ing from controlled still images to uncontrolled videos. Since face recognition is essentially the problem of
recognizing a 3D object from its 2D image or a video sequence, it has to deal with significant appearance
changes due to illumination and pose variations. Current algorithms perform well in controlled scenarios,
but their performance is far from satisfactory in uncontrolled scenarios. Most of the current research in this
area is therefore focused on recognizing faces in uncontrolled scenarios. This chapter is broadly divided into two
sections. The first section discusses the approaches proposed for recognizing faces from still images and the
second section deals with face recognition from video sequences.
2 Still image face recognition
This section discusses some of the early subspace and feature-based approaches, followed by those which
address the problem of appearance change due to illumination variations and approaches that can handle both
illumination and pose variations.
2.1 Early approaches
Among the early subspace-based holistic approaches, eigenfaces [3] and Fisherfaces [4][5] have proved
very effective for the task of face recognition. Since human faces have a similar overall configuration, facial
images can be described by a relatively low-dimensional subspace. Principal Component Analysis (PCA) [3]
has been used to find the vectors that best account for the distribution of facial images within the
whole image space. These vectors are eigenvectors of the covariance matrix computed from the aligned face
images in the training set and are thus known as 'eigenfaces'. Given the eigenfaces, every face in the gallery
database is represented as a vector of weights obtained by projecting the image onto the eigenfaces using
('2642508', 'Soma Biswas', 'soma biswas')
('9215658', 'Rama Chellappa', 'rama chellappa')
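As a concrete illustration of the eigenface construction described in this entry, here is a minimal NumPy sketch: eigenfaces are the leading right singular vectors of the mean-centered training matrix (equivalently, eigenvectors of the covariance matrix), and each gallery face becomes a vector of projection weights. Random matrices stand in for aligned face images; the sizes and names are illustrative, not from the entry.

```python
import numpy as np

# Toy stand-in data: 20 aligned face images, each 16x16, flattened to vectors.
rng = np.random.default_rng(0)
faces = rng.random((20, 256))

# Center the training faces around the mean face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigenfaces are the leading eigenvectors of the covariance matrix,
# obtained here as the top right singular vectors of the centered data.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 10
eigenfaces = vt[:k]                # top-k eigenfaces, shape (k, 256)

# Represent a gallery face as a vector of weights (its projection
# onto the eigenfaces), as the entry describes.
weights = eigenfaces @ (faces[0] - mean_face)
print(weights.shape)               # (10,)
```

Recognition then proceeds by comparing these low-dimensional weight vectors instead of raw pixel vectors.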
38192a0f9261d9727b119e294a65f2e25f72d7e6
38bbca5f94d4494494860c5fe8ca8862dcf9676eProbabilistic, Features-based Object Recognition
Thesis by
In Partial Fulfillment of the Requirements
for the Degree of
Doctor of Philosophy
California Institute of Technology
Pasadena, California
2008
(Defended October 12, 2007)
('2462051', 'Pierre Moreels', 'pierre moreels')
38183fe28add21693729ddeaf3c8a90a2d5caea3Scale-Aware Face Detection
SenseTime, 2Tsinghua University
('19235216', 'Zekun Hao', 'zekun hao')
('1715752', 'Yu Liu', 'yu liu')
('2137185', 'Hongwei Qin', 'hongwei qin')
('1721677', 'Junjie Yan', 'junjie yan')
('2693308', 'Xiu Li', 'xiu li')
('1705418', 'Xiaolin Hu', 'xiaolin hu')
{haozekun, yanjunjie}@outlook.com, liuyuisanai@gmail.com,
{qhw12@mails., xlhu@, li.xiu@sz.}tsinghua.edu.cn
38a9ca2c49a77b540be52377784b9f734e0417e4Face Verification using Large Feature Sets and One Shot Similarity
1Department of Computer Science
University of Maryland
College Park, MD, 20740, USA
Institute of Computing
University of Campinas
Campinas, SP, 13084-971, Brazil
('2723427', 'Huimin Guo', 'huimin guo')
('1679142', 'William Robson Schwartz', 'william robson schwartz')
('1693428', 'Larry S. Davis', 'larry s. davis')
hmguo@cs.umd.edu
schwartz@ic.unicamp.br
lsd@umiacs.umd.edu
3802da31c6d33d71b839e260f4022ec4fbd88e2dDeep Attributes for One-Shot Face Recognition
Xerox Research Center India
3Department of Electrical Engineering, IIT Kanpur
('5060928', 'Aishwarya Jadhav', 'aishwarya jadhav')
('1744135', 'Vinay P. Namboodiri', 'vinay p. namboodiri')
('1797662', 'K. S. Venkatesh', 'k. s. venkatesh')
aishwaryauj@gmail.com, vinaypn@iitk.ac.in, venkats@iitk.ac.in
00fb2836068042c19b5197d0999e8e93b920eb9c
00f7f7b72a92939c36e2ef9be97397d8796ee07c3D ConvNets with Optical Flow Based Regularization
Stanford University
Stanford, CA
('35627656', 'Kevin Chavez', 'kevin chavez')kjchavez@stanford.edu
0021f46bda27ea105d722d19690f5564f2b8869eDeep Region and Multi-label Learning for Facial Action Unit Detection
School of Comm. and Info. Engineering, Beijing University of Posts and Telecom., Beijing China
Robotics Institute, Carnegie Mellon University, USA
('2393320', 'Kaili Zhao', 'kaili zhao')
0081e2188c8f34fcea3e23c49fb3e17883b33551Training Deep Face Recognition Systems
with Synthetic Data
Department of Mathematics and Computer Science
University of Basel
('2780587', 'Adam Kortylewski', 'adam kortylewski')
('1801001', 'Andreas Schneider', 'andreas schneider')
('3277377', 'Thomas Gerig', 'thomas gerig')
('34460642', 'Bernhard Egger', 'bernhard egger')
('31540387', 'Andreas Morel-Forster', 'andreas morel-forster')
('1687079', 'Thomas Vetter', 'thomas vetter')
00dc942f23f2d52ab8c8b76b6016d9deed8c468dAdvanced Correlation-Based Character Recognition Applied to
the Archimedes Palimpsest
by
B. S. Rochester Institute of Technology
A dissertation submitted in partial fulfillment of the
requirements for the degree of Doctor of Philosophy
in the Chester F. Carlson Center for Imaging Science
Rochester Institute of Technology
May 2008
Signature of the Author
Accepted by
Coordinator, Ph.D. Degree Program
Date
('31960835', 'Derek J. Walvoord', 'derek j. walvoord')
0077cd8f97cafd2b389783858a6e4ab7887b0b6bMAI et al.: ON THE RECONSTRUCTION OF DEEP FACE TEMPLATES
On the Reconstruction of Deep Face Templates
('3391550', 'Guangcan Mai', 'guangcan mai')
('1684684', 'Kai Cao', 'kai cao')
('1768574', 'Pong C. Yuen', 'pong c. yuen')
('6680444', 'Anil K. Jain', 'anil k. jain')
0055c7f32fa6d4b1ad586d5211a7afb030ca08ccSAHA et al.: DEEP LEARNING FOR DETECTING SPACE-TIME ACTION TUBES
Deep Learning for Detecting Multiple
Space-Time Action Tubes in Videos
1 Dept. of Computing and
Communication Technologies
Oxford Brookes University
Oxford, UK
2 Department of Engineering Science
University of Oxford
Oxford, UK
('3017538', 'Suman Saha', 'suman saha')
('1931660', 'Gurkirt Singh', 'gurkirt singh')
('3019396', 'Michael Sapienza', 'michael sapienza')
('1730268', 'Philip H. S. Torr', 'philip h. s. torr')
('1754181', 'Fabio Cuzzolin', 'fabio cuzzolin')
suman.saha-2014@brookes.ac.uk
gurkirt.singh-2015@brookes.ac.uk
michael.sapienza@eng.ox.ac.uk
philip.torr@eng.ox.ac.uk
fabio.cuzzolin@brookes.ac.uk
009cd18ff06ff91c8c9a08a91d2516b264eee48e8
Face and Automatic Target Recognition Based
on Super-Resolved Discriminant Subspace
Chulalongkorn University, Bangkok
Thailand
1. Introduction
Recently, super-resolution reconstruction (SRR) method of low-dimensional face subspaces
has been proposed for face recognition. This face subspace, also known as eigenface, is
extracted using principal component analysis (PCA). One of the disadvantages of the
reconstructed features obtained from the super-resolution face subspace is that no class
information is included. To remedy this problem, this chapter first discusses two novel
methods for super-resolution reconstruction of discriminative features, i.e., class-specific
and discriminant analysis of principal components, which aim to improve the discriminant
power of recognition systems. Next, we discuss two-dimensional principal component
analysis (2DPCA), also referred to as image PCA. We propose a new reconstruction
algorithm that replaces PCA with 2DPCA when extracting the super-resolution subspace for
face and automatic target recognition. Our experimental results on the Yale and ORL face
databases are very encouraging. Furthermore, we also test the performance of our proposed
approach on the MSTAR database.
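The 2DPCA idea above (building the covariance from image matrices directly, instead of vectorizing each face) can be sketched as follows. This is a minimal illustration under assumed shapes, with random matrices standing in for face images, not the chapter's implementation:

```python
import numpy as np

# Minimal 2DPCA (image PCA) sketch: random matrices stand in for face images.
rng = np.random.default_rng(0)
images = rng.normal(size=(50, 32, 32))            # 50 "faces" of 32x32

# 2DPCA: a 32x32 image covariance built directly from image matrices.
mean_img = images.mean(axis=0)
centered = images - mean_img
G = np.mean([A.T @ A for A in centered], axis=0)  # image covariance matrix
eigvals, eigvecs = np.linalg.eigh(G)              # eigenvalues in ascending order
V = eigvecs[:, ::-1][:, :5]                       # top-5 projection axes
features_2d = centered @ V                        # each image -> 32x5 feature matrix

# Classical PCA (eigenfaces) on vectorized images, for contrast: the
# covariance is 1024x1024 instead of 32x32.
X = centered.reshape(50, -1)                      # 50 x 1024
C = X.T @ X / 50
print(features_2d.shape, C.shape)
```

The contrast motivates 2DPCA: the image-domain covariance stays small (image width squared), while vectorized PCA must diagonalize a matrix whose side equals the full pixel count.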
In general, data fidelity, feature extraction, discriminant analysis, and the classification
rule are the four basic elements of face and target recognition systems. The efficacy of
recognition systems can be improved by enhancing the fidelity of the noisy, blurred, and
undersampled images captured by surveillance imagers. Regarding data fidelity, when the
resolution of the captured image is too low, the detail information becomes too limited,
leading to severely degraded decisions in most existing recognition systems. Fortunately,
using super-resolution reconstruction algorithms (Park et al., 2003), a high-resolution (HR)
image can be reconstructed from an undersampled image sequence obtained from the
original scene with pixel displacements among the images. This HR image is then used as
input to the recognition system in order to improve recognition performance. In fact,
super-resolution can be viewed as the numerical and regularization study of the
ill-conditioned, large-scale problem that describes the relationship between
low-resolution (LR) and HR pixels (Nguyen et al., 2001).
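The LR/HR relationship referred to above is commonly written as a linear observation model, y_k = D B W_k x + n_k: each LR frame y_k is a warped (W_k), blurred (B), decimated (D), noisy view of the one HR image x. A toy forward model, with made-up shift/blur operators chosen only for illustration, might look like:

```python
import numpy as np

# Toy super-resolution observation model: each LR frame is a shifted,
# blurred, downsampled, noisy view of one HR image (operators are made up).
rng = np.random.default_rng(1)
hr = rng.random((64, 64))                         # the "unknown" HR image x

def lr_frame(x, dy, dx, factor=2, noise=0.01):
    shifted = np.roll(np.roll(x, dy, axis=0), dx, axis=1)       # warp W_k
    blurred = (shifted
               + np.roll(shifted, 1, 0) + np.roll(shifted, -1, 0)
               + np.roll(shifted, 1, 1) + np.roll(shifted, -1, 1)) / 5  # blur B
    down = blurred[::factor, ::factor]                          # decimation D
    return down + noise * rng.standard_normal(down.shape)       # noise n_k

# Four sub-pixel-displaced LR frames, as in the image-sequence setting above.
frames = [lr_frame(hr, dy, dx) for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(len(frames), frames[0].shape)
```

Recovering x from such frames is the ill-conditioned inverse problem that regularized super-resolution methods solve.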
On the one hand, feature extraction aims at reducing the dimensionality of the face or
target image so that the extracted feature is as representative as possible. On the other
hand, super-resolution aims at visually increasing the dimensionality of the face or target
image. When super-resolution methods are applied in the pixel domain (Lin et al., 2005;
Wagner et al., 2004), the performance of face and target recognition appreciably increases.
However, with the emphases on improving computational complexity and robustness to registration error
www.intechopen.com
('2874330', 'Widhyakorn Asdornwised', 'widhyakorn asdornwised')
00214fe1319113e6649435cae386019235474789Bachelor's Thesis in Computer Science
Face Recognition using
Distortion Models
Submitted to the Faculty of Mathematics, Computer Science and
Natural Sciences of RWTH Aachen University
Chair of Computer Science VI
Prof. Dr.-Ing. H. Ney
Submitted by:
Matriculation number 252400
Examiners:
Prof. Dr.-Ing. H. Ney
Prof. Dr. B. Leibe
Advisor:
September 2009
('1804963', 'Harald Hanselmann', 'harald hanselmann')
('1967060', 'Philippe Dreuw', 'philippe dreuw')
004e3292885463f97a70e1f511dc476289451ed5Quadruplet-wise Image Similarity Learning
Marc T. Law
LIP6, UPMC - Sorbonne University, Paris, France
('1728523', 'Nicolas Thome', 'nicolas thome')
('1702233', 'Matthieu Cord', 'matthieu cord')
{Marc.Law, Nicolas.Thome, Matthieu.Cord}@lip6.fr
0004f72a00096fa410b179ad12aa3a0d10fc853c
00b08d22abc85361e1c781d969a1b09b97bc7010Who is the Hero? − Semi-Supervised Person Re-Identification in Videos
Tampere University of Technology, Tampere, Finland
Nokia Research Center, Tampere, Finland
Keywords:
Semi-supervised person re-identification, Important person detection, Face tracks, Clustering
('13413642', 'Umar Iqbal', 'umar iqbal')
('9219875', 'Moncef Gabbouj', 'moncef gabbouj')
{umar.iqbal, moncef.gabbouj}@tut.fi, igor.curcio@nokia.com
007250c2dce81dd839a55f9108677b4f13f2640aAdvances in Component Based Face Detection
S. M. Bileschi
B. Heisele
Center for Biological And Computational Learning
Massachusetts Institute of Technology
Cambridge, MA.
Honda Research and Development
Boston, MA.
00e3957212517a252258baef833833921dd308d4Adaptively Weighted Multi-task Deep Network for Person
Attribute Classification
Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, China
School of Data Science, Fudan University, China
('37391748', 'Keke He', 'keke he')
('11032846', 'Zhanxiong Wang', 'zhanxiong wang')
('35782003', 'Yanwei Fu', 'yanwei fu')
('6260277', 'Rui Feng', 'rui feng')
('1717861', 'Yu-Gang Jiang', 'yu-gang jiang')
('1713721', 'Xiangyang Xue', 'xiangyang xue')
{kkhe15,15210240046,yanweifu,fengrui,ygj,xyxue}@fudan.edu.cn
00f0ed04defec19b4843b5b16557d8d0ccc5bb42
0037bff7be6d463785d4e5b2671da664cd7ef746Author manuscript, published in "European Conference on Computer Vision (ECCV '10) 6311 (2010) 634--647"
DOI : 10.1007/978-3-642-15549-9_46
009a18d04a5e3ec23f8ffcfc940402fd8ec9488fBOYRAZ ET AL. : WEAKLY-SUPERVISED ACTION RECOGNITION BY LOCALIZATION
Action Recognition by Weakly-Supervised
Discriminative Region Localization
Marshall Tappen12
1 Department of EECS
University of Central Florida
Orlando, FL USA
Amazon, Inc
Seattle, WA USA
Sighthound, Inc
Orlando, FL USA
('3174233', 'Hakan Boyraz', 'hakan boyraz')
('2234898', 'Syed Zain Masood', 'syed zain masood')
('6312216', 'Baoyuan Liu', 'baoyuan liu')
('1691260', 'Hassan Foroosh', 'hassan foroosh')
hakanb@amazon.com
zainmasood@sighthound.com
bliu@cs.ucf.edu
tappenm@amazon.com
foroosh@cs.ucf.edu
0066caed1238de95a431d836d8e6e551b3cde391Filtered Component Analysis to Increase Robustness
to Local Minima in Appearance Models
Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania
('1707876', 'Fernando De la Torre', 'fernando de la torre')ftorre@cs.cmu.edu acollet@cs.cmu.edu mquero@andrew.cmu.edu
tk@cs.cmu.edu
jeffcohn@pitt.edu
00075519a794ea546b2ca3ca105e2f65e2f5f471Generating a Large, Freely-Available Dataset for
Face-Related Algorithms
Amherst College
('40175953', 'Benjamin Mears', 'benjamin mears')
0019925779bff96448f0c75492717e4473f88377Deep Heterogeneous Face Recognition Networks based on Cross-modal
Distillation and an Equitable Distance Metric
U.S. Army Research Laboratory
University of Maryland, College Park
3Booz Allen Hamilton Inc.
('39412489', 'Christopher Reale', 'christopher reale')
('2445131', 'Hyungtae Lee', 'hyungtae lee')
('1688527', 'Heesung Kwon', 'heesung kwon')
reale@umiacs.umd.edu
lee hyungtae@bah.com
heesung.kwon.civ@mail.mil
00e9011f58a561500a2910a4013e6334627dee60FACIAL EXPRESSION RECOGNITION USING ANGLE-RELATED INFORMATION
FROM FACIAL MESHES
1Computer Science Department, Aristotle
University of Thessaloniki
University Campus, 54124, Thessaloniki, Greece
phone: (+30) 2310 996361, fax: (+30) 2310 996304,
web: www.aiia.csd.auth.gr
('1738865', 'Nicholas Vretos', 'nicholas vretos')
('1681629', 'Vassilios Solachidis', 'vassilios solachidis')
('3176394', 'Petr Somol', 'petr somol')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
email: vretos,vasilis,pitas@aiia.csd.auth.gr
00d9d88bb1bdca35663946a76d807fff3dc1c15fSubjects and Their Objects: Localizing Interactees for a
Person-Centric View of Importance
('3197570', 'Chao-Yeh Chen', 'chao-yeh chen')
00a967cb2d18e1394226ad37930524a31351f6cfFully-adaptive Feature Sharing in Multi-Task Networks with Applications in
Person Attribute Classification
UC San Diego
IBM Research
IBM Research
Binghamton University, SUNY
UC San Diego
Rogerio Feris
IBM Research
('2325498', 'Yongxi Lu', 'yongxi lu')
('8991006', 'Yu Cheng', 'yu cheng')
('40632040', 'Abhishek Kumar', 'abhishek kumar')
('2443456', 'Shuangfei Zhai', 'shuangfei zhai')
('1737723', 'Tara Javidi', 'tara javidi')
yol070@ucsd.edu
abhishk@us.ibm.com
szhai2@binghamton.edu
chengyu@us.ibm.com
tjavidi@eng.ucsd.edu
rsferis@us.ibm.com
00f1e5e954f9eb7ffde3ca74009a8c3c27358b58Unsupervised Clustering for Google Searches of Celebrity Images
California Institute of Technology, Pasadena, CA
* These authors contributed equally in this work
('3075121', 'Alex Holub', 'alex holub')
('2462051', 'Pierre Moreels', 'pierre moreels')
('1690922', 'Pietro Perona', 'pietro perona')
holub@vision.caltech.edu, pmoreels@vision.caltech.edu, perona@vision.caltech.edu
00a3cfe3ce35a7ffb8214f6db15366f4e79761e3Kinect for real-time emotion recognition via facial expressions. Frontiers of
Information Technology & Electronic Engineering, 16(4):272-282.
[doi:10.1631/FITEE.1400209]
Using Kinect for real-time emotion
recognition via facial expressions
Key words: Kinect, Emotion recognition, Facial expression, Real-time
classification, Fusion algorithm, Support vector machine (SVM)
ORCID: http://orcid.org/0000-0002-5021-9057
Front Inform Technol & Electron Eng
('2566775', 'Qi-rong Mao', 'qi-rong mao')
('2016065', 'Xin-yu Pan', 'xin-yu pan')
('20342486', 'Yong-zhao Zhan', 'yong-zhao zhan')
('2800876', 'Xiang-jun Shen', 'xiang-jun shen')
('2566775', 'Qi-rong Mao', 'qi-rong mao')
E-mail: mao_qr@ujs.edu.cn
0058cbe110933f73c21fa6cc9ae0cd23e974a9c7BISWAS, JACOBS: AN EFFICIENT ALGORITHM FOR LEARNING DISTANCES
An Efficient Algorithm for Learning
Distances that Obey the Triangle Inequality
http://www.xrci.xerox.com/profile-main/67
http://www.cs.umd.edu/~djacobs/
Xerox Research Centre India
Bangalore, India
Computer Science Department
University of Maryland
College Park, USA
('2221075', 'Arijit Biswas', 'arijit biswas')
('1682573', 'David Jacobs', 'david jacobs')
004a1bb1a2c93b4f379468cca6b6cfc6d8746cc4Balanced k-Means and Min-Cut Clustering ('1729163', 'Xiaojun Chang', 'xiaojun chang')
('1688370', 'Feiping Nie', 'feiping nie')
('1727419', 'Zhigang Ma', 'zhigang ma')
('39033919', 'Yi Yang', 'yi yang')
00d94b35ffd6cabfb70b9a1d220b6823ae9154eeDiscriminative Bayesian Dictionary Learning
for Classification
('2941543', 'Naveed Akhtar', 'naveed akhtar')
('1688013', 'Faisal Shafait', 'faisal shafait')
00ebc3fa871933265711558fa9486057937c416eCollaborative Representation based Classification
for Face Recognition
The Hong Kong Polytechnic University, Hong Kong, China
b School of Applied Mathematics, Xidian University, Xi'an, China
c Principal Researcher, Microsoft Research Asia, Beijing, China
('36685537', 'Lei Zhang', 'lei zhang')
('5828998', 'Meng Yang', 'meng yang')
('2340559', 'Xiangchu Feng', 'xiangchu feng')
('1700297', 'Yi Ma', 'yi ma')
('1698371', 'David Zhang', 'david zhang')
006f283a50d325840433f4cf6d15876d475bba77756
Preserving Structure in Model-Free Tracking
('2883723', 'Lu Zhang', 'lu zhang')
('1803520', 'Laurens van der Maaten', 'laurens van der maaten')
00b29e319ff8b3a521b1320cb8ab5e39d7f42281Towards Transparent Systems: Semantic
Characterization of Failure Modes
Carnegie Mellon University, Pittsburgh, USA
University of Washington, Seattle, USA
3 Virginia Tech, Blacksburg, USA
('3294630', 'Aayush Bansal', 'aayush bansal')
('2270286', 'Ali Farhadi', 'ali farhadi')
('1713589', 'Devi Parikh', 'devi parikh')
00d931eccab929be33caea207547989ae7c1ef39The Natural Input Memory Model
IKAT, Universiteit Maastricht, St. Jacobsstraat 6, 6211 LB Maastricht, The Netherlands
Universiteit van Amsterdam, Roeterstraat 15, 1018 WB Amsterdam, The Netherlands
IKAT, Universiteit Maastricht, St. Jacobsstraat 6, 6211 LB Maastricht, The Netherlands
Joyca P.W. Lacroix (j.lacroix@cs.unimaas.nl)
Jaap M.J. Murre (jaap@murre.com)
Eric O. Postma (postma@cs.unimaas.nl)
H. Jaap van den Herik (herik@cs.unimaas.nl)
0059b3dfc7056f26de1eabaafd1ad542e34c2c2e
0052de4885916cf6949a6904d02336e59d98544cSpringer Science + Business Media, Inc. Manufactured in The Netherlands
DOI: 10.1007/s10994-005-3561-6
Generalized Low Rank Approximations of Matrices
University of Minnesota-Twin Cities, Minneapolis
MN 55455, USA
Editor:
Peter Flach
Published online: 12 August 2005
('37513601', 'Jieping Ye', 'jieping ye')jieping@cs.umn.edu
6e60536c847ac25dba4c1c071e0355e5537fe061Computer Vision and Natural Language Processing: Recent
Approaches in Multimedia and Robotics
71
Integrating computer vision and natural language processing is a novel interdisciplinary field that has
received a lot of attention recently. In this survey, we provide a comprehensive introduction
to the integration of computer vision and natural language processing in multimedia and
robotics applications, with more than 200 key references. The tasks we survey include visual
attributes, image captioning, video captioning, visual question answering, visual retrieval,
human-robot interaction, robotic actions, and robot navigation. We also emphasize strategies
for integrating computer vision and natural language processing models under the unified
theme of distributional semantics, casting image embeddings and word embeddings as the
analogs of distributional semantics in computer vision and natural language processing,
respectively. Finally, we present a unified view of the field and propose possible future
directions.
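The word-embedding/image-embedding analogy above can be made concrete with cosine similarity in a shared vector space. A toy sketch, with random vectors standing in for learned embeddings (the vocabulary and dimensionality are arbitrary choices for illustration):

```python
import numpy as np

# Toy distributional-semantics sketch: word and image embeddings live in the
# same d-dimensional space; similarity is the cosine of the angle between them.
rng = np.random.default_rng(2)
d = 8
word_emb = {w: rng.normal(size=d) for w in ["dog", "cat", "car"]}
image_emb = rng.normal(size=d)                 # embedding of some image

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Cross-modal retrieval = rank words by similarity to the image embedding.
ranked = sorted(word_emb, key=lambda w: cosine(word_emb[w], image_emb),
                reverse=True)
print(ranked)
```

Image captioning, visual question answering, and visual retrieval can all be framed as variants of this nearest-neighbor lookup in a joint embedding space.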
Categories and Subject Descriptors: I.2.0 [Artificial Intelligence]: General; I.2.7 [Artificial Intelligence]:
Natural Language Processing; I.2.9 [Artificial Intelligence]: Robotics; I.2.10 [Artificial Intelligence]:
Vision and Scene Understanding; I.4.9 [Image Processing and Computer Vision]: Applications; I.5.4
[Pattern Recognition]: Applications
General Terms: Computer Vision, Natural Language Processing, Robotics
Additional Key Words and Phrases: Language and vision, survey, multimedia, robotics, symbol grounding,
distributional semantics, computer vision, natural language processing, visual attribute, image captioning,
imitation learning, word2vec, word embedding, image embedding, semantic parsing, lexical semantics
ACM Reference Format:
Computer vision and natural language processing: Recent approaches in multimedia and robotics. ACM
Comput. Surv. 49, 4, Article 71 (December 2016), 44 pages.
DOI: http://dx.doi.org/10.1145/3009906
1. INTRODUCTION
We have many ways to describe the world for communication between people: texts,
gestures, sign languages, and facial expressions are all ways of sharing meaning.
Language is unique among communication systems in that its compositionality through
syntax allows a limitless number of meanings to be expressed. Such meaning ultimately
must be tied to perception of the world. This is usually referred to as the symbol
An earlier version of this article appeared as “Computer Vision and Natural Language Processing: Re-
cent Approaches in Multimedia and Robotics,” Scholarly Paper Archive, Department of Computer Science,
University of Maryland, College Park, MD
Authors’ addresses: P. Wiriyathammabhum, C. Ferm ¨uller, and Y. Aloimonos, Computer Vision Lab, Uni-
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted
without fee provided that copies are not made or distributed for profit or commercial advantage and that
copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for
('2862582', 'Peratham Wiriyathammabhum', 'peratham wiriyathammabhum')
('1937719', 'Douglas Summers-Stay', 'douglas summers-stay')
('1697493', 'Yiannis Aloimonos', 'yiannis aloimonos')
('2862582', 'Peratham Wiriyathammabhum', 'peratham wiriyathammabhum')
('1937719', 'Douglas Summers-Stay', 'douglas summers-stay')
('1697493', 'Yiannis Aloimonos', 'yiannis aloimonos')
versity of Maryland College Park, MD 20742-3275; email: {peratham@cs.umd.edu, fer@umiacs.umd.edu,
yiannis@cs.umd.edu}. D. Summers-Stay, U.S. Army Research Laboratory, Adelphi, MD 20783; email:
{douglas.a.summers-stay.civ@mail.mil}.
6e198f6cc4199e1c4173944e3df6f39a302cf787MORPH-II: Inconsistencies and Cleaning Whitepaper
NSF-REU Site at UNC Wilmington, Summer 2017
('39845059', 'G. Bingham', 'g. bingham')
('1693470', 'B. Yip', 'b. yip')
('1833570', 'M. Ferguson', 'm. ferguson')
('1693283', 'C. Chen', 'c. chen')
('11134292', 'Y. Wang', 'y. wang')
('3369885', 'T. Kling', 't. kling')
6eaf446dec00536858548fe7cc66025b70ce20eb
6e173ad91b288418c290aa8891193873933423b3Are you from North or South India? A hard race classification task reveals
systematic representational differences between humans and machines
aCentre for Neuroscience, Indian Institute of Science, Bangalore, India
('2478739', 'Harish Katti', 'harish katti')
6e91be2ad74cf7c5969314b2327b513532b1be09Dimensionality Reduction with Subspace Structure
Preservation
Department of Computer Science
SUNY Buffalo
Buffalo, NY 14260
('2309967', 'Devansh Arpit', 'devansh arpit')
('1841118', 'Ifeoma Nwogu', 'ifeoma nwogu')
('1723877', 'Venu Govindaraju', 'venu govindaraju')
{devansh,inwogua,govind}@buffalo.edu
6eba25166fe461dc388805cc2452d49f5d1cdaddPages 122.1-122.12
DOI: https://dx.doi.org/10.5244/C.30.122
6e8a81d452a91f5231443ac83e4c0a0db4579974Illumination robust face representation based on intrinsic geometrical
information
Soyel, H; Ozmen, B; McOwan, PW
This is a pre-copyedited, author-produced PDF of an article accepted for publication in IET
Conference on Image Processing (IPR 2012). The version of record is available
http://ieeexplore.ieee.org/document/6290632/?arnumber=6290632&tag=1
http://qmro.qmul.ac.uk/xmlui/handle/123456789/16147
6ed738ff03fd9042965abdfaa3ed8322de15c116This document is downloaded from DR-NTU, Nanyang Technological
University Library, Singapore
Title
K-MEAP: Generating Specified K Clusters with Multiple
Exemplars by Efficient Affinity Propagation
Author(s) Wang, Yangtao; Chen, Lihui
Citation
Wang, Y & Chen, L. (2014). K-MEAP: Generating
Specified K Clusters with Multiple Exemplars by Efficient
Affinity Propagation. 2014 IEEE International Conference
on Data Mining (ICDM), 1091-1096.
Date
2014
URL
http://hdl.handle.net/10220/39690
Rights
© 2014 IEEE. The published version is available at:
http://dx.doi.org/10.1109/ICDM.2014.54
6ecd4025b7b5f4894c990614a9a65e3a1ac347b2International Journal on Recent and Innovation Trends in Computing and Communication
ISSN: 2321-8169
Volume: 2 Issue: 5
1275–1281
Automatic Naming of Character using Video Streaming for Face
Recognition with Graph Matching
Nivedita.R.Pandey
Ranjan.P.Dahake
PG Student at MET’s IOE Bhujbal Knowledge City,
PG Student at MET’s IOE Bhujbal Knowledge City,
Nasik, Maharashtra, India,
Nasik, Maharashtra, India,
pandeynivedita7@gmail.com
dahakeranjan@gmail.com
6eddea1d991e81c1c3024a6cea422bc59b10a1dcTowards automatic analysis of gestures and body
expressions in depression
University of Cambridge
Computer Laboratory
Cambridge, UK
University of Cambridge
Computer Laboratory
Cambridge, UK
('2022940', 'Marwa Mahmoud', 'marwa mahmoud')
('39840677', 'Peter Robinson', 'peter robinson')
marwa.mahmoud@cl.cam.ac.uk
peter.robinson@cl.cam.ac.uk
6eaeac9ae2a1697fa0aa8e394edc64f32762f578
6ee2ea416382d659a0dddc7a88fc093accc2f8ee
6e97a99b2879634ecae962ddb8af7c1a0a653a82Towards Context-aware Interaction Recognition∗
School of Computer Science, University of Adelaide, Australia
Contents
1. Introduction
2. Related work
3. Methods
3.1. Context-aware interaction classification framework
3.2. Feature representations for interaction recognition
3.2.1 Spatial feature representation
3.2.2 Appearance feature representation
3.3. Improving appearance representation with attention and context-aware attention
3.4. Implementation details
4. Experiments
4.1. Evaluation on the Visual Relationship dataset
4.1.1 Detection results comparison
4.1.2 Zero-shot learning performance evaluation
4.1.3 Extensions and comparison with the state-of-the-art methods
4.2. Evaluation on the Visual Phrase dataset
5. Conclusion
('3194022', 'Bohan Zhuang', 'bohan zhuang')
('2161037', 'Lingqiao Liu', 'lingqiao liu')
('1780381', 'Chunhua Shen', 'chunhua shen')
6e9a8a34ab5b7cdc12ea52d94e3462225af2c32cFusing Aligned and Non-Aligned Face Information
for Automatic Affect Recognition in the Wild: A Deep Learning Approach
Computational NeuroSystems Laboratory (CNSL)
Korea Advanced Institute of Science and Technology (KAIST
('3918690', 'Bo-Kyeong Kim', 'bo-kyeong kim')
('2527421', 'Suh-Yeon Dong', 'suh-yeon dong')
('3294960', 'Jihyeon Roh', 'jihyeon roh')
('34577016', 'Soo-Young Lee', 'soo-young lee')
{bokyeong1015, suhyeon.dong}@gmail.com, {rohleejh, gmkim90, sylee}@kaist.ac.kr
6e3a181bf388dd503c83dc324561701b19d37df1Finding a low-rank basis in a matrix subspace
André Uschmajew
('2391697', 'Yuji Nakatsukasa', 'yuji nakatsukasa')
6ef1996563835b4dfb7fda1d14abe01c8bd24a05Nonparametric Part Transfer for Fine-grained Recognition
Computer Vision Group, Friedrich Schiller University Jena
www.inf-cv.uni-jena.de
('1679449', 'Erik Rodner', 'erik rodner')
('1720839', 'Alexander Freytag', 'alexander freytag')
('1728382', 'Joachim Denzler', 'joachim denzler')
6e8c3b7d25e6530a631ea01fbbb93ac1e8b69d2fDeep Episodic Memory: Encoding, Recalling, and Predicting
Episodic Experiences for Robot Action Execution
('35309584', 'Jonas Rothfuss', 'jonas rothfuss')
('2128564', 'Fabio Ferreira', 'fabio ferreira')
('34876449', 'Eren Erdal Aksoy', 'eren erdal aksoy')
('46432716', 'You Zhou', 'you zhou')
('1722677', 'Tamim Asfour', 'tamim asfour')
6e911227e893d0eecb363015754824bf4366bdb7Wasserstein Divergence for GANs
1 Computer Vision Lab, ETH Zurich, Switzerland
2 VISICS, KU Leuven, Belgium
('1839268', 'Jiqing Wu', 'jiqing wu')
('7945869', 'Zhiwu Huang', 'zhiwu huang')
('30691454', 'Janine Thoma', 'janine thoma')
('32610154', 'Dinesh Acharya', 'dinesh acharya')
('1681236', 'Luc Van Gool', 'luc van gool')
{jwu,zhiwu.huang,jthoma,vangool}@vision.ee.ethz.ch,
acharyad@student.ethz.ch
6ee8a94ccba10062172e5b31ee097c846821a822Submitted 3/13; Revised 10/13; Published 12/13
How to Solve Classification and Regression Problems on
High-Dimensional Data with a Supervised
Extension of Slow Feature Analysis
Institut f¨ur Neuroinformatik
Ruhr-Universit¨at Bochum
Bochum D-44801, Germany
Editor: David Dunson
('2366497', 'Alberto N. Escalante', 'alberto n. escalante')
('1736245', 'Laurenz Wiskott', 'laurenz wiskott')
ALBERTO.ESCALANTE@INI.RUB.DE
LAURENZ.WISKOTT@INI.RUB.DE
6ee64c19efa89f955011531cde03822c2d1787b8Table S1: Review of existing facial expression databases that are often used in social psychology.
[1] GEMEP Corpus. Format: audio and video recordings. Expressions1: admiration, amusement, anger, tenderness, disgust, despair, pride, shame, anxiety (worry), interest, irritation, joy (elation), contempt, panic fear, pleasure (sensual), relief, surprise, sadness. Summary: This database contains more than 7000 clips of the six basic emotions as well as subtle emotions. For the recordings, 10 professional actors (5 female) were coached by a professional director. The actors received a list of the emotions together with short definitions and brief scenarios. The recordings are available in different intensity levels, and part of the database has been validated.
[2] Mind Reading: the interactive guide to emotions. Format: videos. Expressions: 24 expression groups: afraid, angry, bored, bothered, disbelieving, disgusted, excited, fond, happy, hurt, interested, kind, liked, romantic, sad, sneaky, sorry, sure, surprised, thinking, touched, unfriendly, unsure, wanting. Summary: The database contains over 400 videos of facial expressions that are summarized in 24 groups. Each group consists of different subordinate expressions. Each expression is displayed by 6 models ranging in age.
[3] RU-FACS Spontaneous Expression Database. Format: videos. Expressions: spontaneous facial actions. Summary: 100 participants were recorded using a false-opinion paradigm, which is thought to elicit spontaneous facial expressions. Participants fill out a questionnaire regarding their opinions about particular social or political issues and are then asked about their answers by an interviewer; they are either asked to tell the truth or to fool the interviewer, and were financially rewarded. The expressions were video captured by four synchronized cameras, and clips of 33 participants have been FACS coded (onset, apex, and offset of the face action).
[4] DaFEx. Format: videos. Expressions: happiness, fear, sadness, surprise, anger and disgust. Summary: Comprises 1008 short videos of expressions produced by 8 Italian professional actors. Each expression was recorded in three intensities (low, medium, and high) and in two different conditions: (1) an utterance condition in which actors spoke additional sentences and (2) a non-utterance condition. Actors were additionally given scenarios according to the expressions to be produced.
[5] Montreal Set of Facial Displays of Emotion (MSFDE). Format: images. Expressions: happiness, anger, embarrassment, fear, sadness, disgust. Summary: The expressions are taken from 12 participants (European, Asian and African). Each expression was created using a directed facial action task, and all expressions were FACS coded. Moreover, the expressions have been morphed into 5 different levels of intensity.
[6] The Yale Face Database. Format: images. Expressions: happiness, sleepy, wink, surprise, sadness. Summary: It contains 165 greyscale images of 15 individuals, one per different facial expression or configuration (with or without glasses, different camera perspectives).
[7] University of Maryland Face Database. Format: videos. Expressions: happiness, fear, sadness, surprise, anger and disgust. Summary: This database contains two sets of facial expressions: (1) a laboratory set that includes 40 participants (varied in culture, race, and appearance) displaying their own choice of expressions; participants were allowed to move their heads without going into profile view, were asked to avoid speech, and each video sequence contains 1-3 expressions; (2) video recordings from TV that also contained speech.
[Breidt2] MPI Video Database. Format: videos. Expressions: facial action units. Summary: The database contains videos of one actor performing approximately 45 action units, which were recorded from six different viewpoints simultaneously.
[8] Dynamic 3D FACS data set (D3DFACS). Format: videos. Expressions: facial action units. Summary: Between 19 and 97 different action units were recorded from 10 participants. Action unit sequences contain single and combined action units. The peak of each expression has been manually coded by certified FACS experts. Moreover, a framework is proposed that allows dynamic 3D morphable models to be built for the first time.
[Chen3, 2007] Taiwanese Facial Expression Database (TFEID). Format: images. Expressions: anger, contempt, disgust, fear, happiness, sadness and surprise. Summary: The database consists of 7200 images captured from 40 individuals. The expressions are displayed in two intensities (high and low) and captured from two viewing angles (0° and 45°) simultaneously.
[9] CAFE Database. Format: images. Expressions: anger, disgust, happy, maudlin (for sad), fear, surprise. Summary: The database consists of two normalized versions (one gamma corrected and the other histogram equalized) of the faces.
1Neutral expression is not included.
2Please see http://vdb.kyb.tuebingen.mpg.de/.
3Please see http://bml.ym.edu.tw/ download/html/news.htm.
6e00a406edb508312108f683effe6d3c1db020fbFaces as Lighting Probes via Unsupervised Deep
Highlight Extraction
Simon Fraser University, Burnaby, Canada
National University of Defense Technology, Changsha, China
3 Microsoft Research, Beijing, China
('2693616', 'Renjiao Yi', 'renjiao yi')
('2041096', 'Chenyang Zhu', 'chenyang zhu')
('37291674', 'Ping Tan', 'ping tan')
('1686911', 'Stephen Lin', 'stephen lin')
{renjiaoy, cza68, pingtan}@sfu.ca
stevelin@microsoft.com
6e94c579097922f4bc659dd5d6c6238a428c4d22Graph Based Multi-class Semi-supervised
Learning Using Gaussian Process
State Key Laboratory of Intelligent Technology and Systems,
Tsinghua University, Beijing, China
('1809614', 'Yangqiu Song', 'yangqiu song')
('1700883', 'Changshui Zhang', 'changshui zhang')
('1760678', 'Jianguo Lee', 'jianguo lee')
{songyq99, lijg01}@mails.tsinghua.edu.cn, zcs@mail.tsinghua.edu.cn
6e379f2d34e14efd85ae51875a4fa7d7ae63a662A NEW MULTI-MODAL BIOMETRIC SYSTEM
BASED ON FINGERPRINT AND FINGER
VEIN RECOGNITION
Master's Thesis
Department of Software Engineering
JULY-2014
('37171106', 'Naveed AHMED', 'naveed ahmed')
('1987743', 'Asaf VAROL', 'asaf varol')
6eb1e006b7758b636a569ca9e15aafd038d2c1b1Human Capabilities on Video-based Facial
Expression Recognition
Faculty of Science and Engineering, Waseda University, Tokyo, Japan
2 Institut f¨ur Informatik, Technische Universit¨at M¨unchen, Germany
('32131501', 'Matthias Wimmer', 'matthias wimmer')
('1989987', 'Ursula Zucker', 'ursula zucker')
('1699132', 'Bernd Radig', 'bernd radig')
6eece104e430829741677cadc1dfacd0e058d60fAutomated Facial Image Analysis 1
To appear in J. A. Coan & J. B. Allen (Eds.), The handbook of emotion elicitation and assess-
ment. Oxford University Press Series in Affective Science. New York: Oxford
Use of Automated Facial Image Analysis for Measurement of Emotion Expression
Department of Psychology
University of Pittsburgh
Takeo Kanade
Robotics Institute
Carnegie Mellon University
Facial expressions are a key index of emotion. They have consistent correlation with
self-reported emotion (Keltner, 1995; Rosenberg & Ekman, 1994; Ekman & Rosenberg, in press)
and emotion-related central and peripheral physiology (Davidson, Ekman, Saron, Senulis, &
Friesen, 1990; Fox & Davidson, 1988; Levenson, Ekman, & Friesen, 1990). They putatively
share similar underlying dimensions with self-reported emotion (e.g., positive and negative
affect) (Bullock & Russell, 1984; Gross & John, 1997; Watson & Tellegen, 1985). Facial
expressions serve interpersonal functions of emotion by conveying communicative intent,
signaling affective information in social referencing (Campos, Bertenthal, & Kermoian, 1992),
and more generally contributing to the regulation of social interaction (Cohn & Elmore, 1988;
Fridlund, 1994; Schmidt & Cohn, 2001). As a measure of trait affect, stability in facial
expression emerges early in life (Cohn & Campbell, 1992; Malatesta, Culver, Tesman, &
Shephard, 1989). By adulthood, stability is moderately strong, comparable to that for self-
reported emotion (Cohn, Schmidt, Gross, & Ekman, 2002), and predictive of favorable outcomes
in emotion-related domains including marriage and personal well-being over periods as long as
30 years (Harker & Keltner, 2001). Expressive changes in the face are a rich source of cues
about intra- and interpersonal functions of emotion (cf. Keltner & Haidt, 1999).
To make use of the information afforded by facial expression for emotion science and
clinical practice, reliable, valid, and efficient methods of measurement are critical. Until recently,
selecting a measurement method meant choosing among one or another human-observer-based
coding system (e.g., Ekman & Friesen, 1978 and Izard, 1983) or facial electromyography
(EMG). While each of these approaches has advantages, they are not without costs. Human-
observer-based methods are time consuming to learn and use, and they are difficult to
standardize, especially across laboratories and over time (Bakeman & Gottman, 1986; Martin &
Bateson, 1986). Facial EMG requires placement of sensors on the face, which may inhibit facial
action and which rules out its use for naturalistic observation.
An emerging alternative to these methods is automated facial image analysis using
computer vision. Computer vision is the science of extracting and representing meaningful
information from digitized video and recognizing perceptually meaningful patterns. An early
focus in automated face image analysis by computer vision was face recognition (Kanade, 1973,
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
6e0a05d87b3cc7e16b4b2870ca24cf5e806c0a94RANDOM GRAPHS FOR STRUCTURE
DISCOVERY IN HIGH-DIMENSIONAL DATA
by
Jos¶e Ant¶onio O. Costa
A dissertation submitted in partial fulflllment
of the requirements for the degree of
Doctor of Philosophy
(Electrical Engineering: Systems)
in The University of Michigan
2005
Doctoral Committee:
Professor Alfred O. Hero III, Chair
Professor Jeffrey A. Fessler
Professor David L. Neuhoff
('1703616', 'Susan A. Murphy', 'susan a. murphy')
6e1802874ead801a7e1072aa870681aa2f555f351-4244-0728-1/07/$20.00 ©2007 IEEE
ICASSP 2007
6ed22b934e382c6f72402747d51aa50994cfd97bCustomized Expression Recognition for Performance-Driven
Cutout Character Animation
†NEC Laboratories America
‡Snapchat
('39960064', 'Xiang Yu', 'xiang yu')
('1706007', 'Jianchao Yang', 'jianchao yang')
6e93fd7400585f5df57b5343699cb7cda20cfcc2http://journalofvision.org/9/2/22/
Comparing a novel model based on the transferable
belief model with humans during the recognition of
partially occluded facial expressions
Département de Psychologie, Université de Montréal,
Canada
Département de Psychologie, Université de Montréal,
Canada
Département de Psychologie, Université de Montréal,
Canada
Humans recognize basic facial expressions effortlessly. Yet, despite a considerable amount of research, this task remains
elusive for computer vision systems. Here, we compared the behavior of one of the best computer models of facial
expression recognition (Z. Hammal, L. Couvreur, A. Caplier, & M. Rombaut, 2007) with the behavior of human observers
during the M. Smith, G. Cottrell, F. Gosselin, and P. G. Schyns (2005) facial expression recognition task performed on
stimuli randomly sampled using Gaussian apertures. The model, which we had to significantly modify in order to give
it the ability to deal with partially occluded stimuli, classifies the six basic facial expressions (Happiness, Fear, Sadness,
Surprise, Anger, and Disgust) plus Neutral from static images based on the permanent facial feature deformations and the
Transferable Belief Model (TBM). Three simulations demonstrated the suitability of the TBM-based model to deal with
partially occluded facial parts and revealed the differences between the facial information used by humans and by the
model. This opens promising perspectives for the future development of the model.
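The Transferable Belief Model mentioned in the abstract rests on combining mass functions over sets of hypotheses (here, expressions) with the unnormalized conjunctive rule, which, unlike Dempster's rule, keeps the mass assigned to the empty set as a measure of conflict. The sketch below illustrates that combination step only; the expression labels and numeric masses are invented for illustration and are not taken from the paper.

```python
# Sketch of the conjunctive combination rule underlying the Transferable
# Belief Model (TBM): two sources of evidence, each a mass function over
# sets of candidate expressions, are merged. Mass landing on the empty
# set (conflict) is retained rather than renormalized away.
# Labels and masses below are illustrative assumptions.

def conjunctive_combine(m1, m2):
    """m1, m2: {frozenset of labels: mass}. Returns the combined mass function."""
    out = {}
    for A, a in m1.items():
        for B, b in m2.items():
            out[A & B] = out.get(A & B, 0.0) + a * b  # intersect focal sets
    return out

happy, surprise = frozenset({"Happiness"}), frozenset({"Surprise"})
m_mouth = {happy: 0.6, happy | surprise: 0.4}                  # hypothetical cue 1
m_brows = {happy: 0.5, surprise: 0.3, happy | surprise: 0.2}   # hypothetical cue 2
m = conjunctive_combine(m_mouth, m_brows)
print(round(m[happy], 2), round(m[frozenset()], 2))  # prints: 0.62 0.18
```

The retained empty-set mass (0.18 here) is what lets a TBM-based classifier flag inconsistent facial cues instead of forcing a decision.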
Keywords: facial features behavior, facial expressions classification, Transferable Belief Model, Bubbles
Citation: Hammal, Z., Arguin, M., & Gosselin, F. (2009). Comparing a novel model based on the transferable belief
model with humans during the recognition of partially occluded facial expressions. Journal of Vision, 9(2):22,
http://journalofvision.org/9/2/22/, doi:10.1167/9.2.22.
Introduction
Facial expressions communicate information from
which we can quickly infer the state of mind of our peers
and adjust our behavior accordingly (Darwin, 1872). To
illustrate, take a person like patient SM with complete
bilateral damage to the amygdala nuclei that prevents her
from recognizing facial expressions of fear. SM would be
incapable of interpreting the fearful expression on the face
of a bystander, who has encountered a furious Grizzly
bear, as a sign of potential
threat (Adolphs, Tranel,
Damasio, & Damasio, 1994).
Facial expressions are typically arranged into six universally recognized basic categories
(Happiness, Surprise, Disgust, Anger, Sadness, and Fear) that are similarly
expressed across different backgrounds and cultures
(Cohn, 2006; Ekman, 1999; Izard, 1971, 1994). Facial
expressions result from the precisely choreographed
deformation of facial features, which are often described
using the 46 Action Units (AUs; Ekman & Friesen,
1978).
Facial expression recognition and computer
vision
The study of human facial expressions has an impact in
several areas of life such as art, social interaction, cognitive
science, medicine, security, affective computing, and
human-computer interaction (HCI). An automatic facial
expressions classification system may contribute signifi-
cantly to the development of all these disciplines. However,
the development of such a system constitutes a significant
challenge because of the many constraints that are imposed
by its application in a real-world context (Pantic & Bartlett,
2007; Pantic & Patras, 2006). In particular, such systems
need to provide great accuracy and robustness without
demanding too many interventions from the user.
There have been major advances in computer vision
over the past 15 years for the recognition of the six basic
facial expressions (for reviews, see Fasel & Luettin, 2003;
Pantic & Rothkrantz, 2000b). The main approaches can be
divided in two classes: Model-based and fiducial points
approaches. The model-based approach requires the
design of a deterministic physical model that can represent
doi: 10.1167/9.2.22
Received January 28, 2008; published February 26, 2009
ISSN 1534-7362 * ARVO
('1785007', 'Zakia Hammal', 'zakia hammal')
('3005969', 'Martin Arguin', 'martin arguin')
('2074568', 'Frédéric Gosselin', 'frédéric gosselin')
6eb1b5935b0613a41b72fd9e7e53a3c0b32651e9LEGO Pictorial Scales for Assessing Affective Responses
t2i Lab, Chalmers University of Technology, Gothenburg, Sweden
2Digital Productivity, CSIRO, Australia
University of Canterbury, New Zealand
Texas AandM University, College Station, TX, USA
Human Centered Multimedia, Augsburg University, Germany
Human Interface Technology Lab New Zealand, University of Canterbury, New Zealand
('1761180', 'Mohammad Obaid', 'mohammad obaid')
('39191121', 'Andreas Dünser', 'andreas dünser')
('1719307', 'Elena Moltchanova', 'elena moltchanova')
('33096182', 'Danielle Cummings', 'danielle cummings')
('1728894', 'Christoph Bartneck', 'christoph bartneck')
mobaid@chalmers.se
6e12ba518816cbc2d987200c461dc907fd19f533
6e782073a013ce3dbc5b9b56087fd0300c510f67IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661,p-ISSN: 2278-8727, Volume 17, Issue 3, Ver. II (May – Jun. 2015), PP 61-68
www.iosrjournals.org
Real Time Facial Emotion Recognition using Kinect V2 Sensor
Doctoral School of Automatic Control and Computers, University POLITEHNICA of Bucharest, Romania
Ministry of Higher Education and Scientific Research / The University of Mustsnsiriyah/Baghdad IRAQ
2(Department of Computers/Faculty of Automatic Control and ComputersPOLITEHNICA of Bucharest
3(Department of Computers/Faculty of Automatic Control and ComputersPOLITEHNICA of Bucharest
ROMANIA)
ROMANIA)
('9384437', 'Hesham A. Alabbasi', 'hesham a. alabbasi')
('3088730', 'Alin Moldoveanu', 'alin moldoveanu')
9ab463d117219ed51f602ff0ddbd3414217e3166Weighted Transmedia
Relevance Feedback for
Image Retrieval and
Auto-annotation
TECHNICAL
REPORT
N° 0415
December 2011
Project-Teams LEAR - INRIA
and TVPA - XRCE
('1722052', 'Thomas Mensink', 'thomas mensink')
('34602236', 'Jakob Verbeek', 'jakob verbeek')
('1808423', 'Gabriela Csurka', 'gabriela csurka')
9ac82909d76b4c902e5dde5838130de6ce838c16Recognizing Facial Expressions Automatically
from Video
1 Introduction
Facial expressions, resulting from movements of the facial muscles, are the face
changes in response to a person’s internal emotional states, intentions, or social
communications. There is a considerable history associated with the study of facial
expressions. Darwin (1872) was the first to describe in detail the specific facial
expressions associated with emotions in animals and humans, arguing that
all mammals show emotions reliably in their faces. Since then, facial expression
analysis has been an area of great research interest for behavioral scientists (Ekman,
Friesen, and Hager, 2002).
Rosenthal, 1992) suggest that facial expressions, as the main mode for non-verbal
communication, play a vital role in human face-to-face communication. For illus-
tration, we show some examples of facial expressions in Fig. 1.
Computer recognition of facial expressions has many important applications in
intelligent human-computer interaction, computer animation, surveillance and se-
curity, medical diagnosis, law enforcement, and awareness systems (Shan, 2007).
Therefore, it has been an active research topic in multiple disciplines such as psy-
chology, cognitive science, human-computer interaction, and pattern recognition.
Meanwhile, as a promising unobtrusive solution, automatic facial expression analysis
from video or images has received much attention in the last two decades (Pantic and
Rothkrantz, 2000a; Fasel and Luettin, 2003; Tian, Kanade, and Cohn, 2005; Pantic
and Bartlett, 2007).
This chapter introduces recent advances in computer recognition of facial expres-
sions. Firstly, we describe the problem space, which includes multiple dimensions:
level of description, static versus dynamic expression, facial feature extraction and
('10795229', 'Caifeng Shan', 'caifeng shan')
('3297850', 'Ralph Braspenning', 'ralph braspenning')
('10795229', 'Caifeng Shan', 'caifeng shan')
('3297850', 'Ralph Braspenning', 'ralph braspenning')
Philips Research, Eindhoven, The Netherlands, e-mail: caifeng.shan@philips.com
Philips Research, Eindhoven, The Netherlands, e-mail: ralph.braspenning@philips.com
9a0c7a4652c49a177460b5d2fbbe1b2e6535e50aAutomatic and Quantitative evaluation of attribute discovery methods
The University of Queensland, School of ITEE
QLD 4072, Australia
('2499431', 'Liangchen Liu', 'liangchen liu')
('2331880', 'Arnold Wiliem', 'arnold wiliem')
('3104113', 'Shaokang Chen', 'shaokang chen')
('2270092', 'Brian C. Lovell', 'brian c. lovell')
l.liu9@uq.edu.au
a.wiliem@uq.edu.au
shaokangchenuq@gmail.com
lovell@itee.uq.edu.au
9ac15845defcd0d6b611ecd609c740d41f0c341dCopyright
by
2011
('1926834', 'Juhyun Lee', 'juhyun lee')
9ac43a98fe6fde668afb4fcc115e4ee353a6732dSurvey of Face Detection on Low-quality Images
Beckmann Institute, University of Illinois at Urbana-Champaign, USA
('1698743', 'Yuqian Zhou', 'yuqian zhou')
('1771885', 'Ding Liu', 'ding liu')
{yuqian2, dingliu2}@illinois.edu
huang@ifp.uiuc.edu
9af1cf562377b307580ca214ecd2c556e20df000Feb. 28
International Journal of Advanced Studies in Computer Science and Engineering
IJASCSE, Volume 4, Issue 2, 2015
Video-Based Facial Expression Recognition
Using Local Directional Binary Pattern
Electrical Engineering Dept., Amirkabir University of Technology
Tehran, Iran
('38519671', 'Sahar Hooshmand', 'sahar hooshmand')
('3232144', 'Ali Jamali Avilaq', 'ali jamali avilaq')
('3293075', 'Amir Hossein Rezaie', 'amir hossein rezaie')
9a23a0402ae68cc6ea2fe0092b6ec2d40f667adbHigh-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
1NVIDIA Corporation
2UC Berkeley
Figure 1: We propose a generative adversarial framework for synthesizing 2048 × 1024 images from semantic label maps
(lower left corner in (a)). Compared to previous work [5], our results express more natural textures and details. (b) We can
change labels in the original label map to create new scenes, like replacing trees with buildings. (c) Our framework also
allows a user to edit the appearance of individual objects in the scene, e.g. changing the color of a car or the texture of a road.
Please visit our website for more side-by-side comparisons as well as interactive editing demos.
('2195314', 'Ting-Chun Wang', 'ting-chun wang')
('2436356', 'Jun-Yan Zhu', 'jun-yan zhu')
('1690538', 'Jan Kautz', 'jan kautz')
9a4c45e5c6e4f616771a7325629d167a38508691A Facial Features Detector Integrating Holistic Facial Information and
Part-based Model
Eslam Mostafa1,2
Aly Farag1
CVIP Lab, University of Louisville, Louisville, KY 40292, USA
Alexandria University, Alexandria, Egypt
Assiut University, Assiut 71515, Egypt
4Kentucky Imaging Technology (KIT), Louisville, KY 40245, USA.
('28453046', 'Asem A. Ali', 'asem a. ali')
('2239392', 'Ahmed Shalaby', 'ahmed shalaby')
9af9a88c60d9e4b53e759823c439fc590a4b5bc5Learning Deep Convolutional Embeddings for Face Representation Using Joint
Sample- and Set-based Supervision
Department of Electrical and Electronic Engineering,
Imperial College London
('2151914', 'Baris Gecer', 'baris gecer')
('3288623', 'Vassileios Balntas', 'vassileios balntas')
('1700968', 'Tae-Kyun Kim', 'tae-kyun kim')
{b.gecer,v.balntas15,tk.kim}@imperial.ac.uk
9a7858eda9b40b16002c6003b6db19828f94a6c6MOONEY FACE CLASSIFICATION AND PREDICTION BY LEARNING ACROSS TONE
(cid:63) UC Berkeley / †ICSI
('2301765', 'Tsung-Wei Ke', 'tsung-wei ke')
('2251428', 'Stella X. Yu', 'stella x. yu')
('1821337', 'David Whitney', 'david whitney')
9a3535cabf5d0f662bff1d897fb5b777a412d82eUniversity of Kentucky
UKnowledge
Computer Science
Computer Science Faculty Publications
6-10-2015
Large-Scale Geo-Facial Image Analysis
Mohammed T. Islam
University of Kentucky
University of North Carolina at Charlotte
Click here to let us know how access to this document benefits you.
Follow this and additional works at: https://uknowledge.uky.edu/cs_facpub
Part of the Computer Sciences Commons
Repository Citation
Islam, Mohammed T.; Greenwell, Connor; Souvenir, Richard; and Jacobs, Nathan, "Large-Scale Geo-Facial Image Analysis" (2015).
Computer Science Faculty Publications. 7.
https://uknowledge.uky.edu/cs_facpub/7
This Article is brought to you for free and open access by the Computer Science at UKnowledge. It has been accepted for inclusion in Computer
('2121759', 'Connor Greenwell', 'connor greenwell')
('1690110', 'Richard Souvenir', 'richard souvenir')
('1990750', 'Nathan Jacobs', 'nathan jacobs')
University of Kentucky, connor.greenwell@uky.edu
University of Kentucky, nathan.jacobs@uky.edu
Science Faculty Publications by an authorized administrator of UKnowledge. For more information, please contact UKnowledge@lsv.uky.edu.
9abd35b37a49ee1295e8197aac59bde802a934f3Depth2Action: Exploring Embedded Depth for
Large-Scale Action Recognition
University of California, Merced
('1749901', 'Yi Zhu', 'yi zhu'){yzhu25,snewsam}@ucmerced.edu
9a276c72acdb83660557489114a494b86a39f6ffEmotion Classification through Lower Facial Expressions using Adaptive
Support Vector Machines
Department of Information Technology, Faculty of Industrial Technology and Management,
('2621463', 'Porawat Visutsak', 'porawat visutsak')King Mongkut’s University of Technology North Bangkok, porawatv@kmutnb.ac.th
9a1a9dd3c471bba17e5ce80a53e52fcaaad4373eAutomatic Recognition of Spontaneous Facial
Actions
Institute for Neural Computation, University of California, San Diego
University at Buffalo, State University of New York
('2218905', 'Marian Stewart Bartlett', 'marian stewart bartlett')
('21751782', 'Gwen C. Littlewort', 'gwen c. littlewort')
('2639526', 'Mark G. Frank', 'mark g. frank')
('2767464', 'Claudia Lainscsek', 'claudia lainscsek')
('2039025', 'Ian R. Fasel', 'ian r. fasel')
('1741200', 'Javier R. Movellan', 'javier r. movellan')
mbartlet@ucsd.edu, gwen@mplab.ucsd.edu, clainscsek@ucsd.edu, ianfasel@cogsci.ucsd.edu,
movellan@mplab.ucsd.edu
mfrank83@buffalo.edu
9a42c519f0aaa68debbe9df00b090ca446d25bc4Face Recognition via Centralized Coordinate
Learning
('2689287', 'Xianbiao Qi', 'xianbiao qi')
('1684635', 'Lei Zhang', 'lei zhang')
9aad8e52aff12bd822f0011e6ef85dfc22fe8466Temporal-Spatial Mapping for Action Recognition ('3865974', 'Xiaolin Song', 'xiaolin song')
('40093162', 'Cuiling Lan', 'cuiling lan')
('8434337', 'Wenjun Zeng', 'wenjun zeng')
('1757173', 'Junliang Xing', 'junliang xing')
('1759461', 'Jingyu Yang', 'jingyu yang')
('1692735', 'Xiaoyan Sun', 'xiaoyan sun')
36b40c75a3e53c633c4afb5a9309d10e12c292c7
363ca0a3f908859b1b55c2ff77cc900957653748International Journal of Computer Trends and Technology (IJCTT) – volume 1 Issue 3 Number 4 – Aug 2011
Local Binary Patterns and Linear Programming using
Facial Expression
Ms. P. Jennifer
Bharath Institute of Science and Technology
B.Tech (C.S.E), Bharath University, Chennai
Dr. A. Muthu kumaravel
Bharath Institute of Science and Technology
B.Tech (C.S.E), Bharath University, Chennai
36939e6a365e9db904d81325212177c9e9e76c54Assessing the Accuracy of Four Popular Face Recognition Tools for
Inferring Gender, Age, and Race
Qatar Computing Research Institute, HBKU
HBKU Research Complex, Doha, P.O. Box 34110, Qatar
('1861541', 'Soon-Gyo Jung', 'soon-gyo jung')
('40660541', 'Jisun An', 'jisun an')
('2592694', 'Haewoon Kwak', 'haewoon kwak')
('2734912', 'Joni Salminen', 'joni salminen')
{sjung,jan,hkwak,jsalminen,bjansen}@hbku.edu.qa
3646b42511a6a0df5470408bc9a7a69bb3c5d742International Journal of Computer Applications (0975 – 8887)
Applications of Computers and Electronics for the Welfare of Rural Masses (ACEWRM) 2015
Detection of Facial Parts based on ABLATA
Technical Campus, Bhilai
Vikas Singh
Technical Campus, Bhilai
Abha Choubey
Technical Campus, Bhilai
('9173769', 'Siddhartha Choubey', 'siddhartha choubey')
365f67fe670bf55dc9ccdcd6888115264b2a2c56
36fe39ed69a5c7ff9650fd5f4fe950b5880760b0Tracking Facial Expressions
with the Aid of Grid Structures
for the Classification of Pain-Relevant Action
Units
1Fraunhofer-Institut für Integrierte Schaltungen IIS, Erlangen,
2Otto-Friedrich-Universität Bamberg, 3Universitätsklinikum Erlangen
Abstract. In pain research, pain-relevant facial movements of subjects
are classified using the Facial Action Coding System.
Manual classification is laborious, and an automatic
(pre-)classification could increase the diagnostic value of these
analyses and support the clinical workflow. The rule-based approach
presented here enables automatic classification
without large training sets of pre-classified data. The method
detects and tracks facial movements, supported by a grid,
and assigns these movements to specific facial regions. With this
knowledge, the corresponding Action Units can be inferred
from the movements.
1 Introduction
Human sensations such as emotions or pain trigger specific patterns
of contractions of the facial musculature, which form the basis of
what we call facial expression. Conversely, human sensations can be
inferred from observing these expressions. In pain research,
video recordings of subjects are analyzed with respect to the facial
expression of pain. To describe this expression and
its changes, the Facial Action Coding System (FACS) [1] is used,
which, on an anatomical basis, describes the smallest visible muscle
movements in the face and categorizes them as individual Action Units (AUs).
Numerous studies have shown that specific patterns of Action Units occur
when subjects report pain [2]. Manually classifying and
marking the Action Units of subjects in video sequences requires
lengthy observation by trained FACS coders. An automatic
(pre-)classification can support the clinical workflow here and can turn this
procedure into a usable diagnostic instrument. Approaches realized so far
for recognizing facial expressions are based on the classification
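The rule-based scheme described above, in which grid-tracked facial movements are assigned to facial regions and Action Units are then inferred from regional motion, might be sketched as follows. The region names, motion directions, thresholds, and AU rules here are illustrative assumptions, not the authors' actual rule set.

```python
# Illustrative sketch of a rule-based Action Unit (AU) classifier:
# tracked grid-point displacements are aggregated per facial region,
# and simple threshold rules map regional movements to candidate AUs.
# Regions, directions, and thresholds below are hypothetical.

AU_RULES = {
    "AU4 (brow lowerer)":  ("brow",  "down"),
    "AU6 (cheek raiser)":  ("cheek", "up"),
    "AU9 (nose wrinkler)": ("nose",  "up"),
}

def infer_aus(displacements, threshold=2.0):
    """displacements: {region: (dx, dy)} mean grid motion in pixels,
    with +y pointing down (image coordinates)."""
    active = []
    for au, (region, direction) in AU_RULES.items():
        dx, dy = displacements.get(region, (0.0, 0.0))
        # movement along the rule's direction, in pixels
        moved = -dy if direction == "up" else dy
        if moved >= threshold:
            active.append(au)
    return active

print(infer_aus({"brow": (0.1, 3.5), "cheek": (0.0, -2.5)}))
# prints: ['AU4 (brow lowerer)', 'AU6 (cheek raiser)']
```

Because the rules are hand-written rather than learned, such a classifier needs no pre-classified training data, matching the motivation stated in the abstract.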
('31431972', 'Christine Barthold', 'christine barthold')
('2009811', 'Anton Papst', 'anton papst')
('1773752', 'Thomas Wittenberg', 'thomas wittenberg')
('1793798', 'Stefan Lautenbacher', 'stefan lautenbacher')
('1727734', 'Ute Schmid', 'ute schmid')
('2500903', 'Sven Friedl', 'sven friedl')
sven.friedl@iis.fraunhofer.de
36a3a96ef54000a0cd63de867a5eb7e84396de09Automatic Photo Orientation Detection with Convolutional Neural Networks
Dept. of Computer Science
University of Toronto
Toronto, Ontario, Canada
('40121109', 'Ujash Joshi', 'ujash joshi')
('1959343', 'Michael Guerzhoy', 'michael guerzhoy')
ujash.joshi@utoronto.ca, guerzhoy@cs.toronto.edu
36fc4120fc0638b97c23f97b53e2184107c52233National Conference on Innovative Paradigms in Engineering & Technology (NCIPET-2013)
Proceedings published by International Journal of Computer Applications® (IJCA)
Introducing Celebrities in an Images using HAAR
Cascade algorithm
Asst. Professor
PES Modern College of Engg
PES Modern College of Engg
PES Modern College of Engg
Shivaji Nagar, Pune
Shivaji Nagar, Pune
Shivaji Nagar, Pune
('12682677', 'Deipali V. Gore', 'deipali v. gore')
36ce0b68a01b4c96af6ad8c26e55e5a30446f360Multimed Tools Appl
DOI 10.1007/s11042-014-2322-6
Facial expression recognition based on a mlp neural
network using constructive training algorithm
Received: 5 February 2014 / Revised: 22 August 2014 / Accepted: 13 October 2014
© Springer Science+Business Media New York 2014
('1746834', 'Hayet Boughrara', 'hayet boughrara')
('3410172', 'Chokri Ben Amar', 'chokri ben amar')
3674f3597bbca3ce05e4423611d871d09882043bISSN 1796-2048
Volume 7, Number 4, August 2012
Contents
Special Issue: Multimedia Contents Security in Social Networks Applications
Guest Editors: Zhiyong Zhang and Muthucumaru Maheswaran
Guest Editorial
Zhiyong Zhang and Muthucumaru Maheswaran
SPECIAL ISSUE PAPERS
DRTEMBB: Dynamic Recommendation Trust Evaluation Model Based on Bidding
Gang Wang and Xiao-lin Gui
Block-Based Parallel Intra Prediction Scheme for HEVC
Jie Jiang, Baolong, Wei Mo, and Kefeng Fan
Optimized LSB Matching Steganography Based on Fisher Information
Yi-feng Sun, Dan-mei Niu, Guang-ming Tang, and Zhan-zhan Gao
A Novel Robust Zero-Watermarking Scheme Based on Discrete Wavelet Transform
Yu Yang, Min Lei, Huaqun Liu, Yajian Zhou, and Qun Luo
Stego Key Estimation in LSB Steganography
Jing Liu and Guangming Tang
REGULAR PAPERS
Facial Expression Spacial Charts for Describing Dynamic Diversity of Facial Expressions
277
279
289
295
303
309
314
('46575279', 'H. Madokoro', 'h. madokoro')
362bfeb28adac5f45b6ef46c07c59744b4ed6a52INCORPORATING SCALABILITY IN UNSUPERVISED SPATIO-TEMPORAL FEATURE
LEARNING
University of California, Riverside, CA
('49616225', 'Sujoy Paul', 'sujoy paul')
('2177805', 'Sourya Roy', 'sourya roy')
('1688416', 'Amit K. Roy-Chowdhury', 'amit k. roy-chowdhury')
360d66e210f7011423364327b7eccdf758b5fdd217th European Signal Processing Conference (EUSIPCO 2009)
Glasgow, Scotland, August 24-28, 2009
LOCAL FEATURE EXTRACTION METHODS FOR FACIAL EXPRESSION
RECOGNITION
School of Electrical and Computer Engineering, RMIT University
City Campus, Swanston St., Melbourne, Australia
http://www.rmit.edu.au
('1857490', 'Seyed Mehdi Lajevardi', 'seyed mehdi lajevardi')
('1749220', 'Zahir M. Hussain', 'zahir m. hussain')
seyed.lajevardi@rmit.edu.au, zmhussain@ieee.org
365866dc937529c3079a962408bffaa9b87c1f06 IJISET - International Journal of Innovative Science, Engineering & Technology, Vol. 1 Issue 3, May 2014.
www.ijiset.com
ISSN 2348 – 7968
Facial Feature Expression Based Approach for Human Face
Recognition: A Review
SSESA, Science College, Congress Nagar, Nagpur, (MS)-India
RTM Nagpur University, Campus Nagpur, (MS)-India
361c9ba853c7d69058ddc0f32cdbe94fbc2166d5Deep Reinforcement Learning of
Video Games
s2098407
September 29, 2017
MSc. Project
Artificial Intelligence
University of Groningen, The Netherlands
Supervisors
Dr. M.A. (Marco) Wiering
Prof. dr. L.R.B. (Lambert) Schomaker
ALICE Institute
University of Groningen
Nijenborgh 9, 9747 AG, Groningen, The Netherlands
('3405120', 'Jos van de Wolfshaar', 'jos van de wolfshaar')
368e99f669ea5fd395b3193cd75b301a76150f9dOne-to-many face recognition with bilinear CNNs
Aruni RoyChowdhury
University of Massachusetts, Amherst
Erik Learned-Miller
('2144284', 'Tsung-Yu Lin', 'tsung-yu lin')
('35208858', 'Subhransu Maji', 'subhransu maji')
{arunirc,tsungyulin,smaji,elm}@cs.umass.edu
362a70b6e7d55a777feb7b9fc8bc4d40a57cde8c978-1-4799-9988-0/16/$31.00 ©2016 IEEE
2792
ICASSP 2016
36df81e82ea5c1e5edac40b60b374979a43668a5ON-THE-FLY SPECIFIC PERSON RETRIEVAL
University of Oxford, United Kingdom
('3188342', 'Omkar M. Parkhi', 'omkar m. parkhi')
('1687524', 'Andrea Vedaldi', 'andrea vedaldi')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
{omkar,vedaldi,az}@robots.ox.ac.uk
3619a9b46ad4779d0a63b20f7a6a8d3d49530339SIMONYAN et al.: FISHER VECTOR FACES IN THE WILD
Fisher Vector Faces in the Wild
Visual Geometry Group
Department of Engineering Science
University of Oxford
('34838386', 'Karen Simonyan', 'karen simonyan')
('3188342', 'Omkar M. Parkhi', 'omkar m. parkhi')
('1687524', 'Andrea Vedaldi', 'andrea vedaldi')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
karen@robots.ox.ac.uk
omkar@robots.ox.ac.uk
vedaldi@robots.ox.ac.uk
az@robots.ox.ac.uk
366d20f8fd25b4fe4f7dc95068abc6c6cabe1194
3630324c2af04fd90f8668f9ee9709604fe980fdThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TCSVT.2016.2607345, IEEE
Transactions on Circuits and Systems for Video Technology
Image Classification with Tailored Fine-Grained
Dictionaries
('2287686', 'Xiangbo Shu', 'xiangbo shu')
('8053308', 'Jinhui Tang', 'jinhui tang')
('2272096', 'Guo-Jun Qi', 'guo-jun qi')
('3233021', 'Zechao Li', 'zechao li')
('1717861', 'Yu-Gang Jiang', 'yu-gang jiang')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
362ba8317aba71c78dafca023be60fb71320381d
36cf96fe11a2c1ea4d999a7f86ffef6eea7b5958RGB-D Face Recognition with Texture and
Attribute Features
Member, IEEE
('1931069', 'Gaurav Goswami', 'gaurav goswami')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
('39129417', 'Richa Singh', 'richa singh')
36e8ef2e5d52a78dddf0002e03918b101dcdb326Multiview Active Shape Models with SIFT Descriptors
for the 300-W Face Landmark Challenge
University of Cape Town
Anthropics Technology Ltd.
University of Cape Town
('2822258', 'Stephen Milborrow', 'stephen milborrow')
('1823550', 'Tom E. Bishop', 'tom e. bishop')
('2537623', 'Fred Nicolls', 'fred nicolls')
milbo@sonic.net
t.e.bishop@gmail.com
fred.nicolls@uct.ac.za
36018404263b9bb44d1fddaddd9ee9af9d46e560OCCLUDED FACE RECOGNITION BY USING GABOR
FEATURES
1 Department of Electrical And Electronics Engineering, METU, Ankara, Turkey
2 TÜBİTAK BİLTEN, Ankara, Turkey
('2920043', 'Burcu Kepenekci', 'burcu kepenekci')
('3110567', 'F. Boray Tek', 'f. boray tek')
('1929001', 'Gozde Bozdagi Akar', 'gozde bozdagi akar')
367f2668b215e32aff9d5122ce1f1207c20336c8Proceedings of the Pakistan Academy of Sciences 52 (1): 15–25 (2015)
Copyright © Pakistan Academy of Sciences
ISSN: 0377 - 2969 (print), 2306 - 1448 (online)
Pakistan Academy of Sciences
Research Article
Speaker-Dependent Human Emotion Recognition in
Unimodal and Bimodal Scenarios
University of Peshawar, Pakistan
University of Engineering and Technology
Sarhad University of Science and Information Technology
University of Peshawar, Peshawar, Pakistan
Peshawar, Pakistan
Peshawar, Pakistan
('34267835', 'Sanaul Haq', 'sanaul haq')
('3124216', 'Tariqullah Jan', 'tariqullah jan')
('1766329', 'Muhammad Asif', 'muhammad asif')
('1710701', 'Amjad Ali', 'amjad ali')
('40332145', 'Naveed Ahmad', 'naveed ahmad')
36c2db5ff76864d289781f93cbb3e6351f11984c17th European Signal Processing Conference (EUSIPCO 2009)
Glasgow, Scotland, August 24-28, 2009
ONE COLORED IMAGE BASED 2.5D HUMAN FACE RECONSTRUCTION
School of Electrical, Electronic and Computer Engineering
Newcastle University, Newcastle upon Tyne
England, United Kingdom
('1687577', 'Peng Liu', 'peng liu')Email: peng.liu2@ncl.ac.uk, w.l.woo@ncl.ac.uk, s.s.dlay@ncl.ac.uk
3661a34f302883c759b9fa2ce03de0c7173d2bb2Peak-Piloted Deep Network for Facial Expression
Recognition
University of California, San Diego 2 Carnegie Mellon University
AI Institute
National University of Singapore
Institute of Automation, Chinese Academy of Sciences
('8343585', 'Xiangyun Zhao', 'xiangyun zhao')
('1776665', 'Luoqi Liu', 'luoqi liu')
('1699559', 'Nuno Vasconcelos', 'nuno vasconcelos')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('1743598', 'Teng Li', 'teng li')
xiz019@ucsd.edu xdliang328@gmail.com liuluoqi@360.cn
tenglwy@gmail.com nvasconcelos@ucsd.edu eleyans@nus.edu.sg
36c473fc0bf3cee5fdd49a13cf122de8be736977Temporal Segment Networks: Towards Good
Practices for Deep Action Recognition
1Computer Vision Lab, ETH Zurich, Switzerland
The Chinese University of Hong Kong
Shenzhen Institutes of Advanced Technology, CAS, China
('33345248', 'Limin Wang', 'limin wang')
('3331521', 'Yuanjun Xiong', 'yuanjun xiong')
('1915826', 'Zhe Wang', 'zhe wang')
('33427555', 'Yu Qiao', 'yu qiao')
('1807606', 'Dahua Lin', 'dahua lin')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
('1681236', 'Luc Van Gool', 'luc van gool')
368d59cf1733af511ed8abbcbeb4fb47afd4da1cTo Frontalize or Not To Frontalize: A Study of Face Pre-Processing Techniques
and Their Impact on Recognition
RichardWebster1, Vitomir ˇStruc2, Patrick J. Flynn1 and Walter J. Scheirer1
University of Notre Dame, USA
Faculty of Electrical Engineering, University of Ljubljana, Slovenia
('40061203', 'Sandipan Banerjee', 'sandipan banerjee')
('6846673', 'Joel Brogan', 'joel brogan')
('5014060', 'Aparna Bharati', 'aparna bharati')
{janez.krizaj, vitomir.struc}@fe.uni-lj.si
{sbanerj1, jbrogan4, abharati, brichar1, flynn, wscheire}@nd.edu
366595171c9f4696ec5eef7c3686114fd3f116adAlgorithms and Representations for Visual
Recognition
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2012-53
http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-53.html
May 1, 2012
('35208858', 'Subhransu Maji', 'subhransu maji')
36b9f46c12240898bafa10b0026a3fb5239f72f3Collaborative Deep Reinforcement Learning for Joint Object Search
Peking University
Microsoft Research
Peking University
Microsoft Research
('2045334', 'Xiangyu Kong', 'xiangyu kong')
('1894653', 'Bo Xin', 'bo xin')
('36637369', 'Yizhou Wang', 'yizhou wang')
('1745420', 'Gang Hua', 'gang hua')
kong@pku.edu.cn
boxin@microsoft.com
yizhou.wang@pku.edu.cn
ganghua@microsoft.com
361d6345919c2edc5c3ce49bb4915ed2b4ee49beDelft University of Technology
Models for supervised learning in sequence data
Pei, Wenjie
DOI
10.4233/uuid:fff15717-71ec-402d-96e6-773884659f2c
Publication date
2018
Document Version
Publisher's PDF, also known as Version of record
Citation (APA)
Pei, W. (2018). Models for supervised learning in sequence data. DOI: 10.4233/uuid:fff15717-71ec-402d-96e6-773884659f2c
Important note
To cite this publication, please use the final published version (if applicable).
Please check the document version above.
Copyright
Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent
of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.
Takedown policy
Please contact us and provide details if you believe this document breaches copyrights.
We will remove access to the work immediately and investigate your claim.
This work is downloaded from Delft University of Technology
For technical reasons the number of authors shown on this cover page is limited to a maximum of 10.
3634b4dd263c0f330245c086ce646c9bb748cd6bTemporal Localization of Fine-Grained Actions in Videos
by Domain Transfer from Web Images
University of Southern California
Google, Inc
('1726241', 'Chen Sun', 'chen sun'){chensun,nevatia}@usc.edu
{sanketh,sukthankar}@google.com
367a786cfe930455cd3f6bd2492c304d38f6f488A Training Assistant Tool for the Automated Visual
Inspection System
A Thesis
Presented to
the Graduate School of
Clemson University
In Partial Fulfillment
of the Requirements for the Degree
Master of Science
Electrical Engineering
by
December 2015
Accepted by:
Dr. Adam W. Hoover, Committee Chair
Dr. Richard E. Groff
Dr. Yongqiang Wang
('4154752', 'Mohan Karthik Ramaraj', 'mohan karthik ramaraj')
5c4ce36063dd3496a5926afd301e562899ff53ea
5c6de2d9f93b90034f07860ae485a2accf529285Int. J. Biometrics, Vol. X, No. Y, xxxx
Compensating for pose and illumination in
unconstrained periocular biometrics
Department of Computer Science,
IT – Instituto de Telecomunicações,
University of Beira Interior
6200-Covilhã, Portugal
Fax: +351-275-319899
*Corresponding author
('1678263', 'Chandrashekhar N. Padole', 'chandrashekhar n. padole')
('1712429', 'Hugo Proença', 'hugo proença')
E-mail: chandupadole@ubi.pt
E-mail: hugomcp@di.ubi.pt
5cbe1445d683d605b31377881ac8540e1d17adf0On 3D Face Reconstruction via Cascaded Regression in Shape Space
College of Computer Science, Sichuan University, Chengdu, China
('50207647', 'Feng Liu', 'feng liu')
('39422721', 'Dan Zeng', 'dan zeng')
('1723081', 'Jing Li', 'jing li')
('7345195', 'Qijun Zhao', 'qijun zhao')
qjzhao@scu.edu.cn
5ca23ceb0636dfc34c114d4af7276a588e0e8dacTexture Representation in AAM using Gabor Wavelet
and Local Binary Patterns
School of Electronic Engineering,
Xidian University
Xi’an 710071, China
School of Computer Science and Information Systems,
Birkbeck College, University of London
London WC1E 7HX, U.K.
School of Computer Engineering,
Nanyang Technological University
50 Nanyang Avenue, Singapore 639798
School of Electronic Engineering,
Xidian University
Xi’an 710071, China
('5452186', 'Ya Su', 'ya su')
('1720243', 'Xuelong Li', 'xuelong li')
('1692693', 'Dacheng Tao', 'dacheng tao')
('10699750', 'Xinbo Gao', 'xinbo gao')
su1981ya@gmail.com
xuelong@dcs.bbk.ac.uk
dacheng.tao@gmail.com
xbgao@mail.xidian.edu.cn
5c2a7518fb26a37139cebff76753d83e4da25159
5c493c42bfd93e4d08517438983e3af65e023a87The Thirty-Second AAAI Conference
on Artificial Intelligence (AAAI-18)
Multimodal Keyless Attention
Fusion for Video Classification
Tsinghua University, 2Rutgers University, 3Baidu IDL
('1716690', 'Xiang Long', 'xiang long')
('2551285', 'Chuang Gan', 'chuang gan')
('1732213', 'Gerard de Melo', 'gerard de melo')
('48033101', 'Xiao Liu', 'xiao liu')
('48515099', 'Yandong Li', 'yandong li')
('9921390', 'Fu Li', 'fu li')
('35247507', 'Shilei Wen', 'shilei wen')
{longx13, ganc13}@mails.tsinghua.edu.cn, gdm@demelo.org, {liuxiao12, liyandong, lifu, wenshilei}@baidu.com
5cb83eba8d265afd4eac49eb6b91cdae47def26dFace Recognition with Local Line Binary Pattern
Mahanakorn University of Technology
51 Cheum-Sampan Rd., Nong Chok, Bangkok, THAILAND 10530
('2337544', 'Amnart Petpon', 'amnart petpon')
('1805935', 'Sanun Srisuk', 'sanun srisuk')
ta tee473@hotmail.com, sanun@mut.ac.th
5c8672c0d2f28fd5d2d2c4b9818fcff43fb01a48Robust Face Detection by Simple Means
Institute for Computer Graphics and Vision
Graz University of Technology, Austria
1 Motivation
Face detection is still one of the core problems in computer vision, especially in
unconstrained real-world situations where variations in face pose or bad imaging
conditions have to be handled. These problems are covered by recent benchmarks
such as the Face Detection Dataset and Benchmark (FDDB) [2], which reveals that
established methods, e.g., Viola and Jones [8], suffer a drop in performance. More
effective approaches exist, but are closed source and not publicly available. Thus,
we propose a simple but effective detector that is available to the public. It
combines Histograms of Oriented Gradients (HOG) [1] features with linear
Support Vector Machine (SVM) classification.
2 Technical Details
One important aspect in the training of our face detector is bootstrapping; we
therefore rely on iterative training. In particular, each iteration consists of first
describing the face patches by HOGs [1] and then learning a linear SVM. At the end
of each iteration, we use the preliminary detector to bootstrap hard examples and
enrich the training set. We perform several bootstrapping rounds to improve the
detector until the desired false-positive-per-window rate is reached. Interestingly,
we found that collecting false positives at multiple scales in a sliding-window
fashion yields better results than at a single scale. Testing several patch
sizes and HOG layouts revealed that a patch size of 36 by 36 pixels delivers the best
results. For the HOG descriptor, we ended up with a block size of 12x12 pixels and a
cell size of 4x4. Prior to the actual training, we gathered face crops from the
Annotated Facial Landmarks in the Wild (AFLW) dataset [4]. As AFLW includes the
coarse face pose, we are able to retrieve about 28k frontal faces by limiting the yaw
angle to ±π/6 and mirroring them. For each face we crop a square region between
forehead and chin. The non-face patches are obtained by randomly sampling at
multiple scales from the PASCAL VOC 2007 dataset, excluding the persons subset.
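The pipeline described above (per-cell orientation histograms, a linear SVM, and hard-negative bootstrapping) can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the simplified HOG omits block normalization, and the Pegasos-style SGD solver, cell size, and detection threshold are illustrative choices.

```python
import numpy as np

def hog_patch(patch, n_bins=9, cell=4):
    # Simplified HOG: per-cell unsigned-orientation histograms, L2-normalized
    # per cell (no block normalization, unlike the full descriptor).
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    h, w = patch.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            bins = (ang[i:i+cell, j:j+cell] / np.pi * n_bins).astype(int) % n_bins
            hist = np.bincount(bins.ravel(),
                               weights=mag[i:i+cell, j:j+cell].ravel(),
                               minlength=n_bins)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)

def train_linear_svm(X, y, lam=0.01, epochs=50):
    # Pegasos-style SGD on the hinge loss; y must be +1 (face) / -1 (non-face).
    w, b, t = np.zeros(X.shape[1]), 0.0, 0
    for _ in range(epochs):
        for i in np.random.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:      # margin violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w
    return w, b

def bootstrap(w, b, neg_patches, thresh=0.0):
    # Mine hard negatives: non-face patches the current detector still fires on.
    feats = np.stack([hog_patch(p) for p in neg_patches])
    return feats[feats @ w + b > thresh]
```

In an actual bootstrapping round, `bootstrap` would be run over patches sampled at multiple scales of the negative images, and the mined features appended to the training set before retraining.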
3 Results
In Figure 1 we report the performance of our final detector on the challenging
FDDB benchmark compared to state-of-the-art methods. Despite the simplicity
of our detector, it improves considerably over the boosted classifier cascade
of Viola and Jones [8] and even outperforms the recent work of Jain and
('3202367', 'Paul Wohlhart', 'paul wohlhart')
('1791182', 'Peter M. Roth', 'peter m. roth')
('3628150', 'Horst Bischof', 'horst bischof')
{koestinger,wohlhart,pmroth,bischof}@icg.tugraz.at
5c3dce55c61ee86073575ac75cc882a215cb49e6Neural Codes for Image Retrieval
Alexandr Chigorin1, and Victor Lempitsky2
1 Yandex, Russia
Skolkovo Institute of Science and Technology (Skoltech), Russia
Moscow Institute of Physics and Technology, Russia
('2412441', 'Artem Babenko', 'artem babenko')
('32829387', 'Anton Slesarev', 'anton slesarev')
5c2e264d6ac253693469bd190f323622c457ca05978-1-4799-2341-0/13/$31.00 ©2013 IEEE
4367
ICIP 2013
5c473cfda1d7c384724fbb139dfe8cb39f79f626
5c820e47981d21c9dddde8d2f8020146e600368fExtended Supervised Descent Method for
Robust Face Alignment
Beijing University of Posts and Telecommunications, Beijing, China
('9120475', 'Liu Liu', 'liu liu')
('23224233', 'Jiani Hu', 'jiani hu')
('1678529', 'Shuo Zhang', 'shuo zhang')
('1774956', 'Weihong Deng', 'weihong deng')
5c5e1f367e8768a9fb0f1b2f9dbfa060a22e75c02132
Reference Face Graph for Face Recognition
('1784929', 'Mehran Kafai', 'mehran kafai')
('39776603', 'Le An', 'le an')
('1707159', 'Bir Bhanu', 'bir bhanu')
5c35ac04260e281141b3aaa7bbb147032c887f0cFace Detection and Tracking Control with Omni Car
CS 231A Final Report
June 31, 2016
('2645488', 'Tung-Yu Wu', 'tung-yu wu')
5c435c4bc9c9667f968f891e207d241c3e45757aRUIZ-HERNANDEZ, CROWLEY, LUX: HOW OLD ARE YOU?
"How old are you?" : Age Estimation with
Tensors of Binary Gaussian Receptive Maps
INRIA Grenoble Rhones-Alpes
Research Center and Laboratoire
d’Informatique de Grenoble (LIG)
655 avenue de l’Europe
38 334 Saint Ismier Cedex, France
('2291512', 'John A. Ruiz-Hernandez', 'john a. ruiz-hernandez')
('34740185', 'James L. Crowley', 'james l. crowley')
('2599357', 'Augustin Lux', 'augustin lux')
john-alexander.ruiz-hernandez@inrialpes.fr
james.crowley@inrialpes.fr
augustin.lux@inrialpes.fr
5c7adde982efb24c3786fa2d1f65f40a64e2afbfRanking Domain-Specific Highlights
by Analyzing Edited Videos
University of Washington, Seattle, WA, USA
('1711801', 'Min Sun', 'min sun')
('2270286', 'Ali Farhadi', 'ali farhadi')
5c36d8bb0815fd4ff5daa8351df4a7e2d1b32934GeePS: Scalable deep learning on distributed GPUs
with a GPU-specialized parameter server
Carnegie Mellon University
('1874200', 'Henggang Cui', 'henggang cui')
('1682058', 'Hao Zhang', 'hao zhang')
('1707164', 'Gregory R. Ganger', 'gregory r. ganger')
('1974678', 'Phillip B. Gibbons', 'phillip b. gibbons')
('1752601', 'Eric P. Xing', 'eric p. xing')
5cfbeae360398de9e20e4165485837bd42b93217Cengil Emine, Cınars Ahmet, International Journal of Advance Research, Ideas and Innovations in Technology.
ISSN: 2454-132X
Impact factor: 4.295
(Volume3, Issue5)
Available online at www.ijariit.com
Comparison Of Hog (Histogram of Oriented Gradients) and
Haar Cascade Algorithms with a Convolutional Neural Network
Based Face Detection Approaches
Computer Engineering Department
Firat University
Computer Engineering Department
Firat University
('27758959', 'Emine Cengil', 'emine cengil')ecengil@firat.edu.tr
acinar@firat.edu.tr
5ca14fa73da37855bfa880b549483ee2aba26669ISSN (e): 2250 – 3005 || Volume, 07 || Issue, 07|| June – 2017 ||
International Journal of Computational Engineering Research (IJCER)
Face Recognition under Varying Illuminations Using Local
Binary Pattern And Local Ternary Pattern Fusion
Punjabi University Patiala
Punjabi University Patiala
('2029759', 'Reecha Sharma', 'reecha sharma')
5c02bd53c0a6eb361972e8a4df60cdb30c6e3930Multimedia stimuli databases usage patterns: a
survey report
M. Horvat1, S. Popović1 and K. Ćosić1
University of Zagreb, Faculty of Electrical Engineering and Computing
Department of Electric Machines, Drives and Automation
Zagreb, Croatia
marko.horvat2@fer.hr
5c8ae37d532c7bb8d7f00dfde84df4ba63f46297DiscrimNet: Semi-Supervised Action Recognition from Videos using Generative
Adversarial Networks
Georgia Institute of Technology
Google
Irfan Essa
Georgia Institute of Technology
('2308598', 'Unaiza Ahsan', 'unaiza ahsan')
('1726241', 'Chen Sun', 'chen sun')
uahsan3@gatech.edu
chensun@google.com
irfan@gatech.edu
5c717afc5a9a8ccb1767d87b79851de8d3016294978-1-4673-0046-9/12/$26.00 ©2012 IEEE
1845
ICASSP 2012
5ce2cb4c76b0cdffe135cf24b9cda7ae841c8d49Facial Expression Intensity Estimation Using Ordinal Information
Computer and Systems Engineering, Rensselaer Polytechnic Institute
School of Computer Science and Technology, University of Science and Technology of China
('1746803', 'Rui Zhao', 'rui zhao')
('2316359', 'Quan Gan', 'quan gan')
('1791319', 'Shangfei Wang', 'shangfei wang')
('1726583', 'Qiang Ji', 'qiang ji')
1{zhaor,jiq}@rpi.edu, 2{gqquan@mail.,sfwang@}ustc.edu.cn
5c4d4fd37e8c80ae95c00973531f34a6d810ea3aThe Open World of Micro-Videos
UC Irvine1, INRIA2, Carnegie Mellon University
('1879100', 'Phuc Xuan Nguyen', 'phuc xuan nguyen')
('1770537', 'Deva Ramanan', 'deva ramanan')
09b80d8eea809529b08a8b0ff3417950c048d474Adding Unlabeled Samples to Categories by Learned Attributes
University of Maryland, College Park
University of Washington
('3826759', 'Jonghyun Choi', 'jonghyun choi')
('2270286', 'Ali Farhadi', 'ali farhadi')
('1693428', 'Larry S. Davis', 'larry s. davis')
{jhchoi,mrastega,lsd}@umiacs.umd.edu
ali@cs.uw.edu
09f58353e48780c707cf24a0074e4d353da18934To appear in Proc. IEEE IJCB, 2014
Unconstrained Face Recognition: Establishing Baseline
Human Performance via Crowdsourcing
Michigan State University, East Lansing, MI, U.S.A
Cornell University, Ithaca, NY, U.S.A
3Noblis, Falls Church, VA, U.S.A.
('2180413', 'Lacey Best-Rowden', 'lacey best-rowden')
('2339748', 'Shiwani Bisht', 'shiwani bisht')
('2619953', 'Joshua C. Klontz', 'joshua c. klontz')
('6680444', 'Anil K. Jain', 'anil k. jain')
bestrow1@cse.msu.edu;sb854@cornell.edu;joshua.klontz@noblis.org;jain@cse.msu.edu
096eb8b4b977aaf274c271058feff14c99d46af3REPORT DOCUMENTATION PAGE
Form Approved OMB NO. 0704-0188
The public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA, 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any penalty for failing to comply with a collection of information if it does not display a currently valid OMB control number.
PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.
1. REPORT DATE (DD-MM-YYYY): 05-10-2012
2. REPORT TYPE: Conference Proceeding
3. DATES COVERED (From - To):
4. TITLE AND SUBTITLE: Multi-observation visual recognition via joint dynamic sparse representation
5a. CONTRACT NUMBER: W911NF-09-1-0383
5b. GRANT NUMBER:
5c. PROGRAM ELEMENT NUMBER: 611103
5d. PROJECT NUMBER:
5e. TASK NUMBER:
5f. WORK UNIT NUMBER:
6. AUTHORS: Huang
7. PERFORMING ORGANIZATION NAMES AND ADDRESSES: William Marsh Rice University, Office of Sponsored Research, Houston, TX 77005
8. PERFORMING ORGANIZATION REPORT NUMBER:
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES): U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211
10. SPONSOR/MONITOR'S ACRONYM(S): ARO
11. SPONSOR/MONITOR'S REPORT NUMBER(S): 56177-CS-MUR.84
12. DISTRIBUTION AVAILIBILITY STATEMENT
Approved for public release; distribution is unlimited.
13. SUPPLEMENTARY NOTES
The views, opinions and/or findings contained in this report are those of the author(s) and should not be construed as an official Department of the Army position, policy or decision, unless so designated by other documentation.
('40479011', 'Haichao Zhang', 'haichao zhang')
('8147588', 'Nasser M. Nasrabadi', 'nasser m. nasrabadi')
('1801395', 'Yanning Zhang', 'yanning zhang')
0952ac6ce94c98049d518d29c18d136b1f04b0c0
0969e0dc05fca21ff572ada75cb4b703c8212e80Article
Semi-Supervised Classification Based on
Low Rank Representation
College of Computer and Information Science, Southwest University, Chongqing 400715, China
Academic Editor: Javier Del Ser Lorente
Received: 1 June 2016; Accepted: 20 July 2016; Published: 22 July 2016
('40290479', 'Xuan Hou', 'xuan hou')
('3439025', 'Guangjun Yao', 'guangjun yao')
('40362316', 'Jun Wang', 'jun wang')
hx1995@email.swu.edu.cn (X.H.); guangjunyao@email.swu.edu.cn (G.Y.)
* Correspondence: kingjun@swu.edu.cn; Tel.: +86-23-6825-4396
09137e3c267a3414314d1e7e4b0e3a4cae801f45Noname manuscript No.
(will be inserted by the editor)
Two Birds with One Stone: Transforming and Generating
Facial Images with Iterative GAN
Received: date / Accepted: date
('49626434', 'Dan Ma', 'dan ma')
09dd01e19b247a33162d71f07491781bdf4bfd00Efficiently Scaling Up Video Annotation
with Crowdsourced Marketplaces
Department of Computer Science
University of California, Irvine, USA
('1856025', 'Carl Vondrick', 'carl vondrick')
('1770537', 'Deva Ramanan', 'deva ramanan')
{cvondric,dramanan,djp3}@ics.uci.edu
09cf3f1764ab1029f3a7d57b70ae5d5954486d69Comparison of ICA approaches for facial
expression recognition
I. Buciu 1,2 C. Kotropoulos 1
I. Pitas 1
Aristotle University of Thessaloniki
GR-541 24, Thessaloniki, Box 451, Greece
2 Electronics Department
Faculty of Electrical Engineering and Information Technology
University of Oradea 410087, Universitatii 1, Romania
August 18, 2008
DRAFT
costas,pitas@aiia.csd.auth.gr
ibuciu@uoradea.ro
09fa54f1ab7aaa83124d2415bfc6eb51e4b1f081Where to Buy It: Matching Street Clothing Photos in Online Shops
University of North Carolina at Chapel Hill
University of Illinois at Urbana-Champaign
('1772294', 'M. Hadi Kiapour', 'm. hadi kiapour')
('1682965', 'Xufeng Han', 'xufeng han')
('1749609', 'Svetlana Lazebnik', 'svetlana lazebnik')
('39668247', 'Alexander C. Berg', 'alexander c. berg')
('1685538', 'Tamara L. Berg', 'tamara l. berg')
{hadi,xufeng,tlberg,aberg}@cs.unc.edu
slazebni@illinois.edu
09926ed62511c340f4540b5bc53cf2480e8063f8Action Tubelet Detector for Spatio-Temporal Action Localization ('1881509', 'Vicky Kalogeiton', 'vicky kalogeiton')
('2492127', 'Philippe Weinzaepfel', 'philippe weinzaepfel')
('1749692', 'Vittorio Ferrari', 'vittorio ferrari')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
0951f42abbf649bb564a21d4ff5dddf9a5ea54d9Joint Estimation of Age and Gender from Unconstrained Face Images
using Lightweight Multi-task CNN for Mobile Applications
Institute of Information Science, Academia Sinica, Taipei
('1781429', 'Jia-Hong Lee', 'jia-hong lee')
('2679814', 'Yi-Ming Chan', 'yi-ming chan')
('2329177', 'Ting-Yen Chen', 'ting-yen chen')
('1720473', 'Chu-Song Chen', 'chu-song chen')
{honghenry.lee, yiming, timh20022002, song}@iis.sinica.edu.tw
09628e9116e7890bc65ebeabaaa5f607c9847baeSemantically Consistent Regularization for Zero-Shot Recognition
Department of Electrical and Computer Engineering
University of California, San Diego
('1797523', 'Pedro Morgado', 'pedro morgado')
('1699559', 'Nuno Vasconcelos', 'nuno vasconcelos')
{pmaravil,nuno}@ucsd.edu
09733129161ca7d65cf56a7ad63c17f493386027Face Recognition under Varying Illumination
Vienna University of Technology
Inst. of Computer Graphics and
Algorithms
Vienna, Austria
Istanbul Technical University
Department of Computer
Engineering
Istanbul, Turkey
Vienna University of Technology
Inst. of Computer Graphics and
Algorithms
Vienna, Austria
('1968256', 'Erald VUÇINI', 'erald vuçini')
('1766445', 'Muhittin GÖKMEN', 'muhittin gökmen')
('1725803', 'Eduard GRÖLLER', 'eduard gröller')
vucini@cg.tuwien.ac.at
gokmen@cs.itu.edu.tr
groeller@cg.tuwien.ac.at
097340d3ac939ce181c829afb6b6faff946cdce0Adding New Tasks to a Single Network with
Weight Transformations using Binary Masks
Sapienza University of Rome, 2Fondazione Bruno Kessler, 3University of Trento
Italian Institute of Technology, 5Mapillary Research
('38286801', 'Massimiliano Mancini', 'massimiliano mancini')
('40811261', 'Elisa Ricci', 'elisa ricci')
('3033284', 'Barbara Caputo', 'barbara caputo')
{mancini,caputo}@diag.uniroma1.it,eliricci@fbk.eu,samuel@mapillary.com
09507f1f1253101d04a975fc5600952eac868602Motion Feature Network: Fixed Motion Filter
for Action Recognition
Seoul National University, Seoul, South Korea
2 V.DO Inc., Suwon, Korea
('2647624', 'Myunggi Lee', 'myunggi lee')
('51151436', 'Seungeui Lee', 'seungeui lee')
('51136389', 'Gyutae Park', 'gyutae park')
('3160425', 'Nojun Kwak', 'nojun kwak')
{myunggi89, dehlix, sjson, pgt4861, nojunk}@snu.ac.kr
09718bf335b926907ded5cb4c94784fd20e5ccd8875
Recognizing Partially Occluded, Expression Variant
Faces From Single Training Image per Person
With SOM and Soft k-NN Ensemble
('2248421', 'Xiaoyang Tan', 'xiaoyang tan')
('1680768', 'Songcan Chen', 'songcan chen')
('1692625', 'Zhi-Hua Zhou', 'zhi-hua zhou')
('2375371', 'Fuyan Zhang', 'fuyan zhang')
098a1ccc13b8d6409aa333c8a1079b2c9824705bAttribute Pivots for Guiding Relevance Feedback in Image Search
The University of Texas at Austin
('1770205', 'Adriana Kovashka', 'adriana kovashka')
('1794409', 'Kristen Grauman', 'kristen grauman')
{adriana, grauman}@cs.utexas.edu
0903bb001c263e3c9a40f430116d1e629eaa616fCVPR
#987
CVPR 2009 Submission #987. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.
An Empirical Study of Context in Object Detection
Anonymous CVPR submission
Paper ID 987
090ff8f992dc71a1125636c1adffc0634155b450Topic-aware Deep Auto-encoders (TDA)
for Face Alignment
1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences
CAS), Institute of Computing Technology, CAS, Beijing 100190, China
University of Chinese Academy of Sciences, Beijing 100049, China
Imperial College London, London, UK
('1698586', 'Jie Zhang', 'jie zhang')
('1693589', 'Meina Kan', 'meina kan')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1874505', 'Xiaowei Zhao', 'xiaowei zhao')
('1710220', 'Xilin Chen', 'xilin chen')
09b43b59879d59493df2a93c216746f2cf50f4acDeep Transfer Metric Learning
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. 2Advanced Digital Sciences Center, Singapore
How to design a good similarity function plays an important role in many
visual recognition tasks. Recent advances have shown that learning a distance
metric directly from a set of training examples can usually achieve more
promising performance than hand-crafted distance metrics [2, 3]. While
many metric learning algorithms have been presented in recent years, two
shortcomings remain: 1) most of them seek a single linear distance metric to
map samples into a linear feature space, so that the nonlinear relationships
among samples cannot be well exploited; even if the kernel trick can be
employed to address the nonlinearity issue, these methods still suffer from
scalability problems because they cannot obtain explicit nonlinear mapping
functions; 2) most of them assume that the training and test samples are
captured in similar scenarios, so that their distributions are the same. This
assumption does not hold in many real visual recognition applications, where
samples are captured across datasets.
We propose a deep transfer metric learning (DTML) method for cross-
dataset visual recognition. Our method learns a set of hierarchical nonlinear
transformations by transferring discriminative knowledge from the labeled
source domain to the unlabeled target domain, under which the inter-class
variations are maximized and the intra-class variations are minimized, and
the distribution divergence between the source domain and the target do-
main at the top layer of the network is minimized, simultaneously. Figure 1
illustrates the basic idea of the proposed method.
Figure 1: The basic idea of the proposed DTML method. For each sample
in the training sets from the source domain and the target domain, we pass
it to the developed deep neural network. We enforce two constraints on
the outputs of all training samples at the top of the network: 1) the inter-
class variations are maximized and the intra-class variations are minimized,
and 2) the distribution divergence between the source domain and the target
domain at the top layer of the network is minimized.
Deep Metric Learning. We construct a deep neural network to compute the representation of each sample x. Assume the network has M + 1 layers, with p^(m) units in the m-th layer, where m = 1, 2, ..., M. The output of x at the m-th layer is computed as

$$f^{(m)}(x) = h^{(m)} = \varphi\big(W^{(m)} h^{(m-1)} + b^{(m)}\big) \in \mathbb{R}^{p^{(m)}}, \qquad (1)$$

where $W^{(m)} \in \mathbb{R}^{p^{(m)} \times p^{(m-1)}}$ and $b^{(m)} \in \mathbb{R}^{p^{(m)}}$ are the weight matrix and bias of the parameters in this layer, and $\varphi$ is a nonlinear activation function applied component-wise, e.g., the tanh or sigmoid function. The nonlinear mapping $f^{(m)}: \mathbb{R}^{d} \mapsto \mathbb{R}^{p^{(m)}}$ is parameterized by $\{W^{(i)}\}_{i=1}^{m}$ and $\{b^{(i)}\}_{i=1}^{m}$. For the first layer, we set $h^{(0)} = x$.

Each pair of samples $x_i$ and $x_j$ is finally represented as $f^{(m)}(x_i)$ and $f^{(m)}(x_j)$ at the m-th layer of our designed network, and their distance metric can be measured by the squared Euclidean distance:

$$d^{2}_{f^{(m)}}(x_i, x_j) = \big\| f^{(m)}(x_i) - f^{(m)}(x_j) \big\|_{2}^{2}. \qquad (2)$$

Following the graph embedding framework, we enforce the marginal Fisher analysis criterion [4] on the output of all training samples at the top layer and formulate a strongly-supervised deep metric learning method:

$$\min_{f^{(M)}} \; J = S^{(M)}_{c} - \alpha\, S^{(M)}_{b} + \gamma \sum_{m=1}^{M} \big( \|W^{(m)}\|_{F}^{2} + \|b^{(m)}\|_{2}^{2} \big), \qquad (3)$$

where $\alpha$ ($\alpha > 0$) is a free parameter which balances the importance between intra-class compactness and inter-class separability; $\|Z\|_F$ denotes the Frobenius norm of the matrix $Z$; and $\gamma$ ($\gamma > 0$) is a tunable positive regularization parameter. $S^{(m)}_{c}$ and $S^{(m)}_{b}$ define the intra-class compactness and the inter-class separability, which are defined as follows:

$$S^{(m)}_{c} = \frac{1}{N k_1} \sum_{i=1}^{N} \sum_{j=1}^{N} P_{ij}\, d^{2}_{f^{(m)}}(x_i, x_j), \qquad (4)$$

$$S^{(m)}_{b} = \frac{1}{N k_2} \sum_{i=1}^{N} \sum_{j=1}^{N} Q_{ij}\, d^{2}_{f^{(m)}}(x_i, x_j), \qquad (5)$$

where $P_{ij}$ is set as one if $x_j$ is one of the $k_1$ intra-class nearest neighbors of $x_i$, and zero otherwise; and $Q_{ij}$ is set as one if $x_j$ is one of the $k_2$ inter-class nearest neighbors of $x_i$, and zero otherwise.

Deep Transfer Metric Learning. Given target domain data $X_t$ and source domain data $X_s$, their probability distributions are usually different in the original feature space when they are captured from different datasets. To reduce the distribution difference, we apply the Maximum Mean Discrepancy (MMD) criterion [1] to measure their distribution difference at the m-th layer, which is defined as follows:

$$D^{(m)}_{ts}(X_t, X_s) = \bigg\| \frac{1}{N_t} \sum_{i=1}^{N_t} f^{(m)}(x_{ti}) - \frac{1}{N_s} \sum_{i=1}^{N_s} f^{(m)}(x_{si}) \bigg\|_{2}^{2}. \qquad (6)$$

By combining (3) and (6), we formulate DTML as the following optimization problem:

$$\min_{f^{(M)}} \; J = S^{(M)}_{c} - \alpha\, S^{(M)}_{b} + \beta\, D^{(M)}_{ts}(X_t, X_s) + \gamma \sum_{m=1}^{M} \big( \|W^{(m)}\|_{F}^{2} + \|b^{(m)}\|_{2}^{2} \big),$$

where $\beta$ ($\beta > 0$) weights the cross-domain MMD term.
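The quantities in Eqs. (1), (2) and (6) can be sketched in a few lines of NumPy. This is a minimal illustration under assumed tanh activations and illustrative layer sizes, not the authors' implementation; in the full method these terms would be combined into the objective and optimized by back-propagation.

```python
import numpy as np

def forward(x, weights, biases):
    # Eq. (1): h^(m) = phi(W^(m) h^(m-1) + b^(m)), with h^(0) = x.
    # phi is assumed here to be tanh (one of the activations named in the text).
    h = x
    for W, b in zip(weights, biases):
        h = np.tanh(W @ h + b)
    return h

def sq_dist(xi, xj, weights, biases):
    # Eq. (2): squared Euclidean distance between top-layer representations.
    d = forward(xi, weights, biases) - forward(xj, weights, biases)
    return float(d @ d)

def mmd(Xt, Xs, weights, biases):
    # Eq. (6): squared norm of the difference between the mean embeddings
    # of the target-domain and source-domain samples.
    ft = np.mean([forward(x, weights, biases) for x in Xt], axis=0)
    fs = np.mean([forward(x, weights, biases) for x in Xs], axis=0)
    return float(np.sum((ft - fs) ** 2))
```

With random weights, `sq_dist(x, x, ...)` is exactly zero and `mmd` vanishes when both domains contain the same samples, matching the definitions above.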
('34651153', 'Junlin Hu', 'junlin hu')
('1697700', 'Jiwen Lu', 'jiwen lu')
('1689805', 'Yap-Peng Tan', 'yap-peng tan')
09df62fd17d3d833ea6b5a52a232fc052d4da3f5ISSN: 1405-5546
Instituto Politécnico Nacional
México

Rivas Araiza, Edgar A.; Mendiola Santibañez, Jorge D.; Herrera Ruiz, Gilberto; González Gutiérrez,
Carlos A.; Trejo Perea, Mario; Ríos Moreno, G. J.
Mejora de Contraste y Compensación en Cambios de la Iluminación (Contrast Enhancement and Compensation for Illumination Changes)
Instituto Politécnico Nacional
Distrito Federal, México
Available at: http://www.redalyc.org/articulo.oa?id=61509703
How to cite the article
Complete issue
More information about the article
Journal homepage at redalyc.org
Scientific Information System
Network of Scientific Journals of Latin America, the Caribbean, Spain and Portugal
Non-profit academic project, developed under the open access initiative
computacion-y-sistemas@cic.ipn.mx
09b0ef3248ff8f1a05b8704a1b4cf64951575be9Recognizing Activities of Daily Living with a Wrist-mounted Camera
Graduate School of Information Science and Technology, The University of Tokyo
('8197937', 'Katsunori Ohnishi', 'katsunori ohnishi')
('2551640', 'Atsushi Kanehira', 'atsushi kanehira')
('2554424', 'Asako Kanezaki', 'asako kanezaki')
('1790553', 'Tatsuya Harada', 'tatsuya harada')
{ohnishi, kanehira, kanezaki, harada}@mi.t.u-tokyo.ac.jp
097104fc731a15fad07479f4f2c4be2e071054a2
094357c1a2ba3fda22aa6dd9e496530d784e1721A Unified Probabilistic Approach Modeling Relationships
between Attributes and Objects
Rensselaer Polytechnic Institute
110 Eighth Street, Troy, NY USA 12180
('40066738', 'Xiaoyang Wang', 'xiaoyang wang')
('1726583', 'Qiang Ji', 'qiang ji')
{wangx16,jiq}@rpi.edu
09f853ce12f7361c4b50c494df7ce3b9fad1d221myjournal manuscript No.
(will be inserted by the editor)
Random forests for real time 3D face analysis
Received: date / Accepted: date
('3092828', 'Gabriele Fanelli', 'gabriele fanelli')
('1681236', 'Luc Van Gool', 'luc van gool')
09111da0aedb231c8484601444296c50ca0b5388
09750c9bbb074bbc4eb66586b20822d1812cdb20978-1-4673-0046-9/12/$26.00 ©2012 IEEE
1385
ICASSP 2012
09ce14b84af2dc2f76ae1cf227356fa0ba337d07Face Reconstruction in the Wild
University of Washington
University of Washington and Google Inc
('2419955', 'Ira Kemelmacher-Shlizerman', 'ira kemelmacher-shlizerman')
('1679223', 'Steven M. Seitz', 'steven m. seitz')
kemelmi@cs.washington.edu
seitz@cs.washington.edu
090e4713bcccff52dcd0c01169591affd2af7e76What Do You Do? Occupation Recognition
in a Photo via Social Context
College of Computer and Information Science, Northeastern University, MA, USA
Northeastern University, MA, USA
('2025056', 'Ming Shao', 'ming shao')
('2897748', 'Liangyue Li', 'liangyue li')
mingshao@ccs.neu.edu, {liangyue, yunfu}@ece.neu.edu
097f674aa9e91135151c480734dda54af5bc4240Proc. VIIth Digital Image Computing: Techniques and Applications, Sun C., Talbot H., Ourselin S. and Adriaansen T. (Eds.), 10-12 Dec. 2003, Sydney
Face Recognition Based on Multiple Region Features
CSIRO Telecommunications & Industrial Physics
Australia
Tel: 612 9372 4104, Fax: 612 9372 4411, Email:
('40833472', 'Jiaming Li', 'jiaming li')
('1751724', 'Ying Guo', 'ying guo')
('39877973', 'Rong-yu Qiao', 'rong-yu qiao')
jiaming.li@csiro.au
5d485501f9c2030ab33f97972aa7585d3a0d59a7
5da740682f080a70a30dc46b0fc66616884463ecReal-Time Head Pose Estimation Using
Multi-Variate RVM on Faces in the Wild
Augmented Vision Research Group,
German Research Center for Artificial Intelligence (DFKI)
Trippstadter Str. 122, 67663 Kaiserslautern, Germany
Technical University of Kaiserslautern
http://www.av.dfki.de
('2585383', 'Mohamed Selim', 'mohamed selim')
('1771057', 'Alain Pagani', 'alain pagani')
('1807169', 'Didier Stricker', 'didier stricker')
{mohamed.selim,alain.pagani,didier.stricker}@dfki.de
5de5848dc3fc35e40420ffec70a407e4770e3a8dWebVision Database: Visual Learning and Understanding from Web Data
1 Computer Vision Laboratory, ETH Zurich
2 Google Switzerland
('1702619', 'Wen Li', 'wen li')
('33345248', 'Limin Wang', 'limin wang')
('1688012', 'Wei Li', 'wei li')
('2794259', 'Eirikur Agustsson', 'eirikur agustsson')
('1681236', 'Luc Van Gool', 'luc van gool')
5da139fc43216c86d779938d1c219b950dd82a4c1-4244-1437-7/07/$20.00 ©2007 IEEE
II - 205
ICIP 2007
5dc056fe911a3e34a932513abe637076250d96da
5d185d82832acd430981ffed3de055db34e3c653A Fuzzy Reasoning Model for Recognition
of Facial Expressions
Research Center CENTIA, Electronics and Mechatronics
Universidad de las Américas, 72820, Puebla, Mexico
Engineering Institute, Autonomous University of Baja California, Blvd. Benito Juárez
Insurgentes Este, 21280, Mexicali, Baja California, Mexico
3 Universidad Politécnica de Baja California, Mexicali, Baja California, Mexico
('1956337', 'Oleg Starostenko', 'oleg starostenko')
('20083621', 'Renan Contreras', 'renan contreras')
('1690236', 'Vicente Alarcón Aquino', 'vicente alarcón aquino')
('2069473', 'Oleg Sergiyenko', 'oleg sergiyenko')
{oleg.starostenko; renan.contrerasgz; vicente.alarcon; leticia.florespo; jorge.rodriguez}@udlap.mx
srgnk@iing.mxl.uabc.mx
vera-tyrsa@yandex.ru
5d233e6f23b1c306cf62af49ce66faac2078f967RESEARCH ARTICLE
Optimal Geometrical Set for Automated
Marker Placement to Virtualized Real-Time
Facial Emotions
School of Mechatronic Engineering, Universiti Malaysia Perlis, 02600, Ulu Pauh, Arau, Perlis, West Malaysia
('6962924', 'Vasanthan Maruthapillai', 'vasanthan maruthapillai')
('32588646', 'Murugappan Murugappan', 'murugappan murugappan')
* murugappan@unimap.edu.my
5dd496e58cfedfc11b4b43c4ffe44ac72493bf55Discriminative convolutional Fisher vector network for action recognition
School of Electrical Engineering and Computer Science
Queen Mary University of London
London E1 4NS, United Kingdom
('2685285', 'Petar Palasek', 'petar palasek')
('1744405', 'Ioannis Patras', 'ioannis patras')
p.palasek@qmul.ac.uk, i.patras@qmul.ac.uk
5db075a308350c083c3fa6722af4c9765c4b8fefThe Novel Method of Moving Target Tracking Eyes
Location based on SIFT Feature Matching and Gabor
Wavelet Algorithm
College of Computer and Information Engineering, Nanyang Institute of Technology
Henan Nanyang, 473004, China
* Tel.: 0086+13838972861
Sensors & Transducers, Vol. 154, Issue 7, July 2013, pp. 129-137

© 2013 by IFSA
http://www.sensorsportal.com
Received: 28 April 2013 /Accepted: 19 July 2013 /Published: 31 July 2013
('2266189', 'Jing Zhang', 'jing zhang')
('2732767', 'Caixia Yang', 'caixia yang')
('1809507', 'Kecheng Liu', 'kecheng liu')
* E-mail: eduzhangjing@163.com
5d7f8eb73b6a84eb1d27d1138965eb7aef7ba5cfRobust Registration of Dynamic Facial Sequences
('2046537', 'Evangelos Sariyanidi', 'evangelos sariyanidi')
('1781916', 'Hatice Gunes', 'hatice gunes')
('1713138', 'Andrea Cavallaro', 'andrea cavallaro')
5dcf78de4d3d867d0fd4a3105f0defae2234b9cb
5db4fe0ce9e9227042144758cf6c4c2de2042435INTERNATIONAL JOURNAL OF ELECTRICAL AND ELECTRONIC SYSTEMS RESEARCH, VOL.3, JUNE 2010
Recognition of Facial Expression Using Haar
Wavelet Transform
('2254697', 'M. Satiyan', 'm. satiyan')
5d88702cdc879396b8b2cc674e233895de99666bExploiting Feature Hierarchies with Convolutional Neural Networks
for Cultural Event Recognition
1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing, 100190, China
School of Computer Science, Carnegie Mellon University, 15213, USA
('1730228', 'Mengyi Liu', 'mengyi liu')
('1731144', 'Xin Liu', 'xin liu')
('38751558', 'Yan Li', 'yan li')
('1710220', 'Xilin Chen', 'xilin chen')
('7661726', 'Alexander G. Hauptmann', 'alexander g. hauptmann')
('1685914', 'Shiguang Shan', 'shiguang shan')
{mengyi.liu, xin.liu, yan.li}@vipl.ict.ac.cn, {xlchen, sgshan}@ict.ac.cn, alex@cs.cmu.edu
5d5cd6fa5c41eb9d3d2bab3359b3e5eb60ae194eFace Recognition Algorithms
June 16, 2010
Ion Marqués
Supervisor:
Manuel Graña
5d09d5257139b563bd3149cfd5e6f9eae3c34776Optics Communications 338 (2015) 77–89
Contents lists available at ScienceDirect
Optics Communications
journal homepage: www.elsevier.com/locate/optcom
Pattern recognition with composite correlation filters designed with
multi-objective combinatorial optimization
a Instituto Politécnico Nacional – CITEDI, Ave. del Parque 1310, Mesade Otay, Tijuana B.C. 22510, México
b Department of Computer Science, CICESE, Carretera Ensenada-Tijuana 3918, Ensenada B.C. 22860, México
c Instituto Tecnológico de Tijuana, Blvd. Industrial y Ave. ITR TijuanaS/N, Mesa de Otay, Tijuana B.C. 22500, México
d National Ignition Facility, Lawrence Livermore National Laboratory, Livermore, CA 94551, USA
a r t i c l e i n f o
a b s t r a c t
Article history:
Received 12 July 2014
Accepted 16 November 2014
Available online 23 October 2014
Keywords:
Object recognition
Composite correlation filters
Multi-objective evolutionary algorithm
Combinatorial optimization
Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Moreover, by employing a suggested binary-search procedure, a filter bank with a minimum number of filters can be constructed for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared, in terms of recognition performance and complexity, with existing state-of-the-art filters.
© Elsevier B.V. All rights reserved.
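The abstract's binary-search idea, finding the smallest filter bank that still meets a prespecified performance trade-off, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `meets_spec` is a hypothetical predicate standing in for the paper's performance-metric evaluation, and the sketch assumes it is monotone in bank size.

```python
def min_bank_size(max_size, meets_spec):
    """Smallest bank size k in [1, max_size] with meets_spec(k) True,
    assuming meets_spec is monotone (once met, stays met)."""
    lo, hi = 1, max_size
    while lo < hi:
        mid = (lo + hi) // 2
        if meets_spec(mid):
            hi = mid  # spec met: a smaller bank may still suffice
        else:
            lo = mid + 1  # spec missed: need more filters
    return lo

# Toy spec: suppose at least 6 filters are required.
print(min_bank_size(32, lambda k: k >= 6))  # 6
```

Each probe trains and evaluates one candidate bank, so the search costs O(log max_size) evaluations instead of trying every size.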
1. Introduction

Nowadays, object recognition receives much research interest due to its high impact on real-life activities, such as robotics, biometrics, and target tracking [1,2]. Object recognition consists of solving two essential tasks: detection of a target within an observed scene and determination of the exact position of the detected object. Different approaches can be used to address these tasks, namely feature-based methods [3–6] and template matching algorithms [7,8]. In feature-based methods, the observed scene is processed to extract relevant features of potential targets within the scene. Next, the extracted features are processed and analyzed to make decisions. Feature-based methods yield good results in many applications. However, they depend on several subjective decisions which often require optimization [9,10]. Correlation filtering, on the other hand, is a template matching process. In this approach, the coordinates of the maximum of the filter output are taken as estimates of the target coordinates in the observed scene. Correlation filters possess a solid mathematical basis and can be implemented at high rates by exploiting massive parallelism, either in hybrid opto-digital correlators [11,12] or in high-performance hardware such as graphics processing units (GPUs) [13] or field-programmable gate arrays (FPGAs) [14]. Additionally, these filters are capable of reliably recognizing a target in highly cluttered and noisy environments [8,15,16]. Moreover, they can estimate the position of the target within the scene very accurately [17]. Correlation filters are usually designed by optimizing various criteria [18,19]. The filters can be broadly classified into two main categories: analytical and composite filters. Analytical filters optimize a performance criterion using mathematical models of signals and noise [20,21]. Composite filters are constructed by combining several training templates, each of them representing an expected target view in the observed scene [22,21]. In practice, composite filters are effective against real-life degradations of targets such as rotations and scaling. Composite filters are synthesized by means of a supervised training process; thus, their performance highly depends on a proper selection of the image templates used for training [20,23]. Normally, the training templates are chosen by a designer in an ad hoc manner. Such a subjective procedure is not optimal. In addition, Kumar and Pochavsky [24] showed that the signal-to-noise ratio of a composite filter gradually decreases as the number of training templates increases. In order to synthesize composite filters with improved performance in terms of several competing metrics, a search and optimization strategy is required to automatically choose the set of training templates.

⁎ Corresponding author. Tel.: +52 664 623 1344x82856.
http://dx.doi.org/10.1016/j.optcom.2014.10.038
0030-4018/© Elsevier B.V. All rights reserved.
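The correlation filtering principle described above, correlate the scene with a template and take the coordinates of the output maximum as the target position estimate, can be sketched in a few lines of NumPy. This is a generic frequency-domain cross-correlation sketch, not the paper's composite-filter design; the toy scene and embedding position are illustrative.

```python
import numpy as np

def correlate(scene, filt):
    # Frequency-domain cross-correlation: IFFT(F(scene) * conj(F(filter))),
    # with the filter zero-padded to the scene size.
    S = np.fft.fft2(scene)
    H = np.fft.fft2(filt, s=scene.shape)
    return np.real(np.fft.ifft2(S * np.conj(H)))

def locate(scene, template):
    # The coordinates of the correlation peak estimate the target position.
    c = correlate(scene, template)
    return np.unravel_index(np.argmax(c), c.shape)

# Toy scene: a bright 5x5 patch embedded at (12, 20) in weak noise.
rng = np.random.default_rng(0)
template = np.ones((5, 5))
scene = 0.1 * rng.standard_normal((64, 64))
scene[12:17, 20:25] += 1.0
print(locate(scene, template))  # top-left corner of the detected target
```

The FFT-based product implements circular correlation in O(N log N), which is what makes the GPU/FPGA parallel implementations mentioned above attractive.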
('1908859', 'Victor H. Diaz-Ramirez', 'victor h. diaz-ramirez')
('14245397', 'Andres Cuevas', 'andres cuevas')
('1684262', 'Vitaly Kober', 'vitaly kober')
('2166904', 'Leonardo Trujillo', 'leonardo trujillo')
('37615801', 'Abdul Awwal', 'abdul awwal')
E-mail address: vdiazr@ipn.mx (V.H. Diaz-Ramirez).
5d479f77ecccfac9f47d91544fd67df642dfab3cLinking People in Videos with “Their” Names
Using Coreference Resolution
Stanford University, USA
Stanford University, USA
('34066479', 'Vignesh Ramanathan', 'vignesh ramanathan')
('2319608', 'Armand Joulin', 'armand joulin')
('40085065', 'Percy Liang', 'percy liang')
('3216322', 'Li Fei-Fei', 'li fei-fei')
{vigneshr,ajoulin,pliang,feifeili}@cs.stanford.edu
5d01283474b73a46d80745ad0cc0c4da14aae194
5d197c8cd34473eb6cde6b65ced1be82a3a1ed14A Face Image Database for Evaluating Out-of-Focus Blur
Qi Han, Qiong Li and Xiamu Niu
Harbin Institute of Technology, China
1. Introduction
Face recognition is one of the most popular research fields of computer vision and machine learning (Tores (2004); Zhao et al. (2003)). Along with the investigation of face recognition algorithms and systems, many face image databases have been collected (Gross (2005)). Face databases are important for the advancement of the research field. Because of the nonrigidity and complex 3D structure of the face, many factors influence the performance of face detection and recognition algorithms, such as pose, expression, age, brightness, contrast, noise, and blur. Some early face databases gathered under strictly controlled environments (Belhumeur et al. (1997); Samaria & Harter (1994); Turk & Pentland (1991)) only allow slight expression variation. To investigate the relationships between algorithms' performance and the above factors, more face databases with larger scale and various characters were built in the past years (Bailly-Bailliere et al. (2003); Flynn et al. (2003); Gao et al. (2008); Georghiades et al. (2001); Hallinan (1995); Phillips et al. (2000); Sim et al. (2003)). For instance, the "CAS-PEAL", "FERET", "CMU PIE", and "Yale B" databases include various poses (Gao et al. (2008); Georghiades et al. (2001); Phillips et al. (2000); Sim et al. (2003)); the "Harvard RL", "CMU PIE" and "Yale B" databases involve more than 40 different conditions in illumination (Georghiades et al. (2001); Hallinan (1995); Sim et al. (2003)); and the "BANCA" and "NDHID" databases contain over 10 times gathering (Bailly-Bailliere et al. (2003); Flynn et al. (2003)). These databases help researchers to evaluate and improve their algorithms for face detection, recognition, and other purposes.
Blur is not the most important but still a notable factor affecting the performance of a biometric system (Fronthaler et al. (2006); Zamani et al. (2007)). The main causes of blur are out-of-focus cameras and object motion, and out-of-focus blur is more significant in the application environment of face recognition (Eskicioglu & Fisher (1995); Kim et al. (1998); Tanaka et al. (2007); Yitzhaky & Kopeika (1996)). To investigate the influence of blur on a face recognition system, a face image database with different conditions of clarity and efficient blur evaluating algorithms are needed.
This chapter introduces a new face database built for the purpose of blur evaluation. The application environments of face recognition are analyzed first, then an image gathering scheme is designed. Two typical gathering facilities are used and the focus status is divided into 11 steps. Further, the blur assessment algorithms are summarized and a comparison between them is carried out on the various-clarity database.
5da2ae30e5ee22d00f87ebba8cd44a6d55c6855eThis is an Open Access document downloaded from ORCA, Cardiff University's institutional
repository: http://orca.cf.ac.uk/111659/
This is the author’s version of a work that was submitted to / accepted for publication.
Citation for final published version:
Krumhuber, Eva G, Lai, Yukun, Rosin, Paul and Hugenberg, Kurt 2018. When facial expressions
5df376748fe5ccd87a724ef31d4fdb579dab693fA Dashboard for Affective E-learning:
Data Visualization for Monitoring Online Learner Emotions
School of Computer Science
Carleton University
Canada
('2625368', 'Reza GhasemAghaei', 'reza ghasemaghaei')
('40230630', 'Ali Arya', 'ali arya')
('8547603', 'Robert Biddle', 'robert biddle')
Reza.GhasemAghaei@carleton.ca
31aa20911cc7a2b556e7d273f0bdd5a2f0671e0a
31b05f65405534a696a847dd19c621b7b8588263
31625522950e82ad4dffef7ed0df00fdd2401436Motion Representation with Acceleration Images
National Institute of Advanced Industrial Science and Technology (AIST
Tsukuba, Ibaraki, Japan
('1730200', 'Hirokatsu Kataoka', 'hirokatsu kataoka')
('1713046', 'Yun He', 'yun he')
('3393640', 'Soma Shirakabe', 'soma shirakabe')
('1732705', 'Yutaka Satoh', 'yutaka satoh')
{hirokatsu.kataoka, yun.he, shirakabe-s, yu.satou}@aist.go.jp
3167f415a861f19747ab5e749e78000179d685bcRankBoost with l1 regularization for Facial Expression Recognition and
Intensity Estimation
Rutgers University, Piscataway NJ 08854, USA
2National Laboratory of Pattern Recognition, Chinese Academy of Sciences Beijing, 100080, China
('39606160', 'Peng Yang', 'peng yang')
('1734954', 'Qingshan Liu', 'qingshan liu')
('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')
3107316f243233d45e3c7e5972517d1ed4991f91CVAE-GAN: Fine-Grained Image Generation through Asymmetric Training
University of Science and Technology of China
2Microsoft Research Asia,
('3093568', 'Jianmin Bao', 'jianmin bao')
('39447786', 'Dong Chen', 'dong chen')
('1716835', 'Fang Wen', 'fang wen')
('7179232', 'Houqiang Li', 'houqiang li')
('1745420', 'Gang Hua', 'gang hua')
jmbao@mail.ustc.edu.cn, lihq@ustc.edu.cn
{doch,fangwen,ganghua}@microsoft.com
318e7e6daa0a799c83a9fdf7dd6bc0b3e89ab24aSparsity in Dynamics of Spontaneous
Subtle Emotions: Analysis & Application
('35256518', 'Anh Cat Le Ngo', 'anh cat le ngo')
('2339975', 'John See', 'john see')
('6633183', 'Raphael C.-W. Phan', 'raphael c.-w. phan')
31c0968fb5f587918f1c49bf7fa51453b3e89cf7Deep Transfer Learning for Person Re-identification
('3447059', 'Mengyue Geng', 'mengyue geng')
('5765799', 'Yaowei Wang', 'yaowei wang')
('1700927', 'Tao Xiang', 'tao xiang')
('1705972', 'Yonghong Tian', 'yonghong tian')
313d5eba97fe064bdc1f00b7587a4b3543ef712aCompact Deep Aggregation for Set Retrieval
Visual Geometry Group, University of Oxford, UK
2 DeepMind
('6730372', 'Yujie Zhong', 'yujie zhong')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
{yujie,az}@robots.ox.ac.uk
relja@google.com
31e57fa83ac60c03d884774d2b515813493977b9
3137a3fedf23717c411483c7b4bd2ed646258401Joint Learning of Discriminative Prototypes
and Large Margin Nearest Neighbor Classifiers
Institute for Computer Graphics and Vision, Graz University of Technology
('3202367', 'Paul Wohlhart', 'paul wohlhart')
('1791182', 'Peter M. Roth', 'peter m. roth')
('3628150', 'Horst Bischof', 'horst bischof')
{koestinger,wohlhart,pmroth,bischof}@icg.tugraz.at
31c34a5b42a640b824fa4e3d6187e3675226143eShape and Texture based Facial Action and Emotion
Recognition
(Demonstration)
Department of Computer Science and Digital Technologies
Northumbria University
Newcastle, NE1 8ST, UK
('1712838', 'Li Zhang', 'li zhang')
('2801063', 'Kamlesh Mistry', 'kamlesh mistry')
{li.zhang, kamlesh.mistry, alamgir.hossain}@northumbria.ac.uk
316e67550fbf0ba54f103b5924e6537712f06beeMultimodal semi-supervised learning
for image classification
LEAR team, INRIA Grenoble, France
('2737253', 'Matthieu Guillaumin', 'matthieu guillaumin')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
31ef5419e026ef57ff20de537d82fe3cfa9ee741Facial Expression Analysis Based on
High Dimensional Binary Features
´Ecole Polytechique de Montr´eal, Universit´e de Montr´eal, Montr´eal, Canada
('3127597', 'Samira Ebrahimi Kahou', 'samira ebrahimi kahou')
('2558801', 'Pierre Froumenty', 'pierre froumenty')
{samira.ebrahimi-kahou, pierre.froumenty, christopher.pal}@polymtl.ca
31ea88f29e7f01a9801648d808f90862e066f9eaPublished as a conference paper at ICLR 2017
DEEP MULTI-TASK REPRESENTATION LEARNING:
A TENSOR FACTORISATION APPROACH
Queen Mary, University of London
('2653152', 'Yongxin Yang', 'yongxin yang')
('1697755', 'Timothy M. Hospedales', 'timothy m. hospedales')
{yongxin.yang, t.hospedales}@qmul.ac.uk
3176ee88d1bb137d0b561ee63edf10876f805cf0Recombinator Networks: Learning Coarse-to-Fine Feature Aggregation
University of Montreal, 2Cornell University, 3Ecole Polytechnique of Montreal, 4CIFAR
('25056820', 'Sina Honari', 'sina honari')
('2965424', 'Jason Yosinski', 'jason yosinski')
('1707326', 'Pascal Vincent', 'pascal vincent')
1{honaris, vincentp}@iro.umontreal.ca, 2yosinski@cs.cornell.edu, 3christopher.pal@polymtl.ca
31b58ced31f22eab10bd3ee2d9174e7c14c27c01
31835472821c7e3090abb42e57c38f7043dc3636Flow Counting Using Realboosted
Multi-sized Window Detectors
Lund University, Cognimatics AB
('38481779', 'Mikael Nilsson', 'mikael nilsson')
('3181258', 'Rikard Berthilsson', 'rikard berthilsson')
312b2566e315dd6e65bd42cfcbe4d919159de8a1An Accurate Algorithm for Generating a Music Playlist
International Journal of Computer Applications (0975 – 8887)
Volume 100– No.9, August 2014
based on Facial Expressions
Computer Science and Engineering Department
Amity School of Engineering & Technology,
Amity University, Noida, India
3152e89963b8a4028c4abf6e1dc19e91c4c5a8f4Exploring Stereotypes and Biased Data with the Crowd
Department of Computer Science
The University of Texas at Austin
Department of Computer Science
The University of Texas at Austin
Introduction
In 2016, Baidu and Google spent somewhere between
twenty and thirty billion dollars developing and acquir-
ing artificial intelligence and machine learning technolo-
gies (Bughin et al. 2017). A range of other sectors, includ
ing health care, education, and manufacturing, are also pre-
dicted to adopt these technologies at increasing rates. Ma-
chine learning and AI are proven to have the capacity to
greatly improve lives and spur innovation. However, as soci-
ety becomes increasingly dependent on these technologies,
it is crucial that we acknowledge some of the dangers, in-
cluding the capacity for these algorithms to absorb and am-
plify harmful cultural biases.
Algorithms are often praised for their objectivity, but ma-
chine learning algorithms have increasingly made news for a
number of problematic outcomes, ranging from Google Photos
mislabeling photos of Black people to
the judicial system using algorithms that are biased against
African Americans (Dougherty 2015; Angwin et al. 2016).
These harmful outcomes can be traced back to the data that
was used to train the models.
Machine learning applications put a heavy premium on
data quantity. Research communities generally believe that
the more training data there is, the better the learning out-
come of the models will be (Halevy, Norvig, and Pereira
2009). This has led to large scale data collection. How-
ever, unless extra care is taken by the researchers, these
large data sets will often contain bias that can profoundly
change the learning outcome. Even minimal bias within
a data set can end up being amplified by machine learn-
ing models, leading to skewed results. Researchers have
found that widely used image data sets imSitu and MS-
COCO, along with textual data sets mined from Google
News, contain significant gender bias (Zhao et al. 2017;
Bolukbasi et al. 2016). This research also found that train-
ing models with this data amplified the bias in the final out-
comes.
Once these algorithms have been improperly trained they
can then be implemented into feedback loops where systems
“define their own reality and use it to justify their results” as
Copyright © 2018 is held by the authors. Copies may be freely
made and distributed by others. Presented at the 2016 AAAI Conference
on Human Computation and Crowdsourcing (HCOMP).
Cathy O’Neil describes in her book Weapons of Math De-
struction. O’Neil discusses problematic systems like Pred-
Pol, a program that predicts where crimes are most likely to
occur based on past crime reports, which may unfairly target
poor communities.
It therefore becomes necessary to consider the bias that
may be introduced as a data set is being collected and to
attempt to prevent that bias from being absorbed by an al-
gorithm. We propose using the crowd to help uncover what
bias may reside in a specific data set.
The crowd has potential to be useful for this task. One
of the key difficulties in preventing bias is knowing what
to look for. The varied demographics of crowd workers pro-
vide an extended range of perspectives that can help uncover
stereotypes that may go unnoticed by a small group of re-
searchers. Some work has already been conducted in this
area, and Bolukbasi et al. (2016) found that the crowd was
useful in determining the level of stereotype associated with
biased words by asking the crowd to rate analogies such as
“she is to sewing as he is to carpentry”. We want to extend
our analysis to stereotypes beyond gender, including those
surrounding race and class.
The goal of our research is to contribute information about
how useful the crowd is at anticipating stereotypes that may
be biasing a data set without a researcher’s knowledge. The
results of the crowd’s prediction can potentially be used dur-
ing data collection to help prevent the suspected stereotypes
from introducing bias to the dataset. We conduct our re-
search by asking the crowd on Amazon’s Mechanical Turk
(AMT) to complete two similar Human Intelligence Tasks
(HITs) by suggesting stereotypes relating to their personal
experience. Our analysis of these responses focuses on de-
termining the level of diversity in the workers’ suggestions
and their demographics. Through this process we begin a
discussion on how useful the crowd can be in tackling this
difficult problem within machine learning data collection.
2 Related Work
2.1 Work on bias in data sets and amplification
As biased data sets get more coverage in the news, an in-
creasing amount of research has been conducted around de-
termining if data sets are biased and trying to mitigate the
('32193161', 'Zeyuan Hu', 'zeyuan hu')
('40410119', 'Julia Strout', 'julia strout')
iamzeyuanhu@utexas.edu
jstrout@utexas.edu
31ace8c9d0e4550a233b904a0e2aabefcc90b0e3Learning Deep Face Representation
Megvii Inc.
Megvii Inc.
Megvii Inc.
Megvii Inc.
Megvii Inc.
('1934546', 'Haoqiang Fan', 'haoqiang fan')
('2695115', 'Zhimin Cao', 'zhimin cao')
('1691963', 'Yuning Jiang', 'yuning jiang')
('2274228', 'Qi Yin', 'qi yin')
('2479859', 'Chinchilla Doudou', 'chinchilla doudou')
fhq@megvii.com
czm@megvii.com
jyn@megvii.com
yq@megvii.com
doudou@megvii.com
316d51aaa37891d730ffded7b9d42946abea837fCBMM Memo No. 23
April 27, 2015
Unsupervised learning of clutter-resistant visual
representations from natural videos
by
MIT, McGovern Institute, Center for Brains, Minds and Machines
('1694846', 'Qianli Liao', 'qianli liao')
31afdb6fa95ded37e5871587df38976fdb8c0d67QUANTIZED FUZZY LBP FOR FACE RECOGNITION
Jianfeng Ren
Junsong Yuan
BeingThere Centre, Institute of Media Innovation, Nanyang Technological University, 50 Nanyang Drive, Singapore 637553
School of Electrical & Electronics Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798
('3307580', 'Xudong Jiang', 'xudong jiang')
31d60b2af2c0e172c1a6a124718e99075818c408Robust Facial Expression Recognition using Near Infrared Cameras
The University of Tokyo
Electronics and Communication Engineering, Chuo University
('34415055', 'Hideki Hashimoto', 'hideki hashimoto')
('9181040', 'Takashi Kubota', 'takashi kubota')
31f1e711fcf82c855f27396f181bf5e565a2f58dUnconstrained Age Estimation with Deep Convolutional Neural Networks
Jun Cheng Chen1
University of Maryland
2Montgomery Blair High School
Rutgers University
('26988560', 'Rajeev Ranjan', 'rajeev ranjan')
('2349530', 'Sabrina Zhou', 'sabrina zhou')
('40080979', 'Amit Kumar', 'amit kumar')
('2943431', 'Azadeh Alavi', 'azadeh alavi')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
('9215658', 'Rama Chellappa', 'rama chellappa')
rranjan1@.umiacs.umd.edu, sabrina.zhou.m@gmail.com, {pullpull,akumar14,azadeh}@umiacs.umd.edu,
vishal.m.patel@rutgers.edu, Rama@umiacs.umd.edu
312afff739d1e0fcd3410adf78be1c66b3480396
3107085973617bbfc434c6cb82c87f2a952021b7Spatio-temporal Human Action Localisation and
Instance Segmentation in Temporally Untrimmed Videos
Oxford Brookes University
University of Oxford
Figure 1: A video sequence taken from the LIRIS-HARL dataset plotted in space-and time. (a) A top down view of the
video plotted with the detected action tubes of class ‘handshaking’ in green, and ‘person leaves baggage unattended’ in
red. Each action is located to be within a space-time tube. (b) A side view of the same space-time detections. Note that
no action is detected at the beginning of the video even though human motion is present. (c) The detection
and instance segmentation result of two actions occurring simultaneously in a single frame.
('3017538', 'Suman Saha', 'suman saha')
('1931660', 'Gurkirt Singh', 'gurkirt singh')
('3019396', 'Michael Sapienza', 'michael sapienza')
('1730268', 'Philip H. S. Torr', 'philip h. s. torr')
('1754181', 'Fabio Cuzzolin', 'fabio cuzzolin')
{suman.saha-2014, gurkirt.singh-2015, fabio.cuzzolin}@brookes.ac.uk
{michael.sapienza, philip.torr}@eng.ox.ac.uk
31182c5ffc8c5d8772b6db01ec98144cd6e4e8973D Face Reconstruction with Region Based Best Fit Blending Using
Mobile Phone for Virtual Reality Based Social Media
VALGMA 1∗
iCV Research Group, Institute of Technology, University of Tartu, Tartu 50411, Estonia
Hasan Kalyoncu University, Gaziantep, Turkey
('3087532', 'Gholamreza Anbarjafari', 'gholamreza anbarjafari')
('35447268', 'Rain Eric Haamer', 'rain eric haamer')
('7296001', 'Iiris Lüsi', 'iiris lüsi')
('12602781', 'Toomas Tikk', 'toomas tikk')
31bb49ba7df94b88add9e3c2db72a4a98927bb05
3146fabd5631a7d1387327918b184103d06c2211Person-independent 3D Gaze Estimation using Face Frontalization
L´aszl´o A. Jeni
Carnegie Mellon University
University of Pittsburgh
Pittsburgh, PA, USA
Pittsburgh, PA, USA
Figure 1: From a 2D image of a person’s face (a) a dense, part-based 3D deformable model is aligned (b) to reconstruct a partial frontal
view of the face (c). Binary features are extracted around eye and pupil markers (d) for the 3D gaze calculation (e).
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
laszlojeni@cmu.edu
jeffcohn@pitt.edu
91811203c2511e919b047ebc86edad87d985a4faExpression Subspace Projection for Face
Recognition from Single Sample per Person
('1782221', 'Hoda Mohammadzade', 'hoda mohammadzade')
91495c689e6e614247495c3f322d400d8098de43A Deep-Learning Approach to Facial Expression Recognition
with Candid Images
Wei Li
CUNY City College
Min Li
Alibaba. Inc
Zhong Su
IBM China Research Lab
Zhigang Zhu
CUNY Graduate Center and City College
lwei000@citymail.cuny.edu
mushi.lm@alibaba.inc
suzhong@cn.ibm.com
zhu@cs.ccny.cuny.edu
910524c0d0fe062bf806bb545627bf2c9a236a03Master Thesis
Improvement of Facial Expression Recognition through the
Evaluation of Dynamic and Static Features in Video Sequences
Submitted by:
Dated:
24th June, 2008
Supervisors:
Otto-von-Guericke University Magdeburg
Faculty of Computer Science
Department of Simulation und Graphics
Otto-von-Guericke University Magdeburg
Faculty of Electrical Engineering and Information Technology
Institute for Electronics, Signal Processing and Communications
('1692049', 'Klaus Toennies', 'klaus toennies')
('1741165', 'Ayoub Al-Hamadi', 'ayoub al-hamadi')
9117fd5695582961a456bd72b157d4386ca6a174Facial Expression Recognition Using Deep Neural Networks
Department of Electrical and Electronic Engineering
The University of Hong Kong, Pokfulam
Hong Kong
('8550244', 'Junnan Li', 'junnan li')
('1725389', 'Edmund Y. Lam', 'edmund y. lam')
91df860368cbcebebd83d59ae1670c0f47de171dCOCO Attributes:
Attributes for People, Animals, and Objects
Microsoft Research
Georgia Institute of Technology
('40541456', 'Genevieve Patterson', 'genevieve patterson')
('12532254', 'James Hays', 'james hays')
gen@microsoft.com
hays@gatech.edu
91067f298e1ece33c47df65236853704f6700a0bIJSTE - International Journal of Science Technology & Engineering | Volume 2 | Issue 11 | May 2016
ISSN (online): 2349-784X
Local Binary Pattern and Local Linear
Regression for Pose Invariant Face Recognition
M. Tech Student

Shreekumar T
Associate Professor
Department of Computer Science & Engineering
Department of Computer Science & Engineering
Mangalore Institute of Engineering and Technology, Badaga
Mangalore Institute of Engineering and Technology, Badaga
Mijar, Moodbidri, Mangalore
Mijar, Moodbidri, Mangalore
Karunakara K
Professor & Head of Dept.
Department of Information Science & Engineering
Sri SidarthaInstitute of Technology, Tumkur
919d3067bce76009ce07b070a13728f549ebba49International Journal of Scientific and Research Publications, Volume 4, Issue 6, June 2014
ISSN 2250-3153
1
Time Based Re-ranking for Web Image Search
Ms. A.Udhayabharadhi *, Mr. R.Ramachandran **
MCA Student, Sri Manakula Vinayagar Engineering College, Pondicherry
Sri Manakula Vinayagar Engineering College, Pondicherry
9110c589c6e78daf4affd8e318d843dc750fb71aChapter 6
Facial Expression Synthesis Based on Emotion
Dimensions for Affective Talking Avatar
1 Key Laboratory of Pervasive Computing, Ministry of Education
Tsinghua National Laboratory for Information Science and Technology
Department of Computer Science and Technology,
Tsinghua University, Beijing 100084, China
Tsinghua-CUHK Joint Research Center for Media Sciences
Technologies and Systems,
Graduate School at Shenzhen, Tsinghua University, Shenzhen
3 Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong, HKSAR, China
('2180849', 'Shen Zhang', 'shen zhang')
('3860920', 'Zhiyong Wu', 'zhiyong wu')
('1702243', 'Helen M. Meng', 'helen m. meng')
('7239047', 'Lianhong Cai', 'lianhong cai')
zhangshen05@mails.tsinghua.edu.cn, john.zy.wu@gmail.com,
hmmeng@se.cuhk.edu.hk, clh-dcs@tsinghua.edu.cn
91e57667b6fad7a996b24367119f4b22b6892ecaProbabilistic Corner Detection for Facial Feature
Extraction
Article
Accepted version
E. Ardizzone, M. La Cascia, M. Morana
In Lecture Notes in Computer Science Volume 5716, 2009
It is advisable to refer to the publisher's version if you intend to cite
from the work.
Publisher: Springer
http://link.springer.com/content/pdf/10.1007%2F978-3-
642-04146-4_50.pdf
91883dabc11245e393786d85941fb99a6248c1fb
917bea27af1846b649e2bced624e8df1d9b79d6fUltra Power-Efficient CNN Domain Specific Accelerator with 9.3TOPS/Watt for
Mobile and Embedded Applications
Gyrfalcon Technology Inc.
1900 McCarthy Blvd. Milpitas, CA 95035
('47935028', 'Baohua Sun', 'baohua sun')
('49576071', 'Lin Yang', 'lin yang')
('46195424', 'Patrick Dong', 'patrick dong')
('49039276', 'Wenhan Zhang', 'wenhan zhang')
('35287113', 'Jason Dong', 'jason dong')
('48990565', 'Charles Young', 'charles young')
{baohua.sun,lin.yang,patrick.dong,wenhan.zhang,jason.dong,charles.yang}@gyrfalcontech.com
91b1a59b9e0e7f4db0828bf36654b84ba53b0557This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI
Simultaneous Hallucination and Recognition of Low-Resolution Faces Based on Singular Value Decomposition (SVD)
('1783889', 'Muwei Jian', 'muwei jian')
('1703078', 'Kin-Man Lam', 'kin-man lam')
911bef7465665d8b194b6b0370b2b2389dfda1a1RANJAN, ROMERO, BLACK: LEARNING HUMAN OPTICAL FLOW
Learning Human Optical Flow
1 MPI for Intelligent Systems
Tübingen, Germany
2 Amazon Inc.
('1952002', 'Anurag Ranjan', 'anurag ranjan')
('39040964', 'Javier Romero', 'javier romero')
('2105795', 'Michael J. Black', 'michael j. black')
aranjan@tuebingen.mpg.de
javier@amazon.com
black@tuebingen.mpg.de
91ead35d1d2ff2ea7cf35d15b14996471404f68dCombining and Steganography of 3D Face Textures
('38478675', 'Mohsen Moradi', 'mohsen moradi')
919d0e681c4ef687bf0b89fe7c0615221e9a1d30
912a6a97af390d009773452814a401e258b77640
91d513af1f667f64c9afc55ea1f45b0be7ba08d4Automatic Face Image Quality Prediction ('2180413', 'Lacey Best-Rowden', 'lacey best-rowden')
('6680444', 'Anil K. Jain', 'anil k. jain')
91e507d2d8375bf474f6ffa87788aa3e742333ceRobust Face Recognition Using Probabilistic
Facial Trait Code
†Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology
Graduate Institute of Networking and Multimedia, National Taiwan University
('1822733', 'Ping-Han Lee', 'ping-han lee')
('38801529', 'Gee-Sern Hsu', 'gee-sern hsu')
('2250469', 'Szu-Wei Wu', 'szu-wei wu')
('1732064', 'Yi-Ping Hung', 'yi-ping hung')
918b72a47b7f378bde0ba29c908babf6dab6f833
91e58c39608c6eb97b314b0c581ddaf7daac075ePixel-wise Ear Detection with Convolutional
Encoder-Decoder Networks
('31834768', 'Luka Lan Gabriel', 'luka lan gabriel')
('34862665', 'Peter Peer', 'peter peer')
91d2fe6fdf180e8427c65ffb3d895bf9f0ec4fa0
9103148dd87e6ff9fba28509f3b265e1873166c9Face Analysis using 3D Morphable Models
Submitted for the Degree of
Doctor of Philosophy
from the
University of Surrey
Centre for Vision, Speech and Signal Processing
Faculty of Engineering and Physical Sciences
University of Surrey
Guildford, Surrey GU2 7XH, U.K.
April 2015
('38819702', 'Guosheng Hu', 'guosheng hu')
('38819702', 'Guosheng Hu', 'guosheng hu')
9131c990fad219726eb38384976868b968ee9d9cDeep Facial Expression Recognition: A Survey ('39433609', 'Shan Li', 'shan li')
('1774956', 'Weihong Deng', 'weihong deng')
911505a4242da555c6828509d1b47ba7854abb7aIMPROVED ACTIVE SHAPE MODEL FOR FACIAL FEATURE LOCALIZATION
National Formosa University, Taiwan
('1711364', 'Hui-Yu Huang', 'hui-yu huang')
('2782376', 'Shih-Hang Hsu', 'shih-hang hsu')
Email: hyhuang@nfu.edu.tw
915d4a0fb523249ecbc88eb62cb150a60cf60fa0Comparison of Feature Extraction Techniques in Automatic
Face Recognition Systems for Security Applications
S. Cruz-Llanas, J. Ortega-Garcia, E. Martinez-Torrico, J. Gonzalez-Rodriguez
Dpto. Ingenieria Audiovisual y Comunicaciones, EUIT Telecomunicacion, Univ. Politécnica de Madrid, Spain
http://www.atvs.diac.upm.es
{cruzll, jortega, etorrico, jgonzalz}@atvs.diac.upm.es.
65126e0b1161fc8212643b8ff39c1d71d262fbc1Occlusion Coherence: Localizing Occluded Faces with a
Hierarchical Deformable Part Model
University of California, Irvine
('1898210', 'Golnaz Ghiasi', 'golnaz ghiasi'){gghiasi,fowlkes}@ics.uci.edu
65b737e5cc4a565011a895c460ed8fd07b333600Transfer Learning For Cross-Dataset Recognition: A Survey
This paper summarises and analyses cross-dataset transfer learning techniques, with an emphasis on which methods can be used when the available source and target data are presented in different forms for boosting the target task. For the first time, it summarises several transfer criteria in detail at the concept level; these criteria are the key bases that guide what kind of knowledge to transfer between datasets. In addition, a taxonomy of cross-dataset scenarios and problems is proposed according to the properties of the data that define how different datasets diverge, and recent advances on each specific problem under different scenarios are reviewed. Moreover, some real-world applications and commonly used benchmarks of cross-dataset recognition are reviewed. Lastly, several future directions are identified.
Additional Key Words and Phrases: Cross-dataset, transfer learning, domain adaptation
1. INTRODUCTION
How humans transfer learning from one context to another similar context has been explored in the fields of psychology and education [Woodworth and Thorndike 1901; Perkins et al. 1992]. For example, learning to drive a car helps a person later learn more quickly to drive a truck, and learning mathematics prepares students to study physics. Machine learning algorithms are largely inspired by the human brain. However, most of them require a huge number of training examples to learn a new model from scratch and fail to apply knowledge learned from previous domains or tasks. This may be because a basic assumption of statistical learning theory is that the training and test data are drawn from the same distribution and belong to the same task. Intuitively, learning from scratch is neither realistic nor practical, because it violates how humans learn. In addition, manually labelling a large amount of data for a new domain or task is labour intensive, especially for the modern "data-hungry" and "data-driven" learning techniques (i.e., deep learning). However, the big data era provides a huge amount of data collected for other domains and tasks. Hence, using previously available data smartly for a current task with scarce data will be beneficial for real-world applications.
To reuse previous knowledge for current tasks, the differences between the old and new data need to be taken into account. Take object recognition as an example. As claimed by Torralba and Efros [2011], despite the great efforts of object-dataset creators, the datasets appear to have strong built-in biases caused by various factors, such as selection bias, capture bias, category or label bias, and negative-set bias. This suggests that no matter how big a dataset is, it cannot cover the complexity of the real visual world. Hence, dataset bias needs to be considered before reusing data from previous datasets. Pan and Yang [2010] summarise that the differences between datasets can be caused by domain divergence (i.e., distribution shift or feature-space difference), task divergence (i.e., conditional distribution shift or label-space difference), or both. For example, in visual recognition, the distributions of the previous and current data can diverge due to different environments, lighting, backgrounds, sensor types, resolutions, view angles, and post-processing. These external factors may cause distribution divergence or even feature-space divergence between domains. On the other hand, task divergence between current and previous data is also ubiquitous. For example, it is highly possible that an animal species we want to recognize has not been seen
ACM Journal Name, Vol. V, No. N, Article A, Publication date: January YYYY.
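The distribution-shift form of domain divergence discussed above can be made concrete with a toy sketch. The linear-kernel maximum mean discrepancy (MMD) used here is a standard divergence proxy chosen purely for illustration; it is not a method prescribed by this survey:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features from two "domains": same task, shifted distribution
# (e.g., images captured under different lighting or sensors).
source = rng.normal(loc=0.0, scale=1.0, size=(1000, 16))
target = rng.normal(loc=0.8, scale=1.3, size=(1000, 16))

def mmd_linear(x, y):
    """Linear-kernel maximum mean discrepancy: a scalar proxy for
    the distribution shift between two feature samples."""
    delta = x.mean(axis=0) - y.mean(axis=0)
    return float(delta @ delta)

shift = mmd_linear(source, target)
same = mmd_linear(source, rng.normal(0.0, 1.0, size=(1000, 16)))
print(f"source vs target MMD: {shift:.3f}")  # large: domains diverge
print(f"source vs source MMD: {same:.3f}")   # near zero
```

A domain-adaptation method would aim to reduce such a divergence in a learned feature space before transferring a classifier.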
('38791459', 'Jing Zhang', 'jing zhang')
('1685696', 'Wanqing Li', 'wanqing li')
('1719314', 'Philip Ogunbona', 'philip ogunbona')
6582f4ec2815d2106957215ca2fa298396dde274JUNE 2007
1005
Discriminative Learning and Recognition
of Image Set Classes Using
Canonical Correlations
('1700968', 'Tae-Kyun Kim', 'tae-kyun kim')
('1748684', 'Josef Kittler', 'josef kittler')
('1745672', 'Roberto Cipolla', 'roberto cipolla')
65b1760d9b1541241c6c0222cc4ee9df078b593aEnhanced Pictorial Structures for Precise Eye Localization
Under Uncontrolled Conditions
1Department of Computer Science and Engineering
Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
2National Key Laboratory for Novel Software Technology
Nanjing University, Nanjing 210093, China
('2248421', 'Xiaoyang Tan', 'xiaoyang tan')
('3075941', 'Fengyi Song', 'fengyi song')
('1692625', 'Zhi-Hua Zhou', 'zhi-hua zhou')
('1680768', 'Songcan Chen', 'songcan chen')
{x.tan, f.song, s.chen}@nuaa.edu.cn
zhouzh@lamda.nju.edu.cn
65d7f95fcbabcc3cdafc0ad38e81d1f473bb6220Face Recognition for the Visually Impaired
King Saud University, Riyadh, Saudi Arabia
2ISM-TEC LLC, Wilmington, Delaware, U.S.A
University of Georgia, Athens, GA, U.S.A
('2278811', 'Rabia Jafri', 'rabia jafri')
('2227653', 'Syed Abid Ali', 'syed abid ali')
('1712033', 'Hamid R. Arabnia', 'hamid r. arabnia')
65bba9fba03e420c96ec432a2a82521ddd848c09Connectionist Temporal Modeling for Weakly
Supervised Action Labeling
Stanford University
('38485317', 'De-An Huang', 'de-an huang')
('3216322', 'Li Fei-Fei', 'li fei-fei')
('9200530', 'Juan Carlos Niebles', 'juan carlos niebles')
{dahuang,feifeili,jniebles}@cs.stanford.edu
656531036cee6b2c2c71954bb6540ef6b2e016d0W. LIU ET AL.: JOINTLY LEARNING NON-NEGATIVE PROJECTION AND DICTIONARY 1
Jointly Learning Non-negative Projection
and Dictionary with Discriminative Graph
Constraints for Classification
Yandong Wen3
Rongmei Lin4
Meng Yang*1
1 College of Computer Science & Software Engineering,
Shenzhen University, China
2 School of ECE,
Peking University, China
3 Dept. of ECE,
Carnegie Mellon University, USA
4 Dept. of Math & Computer Science,
Emory University, USA
('36326884', 'Weiyang Liu', 'weiyang liu')
('1751019', 'Zhiding Yu', 'zhiding yu')
wyliu@pku.edu.cn
yzhiding@andrew.cmu.edu
yandongw@andrew.cmu.edu
rongmei.lin@emory.edu
yang.meng@szu.edu.cn
65b1209d38c259fe9ca17b537f3fb4d1857580aeInformation Constraints on Auto-Encoding Variational Bayes
University of California, Berkeley
University of California, Berkeley
Ragon Institute of MGH, MIT and Harvard
4Chan-Zuckerberg Biohub
('39848341', 'Romain Lopez', 'romain lopez')
('39967607', 'Jeffrey Regier', 'jeffrey regier')
('1694621', 'Michael I. Jordan', 'michael i. jordan')
('2163873', 'Nir Yosef', 'nir yosef')
{romain_lopez, regier, niryosef}@berkeley.edu
jordan@cs.berkeley.edu
655d9ba828eeff47c600240e0327c3102b9aba7cIEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 35, NO. 3, JUNE 2005
489
Kernel Pooled Local Subspaces for Classification
('40409453', 'Peng Zhang', 'peng zhang')
('1708023', 'Jing Peng', 'jing peng')
('1741392', 'Carlotta Domeniconi', 'carlotta domeniconi')
656a59954de3c9fcf82ffcef926af6ade2f3fdb5Convolutional Network Representation
for Visual Recognition
Doctoral Thesis
Stockholm, Sweden, 2017
('2835963', 'Ali Sharif Razavian', 'ali sharif razavian')
652aac54a3caf6570b1c10c993a5af7fa2ef31ffCARNEGIE MELLON UNIVERSITY
STATISTICAL MODELING FOR NETWORKED VIDEO:
CODING OPTIMIZATION, ERROR CONCEALMENT AND
TRAFFIC ANALYSIS
A DISSERTATION
SUBMITTED TO THE GRADUATE SCHOOL
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
for the degree
DOCTOR OF PHILOSOPHY
in
ELECTRICAL AND COMPUTER ENGINEERING
by
Pittsburgh, Pennsylvania
July, 2001
('1727257', 'Deepak Srinivas Turaga', 'deepak srinivas turaga')
656ef752b363a24f84cc1aeba91e4fa3d5dd66baRobust Open-Set Face Recognition for
Small-Scale Convenience Applications
Institute for Anthropomatics
Karlsruhe Institute of Technology
Karlsruhe, Germany
('1697965', 'Hua Gao', 'hua gao')
('1742325', 'Rainer Stiefelhagen', 'rainer stiefelhagen')
Email: {hua.gao, ekenel, rainer.stiefelhagen}@kit.edu
656aeb92e4f0e280576cbac57d4abbfe6f9439eaJournal of Engineering Science and Technology
Vol. 12, No. 1 (2017) 155 - 167
School of Engineering, Taylor's University
USE OF IMAGE ENHANCEMENT TECHNIQUES
FOR IMPROVING REAL TIME FACE RECOGNITION EFFICIENCY
ON WEARABLE GADGETS
Asia Pacific University of Technology and Innovation, Kuala Lumpur 57000, Malaysia
Staffordshire University, Beaconside Stafford ST18 0AB, United Kingdom
('22422404', 'MUHAMMAD EHSAN RANA', 'muhammad ehsan rana')*Corresponding Author: muhd_ehsanrana@apu.edu.my
656f05741c402ba43bb1b9a58bcc5f7ce2403d9a('2319574', 'Danila Potapov', 'danila potapov')
6577c76395896dd4d352f7b1ee8b705b1a45fa90TOWARDS COMPUTATIONAL MODELS OF KINSHIP VERIFICATION
Cornell University
Cornell University
('2666471', 'Ruogu Fang', 'ruogu fang')
('1830653', 'Noah Snavely', 'noah snavely')
('1746230', 'Tsuhan Chen', 'tsuhan chen')
650bfe7acc3f03eb4ba91d9f93da8ef0ae8ba772A Deep Learning Approach for Subject Independent Emotion
Recognition from Facial Expressions
*Faculty of Electronics, Telecommunications & Information Technology
Polytechnic University of Bucharest
Splaiul Independentei No. 313, Sector 6, Bucharest,
ROMANIA
**Department of Information Engineering and Computer Science
University of Trento
ITALY
('3178525', 'VICTOR-EMIL NEAGOE', 'victor-emil neagoe')victoremil@gmail.com, andreibarar@gmail.com, robitupaul@gmail.com
sebe@disi.unitn.it
65293ecf6a4c5ab037a2afb4a9a1def95e194e5fFace, Age and Gender Recognition
using Local Descriptors
by
Thesis submitted to the
Faculty of Graduate and Postdoctoral Studies
In partial fulfillment of the requirements
For the M.A.Sc. degree in
Electrical and Computer Engineering
School of Electrical Engineering and Computer Science
Faculty of Engineering
University of Ottawa
('15604275', 'Mohammad Esmaeel Mousa Pasandi', 'mohammad esmaeel mousa pasandi')
('15604275', 'Mohammad Esmaeel Mousa Pasandi', 'mohammad esmaeel mousa pasandi')
65817963194702f059bae07eadbf6486f18f4a0ahttp://dx.doi.org/10.1007/s11263-015-0814-0
WhittleSearch: Interactive Image Search with Relative Attribute
Feedback
('1770205', 'Adriana Kovashka', 'adriana kovashka')
6581c5b17db7006f4cc3575d04bfc6546854a785Contextual Person Identification
in Multimedia Data
zur Erlangung des akademischen Grades eines
Doktors der Ingenieurwissenschaften
der Fakultät für Informatik
des Karlsruher Instituts für Technologie (KIT)
genehmigte
Dissertation
von
aus Erlangen
Tag der mündlichen Prüfung:
18. November 2014
Hauptreferent:
Korreferent:
Prof. Dr. Rainer Stiefelhagen
Karlsruher Institut für Technologie
Prof. Dr. Gerhard Rigoll
Technische Universität München
KIT – Universität des Landes Baden-Württemberg und nationales Forschungszentrum in der Helmholtz-Gemeinschaft
www.kit.edu
('1931707', 'Martin Bäuml', 'martin bäuml')
6515fe829d0b31a5e1f4dc2970a78684237f6edbConstrained Maximum Likelihood Learning of
Bayesian Networks for Facial Action Recognition
1 Electrical, Computer and Systems Eng. Dept.
Rensselaer Polytechnic Institute
Troy, NY, USA
2 Visualization and Computer Vision Lab
GE Global Research Center
Niskayuna, NY, USA
('1686235', 'Yan Tong', 'yan tong')
('1726583', 'Qiang Ji', 'qiang ji')
653d19e64bd75648cdb149f755d59e583b8367e3Decoupling “when to update” from “how to
update”
School of Computer Science, The Hebrew University, Israel
('19201820', 'Eran Malach', 'eran malach')
('2554670', 'Shai Shalev-Shwartz', 'shai shalev-shwartz')
65babb10e727382b31ca5479b452ee725917c739Label Distribution Learning ('1735299', 'Xin Geng', 'xin geng')
62dccab9ab715f33761a5315746ed02e48eed2a0A Short Note about Kinetics-600
João Carreira
('51210148', 'Eric Noland', 'eric noland')
('51215438', 'Andras Banki-Horvath', 'andras banki-horvath')
('38961760', 'Chloe Hillier', 'chloe hillier')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
joaoluis@google.com
enoland@google.com
bhandras@google.com
chillier@google.com
zisserman@google.com
62d1a31b8acd2141d3a994f2d2ec7a3baf0e6dc4Ding et al. EURASIP Journal on Image and Video Processing (2017) 2017:43
DOI 10.1186/s13640-017-0188-z
EURASIP Journal on Image
and Video Processing
R ES EAR CH
Noise-resistant network: a deep-learning
method for face recognition under noise
Open Access
('3012331', 'Yuanyuan Ding', 'yuanyuan ding')
('1976669', 'Yongbo Cheng', 'yongbo cheng')
('1847689', 'Xiaoliu Cheng', 'xiaoliu cheng')
('4869582', 'Baoqing Li', 'baoqing li')
('2757480', 'Xing You', 'xing you')
('38334864', 'Xiaobing Yuan', 'xiaobing yuan')
62694828c716af44c300f9ec0c3236e98770d7cfPadrón-Rivera, G., Rebolledo-Mendez, G., Parra, P. P., & Huerta-Pacheco, N. S. (2016). Identification of Action Units Related to
Identification of Action Units Related to Affective States in a Tutoring System
1Facultad de Estadística e Informática, Universidad Veracruzana, Mexico // 2Universidad Juárez Autónoma de
for Mathematics
Huerta-Pacheco1
*Corresponding author
('2221778', 'Gustavo Padrón-Rivera', 'gustavo padrón-rivera')
('1731562', 'Genaro Rebolledo-Mendez', 'genaro rebolledo-mendez')
Tabasco, Mexico // zS12020111@estudiantes.uv.mx // grebolledo@uv.mx // pilar.pozos@ujat.mx //
nehuerta@uv.mx
6261eb75066f779e75b02209fbd3d0f02d3e1e45Fudan-Huawei at MediaEval 2015: Detecting Violent
Scenes and Affective Impact in Movies with Deep Learning
School of Computer Science, Fudan University, Shanghai, China
2Media Lab, Huawei Technologies Co. Ltd., China
('9227981', 'Qi Dai', 'qi dai')
('3066866', 'Rui-Wei Zhao', 'rui-wei zhao')
('3099139', 'Zuxuan Wu', 'zuxuan wu')
('31825486', 'Xi Wang', 'xi wang')
('2650085', 'Zichen Gu', 'zichen gu')
('2273062', 'Wenhai Wu', 'wenhai wu')
('1717861', 'Yu-Gang Jiang', 'yu-gang jiang')
622daa25b5e6af69f0dac3a3eaf4050aa0860396Greedy Feature Selection for Subspace Clustering
Department of Electrical & Computer Engineering
Rice University, Houston, TX, 77005, USA
Department of Electrical & Computer Engineering
Carnegie Mellon University, Pittsburgh, PA, 15213, USA
Department of Electrical & Computer Engineering
Rice University, Houston, TX, 77005, USA
Editor:
('1746363', 'Eva L. Dyer', 'eva l. dyer')
('1745861', 'Aswin C. Sankaranarayanan', 'aswin c. sankaranarayanan')
('1746260', 'Richard G. Baraniuk', 'richard g. baraniuk')
e.dyer@rice.edu
saswin@ece.cmu.edu
richb@rice.edu
620339aef06aed07a78f9ed1a057a25433faa58b
62b3598b401c807288a113796f424612cc5833ca
62f0d8446adee6a5e8102053a63a61af07ac4098FACIAL POINT DETECTION USING CONVOLUTIONAL NEURAL NETWORK
TRANSFERRED FROM A HETEROGENEOUS TASK
**Tome R&D
Chubu University
1200, Matsumoto-cho, Kasugai, AICHI
('1687819', 'Takayoshi Yamashita', 'takayoshi yamashita')
628a3f027b7646f398c68a680add48c7969ab1d9Plan for Final Year Project:
HKU-Face: A Large Scale Dataset for Deep Face
Recognition
3035140108
3035141841
1 Introduction
Face recognition has been one of the most successful techniques in the field of artificial intelligence, owing to its surpassing of human-level performance in academic experiments and its broad application in industry. GaussianFace [1] and FaceNet [2] hold state-of-the-art records using a statistical method and a deep-learning method, respectively. Moreover, face recognition has been applied in various areas such as authority checking and recording, fostering a large number of start-ups like Face++.
Our final year project will tackle the face recognition task by building a large-scale, carefully filtered dataset. This project plan specifies our roadmap and current research progress. It first illustrates the significance of, and the potential enhancements from, constructing a large-scale face dataset for both academia and industry. Then the objectives to accomplish and a related literature review are presented in detail. Next, the methodologies used, the scope of the project, and the challenges we face are described. A detailed timeline for the project follows, together with a short summary.
2 Motivation
Nowadays most face recognition tasks are supervised learning tasks that use datasets annotated by humans. This has two main drawbacks: (1) the limited size of the datasets, due to limited human effort; and (2) accuracy problems resulting from human perceptual bias.
Parkhi et al. [3] discuss the first problem, showing that giant companies hold private face databases with larger amounts of data (see the comparison in Table 1). Other research institutions can only access public but smaller databases such as LFW [4, 5], which acts as a barricade to even higher performance.
Dataset             Availability   identities   images
IJB-A [6]           public         500          5712
LFW [4, 5]          public         5K           13K
YFD [7]             public         1595         3425 videos
CelebFaces [8]      public         10K          202K
CASIA-WebFace [9]   public         10K          500K
MS-Celeb-1M [10]    public         100K         about 10M
Facebook            private        4K           4400K
Google              private        8M           100-200M
Table 1: Face recognition datasets
('3347561', 'Haicheng Wang', 'haicheng wang')
('40456402', 'Haoyu Li', 'haoyu li')
626913b8fcbbaee8932997d6c4a78fe1ce646127Learning from Millions of 3D Scans for Large-scale 3D Face Recognition
(This the preprint of the paper published in CVPR 2018)
School of Computer Science and Software Engineering,
The University of Western Australia
('1746166', 'Syed Zulqarnain Gilani', 'syed zulqarnain gilani')
('46332747', 'Ajmal Mian', 'ajmal mian')
{zulqarnain.gilani,ajmal.mian}@uwa.edu.au
62374b9e0e814e672db75c2c00f0023f58ef442cFrontal face authentication using discriminating grids with
morphological feature vectors
A. Tefas
C. Kotropoulos
I. Pitas
Aristotle University of Thessaloniki
Box, Thessaloniki, GREECE
EDICS numbers: -KNOW Content Recognition and Understanding
-MODA Multimodal and Multimedia Environments
A novel elastic graph matching procedure based on multiscale morphological operations, the so-called
morphological dynamic link architecture, is developed for frontal face authentication. Fast algorithms
for implementing mathematical morphology operations are presented. Feature selection by employing
linear projection algorithms is proposed. Discriminatory power coefficients that weight the matching
error at each grid node are derived. The performance of morphological dynamic link architecture in
frontal face authentication is evaluated in terms of the receiver operating characteristic on the M2VTS
face image database. Preliminary results for face recognition using the proposed technique are also
presented.
Corresponding author: I. Pitas
DRAFT
September,
E-mail: {costas,tefas,pitas}@zeus.csd.auth.gr
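The multiscale morphological operations mentioned in the abstract can be sketched as follows. This is an illustrative toy (the grid layout, scales, and structuring elements are assumptions, not the authors' implementation): a feature vector at a grid node is built by stacking grayscale dilation and erosion responses at increasing scales.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def node_feature(image, y, x, scales=(1, 2, 3)):
    """Stack dilation/erosion responses at pixel (y, x) over several scales,
    yielding a multiscale morphological feature vector for one grid node."""
    feats = []
    for s in scales:
        size = 2 * s + 1  # structuring element grows with scale
        feats.append(grey_dilation(image, size=size)[y, x])
        feats.append(grey_erosion(image, size=size)[y, x])
    return np.array(feats, dtype=float)

img = np.zeros((16, 16))
img[6:10, 6:10] = 1.0        # toy bright patch standing in for a face region
f = node_feature(img, 8, 8)  # feature vector at grid node (8, 8)
print(f.shape)               # -> (6,): one dilation + one erosion per scale
```

In elastic graph matching, such node vectors would be compared between a model grid and a probe image, with the grid allowed to deform to minimize the total matching error.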
6257a622ed6bd1b8759ae837b50580657e676192
6226f2ea345f5f4716ac4ddca6715a47162d5b92PERSPECTIVE
published: 19 November 2015
doi: 10.3389/frobt.2015.00029
Object Detection: Current and
Future Directions
1 Advanced Mining Technology Center, Universidad de Chile, Santiago, Chile, 2 Department of Electrical Engineering,
Universidad de Chile, Santiago, Chile
Object detection is a key ability required by most computer and robot vision systems.
The latest research in this area has been making great progress in many directions. In
the current manuscript, we give an overview of past research on object detection, outline
the current main research directions, and discuss open problems and possible future
directions.
Keywords: object detection, perspective, mini review, current directions, open problems
1. INTRODUCTION
During the last years, there has been a rapid and successful expansion of computer vision research. Parts of this success have come from adopting and adapting machine learning methods, while others have come from the development of new representations and models for specific computer vision problems or from the development of efficient solutions. One area that has attained great progress is object detection. The present work gives a perspective on object detection research.
Given a set of object classes, object detection consists of determining the location and scale of all object instances, if any, that are present in an image. Thus, the objective of an object detector is to find all object instances of one or more given object classes regardless of scale, location, pose, view with respect to the camera, partial occlusions, and illumination conditions.
In many computer vision systems, object detection is the first task performed, as it allows further information to be obtained about the detected object and about the scene. Once an object instance has been detected (e.g., a face), it is possible to obtain further information, including: (i) to recognize the specific instance (e.g., to identify the subject's face), (ii) to track the object over an image sequence (e.g., to track the face in a video), and (iii) to extract further information about the object (e.g., to determine the subject's gender), while it is also possible to (a) infer the presence or location of other objects in the scene (e.g., a hand may be near a face and at a similar scale) and (b) to better estimate further information about the scene (e.g., the type of scene, indoor versus outdoor, etc.), among other contextual information.
Object detection has been used in many applications, the most popular ones being: (i) human-computer interaction (HCI), (ii) robotics (e.g., service robots), (iii) consumer electronics (e.g., smart-phones), (iv) security (e.g., recognition, tracking), (v) retrieval (e.g., search engines, photo management), and (vi) transportation (e.g., autonomous and assisted driving). Each of these applications has different requirements, including: processing time (off-line, on-line, or real-time), robustness to occlusions, invariance to rotations (e.g., in-plane rotations), and detection under pose changes. While many applications consider the detection of a single object class (e.g., faces) from a single view (e.g., frontal faces), others require the detection of multiple object classes (humans, vehicles, etc.), or of a single class from multiple views (e.g., side and frontal views of vehicles). In general, most systems can detect only a single object class from a restricted set of views and poses.
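The detection task defined above can be illustrated with a minimal sketch (not from the article; the box format, threshold, and sample values are assumptions): a detector's output is modeled as (class label, bounding box, confidence score) triples, matched to ground truth by intersection-over-union (IoU).

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection "locates" a ground-truth object if the class matches
# and the boxes overlap with IoU >= 0.5 (a common convention).
detections = [("face", (10, 10, 50, 50), 0.9), ("car", (60, 60, 90, 90), 0.8)]
ground_truth = [("face", (12, 8, 48, 52))]

hits = [d for d in detections
        for g in ground_truth
        if d[0] == g[0] and iou(d[1], g[1]) >= 0.5]
print(len(hits))  # -> 1: the face is localized; the car box has no match
```

Evaluation protocols such as those of PASCAL VOC build on exactly this kind of class-and-overlap matching to score detectors across scales, poses, and occlusions.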
Edited by:
Venkatesh Babu Radhakrishnan,
Indian Institute of Science Bangalore
India
Reviewed by:
Juxi Leitner,
Queensland University of Technology
Australia
George Azzopardi,
University of Groningen, Netherlands
Soma Biswas,
Indian Institute of Science Bangalore
India
*Correspondence:
†Present address:
Graduate School of Informatics,
Kyoto University, Kyoto, Japan
Specialty section:
This article was submitted to Vision
Systems Theory, Tools and
Applications, a section of the
journal Frontiers in Robotics and AI
Received: 20 July 2015
Accepted: 04 November 2015
Published: 19 November 2015
Citation:
Verschae R and Ruiz-del-Solar J
(2015) Object Detection: Current and
Future Directions.
Front. Robot. AI 2:29.
doi: 10.3389/frobt.2015.00029
Frontiers in Robotics and AI | www.frontiersin.org
('1689681', 'Rodrigo Verschae', 'rodrigo verschae')
('1737300', 'Javier Ruiz-del-Solar', 'javier ruiz-del-solar')
('1689681', 'Rodrigo Verschae', 'rodrigo verschae')
('1689681', 'Rodrigo Verschae', 'rodrigo verschae')
rodrigo@verschae.org
62e913431bcef5983955e9ca160b91bb19d9de42Facial Landmark Detection with Tweaked Convolutional Neural Networks
USC Information Sciences Institute
The Open University of Israel
('1746738', 'Yue Wu', 'yue wu')
('1756099', 'Tal Hassner', 'tal hassner')
626859fe8cafd25da13b19d44d8d9eb6f0918647Activity Recognition based on a
Magnitude-Orientation Stream Network
Smart Surveillance Interest Group, Department of Computer Science
Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
('2119408', 'Carlos Caetano', 'carlos caetano')
('1679142', 'William Robson Schwartz', 'william robson schwartz')
{carlos.caetano,victorhcmelo,jefersson,william}@dcc.ufmg.br
624e9d9d3d941bab6aaccdd93432fc45cac28d4bObject-Scene Convolutional Neural Networks for Event Recognition in Images
The Chinese University of Hong Kong
Shenzhen key lab of Comp. Vis. and Pat. Rec., Shenzhen Institutes of Advanced Technology, CAS, China
('33345248', 'Limin Wang', 'limin wang')
('1915826', 'Zhe Wang', 'zhe wang')
('35031371', 'Wenbin Du', 'wenbin du')
('33427555', 'Yu Qiao', 'yu qiao')
07wanglimin@gmail.com, buptwangzhe2012@gmail.com, wb.du@siat.ac.cn, yu.qiao@siat.ac.cn
620e1dbf88069408b008347cd563e16aeeebeb83
624496296af19243d5f05e7505fd927db02fd0ceGauss-Newton Deformable Part Models for Face Alignment in-the-Wild
1. School of Computer Science
University of Lincoln, U.K
2. Department of Computing
Imperial College London, U.K
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')gtzimiropoulos@lincoln.ac.uk
62fd622b3ca97eb5577fd423fb9efde9a849cbefTurning a Blind Eye: Explicit Removal of Biases and
Variation from Deep Neural Network Embeddings
Visual Geometry Group, University of Oxford
University of Oxford
Big Data Institute, University of Oxford
('1688869', 'Andrew Zisserman', 'andrew zisserman')
621ff353960d5d9320242f39f85921f72be69dc8Explicit Occlusion Detection based Deformable Fitting for
Facial Landmark Localization
1Department of Computer Science
Rutgers University
617 Bowser Road, Piscataway, N.J, USA
('39960064', 'Xiang Yu', 'xiang yu')
('1684164', 'Fei Yang', 'fei yang')
('1768190', 'Junzhou Huang', 'junzhou huang')
('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')
{xiangyu,feiyang,dnm}@cs.rutgers.edu
62007c30f148334fb4d8975f80afe76e5aef8c7fEye In-Painting with Exemplar Generative Adversarial Networks
Facebook Inc.
1 Hacker Way, Menlo Park (CA), USA
('8277405', 'Brian Dolhansky', 'brian dolhansky'){bdol, ccanton}@fb.com
62a30f1b149843860938de6dd6d1874954de24b7
Fast Algorithm for Updating the Discriminant Vectors
of Dual-Space LDA
('40608983', 'Wenming Zheng', 'wenming zheng')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
621e8882c41cdaf03a2c4a986a6404f0272ba511On Robust Biometric Identity Verification via
Sparse Encoding of Faces: Holistic vs Local Approaches
The University of Queensland, School of ITEE, QLD 4072, Australia
('3026404', 'Yongkang Wong', 'yongkang wong')
('1781182', 'Conrad Sanderson', 'conrad sanderson')
('2270092', 'Brian C. Lovell', 'brian c. lovell')
62e0380a86e92709fe2c64e6a71ed94d152c6643Facial Emotion Recognition With Expression Energy
Albert Cruz
Center for Research in Intelligent Systems
216 Winston Chung Hall
Riverside, CA, 92521-0425, USA
('1707159', 'Bir Bhanu', 'bir bhanu')
('3254753', 'Ninad Thakoor', 'ninad thakoor')
acruz006@student.ucr.edu
bhanu@ee.ucr.edu
ninadt@ee.ucr.edu
621f656fedda378ceaa9c0096ebb1556a42e5e0fSingle Sample Face Recognition from Video via
Stacked Supervised Auto-encoder
Pontifical Catholic University of Rio de Janeiro, Brazil
Rio de Janeiro State University, Brazil
('8730918', 'Pedro J. Soto Vega', 'pedro j. soto vega')
('2017816', 'Raul Queiroz Feitosa', 'raul queiroz feitosa')
('2222679', 'Patrick Nigri Happ', 'patrick nigri happ')
{psoto, raul, vhaymaq, patrick}@ele.puc-rio.br
965f8bb9a467ce9538dec6bef57438964976d6d9Recognizing Human Faces under Disguise and Makeup
The Hong Kong Polytechnic University
Hung Hom, Kowloon, Hong Kong
('17671202', 'Tsung Ying Wang', 'tsung ying wang')
('35680604', 'Ajay Kumar', 'ajay kumar')
cstywang@comp.polyu.edu.hk, csajaykr@comp.polyu.edu.hk
961a5d5750f18e91e28a767b3cb234a77aac8305Face Detection without Bells and Whistles
1 ESAT-PSI/VISICS, iMinds, KU Leuven, Belgium
2 MPI Informatics, Saarbrücken, Germany
3 D-ITET/CVL, ETH Zürich, Switzerland
('11983029', 'Markus Mathias', 'markus mathias')
('1798000', 'Rodrigo Benenson', 'rodrigo benenson')
('3048367', 'Marco Pedersoli', 'marco pedersoli')
('1681236', 'Luc Van Gool', 'luc van gool')
96f0e7416994035c91f4e0dfa40fd45090debfc5Unsupervised Learning of Face Representations
Georgia Institute of Technology, CVIT, IIIT Hyderabad, IIT Kanpur
('19200118', 'Samyak Datta', 'samyak datta')
('39396475', 'Gaurav Sharma', 'gaurav sharma')
9626bcb3fc7c7df2c5a423ae8d0a046b2f69180cUPTEC STS 17033
Examensarbete 30 hp
November 2017
A deep learning approach for
action classification in American
football video sequences
('5845058', 'Jacob Westerberg', 'jacob westerberg')
963d0d40de8780161b70d28d2b125b5222e75596Convolutional Experts Network for Facial Landmark Detection
Carnegie Mellon University
5000 Forbes Ave, Pittsburgh, PA 15213, USA
Tadas Baltrušaitis*
('1783029', 'Amir Zadeh', 'amir zadeh')
('1767184', 'Louis-Philippe Morency', 'louis-philippe morency')
abagherz@cs.cmu.edu
tbaltrus@cs.cmu.edu
morency@cs.cmu.edu
968b983fa9967ff82e0798a5967920188a3590a82013, Vol. 139, No. 2, 271–299
© 2013 American Psychological Association
0033-2909/13/$12.00 DOI: 10.1037/a0031640
Children’s Recognition of Disgust in Others
Sherri C. Widen and James A. Russell
Boston College
Disgust has been theorized to be a basic emotion with a facial signal that is easily, universally,
automatically, and perhaps innately recognized by observers from an early age. This article questions one
key part of that theory: the hypothesis that children recognize disgust from its purported facial signal.
Over the first 5 years, children experience disgust, produce facial expressions of disgust, develop a
concept of disgust, understand and produce the word disgust or a synonym, know about disgust’s causes
and consequences, and infer disgust in others from a situation or a behavior. Yet, only gradually do these
children come to “recognize” disgust specifically from the “disgust face” found in standardized sets of
the facial expressions of basic emotions. Improvement is gradual, with more than half of children
matching the standard disgust face to disgust only at around 9 years of age and with subsequent
improvement continuing gradually until the late teens or early adulthood. Up to age 8, a majority of
children studied believe that the standard disgust face indicates anger. Rather than relying on an already
known signal value, children may be actively learning to interpret the expression.
Keywords: facial expression, disgust, anger, emotion recognition, disgust face
Disgust has been theorized to be important for many reasons: its
status as one of only a handful of basic human emotions and hence
as a building block of other emotions (Rozin, Haidt, & McCauley,
2008); its role in avoidance of poisons, parasites, disease, and
contaminants (Curtis, De Barra, & Aunger, 2011; Hart, 1990;
Oaten, Stevenson, & Case, 2009; Schaller & Park, 2011); its role
in determining food preferences (Rozin & Fallon, 1987); its rela-
tion to psychiatric disorders, especially obsessive-compulsive dis-
order, phobias, and other anxiety disorders (Olatunji & McKay,
2007; Phillips, Fahy, David, & Senior, 1998); its diagnostic role in
neurological disorders such as Huntington’s disease (Spren-
gelmeyer et al., 1996); and, increasingly, its role in reactions to
cheating and other social and moral infractions (Haidt, 2003;
Prinz, 2007). According to Giner-Sorolla, Bosson, Caswell, and
Hettinger (2012), disgust plays “a powerful role in shaping cultural
attitudes, policy, and law” (p. 1). Articles, books, and conferences
demonstrate a surge of vigorous scientific theorizing and research
on disgust. One result of this surge of research is that the idea of
disgust as a simple reaction is giving way to a more complex story.
As Herz (2012) summarized, “Our age, our personality, our cul-
ture, our thoughts and beliefs, our mood, our morals, whom we’re
with, where we are, and which of our senses is giving us the
Sherri C. Widen and James A. Russell, Department of Psychology,
Boston College
This article was funded by Grant 1025563 from the National Science
Foundation.
We thank Nicole Nelson, Mary Kayyal, Joe Pochedly, Alyssa McCarthy,
Nicole Trauffer, Cara D’Arcy, Marissa DiGirolamo, Anne Yoder, and Erin
Heitzman for their comments on a draft of this article.
Correspondence concerning this article should be addressed to Sherri C.
Widen, Department of Psychology, McGuinn Hall, 140 Commonwealth
Avenue, Boston College, Chestnut Hill, MA 02467. E-mail: widensh@bc.edu
feeling, all shape whether and how strongly we are able to feel
disgusted” (p. 57).
Much of the theorizing and research on disgust to date have
been guided, explicitly or implicitly, by a research program cen-
tered on the concept of basic emotions—indeed, that research
program has provided the standard account of disgust. Theories
within this research program (Ekman & Cordaro, 2011; Izard,
1971, 1994; Tomkins, 1962) place facial expressions at the center
of emotion. In this article, we question one key part of the standard
account of disgust: the hypothesis that, from an early age, a child
recognizes disgust in others from their facial expressions. Our
review finds evidence that is inconsistent with this hypothesis, and
we suggest that the field examine alternative accounts. To place
this evidence in a broader context, we also review evidence on
closely related topics, such as children’s disgust reactions, their
acquisition of a word for disgust, their inference of disgust from
nonfacial cues, and adults’ recognition of disgust from facial
expressions.
The Standard Account
The widely assumed standard account of disgust stems from the
classic work of Allport (1924) and Tomkins (1962) and those they
influenced (Ekman & Cordaro, 2011; Izard, 2011; Levenson,
2011). In this simple, elegant, and plausible account, so-called
basic emotions—including disgust— have dedicated neural cir-
cuitry, are triggered by specific releasing stimuli, and produce a
coordinated response pattern that includes specific autonomic ner-
vous system activation, a behavioral tendency, and a facial expres-
sion. Ekman, Friesen, and Ellsworth (1972) described this last
aspect of their theory as follows:
Regardless of the language, of whether the culture is Western or
Eastern, industrialized or preliterate, [certain] facial expressions are
labeled with the same emotion terms . . . Our neuro-cultural theory
postulates a facial affect program, located within the nervous system
969fd48e1a668ab5d3c6a80a3d2aeab77067c6ceEnd-to-End Spatial Transform Face Detection and Recognition
Zhejiang University
Zhejiang University
Rokid.inc
('39106061', 'Liying Chi', 'liying chi')
('35028106', 'Hongxin Zhang', 'hongxin zhang')
('9932177', 'Mingxiu Chen', 'mingxiu chen')
charrin0531@gmail.com
zhx@cad.zju.edu.cn
cmxnono@rokid.com
96a9ca7a8366ae0efe6b58a515d15b44776faf6eGrid Loss: Detecting Occluded Faces
Institute for Computer Graphics and Vision
Graz University of Technology
('34847524', 'Michael Opitz', 'michael opitz')
('1903921', 'Georg Waltner', 'georg waltner')
('1762885', 'Georg Poier', 'georg poier')
('1720811', 'Horst Possegger', 'horst possegger')
('3628150', 'Horst Bischof', 'horst bischof')
{michael.opitz,waltner,poier,possegger,bischof}@icg.tugraz.at
9696b172d66e402a2e9d0a8d2b3f204ad8b98cc4J Inf Process Syst, Vol.9, No.1, March 2013
pISSN 1976-913X
eISSN 2092-805X
Region-Based Facial Expression Recognition in
Still Images
('2648759', 'Gawed M. Nagi', 'gawed m. nagi')
('2057896', 'Fatimah Khalid', 'fatimah khalid')
964a3196d44f0fefa7de3403849d22bbafa73886
96e1ccfe96566e3c96d7b86e134fa698c01f2289Published in Proc. of 11th IAPR International Conference on Biometrics (ICB 2018). Gold Coast, Australia, Feb. 2018
Semi-Adversarial Networks: Convolutional Autoencoders for Imparting Privacy
to Face Images
Anoop Namboodiri 2
Michigan State University, East Lansing, USA
International Institute of Information Technology, Hyderabad, India
('5456235', 'Vahid Mirjalili', 'vahid mirjalili')
('2562040', 'Sebastian Raschka', 'sebastian raschka')
('1698707', 'Arun Ross', 'arun ross')
mirjalil@msu.edu
raschkas@msu.edu
anoop@iiit.ac.in
rossarun@cse.msu.edu
96f4a1dd1146064d1586ebe86293d02e8480d181COMPARATIVE ANALYSIS OF RERANKING
TECHNIQUES FOR WEB IMAGE SEARCH
Pune Institute of Computer Technology, Pune, India
9606b1c88b891d433927b1f841dce44b8d3af066Principal Component Analysis with Tensor Train
Subspace
('2329741', 'Wenqi Wang', 'wenqi wang')
('1732805', 'Vaneet Aggarwal', 'vaneet aggarwal')
('1980683', 'Shuchin Aeron', 'shuchin aeron')
9627f28ea5f4c389350572b15968386d7ce3fe49Load Balanced GANs for Multi-view Face Image Synthesis
1National Laboratory of Pattern Recognition, CASIA
2Center for Research on Intelligent Perception and Computing, CASIA
3Center for Excellence in Brain Science and Intelligence Technology, CAS
University of Chinese Academy of Sciences, Beijing, 100049, China
5Noah’s Ark Lab of Huawei Technologies
('1680853', 'Jie Cao', 'jie cao')
('49995036', 'Yibo Hu', 'yibo hu')
('49828394', 'Bing Yu', 'bing yu')
('1705643', 'Ran He', 'ran he')
('1757186', 'Zhenan Sun', 'zhenan sun')
{jie.cao,yibo.hu}@cripac.ia.ac.cn, yubing5@huawei.com, {rhe, znsun}@nlpr.ia.ac.cn
966e36f15b05ef8436afecf57a97b73d6dcada94Dimensionality Reduction using Relative
Attributes
Institute for Human-Machine Communication, Technische Universität München
Iran
The Remote Sensing Technology Institute (IMF), German Aerospace Center
1 Introduction
Visual attributes are high-level semantic description of visual data that are close
to the language of human. They have been intensively used in various appli-
cations such as image classification [1,2], active learning [3,4], and interactive
search [5]. However, the usage of attributes in dimensionality reduction has not
been considered yet. In this work, we propose to utilize relative attributes as
semantic cues in dimensionality reduction. To this end, we employ Non-negative
Matrix Factorization (NMF) [6] constrained by embedded relative attributes to
come up with a new algorithm for dimensionality reduction, namely attribute
regularized NMF (ANMF).
2 Approach
We assume that X ∈ RD×N denotes N data points (e.g., images) represented by
D dimensional low-level feature vectors. The NMF decomposes the non-negative
matrix X into two non-negative matrices U ∈ RD×K and V ∈ RN×K such that
the multiplication of U and V approximates the original matrix X. Here, U
represents the bases and V contains the coefficients, which are considered as
new representation of the original data. The NMF objective function is:
F = ‖X − U V^T‖²_F,  s.t.  U = [u_ik] ≥ 0,  V = [v_jk] ≥ 0.  (1)
Additionally, we assume that M semantic attributes have been predefined
for the data and the relative attributes of each image are available. Precisely,
the matrix of relative attributes, Q ∈ RM×N , has been learned by some ranking
function (e.g., rankSVM). Intuitively, images with similar relative
attributes have similar semantic contents and therefore belong to the same
semantic class. This concept can be formulated as a regularizer added to the NMF objective function.
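The plain NMF objective in (1) is typically minimized with Lee-Seung multiplicative updates. A minimal sketch of that base objective follows; the attribute regularizer of ANMF is deliberately omitted, and the function name and parameter choices are illustrative, not from the paper:

```python
import numpy as np

def nmf(X, K, n_iter=200, eps=1e-9, seed=0):
    """Minimize F = ||X - U V^T||^2_F with U, V >= 0 via Lee-Seung
    multiplicative updates. Base NMF only; the ANMF attribute
    regularizer is not included in this sketch."""
    rng = np.random.default_rng(seed)
    D, N = X.shape
    U = rng.random((D, K)) + eps  # bases, D x K
    V = rng.random((N, K)) + eps  # coefficients, N x K
    for _ in range(n_iter):
        # multiplicative updates preserve non-negativity
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U) / (V @ (U.T @ U) + eps)
    return U, V
```

The rows of V then serve as the K-dimensional representation of the N data points.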
('2133342', 'Mohammadreza Babaee', 'mohammadreza babaee')
('2165157', 'Stefanos Tsoukalas', 'stefanos tsoukalas')
('3281049', 'Maryam Babaee', 'maryam babaee')
('1705843', 'Gerhard Rigoll', 'gerhard rigoll')
('1777167', 'Mihai Datcu', 'mihai datcu')
{reza.babaee,rigoll}@tum.de, s.tsoukalas@mytum.de
babaee@eng.ui.ac.ir
mihai.datcu@dlr.de
96b1000031c53cd4c1c154013bb722ffd87fa7daContextVP: Fully Context-Aware Video
Prediction
1 NVIDIA, Santa Clara, CA, USA
2 ETH Zurich, Zurich, Switzerland
3 The Swiss AI Lab IDSIA, Manno, Switzerland
4 NNAISENSE, Lugano, Switzerland
('2387035', 'Wonmin Byeon', 'wonmin byeon')
('1794816', 'Qin Wang', 'qin wang')
('2100612', 'Rupesh Kumar Srivastava', 'rupesh kumar srivastava')
('1802604', 'Petros Koumoutsakos', 'petros koumoutsakos')
wbyeon@nvidia.com
96578785836d7416bf2e9c154f687eed8f93b1e4Automated video-based facial expression analysis
of neuropsychiatric disorders
a Section of Biomedical Image Analysis, University of Pennsylvania, 3600 Market, Suite 380, Philadelphia, PA 19104, USA
b Brain Behavior Center, University of Pennsylvania Medical Center, Hospital of the University of Pennsylvania
3400 Spruce Street, 10th Floor Gates Building Philadelphia, PA 19104, USA
c School of Arts and Sciences, University of Pennsylvania Medical Center, Hospital of the University of Pennsylvania
University of Pennsylvania Medical Center, Hospital of the University of Pennsylvania
3400 Spruce Street, 10th Floor Gates Building Philadelphia, PA 19104, USA
3400 Spruce Street, 10th Floor Gates Building Philadelphia, PA 19104, USA
University of Pennsylvania Medical Center, Hospital of the University of Pennsylvania
f Neuropsychiatry Section, University of Pennsylvania Medical Center, Hospital of the University of Pennsylvania
3400 Spruce Street, 10th Floor Gates Building Philadelphia, PA 19104, USA
3400 Spruce Street, 10th Floor Gates Building Philadelphia, PA 19104, USA
Received 16 July 2007; received in revised form 20 September 2007; accepted 20 September 2007
('37761073', 'Peng Wang', 'peng wang')
('28501509', 'Frederick Barrett', 'frederick barrett')
('2953329', 'Elizabeth Martin', 'elizabeth martin')
('5747394', 'Marina Milonova', 'marina milonova')
('1826037', 'Christian Kohler', 'christian kohler')
('7467718', 'Ragini Verma', 'ragini verma')
96e0cfcd81cdeb8282e29ef9ec9962b125f379b0The MegaFace Benchmark: 1 Million Faces for Recognition at Scale
Department of Computer Science and Engineering
University of Washington
(a) FaceScrub + MegaFace
(b) FGNET + MegaFace
Figure 1. The MegaFace challenge evaluates identification and verification as a function of increasing number of gallery distractors (going
from 10 to 1 Million). We use two different probe sets (a) FaceScrub–photos of celebrities, (b) FGNET–photos with a large variation in
age per person. We present rank-1 identification of state of the art algorithms that participated in our challenge. On the left side of each
plot is current major benchmark LFW scale (i.e., 10 distractors, see how all the top algorithms are clustered above 95%). On the right is
mega-scale (with a million distractors). Observe that rates drop with increasing numbers of distractors, even though the probe set is fixed,
and that algorithms trained on larger sets (dashed lines) generally perform better. Participate at: http://megaface.cs.washington.edu.
('2419955', 'Ira Kemelmacher-Shlizerman', 'ira kemelmacher-shlizerman')
('1679223', 'Steven M. Seitz', 'steven m. seitz')
('2721528', 'Evan Brossard', 'evan brossard')
968f472477a8afbadb5d92ff1b9c7fdc89f0c009Firefly-based Facial Expression Recognition
96c6f50ce8e1b9e8215b8791dabd78b2bbd5f28dDynamic Attention-controlled Cascaded Shape Regression Exploiting Training
Data Augmentation and Fuzzy-set Sample Weighting
Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford GU2 7XH, UK
School of IoT Engineering, Jiangnan University, Wuxi 214122, China
('2976854', 'Zhen-Hua Feng', 'zhen-hua feng')
('1748684', 'Josef Kittler', 'josef kittler')
{z.feng, j.kittler, w.christmas, p.huber}@surrey.ac.uk, wu xiaojun@jiangnan.edu.cn
96e731e82b817c95d4ce48b9e6b08d2394937cf8Unconstrained Face Verification using Deep CNN Features
University of Maryland, College Park
Rutgers, The State University of New Jersey
('36407236', 'Jun-Cheng Chen', 'jun-cheng chen')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
('9215658', 'Rama Chellappa', 'rama chellappa')
pullpull@cs.umd.edu, vishal.m.patel@rutgers.edu, rama@umiacs.umd.edu
9686dcf40e6fdc4152f38bd12b929bcd4f3bbbccInternational Journal of Engineering Research and General Science Volume 3, Issue 1, January-February, 2015
ISSN 2091-2730
Emotion Based Music Player
1Department of Computer Science and Engineering
2Department of Computer Science and Engineering
3Department of Computer Science and Engineering
4Asst. Professor, Department of Computer Science and Engineering
M.H Saboo Siddik College of Engineering, University of Mumbai, India
('9928295', 'Sharik Khan', 'sharik khan')
('1762886', 'Omar Khan', 'omar khan')
('16079307', 'Shabana Tadvi', 'shabana tadvi')
Email:-kabani152@gmail.com
9636c7d3643fc598dacb83d71f199f1d2cc34415
3a27d164e931c422d16481916a2fa6401b74bcefAnti-Makeup: Learning A Bi-Level Adversarial Network for Makeup-Invariant
Face Verification
National Laboratory of Pattern Recognition, CASIA
Center for Research on Intelligent Perception and Computing, CASIA
Center for Excellence in Brain Science and Intelligence Technology, CAS
University of Chinese Academy of Sciences, Beijing 100190, China
('2496686', 'Yi Li', 'yi li')
('3051419', 'Lingxiao Song', 'lingxiao song')
('2225749', 'Xiang Wu', 'xiang wu')
('1705643', 'Ran He', 'ran he')
('1688870', 'Tieniu Tan', 'tieniu tan')
yi.li@cripac.ia.ac.cn, {lingxiao.song, rhe, tnt}@nlpr.ia.ac.cn, alfredxiangwu@gmail.com
3af8d38469fb21368ee947d53746ea68cd64eeaeMultimodal Intelligent Affect Detection with Kinect
(Doctoral Consortium)
Northumbria University
United Kingdom
Northumbria University
United Kingdom
Northumbria University
United Kingdom
('1886853', 'Li Zhang', 'li zhang')
('2004913', 'Alamgir Hossain', 'alamgir hossain')
('39617655', 'Yang Zhang', 'yang zhang')
li.zhang@northumbria.ac.uk
Yang4.zhang@northumbria.ac.uk
3a2fc58222870d8bed62442c00341e8c0a39ec87Probabilistic Local Variation
Segmentation
Technion - Computer Science Department - M.Sc. Thesis MSC-2014-02 - 2014
('3139600', 'Michael Baltaxe', 'michael baltaxe')
3a3f75e0ffdc0eef07c42b470593827fcd4020b4NORMAL SIMILARITY NETWORK FOR GENERATIVE MODELLING
School of Computing, National University of Singapore
('40456486', 'Jay Nandy', 'jay nandy')
('1725063', 'Wynne Hsu', 'wynne hsu')
3a76e9fc2e89bdd10a9818f7249fbf61d216efc4Face Sketch Matching via Coupled Deep Transform Learning
IIIT-Delhi, India, 2West Virginia University
('1925017', 'Shruti Nagpal', 'shruti nagpal')
('2220719', 'Maneet Singh', 'maneet singh')
('39129417', 'Richa Singh', 'richa singh')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
('2487227', 'Afzel Noore', 'afzel noore')
('2641605', 'Angshul Majumdar', 'angshul majumdar')
{shrutin, maneets, rsingh, mayank, angshul}@iiitd.ac.in, afzel.noore@mail.wvu.edu
3a2c90e0963bfb07fc7cd1b5061383e9a99c39d2End-to-End Deep Learning for Steering Autonomous
Vehicles Considering Temporal Dependencies
The American University in Cairo, Egypt
2Valeo Schalter und Sensoren GmbH, Germany
('2150605', 'Hesham M. Eraqi', 'hesham m. eraqi')
('2233511', 'Mohamed N. Moustafa', 'mohamed n. moustafa')
('11300101', 'Jens Honer', 'jens honer')
3a0ea368d7606030a94eb5527a12e6789f727994Categorization by Learning
and Combining Object Parts
Center for Biological and Computational Learning, M.I.T., Cambridge, MA, USA
Tomaso Poggio
Honda RandD Americas, Inc., Boston, MA, USA
University of Siena, Siena, Italy
Computer Graphics Research Group, University of Freiburg, Freiburg, Germany
('1684626', 'Bernd Heisele', 'bernd heisele')
{heisele,serre,tp}@ai.mit.edu
pontil@ing.unisi.it
vetter@informatik.uni-freiburg.de
3a804cbf004f6d4e0b041873290ac8e07082b61fLanguage-Action Tools for Cognitive Artificial Agents: Papers from the 2011 AAAI Workshop (WS-11-14)
A Corpus-Guided Framework for Robotic Visual Perception
University of Maryland Institute for Advanced Computer Studies, College Park, MD
('7607499', 'Yezhou Yang', 'yezhou yang')
('1697493', 'Yiannis Aloimonos', 'yiannis aloimonos')
{cteo, yzyang, hal, fer, yiannis}@umiacs.umd.edu
3a04eb72aa64760dccd73e68a3b2301822e4cdc3Scalable Sparse Subspace Clustering
Machine Intelligence Laboratory, College of Computer Science, Sichuan University
Chengdu, 610065, China.
('8249791', 'Xi Peng', 'xi peng')
('36794849', 'Lei Zhang', 'lei zhang')
('9276020', 'Zhang Yi', 'zhang yi')
pangsaai@gmail.com, {leizhang, zhangyi}@scu.edu.cn
3af130e2fd41143d5fc49503830bbd7bafd01f8bHow Do We Evaluate the Quality of Computational Editing Systems?
1 Inria, Univ. Grenoble Alpes & CNRS (LJK), Grenoble, France
University of Wisconsin-Madison, Madison, WI, USA
('2869929', 'Christophe Lino', 'christophe lino')
('1810286', 'Quentin Galvane', 'quentin galvane')
('1776507', 'Michael Gleicher', 'michael gleicher')
3a2cf589f5e11ca886417b72c2592975ff1d8472Spontaneously Emerging Object Part Segmentation
Machine Learning Department
Carnegie Mellon University
Machine Learning Department
Carnegie Mellon University
('1696365', 'Yijie Wang', 'yijie wang')
('1705557', 'Katerina Fragkiadaki', 'katerina fragkiadaki')
yijiewang@cmu.edu
katef@cs.cmu.edu
3ada7640b1c525056e6fcd37eea26cd638815cd6Abnormal Object Recognition:
A Comprehensive Study
Rutgers University
University of Washington
('3139794', 'Babak Saleh', 'babak saleh')
('2270286', 'Ali Farhadi', 'ali farhadi')
3abc833f4d689f37cc8a28f47fb42e32deaa4b17Noname manuscript No.
(will be inserted by the editor)
Large Scale Retrieval and Generation of Image Descriptions
Received: date / Accepted: date
('2004053', 'Vicente Ordonez', 'vicente ordonez')
('38390487', 'Margaret Mitchell', 'margaret mitchell')
('34176020', 'Jesse Dodge', 'jesse dodge')
('1699545', 'Yejin Choi', 'yejin choi')
3acb6b3e3f09f528c88d5dd765fee6131de931eaNOVEL REPRESENTATION FOR DRIVER EMOTION RECOGNITION
IN MOTOR VEHICLE VIDEOS
Rajkumar Theagarajan*, Bir Bhanu*, Albert Cruz†, Belinda Le*, Asongu Tambo*
* Center for Research in Intelligent Systems, University of California, Riverside, CA 92521, USA
† Computer Perception Lab, California State University, Bakersfield, CA 93311, USA
ABSTRACT
A novel feature representation of human facial expressions
for emotion recognition is developed. The representation
leveraged the background texture removal ability of
Anisotropic Inhibited Gabor Filtering (AIGF) with the
compact representation of spatiotemporal local binary
patterns. The emotion recognition system incorporated face
detection and registration followed by the proposed feature
representation: Local Anisotropic Inhibited Binary Patterns
in Three Orthogonal Planes (LAIBP-TOP) and
classification. The system is evaluated on videos from Motor
Trend Magazine's Best Driver Car of the Year 2014-2016.
The results showed improved performance compared to
other state-of-the-art feature representations.
Index Terms: Facial expression, emotion recognition,
feature extraction, background texture, anisotropic Gabor
filter.
1. INTRODUCTION
Facial expressions are crucial to non-verbal communication
of emotion. Automatic facial emotion recognition software
has applications in lie detection, human behavior analysis,
medical applications, and human-computer interfaces. We
develop a system to detect stress and inattention of a motor
vehicle operator from a single camera. Previous work in
observation of motor vehicle operators employed multiple
cameras for 3-D reconstruction [1], but multi-camera
systems may introduce too much complexity and too many
constraints in the design of a system. Another possible
solution is gaze, but as of yet there is no consensus on how
to detect inattention from gaze [2]. The goal of our work is a
system that can extrapolate high stress and inattention from
valence and arousal measurements on a low-cost platform so
as to prevent motor vehicle accidents.
To this end, we present a novel dynamic local appearance
feature that can compactly describe the spatiotemporal
behavior of a local neighborhood in the video. The method
is based on Local Binary Patterns in Three Orthogonal
Planes (LBP-TOP) [3] and background suppressing Gabor
Energy filtering [4], but it is significantly different. We
demonstrate that the background suppression concept can be
applied to LBP-TOP to improve performance. The system is
tested on three data sets collected from the Motor Trend
Magazine's Best Driver Car of the Year 2014, 2015 and
2016.
2. RELATED WORK AND CONTRIBUTIONS
The current challenge to dynamic facial emotion recognition
is the detection of emotion despite the various extrinsic and
intrinsic imaging conditions, and intra-personnel differences
in expression. While deep learning has been a growing trend
in image processing and computer vision, the effects of
transfer learning (using expression data from other
datasets [5]) are diminished, possibly because of various
factors [6]. Thus, hand-crafted features, not learned from the
neural networks, are still of great interest to unconstrained
facial emotion recognition. This work focuses on the
development of a novel hand-crafted feature representation.
Local Binary Pattern (LBP) is the most commonly used
appearance-based feature extraction method [7]. LBP is a
static texture descriptor and is not suitable for dynamic
facial expressions in videos.
A variation of LBP, Volume Local Binary Patterns
(VLBP), was developed to capture dynamic textures [8].
VLBP uses 3 parallel planes in the spatiotemporal domain
where the center pixel is on the center plane, and it records
the dynamic patterns in the neighborhood of each pixel into
a (3n+2)-dimensional histogram, where n is the number of
neighboring pixels.
The high dimensionality of VLBP, 2^(3n+2), makes it
impractical to use due to the rapid increase in dimensionality
as the size of the neighborhood increases. An alternate
solution to VLBP is the Local Binary Patterns in Three
Orthogonal Planes (LBP-TOP). The dimensionality of LBP-TOP
(3 × 2^n) is significantly lower than that of VLBP. The working
of LBP-TOP is described in section 3.
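A rough sketch of the LBP-TOP idea described above, simplified to computing 8-neighbour LBP histograms on just the three centre planes of a (T, H, W) volume (full LBP-TOP aggregates histograms over all planes of each orientation; the function names and parameters here are illustrative):

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP codes for the interior pixels of a 2-D array."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        # set bit when the neighbour is at least as bright as the centre
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_top(video):
    """Concatenate LBP histograms from the XY, XT and YT centre planes
    of a (T, H, W) volume -> 3 * 2^8 = 768-dimensional descriptor."""
    T, H, W = video.shape
    planes = [video[T // 2], video[:, H // 2, :], video[:, :, W // 2]]
    hists = [np.bincount(lbp_codes(p).ravel(), minlength=256)
             for p in planes]
    return np.concatenate(hists)
```

The XY plane captures static appearance while the XT and YT planes capture temporal transitions, which is why the concatenated descriptor can encode dynamic texture at 3·2^n dimensions instead of VLBP's 2^(3n+2).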
The other major type of appearance feature is the Gabor
filter. Traditional Gabor filters are too sensitive in
unconstrained settings; they capture all edges within an image,
noise included. Cruz et al. [4] proposed the Anisotropic
Inhibited Gabor Filter (AIGF), which is robust to background
noise and computationally efficient. Almaev et al. [9]
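As a point of reference for the Gabor discussion above, a minimal real-valued Gabor kernel (a Gaussian envelope multiplied by an oriented cosine carrier); the parameter values are illustrative and not taken from [4] or [9]:

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0, gamma=0.5):
    """Real part of a 2-D Gabor kernel: Gaussian envelope times a
    cosine carrier at orientation theta, wavelength lam, aspect gamma."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates to the filter orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lam)
```

Convolving an image with a bank of such kernels at several orientations responds to every edge at the matching orientation, background clutter included, which is exactly the sensitivity that AIGF's surround inhibition is designed to suppress.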
978-1-5090-2175-8/17/$31.00 ©2017 IEEE
ICIP 2017
3a60678ad2b862fa7c27b11f04c93c010cc6c430JANUARY-MARCH 2012
A Multimodal Database for
Affect Recognition and Implicit Tagging
('2463695', 'Mohammad Soleymani', 'mohammad soleymani')
('2796371', 'Jeroen Lichtenauer', 'jeroen lichtenauer')
('1809085', 'Thierry Pun', 'thierry pun')
('1694605', 'Maja Pantic', 'maja pantic')
3a591a9b5c6d4c62963d7374d58c1ae79e3a4039Driver Cell Phone Usage Detection From HOV/HOT NIR Images
Xerox Research Center Webster
800 Phillips Rd. Webster NY 14580
('1762503', 'Yusuf Artan', 'yusuf artan')
('2415287', 'Orhan Bulan', 'orhan bulan')
('1736673', 'Robert P. Loce', 'robert p. loce')
('5942563', 'Peter Paul', 'peter paul')
yusuf.artan,orhan.bulan,robert.loce,peter.paul@xerox.com
3aa9c8c65ce63eb41580ba27d47babb1100df8a3Annals of the  
University of North Carolina Wilmington
Master of Science in  
Computer Science and Information Systems 
3a0a839012575ba455f2b84c2d043a35133285f9
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 444–454,
Edinburgh, Scotland, UK, July 27–31, 2011. c(cid:13)2011 Association for Computational Linguistics
3af1a375c7c1decbcf5c3a29774e165cafce390cQuantifying Facial Expression Abnormality in Schizophrenia by Combining
2D and 3D Features
1 Department of Radiology
University of Pennsylvania
2 Department of Psychiatry
University of Pennsylvania
Philadelphia, PA 19104
Philadelphia, PA 19104
('1722767', 'Peng Wang', 'peng wang')
('15741672', 'Fred Barrett', 'fred barrett')
('7467718', 'Ragini Verma', 'ragini verma')
{wpeng@ieee.org, ragini.verma@uphs.upenn.edu }
{kohler, fbarrett, raquel, gur}@bbl.med.upenn.edu
3a9681e2e07be7b40b59c32a49a6ff4c40c962a2Biometrics & Biostatistics International Journal
Comparing treatment means: overlapping standard
errors, overlapping confidence intervals, and tests of
hypothesis
3a846704ef4792dd329a5c7a2cb8b330ab6b8b4e© 2010 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works.
Pre-print of article that appeared at the IEEE Computer Society Workshop on Biometrics
2010.
The published article can be accessed from:
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5544597
3a2a37ca2bdc82bba4c8e80b45d9f038fe697c7dHandling Uncertain Tags in Visual Recognition
School of Computing Science, Simon Fraser University, Canada
('3214848', 'Arash Vahdat', 'arash vahdat')
('10771328', 'Greg Mori', 'greg mori')
{avahdat, mori}@cs.sfu.ca
3a95eea0543cf05670e9ae28092a114e3dc3ab5cConstructing the L2-Graph for Robust Subspace
Learning and Subspace Clustering
('8249791', 'Xi Peng', 'xi peng')
('1751019', 'Zhiding Yu', 'zhiding yu')
('3134548', 'Huajin Tang', 'huajin tang')
('9276020', 'Zhang Yi', 'zhang yi')
3a4f522fa9d2c37aeaed232b39fcbe1b64495134ISSN (Online) 2321 – 2004
ISSN (Print) 2321 – 5526
INTERNATIONAL JOURNAL OF INNOVATIVE RESEARCH IN ELECTRICAL, ELECTRONICS, INSTRUMENTATION AND CONTROL ENGINEERING
Vol. 4, Issue 5, May 2016
IJIREEICE
Face Recognition and Retrieval Using Cross
Age Reference Coding
Sricharan H S1, Srinidhi K S1, Rajath D N1, Tejas J N1, Chandrakala B M2
BE, DSCE, Bangalore1
Assistant Professor, DSCE, Bangalore2
54948ee407b5d32da4b2eee377cc44f20c3a7e0cRight for the Right Reason: Training Agnostic
Networks
Intelligent Systems Laboratory, University of Bristol, Bristol BS8 1UB, UK
use of classifiers in “out of domain” situations, a problem that
leads to research questions in domain adaptation [6], [18].
Other concerns are also created around issues of bias, e.g.
classifiers incorporating biases that are present in the data
and are not intended to be used [2], which run the risk of
reinforcing or amplifying cultural (and other) biases [20].
Therefore, both predictive accuracy and fairness are heavily
influenced by the choices made when developing black-box
machine-learning models.
('1805367', 'Sen Jia', 'sen jia')
('2031978', 'Thomas Lansdall-Welfare', 'thomas lansdall-welfare')
('1685083', 'Nello Cristianini', 'nello cristianini')
{sen.jia, thomas.lansdall-welfare, nello.cristianini}@bris.ac.uk
540b39ba1b8ef06293ed793f130e0483e777e278ORIGINAL RESEARCH
published: 13 July 2018
doi: 10.3389/fpsyg.2018.01191
Biologically Inspired Emotional
Expressions for Artificial Agents
Optics and Engineering Informatics, Budapest University of Technology and Economics
Budapest, Hungary, Eötvös Loránd University, Budapest, Hungary, 3 Institute for Computer Science
and Control, Hungarian Academy of Sciences, Budapest, Hungary, Chuo University
Tokyo, Japan, 5 MTA-ELTE Comparative Ethology Research Group, Budapest, Hungary, 6 Department of Telecommunications
and Media Informatics, Budapest University of Technology and Economics, Budapest, Hungary
A special area of human-machine interaction,
the expression of emotions gains
importance with the continuous development of artificial agents such as social robots or
('31575111', 'Beáta Korcsok', 'beáta korcsok')
('3410664', 'Veronika Konok', 'veronika konok')
('10791722', 'György Persa', 'györgy persa')
('2725581', 'Tamás Faragó', 'tamás faragó')
('1701851', 'Mihoko Niitsuma', 'mihoko niitsuma')
('1769570', 'Péter Baranyi', 'péter baranyi')
('3131165', 'Márta Gácsi', 'márta gácsi')
54bb25a213944b08298e4e2de54f2ddea890954aAgeDB: the first manually collected, in-the-wild age database
Imperial College London
Imperial College London
Imperial College London, Onfido
Imperial College London
Middlesex University London
Imperial College London
('24278037', 'Stylianos Moschoglou', 'stylianos moschoglou')
('40598566', 'Athanasios Papaioannou', 'athanasios papaioannou')
('3320415', 'Christos Sagonas', 'christos sagonas')
('3234063', 'Jiankang Deng', 'jiankang deng')
('1754270', 'Irene Kotsia', 'irene kotsia')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
s.moschoglou@imperial.ac.uk
a.papaioannou11@imperial.ac.uk
c.sagonas@imperial.ac.uk
j.deng16@imperial.ac.uk
i.kotsia@mdx.ac.uk
s.zafeiriou@imperial.ac.uk
54bae57ed37ce50e859cbc4d94d70cc3a84189d5FACE RECOGNITION COMMITTEE MACHINE
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Shatin, Hong Kong
('2899702', 'Ho-Man Tang', 'ho-man tang')
('1681775', 'Michael R. Lyu', 'michael r. lyu')
('1706259', 'Irwin King', 'irwin king')
{hmtang, lyu, king}@cse.cuhk.edu.hk
54f442c7fa4603f1814ebd8eba912a00dceb5cb2The Indian Buffet Process:
Scalable Inference and Extensions
A Thesis
Presented to the Fellowship of
The University of Cambridge
in Candidacy for the Degree of
Master of Science
Department of Engineering
Zoubin Ghahramani, supervisor
August 2009
('2292194', 'Finale Doshi-Velez', 'finale doshi-velez')
543f21d81bbea89f901dfcc01f4e332a9af6682dPublished as a conference paper at ICLR 2016
UNSUPERVISED AND SEMI-SUPERVISED LEARNING
WITH CATEGORICAL GENERATIVE ADVERSARIAL
NETWORKS
University of Freiburg
79110 Freiburg, Germany
('2060551', 'Jost Tobias Springenberg', 'jost tobias springenberg')springj@cs.uni-freiburg.de
54969bcd728b0f2d3285866c86ef0b4797c2a74dIEEE TRANSACTION SUBMISSION
Learning for Video Compression
('31482866', 'Zhibo Chen', 'zhibo chen')
('50258851', 'Tianyu He', 'tianyu he')
('50562569', 'Xin Jin', 'xin jin')
('1697194', 'Feng Wu', 'feng wu')
5456166e3bfe78a353df988897ec0bd66cee937fImproved Boosting Performance by Exclusion
of Ambiguous Positive Examples
Computer Vision and Active Perception, KTH, Stockholm 10800, Sweden
Keywords:
Boosting, Image Classification, Algorithm Evaluation, Dataset Pruning, VOC2007.
('1750517', 'Miroslav Kobetski', 'miroslav kobetski')
('1736906', 'Josephine Sullivan', 'josephine sullivan')
{kobetski, sullivan}@kth.se
54a9ed950458f4b7e348fa78a718657c8d3d0e05Learning Neural Models for End-to-End
Clustering
1 ZHAW Datalab & School of Engineering, Winterthur, Switzerland
2 ARGUS DATA INSIGHTS Schweiz AG, Zurich, Switzerland
Ca Foscari University of Venice, Venice, Italy
Institute of Neural Information Processing, Ulm University, Germany
Institute for Optical Systems, HTWG Konstanz, Germany
('50415299', 'Benjamin Bruno Meier', 'benjamin bruno meier')
('3469013', 'Ismail Elezi', 'ismail elezi')
('1985672', 'Mohammadreza Amirian', 'mohammadreza amirian')
('3238279', 'Oliver Dürr', 'oliver dürr')
('2793787', 'Thilo Stadelmann', 'thilo stadelmann')
541f1436c8ffef1118a0121088584ddbfd3a0a8aA Spatio-Temporal Feature based on Triangulation of Dense SURF
The University of Electro-Communications, Tokyo
1-5-1 Chofu, Tokyo 182-0021 JAPAN
('2274625', 'Do Hang Nga', 'do hang nga')
('1681659', 'Keiji Yanai', 'keiji yanai')
dohang@mm.cs.uec.ac.jp, yanai@cs.uec.ac.jp
54aacc196ffe49b3450059fccdf7cd3bb6f6f3c3A Joint Learning Framework for Attribute Models and Object Descriptions
Dhruv Mahajan
Yahoo! Labs, Bangalore, India
('1779926', 'Sundararajan Sellamanickam', 'sundararajan sellamanickam')
('4989209', 'Vinod Nair', 'vinod nair')
{dkm,ssrajan,vnair}@yahoo-inc.com
54ce3ff2ab6e4465c2f94eb4d636183fa7878ab7Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)
Local Centroids Structured Non-Negative Matrix Factorization
University of Texas at Arlington, Texas, USA
School of Computer Science, OPTIMAL, Northwestern Polytechnical University, Xian 710072, Shaanxi, P. R. China
('2141896', 'Hongchang Gao', 'hongchang gao')
('1688370', 'Feiping Nie', 'feiping nie')
{hongchanggao, feipingnie}@gmail.com, heng@uta.edu
541bccf19086755f8b5f57fd15177dc49e77d675('2154872', 'Lijin Aryananda', 'lijin aryananda')
5495e224ac7b45b9edc5cfeabbb754d8a40a879bFeature Reconstruction Disentangling for Pose-invariant Face Recognition
Supplementary Material
Rutgers, The State University of New Jersey
University of California, San Diego
‡ NEC Laboratories America
1. Summary of The Supplementary
This supplementary file includes two parts: (a) additional implementation details, presented to improve reproducibility; and (b) more experimental results, presented to validate our approach in different aspects, which are not shown in the main submission due to space limitations.
2. Additional Implementation Details
Pose-variant face generation We designed a network to
predict 3DMM parameters from a single face image. The
design is mainly based on VGG16 [4]. We use the same number of convolutional layers as VGG16 but replace all max-pooling layers with stride-2 convolutions. The fully connected (fc) layers are also different: we first use two fc layers, each with 1024 neurons, to connect with the convolutional modules; then, a fc layer of 30 neurons is used for identity parameters, a fc layer of 29 neurons is used for expression parameters, and a fc layer of 7 neurons is used
for pose parameters. Different from [8], which uses 199 parameters to represent the identity coefficients, we truncate the number of identity eigenvectors to 30, which preserves 90% of the variation. This truncation leads to faster convergence and less overfitting. For texture, we only generate non-frontal faces from frontal ones, which significantly mitigates the texture-hallucination issue caused by self-occlusion and guarantees high-fidelity reconstruction. We apply the Z-Buffer algorithm used in [8] to prevent ambiguous pixel intensities caused by the same image-plane position at different depths.
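The fc-head layout described above can be sketched as follows. The layer widths come from the text (two 1024-d fc layers feeding 30-d identity, 29-d expression, and 7-d pose heads); the input dimensionality, random weights, and plain-numpy forward pass are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fc(x, w, b):
    # one fully connected layer with ReLU activation
    return np.maximum(w @ x + b, 0.0)

# Stand-in for the VGG16-style convolutional output (dimension assumed).
conv_feat = rng.standard_normal(4096)

# Two 1024-d fc layers, as in the text; weights are random placeholders.
w1, b1 = rng.standard_normal((1024, 4096)) * 0.01, np.zeros(1024)
w2, b2 = rng.standard_normal((1024, 1024)) * 0.01, np.zeros(1024)
h = fc(fc(conv_feat, w1, b1), w2, b2)

# Three linear heads predicting the 3DMM parameter groups.
heads = {"identity": 30, "expression": 29, "pose": 7}
params = {name: (rng.standard_normal((dim, 1024)) * 0.01) @ h
          for name, dim in heads.items()}

assert params["identity"].shape == (30,)
assert params["expression"].shape == (29,)
assert params["pose"].shape == (7,)
```

The sketch only illustrates the branching structure; in practice each head would be trained against ground-truth 3DMM coefficients.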
Rich feature embedding The design of the rich em-
bedding network is mainly based on the architecture of
CASIA-net [6], since it is widely used in prior approaches and achieves strong performance in face recognition. During
training, CASIA+MultiPIE or CASIA+300WLP are used.
As shown in Figure 3 of the main submission, after the con-
volutional layers of CASIA-net, we use a 512-d FC for the
rich feature embedding, which is further branched into a
256-d identity feature and a 128-d non-identity feature. The
128-d non-identity feature is further connected with a 136-d
landmark prediction and a 7-d pose prediction. Notice that in the face generation network, the number of pose parameters is 7 instead of 3 because we need to uniquely determine the projection from the 3D model to the 2D face shape in the image domain, which includes scale, pitch, yaw, roll, and the x, y, and z translations.
Disentanglement by feature reconstruction Once the rich embedding network is trained, we feed genuine pairs that share the same identity but differ in viewpoint into the network to obtain the corresponding rich embedding, identity, and non-identity features. To disentangle the identity and pose factors, we concatenate the identity and non-identity features and pass them through two 512-d fully connected layers to output a reconstructed rich embedding of 512 neurons. Both self- and cross-reconstruction losses are designed to eventually push the two identity features close to each other. At the same time, a cross-entropy loss is applied to the near-frontal identity feature to maintain the discriminative power of the learned representation. The disentanglement of identity and pose is finally achieved by the proposed feature-reconstruction-based metric learning.
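The self- and cross-reconstruction losses described above can be sketched as follows. The 256-d identity / 128-d non-identity split, the two 512-d fc layers, and the identity swap across a genuine pair follow the text; the random stand-in features, placeholder weights, and squared-error form of the loss are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def reconstruct(identity, non_identity, w1, w2):
    # Two fc layers mapping [identity; non-identity] back to a 512-d rich embedding.
    h = np.maximum(w1 @ np.concatenate([identity, non_identity]), 0.0)
    return w2 @ h

# A genuine pair: same identity, different viewpoints (random stand-ins here).
id_a, nid_a = rng.standard_normal(256), rng.standard_normal(128)
id_b, nid_b = rng.standard_normal(256), rng.standard_normal(128)
rich_a, rich_b = rng.standard_normal(512), rng.standard_normal(512)

w1 = rng.standard_normal((512, 384)) * 0.05  # 384 = 256 + 128
w2 = rng.standard_normal((512, 512)) * 0.05

# Self reconstruction: each sample's own features should rebuild its embedding.
self_loss = (np.sum((reconstruct(id_a, nid_a, w1, w2) - rich_a) ** 2)
             + np.sum((reconstruct(id_b, nid_b, w1, w2) - rich_b) ** 2))

# Cross reconstruction: swapping the identity features across the pair should
# still rebuild each embedding, which pushes id_a and id_b toward each other.
cross_loss = (np.sum((reconstruct(id_b, nid_a, w1, w2) - rich_a) ** 2)
              + np.sum((reconstruct(id_a, nid_b, w1, w2) - rich_b) ** 2))

loss = self_loss + cross_loss
assert loss >= 0.0
```

Minimizing the cross term only helps when the two identity features are interchangeable, which is what drives the disentanglement.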
3. Additional Experimental Results
In addition to the main submission, we present more
experimental results in this section to further validate our
approach in different aspects.
3.1. P1 and P2 protocol on MultiPIE
In the main submission, due to space considerations, we only report the mean accuracy over 10 random training and testing splits, on MultiPIE and 300WLP separately. In Table 1, we additionally report the standard deviation of our method for a more complete comparison. The standard deviation is very small, which suggests that the performance is consistent across all trials. We
('4340744', 'Xi Peng', 'xi peng')
('15644381', 'Xiang Yu', 'xiang yu')
('1729571', 'Kihyuk Sohn', 'kihyuk sohn')
('1711560', 'Dimitris N. Metaxas', 'dimitris n. metaxas')
('2099305', 'Manmohan Chandraker', 'manmohan chandraker')
{xipeng.cs, dnm}@rutgers.edu, {xiangyu,ksohn,manu}@nec-labs.com
54756f824befa3f0c2af404db0122f5b5bbf16e0Research Statement
Computer Vision — Visual Recognition
Computational visual recognition concerns identifying what is in an image, video, or other visual data, enabling
applications such as measuring location, pose, size, activity, and identity as well as indexing for search by content.
Recent progress in making economical sensors and improvements in network, storage, and computational power
make visual recognition practical and relevant in almost all experimental sciences and commercial applications
such as image search. My work in visual recognition brings together machine learning, insights from psychology
and physiology, computer graphics, algorithms, and a great deal of computation.
While I am best known for my work on general object category detection – creating techniques and building
systems for some of the best performing approaches to categorizing and localizing objects in images, recognizing
action in video, and searching large collections of video and images – my research extends widely across visual
recognition including:
• Creating low-level image descriptors – procedures for converting pixel values to features that can be used
to model appearance for recognition. These include widely used descriptors for category recognition in
images [4, 2], object detection in images and video [11, 10, 2], and optical flow based descriptors for action
recognition in video [8].
• Developing models for recognition – ranging from what is becoming seminal work in recognizing human
actions in video [8], to formulating object localization as approximate subgraph isomorphism [2], to models
for parsing architectural images [3], to a novel approach for face recognition based on high-level describable
visual attributes [9].
• Deriving machine learning techniques – this includes both techniques for increasing the accuracy of clas-
sification [15] and techniques that provide improvements in the trade-off between accuracy and efficiency
of classification for detection and categorization [11, 10] – making some approaches exponentially faster
and therefore useful for a new range of applications.
• Applications to web scale visual data – introducing novel techniques to automatically extract useful in-
formation from web-scale data. Successful applications include extracting models of face appearance [7]
and representative iconic images [5]. Some of my work on machine learning techniques for visual recogni-
tion [11, 10] is making possible very large scale visual recognition both in my own ongoing work, including
a collaboration with the ImageNet (10 million images in 10 thousand categories) team at Princeton and
Stanford, and efforts by other researchers in industry (Google and Yahoo!) and academia.
• Applications to analyzing imagery of people – probably the most important type of content in images and
video. Several of my projects address analyzing imagery of people, from detection [10], to identification by
face recognition [9, 7, 6], to localizing limbs (pose estimation) [14], and recognizing actions [8].
All of this work is part of an attempt to understand the structure of visual data and build better systems
for extracting information from visual signals. Such systems are useful in practice because, although for many
application areas human perceptual abilities far outstrip the ability of computational systems, automated systems
already have the upper hand in running constantly over vast amounts of data, e.g. surveillance systems and process
monitoring, and in making metric decisions about specific quantities such as size, distance, or orientation, where
humans have difficulty. Surveillance illustrates the need for recognition in order to increase performance. From
watching cells under a microscope to observing research mice in habitats to guarding national borders, surveillance
systems are limited by false detections produced due to spurious and unimportant activity. This cost can be reduced
by visual recognition algorithms that identify either activities of interest or the commonly occurring unimportant
activity.
Part of my work at Yahoo! Research emphasized another key application area for visual recognition, extracting
useful information from the vast and ever changing image and video data available on the world wide web. For
some of this data people provide partial annotation in the form of tags, captions, and freeform text on web pages.
One major challenge is to combine results from computational visual recognition with these partial annotations to
('39668247', 'Alexander C. Berg', 'alexander c. berg')
54204e28af73c7aca073835a14afcc5d8f52a515Fine-Pruning: Defending Against Backdooring Attacks
on Deep Neural Networks
New York University, Brooklyn, NY, USA
('48087922', 'Kang Liu', 'kang liu')
('3337066', 'Brendan Dolan-Gavitt', 'brendan dolan-gavitt')
('1696125', 'Siddharth Garg', 'siddharth garg')
{kang.liu,brendandg,siddharth.garg}@nyu.edu
549c719c4429812dff4d02753d2db11dd490b2aeYouTube-BoundingBoxes: A Large High-Precision
Human-Annotated Data Set for Object Detection in Video
Google Brain
Google Brain
Google Research
Google Brain
Google Brain
('2892780', 'Esteban Real', 'esteban real')
('1789737', 'Jonathon Shlens', 'jonathon shlens')
('30554825', 'Stefano Mazzocchi', 'stefano mazzocchi')
('3165011', 'Xin Pan', 'xin pan')
('2657155', 'Vincent Vanhoucke', 'vincent vanhoucke')
ereal@google.com
shlens@google.com
stefanom@google.com
xpan@google.com
vanhoucke@google.com
98b2f21db344b8b9f7747feaf86f92558595990c
98142103c311b67eeca12127aad9229d56b4a9ffGazeDirector: Fully Articulated Eye Gaze Redirection in Video
University of Cambridge, UK 2Carnegie Mellon University, USA
Max Planck Institute for Informatics, Germany
4Microsoft
('34399452', 'Erroll Wood', 'erroll wood')
('1767184', 'Louis-Philippe Morency', 'louis-philippe morency')
9820920d4544173e97228cb4ab8b71ecf4548475ORIGINAL RESEARCH
published: 11 September 2015
doi: 10.3389/fpsyg.2015.01386
Automated facial coding software
outperforms people in recognizing
neutral faces as neutral from
standardized datasets
The Amsterdam School of Communication Research, University of Amsterdam
Amsterdam, Netherlands
Little is known about people’s accuracy of recognizing neutral faces as neutral. In this
paper, I demonstrate the importance of knowing how well people recognize neutral
faces. I contrasted human recognition scores of 100 typical, neutral front-up facial
images with scores of an arguably objective judge – automated facial coding (AFC)
software. I hypothesized that the software would outperform humans in recognizing
neutral faces because of the inherently objective nature of computer algorithms. Results
confirmed this hypothesis. I provided the first-ever evidence that computer software
(90%) was more accurate in recognizing neutral faces than people were (59%). I posited
two theoretical mechanisms, i.e., smile-as-a-baseline and false recognition of emotion,
as possible explanations for my findings.
Keywords: non-verbal communication, facial expression, face recognition, neutral face, automated facial coding
Introduction
Recognizing a neutral face as neutral is vital in social interactions. By virtue of “expressing” “nothing” (for a separate discussion on faces “expressing” something, see Russell and Fernández-Dols, 1997), a neutral face should indicate lack of emotion, e.g., lack of anger, fear, or disgust. This article’s inspiration was the interesting observation that in the literature on
facial recognition, little attention has been paid to neutral face recognition scores of human
raters. Russell (1994) and Nelson and Russell (2013), who provided the two most important
overviews on the topic, did not include or discuss recognition rates of lack of emotion
(neutral) in neutral faces. They provided overviews of matching scores (i.e., accuracy) for
six basic emotions, but they were silent on the issue of recognition accuracy of neutral
faces.
A distinct lack of articles that explicitly report accuracy scores for the recognition of neutral faces could explain the silence of researchers in this field. One notable exception is the Amsterdam Dynamic Facial Expression Set (ADFES; van der Schalk et al., 2011), where the authors provide an average matching score of 0.67 for their neutral faces. This score is considerably low when one considers that the average for the six basic emotions is also in this range (~0.67; see Nelson and Russell, 2013, Table A1 for datasets between pre-1994 and 2010).
Edited by:
Paola Ricciardelli,
University of Milano-Bicocca, Italy
Reviewed by:
Luis J. Fuentes,
Universidad de Murcia, Spain
Francesca Gasparini,
University of Milano-Bicocca, Italy
*Correspondence:
The Amsterdam School
of Communication Research,
Department of Communication
Science, University of Amsterdam
Postbus 15793,
1001 NG Amsterdam, Netherlands
Specialty section:
This article was submitted to
Cognition,
a section of the journal
Frontiers in Psychology
Received: 22 April 2015
Accepted: 31 August 2015
Published: 11 September 2015
Citation:
Lewinski P (2015) Automated facial
coding software outperforms people
in recognizing neutral faces as neutral
from standardized datasets.
Front. Psychol. 6:1386.
doi: 10.3389/fpsyg.2015.01386
Frontiers in Psychology | www.frontiersin.org
September 2015 | Volume 6 | Article 1386
('6402753', 'Peter Lewinski', 'peter lewinski')
p.lewinski@uva.nl
9853136dbd7d5f6a9c57dc66060cab44a86cd662International Journal of Computer Applications (0975 – 8887)
Volume 34– No.2, November 2011
Improving the Neural Network Training for Face
Recognition using Adaptive Learning Rate, Resilient
Back Propagation and Conjugate Gradient Algorithm
M.Sc. Student
Department of Electrical
Engineering, Iran University
of Science and Technology,
Tehran, Iran
Saeid Sanei
Associate Professor
Department of Computing,
Faculty of Engineering and
Physical Sciences, University
of Surrey, UK
Karim Mohammadi
Professor
Department of Electrical
Engineering, Iran University
of Science and Technology,
Tehran, Iran
('47250218', 'Hamed Azami', 'hamed azami')
989332c5f1b22604d6bb1f78e606cb6b1f694e1aRecurrent Face Aging
University of Trento, Italy
National University of Singapore
Research Center for Learning Science, Southeast University, Nanjing, China
Artificial Intelligence Institute, China
('39792736', 'Wei Wang', 'wei wang')
('10338111', 'Zhen Cui', 'zhen cui')
('32059677', 'Yan Yan', 'yan yan')
('33221685', 'Jiashi Feng', 'jiashi feng')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('2287686', 'Xiangbo Shu', 'xiangbo shu')
('1703601', 'Nicu Sebe', 'nicu sebe')
{wei.wang,yan.yan,niculae.sebe}@unitn.it {elefjia,eleyans}@nus.edu.sg
zhen.cui@seu.edu.cn shuxb104@gmail.com
982f5c625d6ad0dac25d7acbce4dabfb35dd7f23Facial Expression Recognition by SVM-based Two-stage Classifier on
Gabor Features
School of Information Science
Japan Advanced Institute of Science and Technology
Ashahi-dai 1-8, Nomi, Ishikawa 923-1292, Japan
('1753878', 'Fan Chen', 'fan chen')
('1791753', 'Kazunori Kotani', 'kazunori kotani')
chen-fan@jaist.ac.jp
ikko@jaist.ac.jp
98af221afd64a23e82c40fd28d25210c352e41b7ISCA Archive
http://www.isca-speech.org/archive
AVSP 2010 -- International Conference
on Audio-Visual Speech Processing
Hakone, Kanagawa, Japan
September 30--October 3, 2010
Exploring Visual Features Through Gabor Representations for Facial
Expression Detection
Image and Video Research Laboratory, Queensland University of Technology
GPO Box 2424, Brisbane 4001, Australia
Robotics Institute, Carnegie Mellon University
University of Pittsburgh, Pittsburgh, USA
('2739248', 'Sien W. Chew', 'sien w. chew')
('1713496', 'Patrick Lucey', 'patrick lucey')
('1729760', 'Sridha Sridharan', 'sridha sridharan')
('3140440', 'Clinton Fookes', 'clinton fookes')
s4.chew@student.qut.edu.au, patlucey@andrew.cmu.edu, {s.sridharan;c.fookes}@qut.edu.au
9893865afdb1de55fdd21e5d86bbdb5daa5fa3d5Illumination Normalization Using Logarithm Transforms
for Face Authentication
Carnegie Mellon University
5000 Forbes Ave, Pittsburgh, USA
('1794486', 'Marios Savvides', 'marios savvides')msavvid@ri.cmu.edu
kumar@ece.cmu.edu
988d1295ec32ce41d06e7cf928f14a3ee079a11eSemantic Deep Learning
September 29, 2015
('36097730', 'Hao Wang', 'hao wang')
98c548a4be0d3b62971e75259d7514feab14f884Deep generative-contrastive networks for facial expression recognition
Samsung Advanced Institute of Technology (SAIT), KAIST
('2310577', 'Youngsung Kim', 'youngsung kim')
('1757573', 'ByungIn Yoo', 'byungin yoo')
('9942811', 'Youngjun Kwak', 'youngjun kwak')
('36995891', 'Changkyu Choi', 'changkyu choi')
('1769295', 'Junmo Kim', 'junmo kim')
yo.s.ung.kim@samsung.com, byungin.yoo@kaist.ac.kr, yjk.kwak@samsung.com, changkyu choi@samsung.com,
junmo.kim@ee.kaist.ac.kr
9887ab220254859ffc7354d5189083a87c9bca6eGeneric Image Classification Approaches Excel on Face Recognition
Nanjing University of Science and Technology, China
The University of Adelaide, Australia
('2731972', 'Fumin Shen', 'fumin shen')
('1780381', 'Chunhua Shen', 'chunhua shen')
985cd420c00d2f53965faf63358e8c13d1951fa8Pixel-Level Hand Detection with Shape-aware
Structured Forests
Department of Computer Science
The University of Hong Kong
Pokfulam Road, Hong Kong
('35130187', 'Xiaolong Zhu', 'xiaolong zhu')
('34760532', 'Xuhui Jia', 'xuhui jia')
{xlzhu,xhjia,kykwong}@cs.hku.hk
981449cdd5b820268c0876477419cba50d5d1316Learning Deep Features for One-Class
Classification
('15206897', 'Pramuditha Perera', 'pramuditha perera')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
9821669a989a3df9d598c1b4332d17ae8e35e294Minimal Correlation Classification
The Blavatnik School of Computer Science, Tel Aviv University, Israel
('21494706', 'Noga Levy', 'noga levy')
('1776343', 'Lior Wolf', 'lior wolf')
9854145f2f64d52aac23c0301f4bb6657e32e562An Improved Face Verification Approach based on
Speedup Robust Features and Pairwise Matching
Center for Electrical Engineering and Informatics (CEEI)
Federal University of Campina Grande (UFCG
Campina Grande, Para´ıba, Brazil
('2092178', 'Herman Martins Gomes', 'herman martins gomes')Email: {edumoura,hmg}@dsc.ufcg.edu.br, carvalho@dee.ufcg.edu.br
9865fe20df8fe11717d92b5ea63469f59cf1635aYUCEL ET AL.: WILDEST FACES
Wildest Faces: Face Detection and
Recognition in Violent Settings
Pinar Duygulu1
1 Department of Computer Science
Hacettepe University
Ankara, Turkey
2 Department of Computer Engineering
Middle East Technical University
Ankara, Turkey
* indicates equal contribution.
('46234524', 'Mehmet Kerim Yucel', 'mehmet kerim yucel')
('39032755', 'Yunus Can Bilge', 'yunus can bilge')
('46437368', 'Oguzhan Oguz', 'oguzhan oguz')
('2011587', 'Nazli Ikizler-Cinbis', 'nazli ikizler-cinbis')
('1939006', 'Ramazan Gokberk Cinbis', 'ramazan gokberk cinbis')
mkerimyucel@hacettepe.edu.tr
yunuscan.bilge@hacettepe.edu.tr
oguzhan.oguz@hacettepe.edu.tr
nazli@cs.hacettepe.edu.tr
pinar@cs.hacettepe.edu.tr
gcinbis@ceng.metu.edu.tr
98c2053e0c31fab5bcb9ce5386335b647160cc09A Distributed Framework for Spatio-temporal Analysis on Large-scale Camera
Networks
Georgia Institute of Technology
University of Stuttgart
†SUNY Buffalo
('5540701', 'Kirak Hong', 'kirak hong')
('1723877', 'Venu Govindaraju', 'venu govindaraju')
('1752885', 'Bharat Jayaraman', 'bharat jayaraman')
('1751741', 'Umakishore Ramachandran', 'umakishore ramachandran')
{khong9, rama}@cc.gatech.edu
marco.voelz@ipvs.uni-stuttgart.de
{govind, bharat}@buffalo.edu
98127346920bdce9773aba6a2ffc8590b9558a4aNoname manuscript No.
(will be inserted by the editor)
Efficient Human Action Recognition using
Histograms of Motion Gradients and
VLAD with Descriptor Shape Information
Received: date / Accepted: date
('3429470', 'Ionut C. Duta', 'ionut c. duta')
('1796198', 'Bogdan Ionescu', 'bogdan ionescu')
('7661726', 'Alexander G. Hauptmann', 'alexander g. hauptmann')
98a660c15c821ea6d49a61c5061cd88e26c18c65IOSR Journal of Engineering (IOSRJEN)
e-ISSN: 2250-3021, p-ISSN: 2278-8719
Vol. 3, Issue 4 (April 2013), V1, PP 43–48
Face Databases for 2D and 3D Facial Recognition: A Survey
R. Senthilkumar1, Dr. R. K. Gnanamurthy2
Institute of Road and Transport Technology, Erode-638 316.
Odaiyappa College of Engineering and Technology, Theni-625 531.
982fed5c11e76dfef766ad9ff081bfa25e62415a
98fb3890c565f1d32049a524ec425ceda1da5c24A Robust Learning Framework Using PSM and
Ameliorated SVMs for Emotional Recognition
Graduate School of System Informatics, Kobe University, Kobe, 657-8501, Japan
('2866465', 'Jinhui Chen', 'jinhui chen')
('21172382', 'Yosuke Kitano', 'yosuke kitano')
('3207738', 'Yiting Li', 'yiting li')
('1744026', 'Tetsuya Takiguchi', 'tetsuya takiguchi')
('1678564', 'Yasuo Ariki', 'yasuo ariki')
{ianchen, kitano, liyiting }@me.cs.scitec.kobe-u.ac.jp
{takigu, ariki}@kobe-u.ac.jp
98519f3f615e7900578bc064a8fb4e5f429f3689Dictionary-based Domain Adaptation Methods
for the Re-identification of Faces
('2077648', 'Qiang Qiu', 'qiang qiu')
('38811046', 'Jie Ni', 'jie ni')
('9215658', 'Rama Chellappa', 'rama chellappa')
9825aa96f204c335ec23c2b872855ce0c98f9046International Journal of Ethics in Engineering & Management Education
Website: www.ijeee.in (ISSN: 2348-4748, Volume 1, Issue 5, May2014)
FACE AND FACIAL EXPRESSION
RECOGNITION IN 3-D USING MASKED
PROJECTION UNDER OCCLUSION
Jyoti patil *
M.Tech (CSE)
GNDEC Bidar-585401
BIDAR, INDIA
M.Tech (CSE)
GNDEC Bidar- 585401
BIDAR, INDIA
M.Tech (CSE)
VKIT, Bangalore- 560040
BANGALORE, INDIA
('39365176', 'Gouri Patil', 'gouri patil')
('4787347', 'Snehalata Patil', 'snehalata patil')
Email-jyoti.spatil35@gmail.com Email-greatgouri@gmail.com
Email-snehasharad09@gmail.com
9825c4dddeb2ed7eaab668b55403aa2c38bc3320Aerial Imagery for Roof Segmentation: A Large-Scale Dataset
towards Automatic Mapping of Buildings
aCenter for Spatial Information Science, University of Tokyo, Kashiwa 277-8568, Japan
University of Waterloo, Waterloo, ON N2L 3G1, Canada
cFaculty of Information Engineering, China University of Geosciences (Wuhan), Wuhan 430074, China
dAtlasAI Inc., Waterloo, ON N2L 3G1, Canada
('1783637', 'Qi Chen', 'qi chen')
('48169641', 'Lei Wang', 'lei wang')
('50117915', 'Yifan Wu', 'yifan wu')
('3043983', 'Guangming Wu', 'guangming wu')
('40477085', 'Zhiling Guo', 'zhiling guo')
980266ad6807531fea94252e8f2b771c20e173b3Continuous Regression for
Non-Rigid Image Alignment
Enrique Sánchez-Lozano1
Daniel González-Jiménez1
1Multimodal Information Area, Gradiant, Vigo, Pontevedra, 36310. Spain.
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, 15213. USA
('1707876', 'Fernando De la Torre', 'fernando de la torre'){esanchez,dgonzalez}@gradiant.org
ftorre@cs.cmu.edu
53d78c8dbac7c9be8eb148c6a9e1d672f1dd72f9Discriminative vs. Generative Object Recognition:
Objects, Faces, and the Web
Thesis by
In Partial Fulfillment of the Requirements
for the Degree of
Doctor of Philosophy
California Institute of Technology
Pasadena, California
2007
(Defended April 30, 2007)
('3075121', 'Alex Holub', 'alex holub')
533d14e539ae5cdca0ece392487a2b19106d468aBidirectional Multirate Reconstruction for Temporal Modeling in Videos
University of Technology Sydney
('2948393', 'Linchao Zhu', 'linchao zhu')
('2351434', 'Zhongwen Xu', 'zhongwen xu')
('1698559', 'Yi Yang', 'yi yang')
{zhulinchao7, zhongwen.s.xu, yee.i.yang}@gmail.com
5334ac0a6438483890d5eef64f6db93f44aacdf4
53dd25350d3b3aaf19beb2104f1e389e3442df61
53e081f5af505374c3b8491e9c4470fe77fe7934Unconstrained Realtime Facial Performance Capture
University of Southern California
† Industrial Light & Magic
Figure 1: Calibration-free realtime facial performance capture on highly occluded subjects using an RGB-D sensor.
('2519072', 'Pei-Lun Hsieh', 'pei-lun hsieh')
('1797422', 'Chongyang Ma', 'chongyang ma')
('2977637', 'Jihun Yu', 'jihun yu')
('1706574', 'Hao Li', 'hao li')
53698b91709112e5bb71eeeae94607db2aefc57cTwo-Stream Convolutional Networks
for Action Recognition in Videos
Visual Geometry Group, University of Oxford
('34838386', 'Karen Simonyan', 'karen simonyan')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
{karen,az}@robots.ox.ac.uk
5394d42fd27b7e14bd875ec71f31fdd2fcc8f923Visual Recognition Using Directional Distribution Distance
National Key Laboratory for Novel Software Technology
Nanjing University, China
Minieye, Youjia Innovation LLC
('1808816', 'Jianxin Wu', 'jianxin wu')
('2226422', 'Bin-Bin Gao', 'bin-bin gao')
('15527784', 'Guoqing Liu', 'guoqing liu')
guoqing@minieye.cc
wujx2001@nju.edu.cn, gaobb@lamda.nju.edu.cn
530243b61fa5aea19b454b7dbcac9f463ed0460e
5397c34a5e396658fa57e3ca0065a2878c3cced7Lighting Normalization with Generic Intrinsic Illumination Subspace for Face
Recognition
Institute of Information Science, Academia Sinica, Taipei, Taiwan
('1686057', 'Chia-Ping Chen', 'chia-ping chen')
('1720473', 'Chu-Song Chen', 'chu-song chen')
{cpchen, song}@iis.sinica.edu.tw
539ca9db570b5e43be0576bb250e1ba7a727d640
539287d8967cdeb3ef60d60157ee93e8724efcacLearning Deep ℓ0 Encoders
Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
University of Science and Technology of China, Hefei, 230027, China
('2969311', 'Zhangyang Wang', 'zhangyang wang')
('1682497', 'Qing Ling', 'qing ling')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
532f7ec8e0c8f7331417dd4a45dc2e89308740666060
2014 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP)
978-1-4799-2893-4/14/$31.00 ©2014 IEEE
Box 451, Thessaloniki 54124, GREECE
Aristotle University of Thessaloniki
Department of Informatics
tel: +30 2310 996361
1. INTRODUCTION
('1905139', 'Olga Zoidi', 'olga zoidi')
('1718330', 'Nikos Nikolaidis', 'nikos nikolaidis')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
{ozoidi, nikolaid, pitas}@aiia.csd.auth.gr
53c8cbc4a3a3752a74f79b74370ed8aeed97db85
53c36186bf0ffbe2f39165a1824c965c6394fe0dI Know How You Feel: Emotion Recognition with Facial Landmarks
Tooploox 2Polish-Japanese Academy of Information Technology 3Warsaw University of Technology
('22188614', 'Ivona Tautkute', 'ivona tautkute')
('1760267', 'Tomasz Trzcinski', 'tomasz trzcinski')
('48657002', 'Adam Bielski', 'adam bielski')
{firstname.lastname}@tooploox.com
5366573e96a1dadfcd4fd592f83017e378a0e185Böhlen, Chandola and Salunkhe
Server, server in the cloud.
Who is the fairest in the crowd?
53a41c711b40e7fe3dc2b12e0790933d9c99a6e0Recurrent Memory Addressing for describing videos
Indian Institute of Technology Kharagpur
('7284555', 'Arnav Kumar Jain', 'arnav kumar jain')
('6565766', 'Kumar Krishna Agrawal', 'kumar krishna agrawal')
('1781070', 'Pabitra Mitra', 'pabitra mitra')
{arnavkj95, abhinavagarawalla, kumarkrishna, pabitra}@iitkgp.ac.in
53bfe2ab770e74d064303f3bd2867e5bf7b86379Learning to Synthesize and Manipulate Natural Images
By
A dissertation submitted in partial satisfaction of the
requirements for the degree of
Doctor of Philosophy
in
Engineering - Electrical Engineering and Computer Science
in the
Graduate Division
of the
University of California, Berkeley
Committee in charge:
Professor Alexei A. Efros, Chair
Professor Jitendra Malik
Professor Ren Ng
Professor Michael DeWeese
Fall 2017
('3132726', 'Junyan Zhu', 'junyan zhu')
533bfb82c54f261e6a2b7ed7d31a2fd679c56d18Technical Report MSU-CSE-14-1
Unconstrained Face Recognition: Identifying a
Person of Interest from a Media Collection
('2180413', 'Lacey Best-Rowden', 'lacey best-rowden')
('34393045', 'Hu Han', 'hu han')
('40653304', 'Charles Otto', 'charles otto')
('1817623', 'Brendan Klare', 'brendan klare')
('6680444', 'Anil K. Jain', 'anil k. jain')
537d8c4c53604fd419918ec90d6ef28d045311d0Active Collaborative Ensemble Tracking
Graduate School of Informatics, Kyoto University
Yoshida-Honmachi, Sakyo Ward, Kyoto 606–8501, Japan
('2146623', 'Kourosh Meshgi', 'kourosh meshgi')
('31095396', 'Maryam Sadat Mirzaei', 'maryam sadat mirzaei')
('38809507', 'Shigeyuki Oba', 'shigeyuki oba')
('2851612', 'Shin Ishii', 'shin ishii')
meshgi-k@sys.i.kyoto-u.ac.jp
530ce1097d0681a0f9d3ce877c5ba31617b1d709
53ce84598052308b86ba79d873082853022aa7e9Optimized Method for Real-Time Face Recognition System Based
on PCA and Multiclass Support Vector Machine
IEEE Member, Shahid Rajaee Teacher training University
Tehran, Iran
Institute of Computer science, Shahid Bahonar University
Shiraz, Iran
Islamic Azad University, Science and Research Campus
Hamedan, Iran
('1763181', 'Reza Azad', 'reza azad')
('39864738', 'Babak Azad', 'babak azad')
('2904132', 'Iman Tavakoli Kazerooni', 'iman tavakoli kazerooni')
rezazad68@gmail.com
babak.babi72@gmail.com
iman_tavakoli2008@yahoo.com
3fbd68d1268922ee50c92b28bd23ca6669ff87e5
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 10, NO. 4, APRIL 2001
A Shape- and Texture-Based Enhanced Fisher
Classifier for Face Recognition
('39664966', 'Chengjun Liu', 'chengjun liu')
('1781577', 'Harry Wechsler', 'harry wechsler')
3fe4109ded039ac9d58eb9f5baa5327af30ad8b6Spatio-Temporal GrabCut Human Segmentation for Face and Pose Recovery
Antonio Hernández1
University of Barcelona, Gran Via de les Corts Catalanes 585, 08007 Barcelona, Spain
1 Computer Vision Center, Campus UAB, 08193 Bellaterra, Barcelona, Spain.
('3276130', 'Miguel Reyes', 'miguel reyes')
('7855312', 'Sergio Escalera', 'sergio escalera')
('1724155', 'Petia Radeva', 'petia radeva')
ahernandez@cvc.uab.es
mreyese@gmail.com
sergio@maia.ub.es
petia@cvc.uab.es
3f22a4383c55ceaafe7d3cfed1b9ef910559d639JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015
Robust Kronecker Component Analysis
('11352680', 'Mehdi Bahri', 'mehdi bahri')
('1780393', 'Yannis Panagakis', 'yannis panagakis')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
3fefc856a47726d19a9f1441168480cee6e9f5bbCarnegie Mellon University
Dissertations
Summer 8-2014
Theses and Dissertations
Perceptually Valid Dynamics for Smiles and Blinks
Carnegie Mellon University
Follow this and additional works at: http://repository.cmu.edu/dissertations
Recommended Citation
Trutoiu, Laura, "Perceptually Valid Dynamics for Smiles and Blinks" (2014). Dissertations. Paper 428.
('2048839', 'Laura Trutoiu', 'laura trutoiu')
Research Showcase @ CMU
3fdcc1e2ebcf236e8bb4a6ce7baf2db817f30001A top-down approach for a synthetic
autobiographical memory system
1Sheffield Centre for Robotics (SCentRo), Univ. of Sheffield, Sheffield, S10 2TN, UK
2Dept. of Computer Science, Univ. of Sheffield, Sheffield, S1 4DP, UK
3 CVAP Lab, KTH, Stockholm, Sweden
('2484138', 'Carl Henrik Ek', 'carl henrik ek')
('1739851', 'Neil D. Lawrence', 'neil d. lawrence')
('1750570', 'Tony J. Prescott', 'tony j. prescott')
andreas.damianou@shef.ac.uk
3f7cf52fb5bf7b622dce17bb9dfe747ce4a65b96Person identity label propagation in stereo videos
Department of Informatics
Aristotle University of Thessaloniki
Box 451, Thessaloniki 54124, GREECE
tel: +30 2310 996361
('1905139', 'Olga Zoidi', 'olga zoidi')
('1737071', 'Anastasios Tefas', 'anastasios tefas')
('1718330', 'Nikos Nikolaidis', 'nikos nikolaidis')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
{tefas, nikolaid, pitas}@aiia.csd.auth.gr
3f0c51989c516a7c5dee7dec4d7fb474ae6c28d9Not Afraid of the Dark: NIR-VIS Face Recognition via Cross-spectral
Hallucination and Low-rank Embedding
IIE, Universidad de la República, Uruguay. 2ECE, Duke University, USA
('2077648', 'Qiang Qiu', 'qiang qiu')
('1699339', 'Guillermo Sapiro', 'guillermo sapiro')
3f848d6424f3d666a1b6dd405a48a35a797dd147GHODRATI et al.: IS 2D INFORMATION ENOUGH FOR VIEWPOINT ESTIMATION?
Is 2D Information Enough For Viewpoint
Estimation?
KU Leuven, ESAT - PSI, iMinds
Leuven, Belgium
('3060081', 'Amir Ghodrati', 'amir ghodrati')
('3048367', 'Marco Pedersoli', 'marco pedersoli')
('1704728', 'Tinne Tuytelaars', 'tinne tuytelaars')
amir.ghodrati@esat.kuleuven.be
marco.pedersoli@esat.kuleuven.be
tinne.tuytelaars@esat.kuleuven.be
3fa738ab3c79eacdbfafa4c9950ef74f115a3d84DaMN – Discriminative and Mutually Nearest:
Exploiting Pairwise Category Proximity
for Video Action Recognition
1 Center for Research in Computer Vision at UCF, Orlando, USA
2 Google Research, Mountain View, USA
http://crcv.ucf.edu/projects/DaMN/
('2099254', 'Rui Hou', 'rui hou')
('40029556', 'Amir Roshan Zamir', 'amir roshan zamir')
('1694199', 'Rahul Sukthankar', 'rahul sukthankar')
('1745480', 'Mubarak Shah', 'mubarak shah')
3fb26f3abcf0d287243646426cd5ddeee33624d4Joint Training of Cascaded CNN for Face Detection
Grad. School at Shenzhen, Tsinghua University
Tsinghua University 4SenseTime
('2137185', 'Hongwei Qin', 'hongwei qin')
('1721677', 'Junjie Yan', 'junjie yan')
('2693308', 'Xiu Li', 'xiu li')
('1705418', 'Xiaolin Hu', 'xiaolin hu')
{qhw12@mails., li.xiu@sz., xlhu@}tsinghua.edu.cn yanjunjie@outlook.com
3f9ca2526013e358cd8caeb66a3d7161f5507cbcImproving Sparse Representation-Based Classification
Using Local Principal Component Analysis
Department of Mathematics
University of California, Davis
One Shields Avenue
Davis, California, 95616, United States
('32898818', 'Chelsea Weaver', 'chelsea weaver')
('3493752', 'Naoki Saito', 'naoki saito')
3f57c3fc2d9d4a230ccb57eed1d4f0b56062d4d5Face Recognition Across Poses Using A Single 3D Reference Model
National Taiwan University of Science and Technology
No. 43, Sec.4, Keelung Rd., Taipei, 106, Taiwan
('38801529', 'Gee-Sern Hsu', 'gee-sern hsu')
('3329222', 'Hsiao-Chia Peng', 'hsiao-chia peng')
∗jison@mail.ntust.edu.tw
3feb69531653e83d0986a0643e4a6210a088e3e5Using Group Prior to Identify People in Consumer Images
Carnegie Mellon University
Pittsburgh, Pennsylvania
Carnegie Mellon University
Pittsburgh, Pennsylvania
('39460815', 'Andrew C. Gallagher', 'andrew c. gallagher')
('1746230', 'Tsuhan Chen', 'tsuhan chen')
agallagh@cmu.edu
tsuhan@cmu.edu
3f12701449a82a5e01845001afab3580b92da858Joint Object Class Sequencing and Trajectory
Triangulation (JOST)
The University of North Carolina, Chapel Hill
('2873326', 'Enliang Zheng', 'enliang zheng')
('1751643', 'Ke Wang', 'ke wang')
('29274093', 'Enrique Dunn', 'enrique dunn')
('40454588', 'Jan-Michael Frahm', 'jan-michael frahm')
3fb98e76ffd8ba79e1c22eda4d640da0c037e98aConvolutional Neural Networks for Crop Yield Prediction using Satellite Images
H. Russello
3fde656343d3fd4223e08e0bc835552bff4bda40Available Online at www.ijcsmc.com
International Journal of Computer Science and Mobile Computing
A Monthly Journal of Computer Science and Information Technology
ISSN 2320–088X
IJCSMC, Vol. 2, Issue. 4, April 2013, pg.232 – 237
RESEARCH ARTICLE
Character Identification Using Graph
Matching Algorithm
Anna University Chennai, India
5Assistant Professor, Department Of Computer Science and Engineering,
K.S.R. College Of Engineering, Tiruchengode, India
('1795761', 'S. Bharathi', 's. bharathi')
('36510121', 'Ranjith Kumar', 'ranjith kumar')
1 rathiranya@gmail.com ; 2 manirathnam60@gmail.com ; 3 ramya1736@yahoo.com ; 4 ranjith.rhl@gmail.com
3f957142ef66f2921e7c8c7eadc8e548dccc1327Merging SVMs with Linear Discriminant Analysis: A Combined Model
Imperial College London, United Kingdom
EEMCS, University of Twente, Netherlands
('1793625', 'Symeon Nikitidis', 'symeon nikitidis')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1694605', 'Maja Pantic', 'maja pantic')
{s.nikitidis,s.zafeiriou,m.pantic}@imperial.ac.uk
3fdfd6fa7a1cc9142de1f53e4ac7c2a7ac64c2e3Intensity-Depth Face Alignment Using Cascade
Shape Regression
1 Center for Brain-like Computing and Machine Intelligence
Department of Computer Science and Engineering
Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
2 Key Laboratory of Shanghai Education Commission for
Intelligent Interaction and Cognitive Engineering
Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
('1740511', 'Yang Cao', 'yang cao')
('1715839', 'Bao-Liang Lu', 'bao-liang lu')
3f540faf85e1f8de6ce04fb37e556700b67e4ad3Article
Face Verification with Multi-Task and Multi-Scale
Feature Fusion
College of Sciences, Northeastern University, Shenyang 110819, China
New York University Shanghai, 1555 Century Ave, Pudong
Academic Editor: Maxim Raginsky
Received: 18 March 2017; Accepted: 13 May 2017; Published: 17 May 2017
('26337951', 'Xiaojun Lu', 'xiaojun lu')
('1983143', 'Yue Yang', 'yue yang')
('8030754', 'Weilin Zhang', 'weilin zhang')
('40435166', 'Qi Wang', 'qi wang')
('2295608', 'Yang Wang', 'yang wang')
luxiaojun@mail.neu.edu.cn (X.L.); YangY1503@163.com (Y.Y.); wangy_neu@163.com (Y.W.)
Shanghai 200122, China; wz723@nyu.edu
* Correspondence: wangqi@mail.neu.edu.cn; Tel.: +86-24-8368-7680
3f4bfa4e3655ef392eb5ad609d31c05f29826b45ROBUST MULTI-CAMERA VIEW FACE RECOGNITION
Department of Computer Science and Engineering
Dr. B. C. Roy Engineering College
Durgapur - 713206
India
Department of Computer Science and Engineering
National Institute of Technology Rourkela
Rourkela – 769008
India
Department of Computer Science and Engineering
Indian Institute of Technology Kanpur
Kanpur – 208016
India
Department of Computer Science and Engineering
Jadavpur University
Kolkata – 700032,
India
The face recognition system uses Gabor filter banks; applied to face images, these produce Gabor images.
('1810015', 'Dakshina Ranjan Kisku', 'dakshina ranjan kisku')
('1868921', 'Hunny Mehrotra', 'hunny mehrotra')
('1687389', 'Phalguni Gupta', 'phalguni gupta')
('1786127', 'Jamuna Kanta Sing', 'jamuna kanta sing')
drkisku@ieee.org; hunny04@gmail.com; pg@cse.iitk.ac.in; , jksing@ieee.org
3f5cf3771446da44d48f1d5ca2121c52975bb3d3
3fb4bf38d34f7f7e5b3df36de2413d34da3e174aTHOMAS AND KOVASHKA: PERSUASIVE FACES: GENERATING FACES IN ADS
Persuasive Faces: Generating Faces in
Advertisements
Department of Computer Science
University of Pittsburgh
Pittsburgh, PA USA
('40540691', 'Christopher Thomas', 'christopher thomas')
('1770205', 'Adriana Kovashka', 'adriana kovashka')
chris@cs.pitt.edu
kovashka@cs.pitt.edu
3f14b504c2b37a0e8119fbda0eff52efb2eb2461
Joint Facial Action Unit Detection and Feature
Fusion: A Multi-Conditional Learning Approach
('2308430', 'Stefanos Eleftheriadis', 'stefanos eleftheriadis')
('1729713', 'Ognjen Rudovic', 'ognjen rudovic')
('1694605', 'Maja Pantic', 'maja pantic')
3fac7c60136a67b320fc1c132fde45205cd2ac66Remarks on Computational Facial Expression
Recognition from HOG Features Using
Quaternion Multi-layer Neural Network
Information Systems Design, Doshisha University, Kyoto, Japan
Graduate School of Doshisha University, Kyoto, Japan
Intelligent Information Engineering and Science, Doshisha University, Kyoto, Japan
('39452921', 'Kazuhiko Takahashi', 'kazuhiko takahashi')
('10728256', 'Sae Takahashi', 'sae takahashi')
('1824476', 'Yunduan Cui', 'yunduan cui')
('2565962', 'Masafumi Hashimoto', 'masafumi hashimoto')
{katakaha@mail,buj1078@mail4}.doshisha.ac.jp
dum3101@mail4.doshisha.ac.jp
mhashimo@mail.doshisha.ac.jp
3f9a7d690db82cf5c3940fbb06b827ced59ec01eVIP: Finding Important People in Images
Virginia Tech
Google Inc.
Virginia Tech
Project: https://computing.ece.vt.edu/~mclint/vip/
Demo: http://cloudcv.org/vip/
('3085140', 'Clint Solomon Mathialagan', 'clint solomon mathialagan')
('39460815', 'Andrew C. Gallagher', 'andrew c. gallagher')
('1746610', 'Dhruv Batra', 'dhruv batra')
3fd90098551bf88c7509521adf1c0ba9b5dfeb57Attribute-Based Classification for Zero-Shot
Visual Object Categorization
('1787591', 'Christoph H. Lampert', 'christoph h. lampert')
('1748758', 'Hannes Nickisch', 'hannes nickisch')
('1734990', 'Stefan Harmeling', 'stefan harmeling')
3f623bb0c9c766a5ac612df248f4a59288e4d29fGenetic Programming for Region Detection,
Feature Extraction, Feature Construction and
Classification in Image Data
School of Engineering and Computer Science,
Victoria University of Wellington, PO Box 600, Wellington 6140, New Zealand
('39251110', 'Andrew Lensen', 'andrew lensen')
('2480750', 'Harith Al-Sahaf', 'harith al-sahaf')
('1679067', 'Mengjie Zhang', 'mengjie zhang')
('1712740', 'Bing Xue', 'bing xue')
{Andrew.Lensen,Harith.Al-Sahaf,Mengjie.Zhang,Bing.Xue}@ecs.vuw.ac.nz
3f4798c7701da044bdb7feb61ebdbd1d53df5cfeVECTOR QUANTIZATION WITH CONSTRAINED LIKELIHOOD FOR FACE
RECOGNITION
University of Geneva
Computer Science Department, Stochastic Information Processing Group
7 Route de Drize, Geneva, Switzerland
('36133844', 'Dimche Kostadinov', 'dimche kostadinov')
('8995309', 'Sviatoslav Voloshynovskiy', 'sviatoslav voloshynovskiy')
('2771643', 'Maurits Diephuis', 'maurits diephuis')
('1682792', 'Sohrab Ferdowsi', 'sohrab ferdowsi')
3f4c262d836b2867a53eefb959057350bf7219c9Eastern Mediterranean University
Gazimağusa, Mersin 10, TURKEY.

Recognizing Faces under Facial Expression Variations and Partial Occlusions
('2108310', 'TIWUYA H. FAAYA', 'tiwuya h. faaya')
3f7723ab51417b85aa909e739fc4c43c64bf3e84Improved Performance in Facial Expression
Recognition Using 32 Geometric Features
University of Bari, Bari, Italy
National Institute of Optics, National Research Council, Arnesano, LE, Italy
('2235498', 'Giuseppe Palestra', 'giuseppe palestra')
('39814343', 'Adriana Pettinicchio', 'adriana pettinicchio')
('33097940', 'Marco Del Coco', 'marco del coco')
('4730472', 'Marco Leo', 'marco leo')
('1741861', 'Cosimo Distante', 'cosimo distante')
giuseppe.palestra@gmail.com
3f5e8f884e71310d7d5571bd98e5a049b8175075Making a Science of Model Search: Hyperparameter Optimization
in Hundreds of Dimensions for Vision Architectures
J. Bergstra
Rowland Institute at Harvard
100 Edwin H. Land Boulevard
Cambridge, MA 02142, USA
D. Yamins
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139, USA
D. D. Cox
Rowland Institute at Harvard
100 Edwin H. Land Boulevard
Cambridge, MA 02142, USA
3f63f9aaec8ba1fa801d131e3680900680f14139Facial Expression Recognition using Local Binary
Patterns and Kullback Leibler Divergence
Anusha Vupputuri, Sukadev Meher

3f0e0739677eb53a9d16feafc2d9a881b9677b63Efficient Two-Stream Motion and Appearance 3D CNNs for
Video Classification
ESAT-KU Leuven
Ali Pazandeh
Sharif UTech
ESAT-KU Leuven, ETH Zurich
('3310120', 'Ali Diba', 'ali diba')
('1681236', 'Luc Van Gool', 'luc van gool')
ali.diba@esat.kuleuven.be
pazandeh@ee.sharif.ir
luc.vangool@esat.kuleuven.be
3f5693584d7dab13ffc12122d6ddbf862783028bRanking CGANs: Subjective Control over Semantic Image
Attributes
University of Bath
('41020280', 'Yassir Saquil', 'yassir saquil')
('1808255', 'Kwang In Kim', 'kwang in kim')
30b15cdb72760f20f80e04157b57be9029d8a1abFace Aging with Identity-Preserved
Conditional Generative Adversarial Networks
Shanghaitech University
Baidu
Shanghaitech University
('50219041', 'Zongwei Wang', 'zongwei wang')
('48785141', 'Xu Tang', 'xu tang')
('2074878', 'Weixin Luo', 'weixin luo')
('1702868', 'Shenghua Gao', 'shenghua gao')
wangzw@shanghaitech.edu.cn
tangxu02@baidu.com
{luowx, gaoshh}@shanghaitech.edu.cn
3039627fa612c184228b0bed0a8c03c7f754748cRobust Regression on Image Manifolds for Ordered Label Denoising
University of North Carolina at Charlotte
('1873911', 'Hui Wu', 'hui wu')
('1690110', 'Richard Souvenir', 'richard souvenir')
{hwu13,souvenir}@uncc.edu
30870ef75aa57e41f54310283c0057451c8c822bOvercoming Catastrophic Forgetting with Hard Attention to the Task ('50101040', 'Marius Miron', 'marius miron')
303065c44cf847849d04da16b8b1d9a120cef73a
305346d01298edeb5c6dc8b55679e8f60ba97efbArticle
Fine-Grained Face Annotation Using Deep
Multi-Task CNN
Systems and Communication, University of Milano-Bicocca
Received: 3 July 2018; Accepted: 13 August 2018; Published: 14 August 2018
('3390122', 'Luigi Celona', 'luigi celona')
('2217051', 'Simone Bianco', 'simone bianco')
('1743714', 'Raimondo Schettini', 'raimondo schettini')
viale Sarca, 336 Milano, Italy; bianco@disco.unimib.it (S.B.); schettini@disco.unimib.it (R.S.)
* Correspondence: luigi.celona@disco.unimib.it
303a7099c01530fa0beb197eb1305b574168b653Occlusion-free Face Alignment: Deep Regression Networks Coupled with
De-corrupt AutoEncoders
1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing 100190, China
University of Chinese Academy of Sciences, Beijing 100049, China
3CAS Center for Excellence in Brain Science and Intelligence Technology
('1698586', 'Jie Zhang', 'jie zhang')
('1693589', 'Meina Kan', 'meina kan')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
{jie.zhang,meina.kan,shiguang.shan,xilin.chen}@vipl.ict.ac.cn
30cd39388b5c1aae7d8153c0ab9d54b61b474ffeDeep Cascaded Regression for Face Alignment
School of Data and Computer Science, Sun Yat-Sen University, China
National University of Singapore, Singapore
algorithm refines the shape by estimating a shape increment ∆S. In particular, at stage k the current estimate is updated as S^k = S^{k-1} + ∆S^k, where the increment ∆S^k is regressed from image features extracted around S^{k-1}.
('3124720', 'Shengtao Xiao', 'shengtao xiao')
('10338111', 'Zhen Cui', 'zhen cui')
('40080379', 'Yan Pan', 'yan pan')
('3029624', 'Chunyan Xu', 'chunyan xu')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
303517dfc327c3004ae866a6a340f16bab2ee3e3Inte rnational Journal of Engineering Technology, Manage ment and Applied Sciences
www.ijetmas.com August 2014, Volume 2 Issue 3, ISSN 2349-4476

Using Locality Preserving Projections in
Face Recognition
Galaxy Global Imperial Technical Campus
Galaxy Global Imperial Technical Campus
DIT UNIVERSITY, DEHRADUN
('34272062', 'PRACHI BANSAL', 'prachi bansal')
30fd1363fa14965e3ab48a7d6235e4b3516c1da1A Deep Semi-NMF Model for Learning Hidden Representations
Stefanos Zafeiriou
Bj¨orn W. Schuller
Imperial College London, United Kingdom
('2814229', 'George Trigeorgis', 'george trigeorgis')
('2732737', 'Konstantinos Bousmalis', 'konstantinos bousmalis')
GEORGE.TRIGEORGIS08@IMPERIAL.AC.UK
K.BOUSMALIS@IMPERIAL.AC.UK
S.ZAFEIRIOU@IMPERIAL.AC.UK
BJOERN.SCHULLER@IMPERIAL.AC.UK
309e17e6223e13b1f76b5b0eaa123b96ef22f51bFace Recognition based on a 3D Morphable Model
University of Siegen
Hölderlinstr. 3
57068 Siegen, Germany
('2880906', 'Volker Blanz', 'volker blanz')blanz@informatik.uni-siegen.de
3046baea53360a8c5653f09f0a31581da384202eDeformable Face Alignment via Local
Measurements and Global Constraints
('2398245', 'Jason M. Saragih', 'jason m. saragih')
3026722b4cbe9223eda6ff2822140172e44ed4b1Jointly Estimating Demographics and Height with a Calibrated Camera
Eastman Kodak Company
Eastman Kodak Company
Cornell University
('39460815', 'Andrew C. Gallagher', 'andrew c. gallagher')
('2224373', 'Andrew C. Blose', 'andrew c. blose')
('1746230', 'Tsuhan Chen', 'tsuhan chen')
andrew.gallagher@kodak.com
andrew.blose@kodak.com
tsuhan@ece.cornell.edu
3028690d00bd95f20842d4aec84dc96de1db6e59Leveraging Union of Subspace Structure to Improve Constrained Clustering ('1782134', 'John Lipor', 'john lipor')
30c96cc041bafa4f480b7b1eb5c45999701fe066
Discrete Cosine Transform Locality-Sensitive
Hashes for Face Retrieval
('1784929', 'Mehran Kafai', 'mehran kafai')
('1745657', 'Kave Eshghi', 'kave eshghi')
('1707159', 'Bir Bhanu', 'bir bhanu')
306957285fea4ce11a14641c3497d01b46095989FACE RECOGNITION UNDER VARYING LIGHTING BASED ON
DERIVATES OF LOG IMAGE
2ICT-ISVISION Joint R&D Laboratory for Face Recognition, CAS, Beijing 100080, China
1Graduate School, CAS, Beijing, 100039, China
('2343895', 'Laiyun Qing', 'laiyun qing')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1698902', 'Wen Gao', 'wen gao')
304b1f14ca6a37552dbfac443f3d5b36dbe1a451Collaborative Low-Rank Subspace Clustering
aSchool of Computing and Mathematics, Charles Sturt University, Bathurst, NSW
bDiscipline of Business Analytics, The University of Sydney Business School
The University of Sydney, NSW 2006, Australia
cCentre for Research in Mathematics, School of Computing, Engineering and Mathematics,
Western Sydney University, Parramatta, NSW 2150, Australia
Australia
('40635684', 'Stephen Tierney', 'stephen tierney')
('1767638', 'Yi Guo', 'yi guo')
('1750488', 'Junbin Gao', 'junbin gao')
306127c3197eb5544ab1e1bf8279a01e0df26120Sparse Coding and Dictionary Learning with Linear Dynamical Systems∗
Tsinghua University, State Key Lab. of Intelligent
Technology and Systems, Tsinghua National Lab. for Information Science and Technology (TNList);
Australian National University and NICTA, Australia
('36823190', 'Fuchun Sun', 'fuchun sun')
('1678783', 'Deli Zhao', 'deli zhao')
('2641547', 'Huaping Liu', 'huaping liu')
('23911916', 'Mehrtash Harandi', 'mehrtash harandi')
1{huangwb12@mails, fcsun@mail, caoll12@mails, hpliu@mail}.tsinghua.edu.cn,
2zhaodeli@gmail.com, 3Mehrtash.Harandi@nicta.com.au,
307a810d1bf6f747b1bd697a8a642afbd649613dAn affordable contactless security system access
for restricted area
Laboratory Le2i
University Bourgogne Franche-Comté, France
2 Odalid company, France
Keywords – Smart Camera, Real-time Image Processing, Biometrics, Face Detection, Face Verification, Eigenfaces, Support Vector Machine.

We present in this paper a security system based on an identity verification process and a low-cost smart camera, intended to prevent unauthorized access to restricted areas. The Le2i laboratory has longstanding experience in smart camera implementation and design [1], for example in real-time classical face detection [2] and human fall detection [3].

The principle of the system, fully conceived and designed in our laboratory, is as follows: the allowed user presents an RFID card to a reader based on the Odalid system [4]. The card ID and the time and date of authorized access are checked through a connection to an online server. At the same time, multi-modality identity verification is performed using the camera.

There are many ways to perform face recognition and face verification. As a first approach, we implemented standard face localization using a Haar cascade [5] and a verification process based on Eigenfaces for feature extraction, trained on the ORL (AT&T) face database [6], with an SVM for verification [7].

The training step was performed with 10-fold cross-validation, using the first 3000 faces of the LFW face database [8] as the unauthorized class and 20 known faces as the authorized class. The testing step used the remainder of the LFW database and 40 other faces of the same known persons. The false positive and false negative rates are respectively 0.004% and 1.39%, with standard deviations of respectively 0.006% and 2.08%, corresponding to a precision of 98.9% and a recall of 98.6%.

The current PC-based implementation has been designed to be easily deployed on a Raspberry Pi 3 or a similar target. A combination of Eigenfaces [9], Fisherfaces [9], Local Binary Patterns [9], and Generalized Fourier Descriptors [10] will also be studied.

However, it is known that using a single modality such as standard face luminosity for identity control often leads to ergonomics problems, due to the high intra-variability of human faces [11]. Recent work published in the literature and developed in our laboratory showed that it is possible to extract precise multispectral body information from a standard camera.

The next step, and the originality of our system, resides in considering a near-infrared or multispectral approach in order to improve both the security level (by decreasing the false positive rate) and ergonomics (by decreasing the false negative rate).

The proposed platform enables security access to be improved and original solutions based on specific illumination to be investigated.
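The Eigenfaces-plus-SVM verification pipeline outlined above can be sketched with scikit-learn as follows; the face arrays, image size, and hyperparameters are placeholders for illustration, not the authors' actual configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Placeholder data: each row is a flattened grayscale face crop
# (ORL-sized 92x112 images); label 1 = authorized, 0 = unauthorized.
rng = np.random.default_rng(0)
X_train = rng.random((120, 92 * 112))
y_train = np.r_[np.ones(20, dtype=int), np.zeros(100, dtype=int)]

# Eigenfaces = PCA on the face vectors; the SVM then separates the
# authorized and unauthorized classes in the reduced eigenface space.
verifier = make_pipeline(PCA(n_components=50, whiten=True),
                         SVC(kernel="rbf", class_weight="balanced"))
verifier.fit(X_train, y_train)

# Verify a probe face captured by the camera (random placeholder here).
probe = rng.random((1, 92 * 112))
print("authorized" if verifier.predict(probe)[0] == 1 else "rejected")
```

In a deployment the random arrays would be replaced by face crops returned by the Haar-cascade detector, and the decision threshold tuned on the cross-validation folds.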
ACKNOWLEDGMENT
This research was supported by the Conseil Régional de Bourgogne Franche-Comté and the Institut Carnot ARTS.
REFERENCES
[1] R. Mosqueron, J. Dubois, M. Mattavelli, D. Mauvilet, "Smart camera based on embedded HW/SW coprocessor," EURASIP Journal on Embedded Systems, pp. 3:1-3:13, Hindawi Publishing Corp., 2008.
[2] K. Khattab, J. Dubois, J. Miteran, "Cascade Boosting-Based Object Detection from High-Level Description to Hardware Implementation," EURASIP Journal on Embedded Systems, August 2009.
[3] B. Senouci, I. Charfi, B. Barthelemy, J. Dubois, J. Miteran, "Fast prototyping of a SoC-based smart-camera: a real-time fall detection case study," Journal of Real-Time Image Processing, pp. 1-14, 2014.
[4] http://odalid.com/
[5] P. Viola, M. J. Jones, "Robust Real-Time Face Detection," International Journal of Computer Vision, pp. 137-154, May 2004.
[6] www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
[7] K. Jonsson, J. Kittler, Y. P. Li, J. Matas, "Support Vector Machines for Face Authentication," Image and Vision Computing, pp. 543-553, 1999.
[8] G. B. Huang, M. Ramesh, T. Berg, E. Learned-Miller, "Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments," Technical Report 07-49, October 2007.
[9] R. Jafri, H. R. Arabnia, "A Survey of Face Recognition Techniques," Journal of Information Processing Systems, pp. 41-68, June 2009.
[10] F. Smach, C. Lemaitre, J.-P. Gauthier, J. Miteran, M. Atri, "Generalized Fourier Descriptors with Applications to Objects Recognition in SVM Context," Journal of Mathematical Imaging and Vision, pp. 43-47, 2007.
[11] T. Bourlai, B. Cukic, "Multi-Spectral Face Recognition: Identification of People in Difficult Environments," pp. 196-201, June 2012.
('2787483', 'Johel Mitéran', 'johel mitéran')
('2274333', 'Barthélémy Heyrman', 'barthélémy heyrman')
('1873153', 'Dominique Ginhac', 'dominique ginhac')
('33359945', 'Julien Dubois', 'julien dubois')
Contact julien.dubois@u-bourgogne.fr
30180f66d5b4b7c0367e4b43e2b55367b72d6d2aTemplate Adaptation for Face Verification and Identification
1 Systems and Technology Research, Woburn MA USA
2 Visionary Systems and Research, Framingham, MA USA
Visual Geometry Group, University of Oxford, Oxford UK
('3390731', 'Nate Crosswhite', 'nate crosswhite')
('36067742', 'Jeffrey Byrne', 'jeffrey byrne')
('34712076', 'Chris Stauffer', 'chris stauffer')
('1954340', 'Qiong Cao', 'qiong cao')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
3083d2c6d4f456e01cbb72930dc2207af98a6244
Perceived Age Estimation from Face Images
1NEC Soft, Ltd.
Tokyo Institute of Technology
Japan
1. Introduction
In recent years, demographic analysis in public places such as shopping malls and stations
is attracting a great deal of attention. Such demographic information is useful for various
purposes, e.g., designing effective marketing strategies and targeted advertisement based
on customers’ gender and age.
For this reason, a number of approaches have been
explored for age estimation from face images (Fu et al., 2007; Geng et al., 2006; Guo et al.,
2009), and several databases became publicly available recently (FG-Net Aging Database,
n.d.; Phillips et al., 2005; Ricanek & Tesafaye, 2006).
It has been reported that age can be
accurately estimated under controlled conditions such as frontal faces, no expression, and
static lighting conditions. However, it is not straightforward to achieve the same accuracy
level in a real-world environment due to considerable variations in camera settings, facial
poses, and illumination conditions. The recognition performance of age prediction systems is
significantly influenced by such factors as the type of camera, camera calibration, and lighting
variations. On the other hand, the publicly available databases were mainly collected in
semi-controlled environments. For this reason, existing age prediction systems built upon
such databases tend to perform poorly in a real-world environment.
In this chapter, we address the problem of perceived age estimation from face images, and
describe our new approaches proposed in Ueki et al. (2010) and Ueki et al. (2011), which
involve three novel aspects.
The first novelty of our proposed approaches is to take the heterogeneous characteristics of
human age perception into account.
It is rare to misjudge the age of a 5-year-old child as
15 years old, but the age of a 35-year-old person is often misjudged as 45 years old. Thus,
the magnitude of the error differs depending on the subject's age. We carried out a large-scale
questionnaire survey for quantifying human age perception characteristics, and propose to
utilize the quantified characteristics in the framework of weighted regression.
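The weighted-regression idea above can be illustrated with a toy weighted least-squares fit; the perception-spread function `perception_sigma` below is an assumed shape for illustration only, not the characteristics actually quantified by the survey.

```python
import numpy as np

# Assumed perception-spread function sigma(age): humans judge children's
# ages precisely and older subjects' ages more loosely. The true shape
# would come from the questionnaire survey; this one is illustrative.
def perception_sigma(age):
    return 1.0 + 0.15 * np.asarray(age)

rng = np.random.default_rng(1)
ages = rng.uniform(5, 70, 200)                      # true ages
X = np.c_[np.ones_like(ages), ages, ages ** 2]      # toy quadratic model
y = ages + rng.normal(0.0, perception_sigma(ages))  # noisy perceived ages

# Weighted least squares: weight each sample by 1/sigma^2 so that age
# ranges people judge consistently dominate the fit.
w = 1.0 / perception_sigma(ages) ** 2
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
pred = X @ beta
print("weighted MAE:", round(float(np.mean(np.abs(pred - y))), 2))
```

The same weighting carries over unchanged to kernel or manifold regressors: only the per-sample loss weights change.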
The second is an efficient active learning strategy for reducing the cost of labeling face
samples. Given a large number of unlabeled face samples, we reveal the cluster structure
of the data and propose to label cluster-representative samples for covering as many
clusters as possible. This simple sampling strategy allows us to boost the performance of
a manifold-based semi-supervised learning method only with a relatively small number of
labeled samples.
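The cluster-representative sampling strategy described above can be sketched as follows; the descriptor pool and cluster count are placeholders, and k-means stands in for whatever clustering reveals the data's structure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled pool of face descriptors (random placeholders for features).
rng = np.random.default_rng(2)
pool = rng.random((500, 64))

# Reveal the cluster structure, then request a label only for the sample
# nearest each centroid: one representative per cluster covers the pool.
k = 10
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pool)

to_label = []
for c in range(k):
    members = np.flatnonzero(km.labels_ == c)
    dists = np.linalg.norm(pool[members] - km.cluster_centers_[c], axis=1)
    to_label.append(int(members[np.argmin(dists)]))

print("samples to label:", sorted(to_label))
```

The selected indices are the ones a human annotator would label before running the semi-supervised learner over the full pool.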
The third contribution is to apply a recently proposed machine learning technique called
covariate shift adaptation (Shimodaira, 2000; Sugiyama & Kawanabe, 2011; Sugiyama et al.,
('2163491', 'Kazuya Ueki', 'kazuya ueki')
('1853974', 'Yasuyuki Ihara', 'yasuyuki ihara')
('1719221', 'Masashi Sugiyama', 'masashi sugiyama')
30cbd41e997445745b6edd31f2ebcc7533453b61What Makes a Video a Video: Analyzing Temporal Information in Video
Understanding Models and Datasets
Stanford University, 2Facebook, 3Dartmouth College
('38485317', 'De-An Huang', 'de-an huang')
('34066479', 'Vignesh Ramanathan', 'vignesh ramanathan')
('49274550', 'Dhruv Mahajan', 'dhruv mahajan')
('1732879', 'Lorenzo Torresani', 'lorenzo torresani')
('2210374', 'Manohar Paluri', 'manohar paluri')
('3216322', 'Li Fei-Fei', 'li fei-fei')
('9200530', 'Juan Carlos Niebles', 'juan carlos niebles')
302c9c105d49c1348b8f1d8cc47bead70e2acf08This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TCSVT.2017.2710120, IEEE
Transactions on Circuits and Systems for Video Technology
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Unconstrained Face Recognition Using A Set-to-Set
Distance Measure
('4712803', 'Jiaojiao Zhao', 'jiaojiao zhao')
('1783847', 'Jungong Han', 'jungong han')
304a306d2a55ea41c2355bd9310e332fa76b3cb0
301b0da87027d6472b98361729faecf6e1d5e5f6HEAD POSE ESTIMATION IN FACE RECOGNITION ACROSS
POSE SCENARIOS
Computer vision and Remote Sensing, Berlin university of Technology
Sekr. FR-3-1, Franklinstr. 28/29, D-10587, Berlin, Germany.
Keywords:
Pose estimation, facial pose, face recognition, local energy models, shape description, local features, head
pose classification.
('4241648', 'M. Saquib Sarfraz', 'm. saquib sarfraz')
('2962236', 'Olaf Hellwich', 'olaf hellwich')
{saquib;hellwich}@fpk.tu-berlin.de
30b103d59f8460d80bb9eac0aa09aaa56c98494fEnhancing Human Action Recognition with Region Proposals
Australian Centre for Robotic Vision(ACRV), School of Electrical Engineering and Computer Science
Queensland University of Technology(QUT
('2256817', 'Fahimeh Rezazadegan', 'fahimeh rezazadegan')
('34686772', 'Sareh Shirazi', 'sareh shirazi')
('1771913', 'Niko Sünderhauf', 'niko sünderhauf')
('1809144', 'Michael Milford', 'michael milford')
('1803115', 'Ben Upcroft', 'ben upcroft')
fahimeh.rezazadegan@qut.edu.au
5e59193a0fc22a0c37301fb05b198dd96df94266Example-Based Modeling of Facial Texture from Deficient Data
1 IMB / LaBRI, Université de Bordeaux, France
University of York, UK
('34895713', 'Arnaud Dessein', 'arnaud dessein')
('1679753', 'Edwin R. Hancock', 'edwin r. hancock')
('1687021', 'William A. P. Smith', 'william a. p. smith')
('1718243', 'Richard C. Wilson', 'richard c. wilson')
5e6f546a50ed97658be9310d5e0a67891fe8a102Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?
National Institute of Advanced Industrial Science and Technology (AIST
Tsukuba, Ibaraki, Japan
('2199251', 'Kensho Hara', 'kensho hara')
('1730200', 'Hirokatsu Kataoka', 'hirokatsu kataoka')
('1732705', 'Yutaka Satoh', 'yutaka satoh')
{kensho.hara, hirokatsu.kataoka, yu.satou}@aist.go.jp
5e0eb34aeb2b58000726540336771053ecd335fcLow-Quality Video Face Recognition with Deep
Networks and Polygonal Chain Distance
Vision and Fusion Lab, Karlsruhe Institute of Technology KIT, Karlsruhe, Germany
†Fraunhofer IOSB, Karlsruhe, Germany
('37646107', 'Christian Herrmann', 'christian herrmann')
('1783486', 'Dieter Willersinn', 'dieter willersinn')
{christian.herrmann|dieter.willersinn|juergen.beyerer}@iosb.fraunhofer.de
5e7e055ef9ba6e8566a400a8b1c6d8f827099553Accepted manuscripts are peer-reviewed but have not been through the copyediting, formatting, or proofreading process. Copyright © 2018 the authors. This Accepted Manuscript has not been copyedited and formatted; the final version may differ from this version.
Research Articles: Behavioral/Cognitive
On the role of cortex-basal ganglia interactions for category learning: A neuro-computational approach
Francesc Villagrasa1, Javier Baladron1, Julien Vitay1, Henning Schroll1, Evan G. Antzoulatos2, Earl K. Miller3 and Fred H. Hamker1
1Chemnitz University of Technology, Department of Computer Science, 09107 Chemnitz, Germany
2UC Davis Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, Davis, CA 95616, United States
3The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
DOI: 10.1523/JNEUROSCI.0874-18.2018
Received: 5 April 2018; Revised: 7 August 2018; Accepted: 28 August 2018; Published: 18 September 2018
Author contributions: F.V., J.V., E.G.A., and F.H.H. performed research; F.V., J.B., J.V., H.S., E.G.A., and E.K.M. analyzed data; F.V. wrote the first draft of the paper; J.B. and F.H.H. designed research; J.B., J.V., H.S., E.G.A., E.K.M., and F.H.H. edited the paper; F.H.H. wrote the paper.
Conflict of Interest: The authors declare no competing financial interests.
This work has been supported by the German Research Foundation (DFG, grant agreements no. HA2630/4-2 and HA2630/8-1), the European Social Fund and the Free State of Saxony (ESF, grant agreement no. ESF-100269974), the NIMH R01MH065252, and the MIT Picower Institute Innovation Fund.
Corresponding author: Fred H. Hamker, fred.hamker@informatik.tu-chemnitz.de, 09107 Chemnitz, Germany
Cite as: J. Neurosci; 10.1523/JNEUROSCI.0874-18.2018
5e28673a930131b1ee50d11f69573c17db8fff3eAuthor manuscript, published in "Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition, Marseille : France
(2008)"
5ea9063b44b56d9c1942b8484572790dff82731eMULTICLASS SUPPORT VECTOR MACHINES AND METRIC MULTIDIMENSIONAL
SCALING FOR FACIAL EXPRESSION RECOGNITION
Irene Kotsia†, Stefanos Zafeiriou†, Nikolaos Nikolaidis† and Ioannis Pitas†
†Aristotle University of Thessaloniki
Thessaloniki, Greece
email: {ekotsia, dralbert, nikolaid, pitas}@aiia.csd.auth.gr
5e16f10f2d667d17c029622b9278b6b0a206d394Learning to Rank Binary Codes
Columbia University
IBM T. J. Watson Research Center
Columbia University
('1710567', 'Jie Feng', 'jie feng')
('1722649', 'Wei Liu', 'wei liu')
('1678691', 'Yan Wang', 'yan wang')
5ef3e7a2c8d2876f3c77c5df2bbaea8a777051a7Rendering or normalization?
An analysis of the 3D-aided pose-invariant face recognition
Computational Biomedicine Lab
University of Houston, Houston, TX, USA
('2461369', 'Yuhang Wu', 'yuhang wu')
('2700399', 'Shishir K. Shah', 'shishir k. shah')
('1706204', 'Ioannis A. Kakadiaris', 'ioannis a. kakadiaris')
ywu36@uh.edu {sshah,ikakadia}@central.uh.edu
5ea165d2bbd305dc125415487ef061bce75dac7dEfficient Human Action Recognition by Luminance Field Trajectory and Geometry Information
Hong Kong Polytechnic University, Hong Kong, China
2BBN Technologies, Cambridge, MA 02138, USA
('3079962', 'Haomian Zheng', 'haomian zheng')
('2659956', 'Zhu Li', 'zhu li')
('1708679', 'Yun Fu', 'yun fu')
{cshmzheng,cszli}@comp.polyu.edu.hk, yfu@bbn.com
5e6ba16cddd1797853d8898de52c1f1f44a73279Face Identification with Second-Order Pooling ('2731972', 'Fumin Shen', 'fumin shen')
('1780381', 'Chunhua Shen', 'chunhua shen')
('1724393', 'Heng Tao Shen', 'heng tao shen')
5ea9cba00f74d2e113a10c484ebe4b5780493964Automated Drowsiness Detection For Improved
Driving Safety
Sabanci University
Faculty of
Engineering and Natural Sciences
Orhanli, Istanbul
University of California San Diego
Institute of
Neural Computation
La Jolla, San Diego
('40322754', 'Esra Vural', 'esra vural')
('21691177', 'Mujdat Cetin', 'mujdat cetin')
('2724380', 'Gwen Littlewort', 'gwen littlewort')
('1858421', 'Marian Bartlett', 'marian bartlett')
('29794862', 'Javier Movellan', 'javier movellan')
5e80e2ffb264b89d1e2c468fbc1b9174f0e27f43Naming Every Individual in News Video Monologues
School of Computer Science
Carnegie Mellon University
5000 Forbes Ave., Pittsburgh, PA 15213, USA
1-412-268-{9747, 1448}
('38936351', 'Jun Yang', 'jun yang')
('7661726', 'Alexander G. Hauptmann', 'alexander g. hauptmann')
{juny, alex}@cs.cmu.edu
5ec94adc9e0f282597f943ea9f4502a2a34ecfc2Leveraging the Power of Gabor Phase for Face
Identification: A Block Matching Approach
KTH, Royal Institute of Technology
('39750744', 'Yang Zhong', 'yang zhong')
('40565290', 'Haibo Li', 'haibo li')
5e0e516226413ea1e973f1a24e2fdedde98e7ec0The Invariance Hypothesis and the Ventral Stream
by
B.S./M.S. Brandeis University
Submitted to the Department of Brain and Cognitive Sciences
in partial fulfillment of the requirements for the degree of
Doctor of Philosophy
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
February 2014
Massachusetts Institute of Technology 2014. All rights reserved
Author:
Department of Brain and Cognitive Sciences
September 5, 2013
Certified by:
Thesis Supervisor
Accepted by:
Sherman Fairchild Professor of Neuroscience and Picower Scholar
Director of Graduate Education for Brain and Cognitive Sciences
('1700356', 'Joel Zaidspiner Leibo', 'joel zaidspiner leibo')
('5856191', 'Tomaso Poggio', 'tomaso poggio')
('1724891', 'Eugene McDermott', 'eugene mcdermott')
('3034182', 'Matthew Wilson', 'matthew wilson')
5e821cb036010bef259046a96fe26e681f20266e
5e7cb894307f36651bdd055a85fdf1e182b7db30A Comparison of Multi-class Support Vector Machine Methods for
Face Recognition
Department of Electrical and Computer Engineering
The University of Maryland
December 6, 2007
Naotoshi Seo, sonots@umd.edu
5b693cb3bedaa2f1e84161a4261df9b3f8e77353Proc. VIIth Digital Image Computing: Techniques and Applications, Sun C., Talbot H., Ourselin S. and Adriaansen T. (Eds.), 10-12 Dec. 2003, Sydney
Robust Face Localisation Using Motion, Colour
& Fusion
Speech, Audio, Image and Video Technologies Program
Faculty of Built Environment and Engineering
Queensland University of Technology
GPO Box 2434, Brisbane QLD 4001, Australia
http://www.bee.qut.edu.au/research/prog_saivt.shtml
('1763662', 'Chris McCool', 'chris mccool')
('33258846', 'Matthew McKay', 'matthew mckay')
('40453073', 'Scott Lowther', 'scott lowther')
('1729760', 'Sridha Sridharan', 'sridha sridharan')
5b73b7b335f33cda2d0662a8e9520f357b65f3acIntensity Rank Estimation of Facial Expressions
Based on A Single Image
Institute of Information Science, Academia Sinica, Taipei, Taiwan
National Taiwan University, Taipei, Taiwan
Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
Graduate Institute of Networking and Multimedia, National Taiwan University, Taipei, Taiwan
('34692779', 'Kuang-Yu Chang', 'kuang-yu chang')
('1720473', 'Chu-Song Chen', 'chu-song chen')
('1732064', 'Yi-Ping Hung', 'yi-ping hung')
Email: song@iis.sinica.edu.tw
5b6d05ce368e69485cb08dd97903075e7f517aedRobust Active Shape Model for
Landmarking Frontal Faces
Department of Electrical and Computer Engineering
Carnegie Mellon University Pittsburgh, PA - 15213, USA
June 15, 2009
('2363348', 'Keshav Seshadri', 'keshav seshadri')
('1794486', 'Marios Savvides', 'marios savvides')
kseshadr@andrew.cmu.edu, msavvid@cs.cmu.edu
5b0bf1063b694e4b1575bb428edb4f3451d9bf04Facial shape tracking via spatio-temporal cascade shape regression
Nanjing University of Information Science and Technology
Nanjing, China
('37953909', 'Jing Yang', 'jing yang')
('3234063', 'Jiankang Deng', 'jiankang deng')
('3198263', 'Kaihua Zhang', 'kaihua zhang')
('1734954', 'Qingshan Liu', 'qingshan liu')
nuist yj@126.com
jiankangdeng@gmail.com
zhkhua@gmail.com
qsliu@nuist.edu.cn
5b59e6b980d2447b2f3042bd811906694e4b0843Two-stage Cascade Model for Unconstrained
Face Detection
Darijan Marčetić, Tomislav Hrkać, Slobodan Ribarić
University of Zagreb, Faculty of Electrical Engineering and Computing, Croatia
{darijan.marcetic, tomislav.hrkac, slobodan.ribaric}@fer.hr
5bb53fb36a47b355e9a6962257dd465cd7ad6827Mask-off: Synthesizing Face Images in the Presence of Head-mounted Displays
University of Kentucky
North Carolina Central University
Figure 1: Our system automatically reconstructs photo-realistic face videos for users wearing an HMD. (Left) Input NIR eye images. (Middle) Input face image with the upper face blocked by the HMD device. (Right) The output of our system.
('2613340', 'Yajie Zhao', 'yajie zhao')
('8285167', 'Qingguo Xu', 'qingguo xu')
('2257812', 'Xinyu Huang', 'xinyu huang')
('38958903', 'Ruigang Yang', 'ruigang yang')
5b89744d2ac9021f468b3ffd32edf9c00ed7fed7Beyond Mahalanobis Metric: Cayley-Klein Metric Learning
Institute of Automation, Chinese Academy of Sciences
Beijing, 100190, China
('2495602', 'Yanhong Bi', 'yanhong bi')
('1684958', 'Bin Fan', 'bin fan')
('3104867', 'Fuchao Wu', 'fuchao wu')
{yanhong.bi, bfan, fcwu}@nlpr.ia.ac.cn
5bfc32d9457f43d2488583167af4f3175fdcdc03International Journal of Science and Research (IJSR), India Online ISSN: 2319-7064
Local Gray Code Pattern (LGCP): A Robust
Feature Descriptor for Facial Expression
Recognition
('7484236', 'Mohammad Shahidul Islam', 'mohammad shahidul islam')
5b7cb9b97c425b52b2e6f41ba8028836029c4432Smooth Representation Clustering
1State Key Laboratory on Intelligent Technology and Systems, TNList
Tsinghua University
Key Lab. of Machine Perception, School of EECS, Peking University
('40234323', 'Han Hu', 'han hu')
('33383055', 'Zhouchen Lin', 'zhouchen lin')
('2632601', 'Jianjiang Feng', 'jianjiang feng')
('39491387', 'Jie Zhou', 'jie zhou')
huh04@mails.thu.edu.cn, zlin@pku.edu.cn, {jfeng,jzhou}@tsinghua.edu.cn
5ba7882700718e996d576b58528f1838e5559225This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TAFFC.2016.2628787, IEEE
Transactions on Affective Computing
IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, VOL. X, NO. X, OCTOBER 2016
Predicting Personalized Image Emotion
Perceptions in Social Networks
('1755487', 'Sicheng Zhao', 'sicheng zhao')
('1720100', 'Hongxun Yao', 'hongxun yao')
('33375873', 'Yue Gao', 'yue gao')
('38329336', 'Guiguang Ding', 'guiguang ding')
('1684968', 'Tat-Seng Chua', 'tat-seng chua')
5b6f0a508c1f4097dd8dced751df46230450b01aFinding Lost Children
Ashley Michelle Eden
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2010-174
http://www.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-174.html
December 20, 2010
5b9d41e2985fa815c0f38a2563cca4311ce82954Exploitation of 3D Images for Face Authentication Under Pose and Illumination
Variations
1Information Processing Laboratory, Electrical and Computer Engineering Department,
Aristotle University of Thessaloniki, Thessaloniki 541 24, Greece
Informatics and Telematics Institute, Centre for Research and Technology Hellas
1st Km Thermi-Panorama Rd, Thessaloniki 57001, Greece
('1807962', 'Filareti Tsalakanidou', 'filareti tsalakanidou')
('1744180', 'Sotiris Malassiotis', 'sotiris malassiotis')
('1721460', 'Michael G. Strintzis', 'michael g. strintzis')
Email: filareti@iti.gr, malasiot@iti.gr, strintzi@eng.auth.gr
5b6593a6497868a0d19312952d2b753232414c23Face Recognition by 3D Registration for the
Visually Impaired Using a RGB-D Sensor
The City College of New York, New York, NY 10031, USA
Beihang University, Beijing 100191, China
3 The CUNY Graduate Center, New York, NY 10016, USA
('40617554', 'Wei Li', 'wei li')
('3042950', 'Xudong Li', 'xudong li')
('40152663', 'Martin Goldberg', 'martin goldberg')
('4697712', 'Zhigang Zhu', 'zhigang zhu')
lwei000@citymail.cuny.edu, xdli@buaa.edu.cn,
mgoldberg@gc.cuny.edu, zhu@cs.ccny.cuny.edu
5bb684dfe64171b77df06ba68997fd1e8daffbe1
5b719410e7829c98c074bc2947697fac3b505b64ACTIVE APPEARANCE MODELS FOR AFFECT RECOGNITION USING FACIAL
EXPRESSIONS
Matthew Stephen Ratliff
A Thesis Submitted to the University of North Carolina Wilmington in Partial Fulfillment
of the Requirements for the Degree of
Master of Science
Department of Computer Science
Department of Information Systems and Operations Management
University of North Carolina Wilmington
2010
Approved by
Advisory Committee
Curry Guinn
Thomas Janicki
Eric Patterson
Chair
Accepted by
Dean, Graduate School
5bae9822d703c585a61575dced83fa2f4dea1c6dMOTChallenge 2015:
Towards a Benchmark for Multi-Target Tracking
('34761498', 'Anton Milan', 'anton milan')
('34493380', 'Stefan Roth', 'stefan roth')
('1803034', 'Konrad Schindler', 'konrad schindler')
5b0008ba87667085912ea474025d2323a14bfc90SoS-RSC: A Sum-of-Squares Polynomial Approach to Robustifying Subspace
Clustering Algorithms∗
Electrical and Computer Engineering
Northeastern University, Boston, MA
('1687866', 'Mario Sznaier', 'mario sznaier')
{msznaier,camps}@coe.neu.edu
5b97e997b9b654373bd129b3baf5b82c2def13d13D Face Tracking and Texture Fusion in the Wild
Centre for Vision, Speech and Signal Processing
Image Understanding and Interactive Robotics
University of Surrey
Guildford, GU2 7XH, United Kingdom
Contact: http://www.patrikhuber.ch
Reutlingen University
D-72762 Reutlingen, Germany
('39976184', 'Patrik Huber', 'patrik huber')
('1748684', 'Josef Kittler', 'josef kittler')
('49330989', 'Philipp Kopp', 'philipp kopp')
5bd3d08335bb4e444a86200c5e9f57fd9d719e143D Face Morphable Models “In-the-Wild”
Stefanos Zafeiriou1,∗
Imperial College London, UK
2Amazon, Berlin, Germany
University of Oulu, Finland
('47456731', 'James Booth', 'james booth')
('2788012', 'Epameinondas Antonakos', 'epameinondas antonakos')
('2015036', 'Stylianos Ploumpis', 'stylianos ploumpis')
('2814229', 'George Trigeorgis', 'george trigeorgis')
('1780393', 'Yannis Panagakis', 'yannis panagakis')
1{james.booth,s.ploumpis,g.trigeorgis,i.panagakis,s.zafeiriou}@imperial.ac.uk
2antonak@amazon.com
5babbad3daac5c26503088782fd5b62067b94fa5Are You Sure You Want To Do That?
Classification with Verification
('31920847', 'Harris Chan', 'harris chan')
('36964031', 'Atef Chaudhury', 'atef chaudhury')
('50715871', 'Kevin Shen', 'kevin shen')
hchan@cs.toronto.edu
atef@cs.toronto.edu
shenkev@cs.toronto.edu
5bb87c7462c6c1ec5d60bde169c3a785ba5ea48fTargeting Ultimate Accuracy: Face Recognition via Deep Embedding
Baidu Research Institute of Deep Learning
('2272123', 'Jingtuo Liu', 'jingtuo liu')
5b9d9f5a59c48bc8dd409a1bd5abf1d642463d65Evolving Systems. manuscript No.
(will be inserted by the editor)
An evolving spatio-temporal approach for gender and age
group classification with Spiking Neural Networks
Received: date / Accepted: date
('39323169', 'Fahad Bashir Alvi', 'fahad bashir alvi')
('2662466', 'Russel Pears', 'russel pears')
('1686744', 'Nikola Kasabov', 'nikola kasabov')
5bf70c1afdf4c16fd88687b4cf15580fd2f26102Accepted in Pattern Recognition Letters
Pattern Recognition Letters
journal homepage: www.elsevier.com
Residual Codean Autoencoder for Facial Attribute Analysis
IIIT-Delhi, New Delhi, India
Article history:
Received 29 March 2017
('40639989', 'Akshay Sethi', 'akshay sethi')
('2220719', 'Maneet Singh', 'maneet singh')
('39129417', 'Richa Singh', 'richa singh')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
5b2cfee6e81ef36507ebf3c305e84e9e0473575a
5b01d4338734aefb16ee82c4c59763d3abc008e6A Robust Face Recognition Algorithm Based on Kernel Regularized
Relevance-Weighted Discriminant Analysis

Hunan Provincial Key Laboratory of Wind Generator and Its Control, Hunan Institute of Engineering, Xiangtan, China
College of Electrical and Information Engineering
('38296532', 'Di WU', 'di wu')
[e-mail: wudi6152007@163.com]
5b721f86f4a394f05350641e639a9d6cb2046c45A short version of this paper is accepted to ACM Asia Conference on Computer and Communications Security (ASIACCS) 2018
Detection under Privileged Information (Full Paper)∗
Pennsylvania State University
Patrick McDaniel
Pennsylvania State University
Vencore Labs
Pennsylvania State University
Army Research Laboratory
('2950892', 'Z. Berkay Celik', 'z. berkay celik')
('1804289', 'Rauf Izmailov', 'rauf izmailov')
('1967156', 'Nicolas Papernot', 'nicolas papernot')
('9541640', 'Ryan Sheatsley', 'ryan sheatsley')
('30792942', 'Raquel Alvarez', 'raquel alvarez')
('1703726', 'Ananthram Swami', 'ananthram swami')
zbc102@cse.psu.edu
mcdaniel@cse.psu.edu
rizmailov@appcomsci.com
{ngp5056,rms5643,rva5120}@cse.psu.edu
ananthram.swami.civ@mail.mil
5b4b84ce3518c8a14f57f5f95a1d07fb60e58223Diagnosing Error in Object Detectors
Department of Computer Science
University of Illinois at Urbana-Champaign
('2433269', 'Derek Hoiem', 'derek hoiem')
('2918391', 'Yodsawalai Chodpathumwan', 'yodsawalai chodpathumwan')
('2279233', 'Qieyun Dai', 'qieyun dai')
5b6ecbf5f1eecfe1a9074d31fe2fb030d75d9a79Improving 3D Face Details based on Normal Map of Hetero-source Images
Tsinghua University
Beijing, 100084, China
('8100333', 'Chang Yang', 'chang yang')
('1752427', 'Jiansheng Chen', 'jiansheng chen')
('1949216', 'Nan Su', 'nan su')
('7284296', 'Guangda Su', 'guangda su')
yangchang11@mails.tsinghua.edu.cn, jschenthu@tsinghua.edu.cn
v377026@sina.com, susu@tsinghua.edu.cn
5b86c36e3eb59c347b81125d5dd57dd2a2c377a9Name Identification of People in News Video
by Face Matching
Graduate School of Information Science, Nagoya University; Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
Japan Society for the Promotion of Science
Nagoya University
School of Information Science,
Nagoya University
('1679187', 'Ichiro IDE', 'ichiro ide')
('8027540', 'Takashi OGASAWARA', 'takashi ogasawara')
('1685524', 'Tomokazu TAKAHASHI', 'tomokazu takahashi')
('1725612', 'Hiroshi MURASE', 'hiroshi murase')
ide@is.nagoya-u.ac.jp, ide@nii.ac.jp
toga@murase.m.is.nagoya-u.ac.jp
ttakahashi@murase.m.is.nagoya-u.ac.jp
murase@is.nagoya-u.ac.jp Graduate
5be3cc1650c918da1c38690812f74573e66b1d32Relative Parts: Distinctive Parts for Learning Relative Attributes
Center for Visual Information Technology, IIIT Hyderabad, India - 500032
('32337248', 'Ramachandruni N. Sandeep', 'ramachandruni n. sandeep')
('2169614', 'Yashaswi Verma', 'yashaswi verma')
('1694502', 'C. V. Jawahar', 'c. v. jawahar')
5bc0a89f4f73523967050374ed34d7bc89e4d9e1
Publisher: Routledge
Cognition and Emotion
Publication details, including instructions for authors and subscription
information:
http://www.tandfonline.com/loi/pcem20
The role of emotion transition for the
perception of social dominance and
affiliation
University of Haifa, Haifa, Israel
b The Interdisciplinary Center for Research on Emotions, University of
Haifa, Haifa, Israel
Humboldt-University, Berlin, Germany
Published online: 11 Aug 2015.
perception of social dominance and affiliation, Cognition and Emotion, DOI: 10.1080/02699931.2015.1056107
To link to this article: http://dx.doi.org/10.1080/02699931.2015.1056107
('3141618', 'Shlomo Hareli', 'shlomo hareli')
('6885116', 'Shlomo David', 'shlomo david')
5b6bed112e722c0629bcce778770d1b28e42fc96FLOREA ET AL.:CANYOUREYESTELLMEHOWYOUTHINK?
Can Your Eyes Tell Me How You Think? A
Gaze Directed Estimation of the Mental
Activity
http://alpha.imag.pub.ro/common/staff/lflorea
http://alpha.imag.pub.ro/common/staff/cflorea
http://alpha.imag.pub.ro/common/staff/vertan
Image Processing and Analysis
Laboratory, LAPI
University Politehnica of Bucharest
Bucharest, Romania
('2143956', 'Laura Florea', 'laura florea')
('2760434', 'Corneliu Florea', 'corneliu florea')
('29723670', 'Ruxandra Vrânceanu', 'ruxandra vrânceanu')
('2905899', 'Constantin Vertan', 'constantin vertan')
rvranceanu@alpha.imag.pub.ro
5bde1718253ec28a753a892b0ba82d8e553b6bf3JMLR: Workshop and Conference Proceedings 13: 79-94
2nd Asian Conference on Machine Learning (ACML2010), Tokyo, Japan, Nov. 8-10, 2010.
Variational Relevance Vector Machine for Tabular Data
Dorodnicyn Computing Centre of the Russian Academy of Sciences
119333, Russia, Moscow, Vavilov str., 40
Dmitry Vetrov
Lomonosov Moscow State University
119992, Russia, Moscow, Leninskie Gory, 1, 2nd ed. bld., CMC department
The Blavatnik School of Computer Science, The Tel-Aviv University
Schreiber Building, room 103, Tel Aviv University, P.O.B. 39040, Ramat Aviv, Tel Aviv
Computer Science Division, The Open University of Israel
108 Ravutski Str. P.O.B. 808, Raanana 43107, Israel
Editor: Masashi Sugiyama and Qiang Yang
('3160602', 'Dmitry Kropotov', 'dmitry kropotov')
('1776343', 'Lior Wolf', 'lior wolf')
('1756099', 'Tal Hassner', 'tal hassner')
dmitry.kropotov@gmail.com
hassner@openu.ac.il
vetrovd@yandex.ru
wolf@cs.tau.ac.il
5b0ebb8430a04d9259b321fc3c1cc1090b8e600e
37c8514df89337f34421dc27b86d0eb45b660a5eFacial Landmark Tracking by Tree-based Deformable Part Model
Based Detector
Michal Uřičář, Vojtěch Franc, and Václav Hlaváč
Center for Machine Perception, Department of Cybernetics
Faculty of Electrical Engineering, Czech Technical University in Prague
166 27 Prague 6, Technická 2, Czech Republic
{uricamic, xfrancv, hlavac}@cmp.felk.cvut.cz
371f40f6d32ece05cc879b6954db408b3d4edaf3Mining Semantic Affordances of Visual Object Categories
Computer Science and Engineering, University of Michigan, Ann Arbor
Figure 1: (a) "Affordance matrix" encoding the plausibility of each action-object pair. (b) 20 PASCAL VOC object classes in the semantic affordance space.
Affordances are fundamental attributes of objects. Affordances reveal the
functionalities of objects and the possible actions that can be performed on
them. We can “hug” a dog, but not an ant. We can “turn on” a tv, but not a
bottle. Acquiring such knowledge is crucial for recognizing human activities
in visual data and for robots to interact with the world. The key question is:
given an object, can an action be performed on it? While this might seem
obvious to a human, there is no automated system that can readily answer
this question and there is no knowledge base that provides comprehensive
knowledge of object affordances.
In this paper, we introduce the problem of mining the knowledge of semantic affordance: given an action and an object, determine whether the action can be applied to the object. For example, the action "carry" forms a valid combination with "bag", but not with "skyscraper". This is equivalent to establishing connections between action concepts and object concepts, or to filling an "affordance matrix" encoding the plausibility of each action-object pair (Fig. 1). The key scientific question is: how can we collect affordance knowledge? We first introduce a new benchmark with crowd-sourced ground truth affordances on 20 PASCAL VOC object classes and 957 action classes. We then study a variety of approaches, including 1) text mining, 2) visual mining, and 3) collaborative filtering. We quantitatively evaluate all approaches using ground truth affordances collected through crowdsourcing.
For our crowdsourcing study, we ask human annotators to label whether an action-object pair is a valid combination. We use the 20 object categories in PASCAL VOC [2]. We design experiments to obtain a list of action categories that are both common and "visual". Our list contains 957 action categories extracted from the verb synsets in WordNet [6] that have 1) a member verb that frequently occurs in text corpora, and 2) a high "visualness score" as determined by human labelers. Given the list of actions and objects, we set up a crowdsourcing task on Amazon Mechanical Turk (AMT). We ask crowd workers whether it is possible (for a human) to perform a given action on a given object. For instance:
Is it possible to hunt (pursue for food or sport, as of wild animals) a car?
For every possible action-object pair formed by the 20 PASCAL VOC objects and the 957 visual verb synsets, we ask 5 workers to determine its plausibility. This gives a total of 19K action-object questions and 96K answers.
What is the distribution of the 20 PASCAL object classes in their affordance space? We answer this by analyzing the human-annotated affordances. Each object has a 957-dimensional "affordance vector", where each dimension is the plausibility score with an action. We use PCA to project the affordance vectors to a 2-dimensional space and plot the coordinates of the object classes (Fig. 1b).
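The pipeline described above (aggregating the 5 worker answers per pair into plausibility scores, then projecting each object's affordance vector to 2D with PCA) can be sketched as follows. This is an illustrative sketch only: the simulated worker answers and all variable names are assumptions, not the authors' actual data or code.

```python
import numpy as np

# Hypothetical illustration of the analysis: 20 objects x 957 actions,
# each action-object pair judged by 5 AMT workers (1 = plausible, 0 = not).
rng = np.random.default_rng(0)
n_objects, n_actions, n_workers = 20, 957, 5
answers = rng.integers(0, 2, size=(n_objects, n_actions, n_workers))

# Plausibility score per pair: fraction of workers answering "yes".
affordance_matrix = answers.mean(axis=2)  # shape (20, 957)

# PCA via SVD: center the 957-dim affordance vectors, project onto the
# top-2 principal directions.
centered = affordance_matrix - affordance_matrix.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords_2d = centered @ vt[:2].T  # shape (20, 2), one point per object class

print(coords_2d.shape)  # (20, 2)
```

Each row of `coords_2d` is one object class's position in the 2D affordance space, which is what Fig. 1(b) visualizes.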
('2820136', 'Yu-Wei Chao', 'yu-wei chao')
('1718667', 'Zhan Wang', 'zhan wang')
('1738516', 'Rada Mihalcea', 'rada mihalcea')
('8342699', 'Jia Deng', 'jia deng')
374c7a2898180723f3f3980cbcb31c8e8eb5d7afFACIAL EXPRESSION RECOGNITION IN VIDEOS USING A NOVEL MULTI-CLASS
SUPPORT VECTOR MACHINES VARIANT
†Aristotle University of Thessaloniki
Department of Informatics
Box 451, 54124 Thessaloniki, Greece
('1754270', 'Irene Kotsia', 'irene kotsia')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
37007af698b990a3ea8592b11d264b14d39c843fDCMSVM: Distributed Parallel Training For Single-Machine Multiclass
Classifiers
Computer Science Department
Stony Brook University
('1682965', 'Xufeng Han', 'xufeng han')
('39668247', 'Alexander C. Berg', 'alexander c. berg')
374a0df2aa63b26737ee89b6c7df01e59b4d8531Temporal Action Localization with Pyramid of Score Distribution Features
National University of Singapore, 2Shanghai Jiao Tong University
('1746449', 'Jun Yuan', 'jun yuan')
('5796401', 'Bingbing Ni', 'bingbing ni')
('1795291', 'Xiaokang Yang', 'xiaokang yang')
yuanjun@nus.edu.sg, nibingbing@sjtu.edu.cn, xkyang@sjtu.edu.cn, ashraf@nus.edu.sg
378ae5ca649f023003021f5a63e393da3a4e47f0Multi-Class Object Localization by Combining Local Contextual Interactions
Serge Belongie†
Gert Lanckriet‡
†Computer Science and Engineering Department
‡Electrical and Computer Engineering Department
University of California, San Diego
('1954793', 'Carolina Galleguillos', 'carolina galleguillos')
{cgallegu,bmcfee,sjb}@cs.ucsd.edu, gert@ece.ucsd.edu
37619564574856c6184005830deda4310d3ca580A Deep Pyramid Deformable Part Model for Face Detection
Center for Automation Research
University of Maryland, College Park, MD
('26988560', 'Rajeev Ranjan', 'rajeev ranjan')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
('9215658', 'Rama Chellappa', 'rama chellappa')
{rranjan1, pvishalm, rama}@umiacs.umd.edu
372fb32569ced35eaf3740a29890bec2be1869faRunning head: MU RHYTHM MODULATION BY CLASSIFICATION OF EMOTION 1
Mu rhythm suppression is associated with the classification of emotion in faces
University of Otago, Dunedin, New Zealand
Corresponding authors:
Phone: +64 (3) 479 5269; Fax: +64 (3) 479 8335
Department of Psychology
University of Otago
PO Box 56
Dunedin, New Zealand
('2187036', 'Elizabeth A. Franz', 'elizabeth a. franz')
Matthew Moore (matthew.moore@otago.ac.nz) & Liz Franz (lfranz@psy.otago.ac.nz)
37ce1d3a6415d6fc1760964e2a04174c24208173Pose-Invariant 3D Face Alignment
Department of Computer Science and Engineering
Michigan State University, East Lansing MI
('2357264', 'Amin Jourabloo', 'amin jourabloo')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
{jourablo, liuxm}@msu.edu
3765c26362ad1095dfe6744c6d52494ea106a42c
3727ac3d50e31a394b200029b2c350073c1b69e3
37f2e03c7cbec9ffc35eac51578e7e8fdfee3d4eWACV
#394
WACV 2015 Submission #394. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.
Co-operative Pedestrians Group Tracking in Crowded Scenes using an MST
Approach
Anonymous WACV submission
Paper ID 394
3795974e24296185d9b64454cde6f796ca235387Finding your Lookalike:
Measuring Face Similarity Rather than Face Identity
Lafayette College
Easton, PA
Andrew Gallagher
Google Research
Mountain View, CA
('1803066', 'Amir Sadovnik', 'amir sadovnik')
('50977255', 'Wassim Gharbi', 'wassim gharbi')
('2197717', 'Thanh Vu', 'thanh vu')
{sadovnia,gharbiw,vut}@lafayette.edu
agallagher@google.com
37278ffce3a0fe2c2bbf6232e805dd3f5267eba3Can we still avoid automatic face detection?
Serge Belongie1,2
Cornell University 2 Cornell Tech
('3035230', 'Michael J. Wilber', 'michael j. wilber')
('1723945', 'Vitaly Shmatikov', 'vitaly shmatikov')
377a1be5113f38297716c4bb951ebef7a93f949aDear Faculty, IGERT Fellows, IGERT Associates and Students,
You are cordially invited to attend a Seminar presented by Albert Cruz. Please
plan to attend.
Albert Cruz
IGERT Fellow
Electrical Engineering

Date: Friday, October 11, 2013
Location: Bourns A265
Time: 11:00am
Facial emotion recognition with anisotropic
inhibited Gabor energy histograms
377c6563f97e76a4dc836a0bd23d7673492b1aae
370e0d9b89518a6b317a9f54f18d5398895a7046IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. X, NO. X, XXXXXXX 20XX
Cross-pollination of normalisation techniques
from speaker to face authentication
using Gaussian mixture models
and Sébastien Marcel, Member, IEEE
('1843477', 'Roy Wallace', 'roy wallace')
37ba12271d09d219dd1a8283bc0b4659faf3a6c6Domain Transfer for Person Re-identification
Queen Mary University of London
London, England
('3264124', 'Ryan Layne', 'ryan layne')
('1697755', 'Timothy M. Hospedales', 'timothy m. hospedales')
('2073354', 'Shaogang Gong', 'shaogang gong')
{rlayne, tmh, sgg}@eecs.qmul.ac.uk
3773e5d195f796b0b7df1fca6e0d1466ad84b5e7UNIVERSITY OF CALIFORNIA
RIVERSIDE
Learning from Time Series in the Presence of Noise: Unsupervised and Semi-Supervised
Approaches
A Dissertation submitted in partial satisfaction
of the requirements for the degree of
Doctor of Philosophy
in
Computer Science
by
March 2008
Dissertation Committee:
Dr. Eamonn Keogh, Chairperson
Dr. Vassilis Tsotras
('40564016', 'Dragomir Dimitrov', 'dragomir dimitrov')
('1736011', 'Stefano Lonardi', 'stefano lonardi')
37eb666b7eb225ffdafc6f318639bea7f0ba9a24MSU Technical Report (2014): MSU-CSE-14-5
Age, Gender and Race Estimation from
Unconstrained Face Images
('34393045', 'Hu Han', 'hu han')
('40437942', 'Anil K. Jain', 'anil k. jain')
377f2b65e6a9300448bdccf678cde59449ecd337Pushing the Limits of Unconstrained Face Detection:
a Challenge Dataset and Baseline Results
1Fujitsu Laboratories Ltd., Kanagawa, Japan
Johns Hopkins University, 3400 N. Charles St, Baltimore, MD 21218, USA
Rutgers University, 94 Brett Rd, Piscataway Township, NJ 08854, USA
('41018586', 'Hajime Nada', 'hajime nada')
('2577847', 'Vishwanath A. Sindagi', 'vishwanath a. sindagi')
('46197381', 'He Zhang', 'he zhang')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
nada.hajime@jp.fujitsu.com, vishwanath.sindagi@gmail.com, he.zhang92@rutgers.edu,
vpatel36@jhu.edu
375435fb0da220a65ac9e82275a880e1b9f0a557This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI
From Pixels to Response Maps: Discriminative Image
Filtering for Face Alignment in the Wild
('3183108', 'Akshay Asthana', 'akshay asthana')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1902288', 'Shiyang Cheng', 'shiyang cheng')
('1694605', 'Maja Pantic', 'maja pantic')
370b6b83c7512419188f5373a962dd3175a56a9bFace Alignment Refinement via Exploiting
Low-Rank property and Temporal Stability
Shuang LIU
Bournemouth University
Bournemouth University
Wenyu HU
Gannan Normal University
Xiaosong YANG
Ruofeng TONG
Zhejiang University
Jian J. ZHANG
Bournemouth University
Bournemouth University
('48708691', 'Zhao Wang', 'zhao wang')
zwang@bournemouth.ac.uk
sliu@bournemouth.ac.uk
wenyu.huu@gmail.com
trf@zju.edu.cn
xyang@bournemouth.ac.uk
jzhang@bournemouth.ac.uk
37b6d6577541ed991435eaf899a2f82fdd72c790Vision-based Human Gender Recognition: A Survey
Universiti Tunku Abdul Rahman, Kuala Lumpur, Malaysia.
('32877936', 'Choon Boon Ng', 'choon boon ng')
('9201065', 'Yong Haur Tay', 'yong haur tay')
{ngcb,tayyh,goibm}@utar.edu.my
372a8bf0ef757c08551d41e40cb7a485527b6cd7Unsupervised Video Hashing by Exploiting
Spatio-Temporal Feature
Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong
University, Shanghai, China
('46194894', 'Chao Ma', 'chao ma')
('46964428', 'Yun Gu', 'yun gu')
('46641573', 'Wei Liu', 'wei liu')
('39264954', 'Jie Yang', 'jie yang')
{sjtu_machao,geron762,liuwei.1989,jieyang}@sjtu.edu.cn
37ef18d71c1ca71c0a33fc625ef439391926bfbbExtraction of Subject-Specific Facial Expression
Categories and Generation of Facial Expression
Feature Space using Self-Mapping
Department of Machine Intelligence and Systems Engineering, Faculty of Systems Science and Technology,
Akita Prefectural University, Yurihonjo, Japan
Department of Computer Science and Engineering, Faculty of Engineering and Resource Science,
Akita University, Akita, Japan
('1932760', 'Masaki Ishii', 'masaki ishii')
('2052920', 'Kazuhito Sato', 'kazuhito sato')
('1738333', 'Hirokazu Madokoro', 'hirokazu madokoro')
('21063785', 'Makoto Nishida', 'makoto nishida')
Email: {ishii, ksato, madokoro}@akita-pu.ac.jp
Email: nishida@ie.akita-u.ac.jp
370b5757a5379b15e30d619e4d3fb9e8e13f3256Labeled Faces in the Wild: A Database for Studying
Face Recognition in Unconstrained Environments
('3219900', 'Gary B. Huang', 'gary b. huang')
('1685538', 'Tamara Berg', 'tamara berg')
('1714536', 'Erik Learned-Miller', 'erik learned-miller')
081189493ca339ca49b1913a12122af8bb431984Photorealistic Facial Texture Inference Using Deep Neural Networks
Supplemental Material for
*Pinscreen
University of Southern California
USC Institute for Creative Technologies
Appendix I. Additional Results
Our main results in the paper demonstrate successful in-
ference of high-fidelity texture maps from unconstrained
images. The input images have mostly low resolutions, non-
frontal faces, and the subjects are often captured in chal-
lenging lighting conditions. We provide additional results
with pictures from the annotated faces-in-the-wild (AFW)
dataset [10] to further demonstrate how photorealistic pore-
level details can be synthesized using our deep learning ap-
proach. We visualize in Figure 9 the input, the intermedi-
ate low-frequency albedo map obtained using a linear PCA
model, and the synthesized high-frequency albedo texture
map. We also show several views of the final renderings us-
ing the Arnold renderer [13]. We refer to the accompanying
video for additional rotating views of the resulting textured
3D face models.
Figure 2: Even for largely downsized image resolutions, our
algorithm can produce fine-scale details while preserving
the person’s similarity.
We also evaluate the robustness of our inference frame-
work for downsized image resolutions in Figure 2. We crop
a diffuse lit face from a Light Stage capture [5]. The re-
sulting image has 435 × 652 pixels and we decrease its res-
olution to 108 × 162 pixels. In addition to complex skin
pigmentations, even the tiny mole on the lower left cheek is
properly reconstructed from the reduced input image using
our synthesis approach.
Figure 1: Comparison between different convolutional neu-
ral network architectures.
Evaluation. As Figure 1 indicates, other deep convolu-
tional neural networks can be used to extract mid-layer fea-
ture correlations to characterize multi-scale details, but it
seems that deeper architectures produce fewer artifacts and
higher quality textures. All three convolutional neural net-
works are pre-trained for classification tasks using images
from the ImageNet object recognition dataset [4]. The re-
sults of the 8 layer CaffeNet [2] show noticeable blocky ar-
tifacts in the synthesized textures and the ones from the 16
layer VGG [12] are slightly noisy around boundaries, while
the 19 layer VGG network performs the best.
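The mid-layer feature correlations referenced above are commonly computed as Gram matrices over a layer's activation maps (as in neural style transfer); the NumPy sketch below illustrates that computation — the layer shape, normalization, and toy inputs are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def gram_matrix(features):
    """Channel-wise correlation (Gram) matrix of CNN activations.

    features: array of shape (C, H, W) -- one layer's activation maps.
    Returns a (C, C) matrix of feature correlations, normalized by
    the number of spatial positions.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten the spatial dimensions
    return f @ f.T / (h * w)         # inner products between channels

# Toy activations standing in for a hypothetical conv layer
rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 32, 32))
g = gram_matrix(feats)
print(g.shape)  # (64, 64)
```

For the comparison in Figure 1, such matrices would be computed from CaffeNet, VGG-16, or VGG-19 activations; a deeper network exposes more layers and hence richer multi-scale correlation statistics.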
§ indicates equal contribution
Comparison. We provide in Figure 3 additional visual-
izations of our method when using the closest feature corre-
lation, unconstrained linear combinations, and convex com-
binations. We also compare against a PCA-based model
fitting [3] approach and the state-of-the-art visio-lization
framework [9]. We notice that only our proposed tech-
nique using convex combinations is effective in generating
mesoscopic-scale texture details. Both visio-lization and
the PCA-based model result in lower frequency textures and
less similar faces than the ground truth. Since our inference
also fills holes, we compare our synthesis technique with
a general inpainting solution for predicting unseen face re-
gions. We test with the widely used PatchMatch [1] tech-
nique as illustrated in Figure 4. Unsurprisingly, we observe
unwanted repeating structures and semantically wrong fill-
ings since this method is based on low-level vision cues.
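The convex combinations contrasted above with unconstrained linear combinations can be sketched as a least-squares fit whose weights are repeatedly projected onto the probability simplex; everything below (dictionary size, step size, iteration count, the Duchi et al. simplex projection) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (weights >= 0 and summing to 1), following Duchi et al. (2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def convex_fit(D, t, iters=500):
    """Convex-combination weights w minimizing ||D w - t||^2
    via projected gradient descent. D: (m, k) dictionary, t: (m,)."""
    k = D.shape[1]
    w = np.full(k, 1.0 / k)                   # start at the simplex center
    lr = 1.0 / (np.linalg.norm(D, 2) ** 2)    # step size from spectral norm
    for _ in range(iters):
        w = project_to_simplex(w - lr * D.T @ (D @ w - t))
    return w

rng = np.random.default_rng(1)
D = rng.standard_normal((128, 10))            # 10 exemplar feature vectors
t = D @ project_to_simplex(rng.standard_normal(10))  # target inside the hull
w = convex_fit(D, t)
print(w.min() >= 0.0, abs(w.sum() - 1.0) < 1e-6)  # True True
```

Unlike an unconstrained least-squares solve, the simplex constraint keeps the synthesized texture statistics inside the convex hull of the exemplars, which is one plausible reading of why it avoids the artifacts seen with free linear combinations.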
('2059597', 'Shunsuke Saito', 'shunsuke saito')
('1792471', 'Lingyu Wei', 'lingyu wei')
('1808579', 'Liwen Hu', 'liwen hu')
('1897417', 'Koki Nagano', 'koki nagano')
('40348249', 'Hao Li', 'hao li')
08ee541925e4f7f376538bc289503dd80399536fRuntime Neural Pruning
Department of Automation
Tsinghua University
Department of Automation
Tsinghua University
Department of Automation
Tsinghua University
Department of Automation
Tsinghua University
('2772283', 'Ji Lin', 'ji lin')
('39358728', 'Yongming Rao', 'yongming rao')
('1697700', 'Jiwen Lu', 'jiwen lu')
('39491387', 'Jie Zhou', 'jie zhou')
lin-j14@mails.tsinghua.edu.cn
raoyongming95@gmail.com
lujiwen@tsinghua.edu.cn
jzhou@tsinghua.edu.cn
08d2f655361335bdd6c1c901642981e650dff5ecThis is the published version:  
 Arandjelovic, Ognjen and Cipolla, R. 2006, Automatic cast listing in feature‐length films with
Anisotropic Manifold Space, in CVPR 2006 : Proceedings of the Computer Vision and Pattern
Recognition Conference 2006, IEEE, Piscataway, New Jersey, pp. 1513‐1520.

http://hdl.handle.net/10536/DRO/DU:30058435
Reproduced with the kind permission of the copyright owner.
Copyright : 2006, IEEE
Available from Deakin Research Online: 
08fbe3187f31b828a38811cc8dc7ca17933b91e9MITSUBISHI ELECTRIC RESEARCH LABORATORIES
http://www.merl.com
Statistical Computations on Grassmann and
Stiefel Manifolds for Image and Video-Based
Recognition
Turaga, P.; Veeraraghavan, A.; Srivastava, A.; Chellappa, R.
TR2011-084 April 2011
08ae100805d7406bf56226e9c3c218d3f9774d19Gavrilescu and Vizireanu EURASIP Journal on Image and Video Processing (2017) 2017:59
DOI 10.1186/s13640-017-0211-4
EURASIP Journal on Image
and Video Processing
R ES EAR CH
Predicting the Sixteen Personality Factors
(16PF) of an individual by analyzing facial
features
Open Access
('2132188', 'Mihai Gavrilescu', 'mihai gavrilescu')
('1929703', 'Nicolae Vizireanu', 'nicolae vizireanu')
08c18b2f57c8e6a3bfe462e599a6e1ce03005876A Least-Squares Framework
for Component Analysis
('1707876', 'Fernando De la Torre', 'fernando de la torre')
08f6ad0a3e75b715852f825d12b6f28883f5ca05To appear in the 9th IEEE Int'l Conference on Automatic Face and Gesture Recognition, Santa Barbara, CA, March, 2011.
Face Recognition: Some Challenges in Forensics
Michigan State University
East Lansing, MI, U.S.A
('6680444', 'Anil K. Jain', 'anil k. jain')
('1817623', 'Brendan Klare', 'brendan klare')
('2222919', 'Unsang Park', 'unsang park')
{jain, klarebre, parkunsa}@cse.msu.edu
08ff81f3f00f8f68b8abd910248b25a126a4dfa4Papachristou, K., Tefas, A., & Pitas, I. (2014). Symmetric Subspace Learning
5697. DOI: 10.1109/TIP.2014.2367321
Peer reviewed version
Link to published version (if available):
10.1109/TIP.2014.2367321
Link to publication record in Explore Bristol Research
PDF-document
This is the author accepted manuscript (AAM). The final published version (version of record) is available online
via Institute of Electrical and Electronic Engineers at http://dx.doi.org/10.1109/TIP.2014.2367321. Please refer to
any applicable terms of use of the publisher.
University of Bristol - Explore Bristol Research
General rights
This document is made available in accordance with publisher policies. Please cite only the published
version using the reference above. Full terms of use are available:
http://www.bristol.ac.uk/pure/about/ebr-terms
081a431107eb38812b74a8cd036ca5e97235b499
084bd02d171e36458f108f07265386f22b34a1aeFace Alignment at 3000 FPS via Regressing Local Binary Features
University of Science and Technology of China
Microsoft Research
('2032273', 'Xudong Cao', 'xudong cao')
('3080683', 'Shaoqing Ren', 'shaoqing ren')
('1732264', 'Yichen Wei', 'yichen wei')
('40055995', 'Jian Sun', 'jian sun')
sqren@mail.ustc.edu.cn
{xudongca,yichenw,jiansun}@microsoft.com
081cb09791e7ff33c5d86fd39db00b2f29653fa8Square Loss based Regularized LDA for Face Recognition Using Image Sets
Center for Information Science, Peking University, Beijing 100871, China
2Philips Research, High Tech Campus 36, 5656 AE Eindhoven, The Netherlands
Queen Mary, University of London, London E1 4NS, UK
('37536447', 'Yanlin Geng', 'yanlin geng')
('10795229', 'Caifeng Shan', 'caifeng shan')
('1685266', 'Pengwei Hao', 'pengwei hao')
gengyanlin@cis.pku.edu.cn, caifeng.shan@philips.com, phao@dcs.qmul.ac.uk
086131159999d79adf6b31c1e604b18809e70ba8Deep Action Unit Classification using a Binned
Intensity Loss and Semantic Context Model
Department of Computing Sciences
Villanova University
Villanova, Pennsylvania 19085
Department of Computing Sciences
Villanova University
Villanova, Pennsylvania 19085
('1904114', 'Edward Kim', 'edward kim')
('35266734', 'Shruthika Vangala', 'shruthika vangala')
Email: edward.kim@villanova.edu
Email: svagal1@villanova.edu
0831a511435fd7d21e0cceddb4a532c35700a622
0861f86fb65aa915fbfbe918b28aabf31ffba364International Journal of Computer Trends and Technology (IJCTT) – volume 22 Number 3–April 2015
An Efficient Facial Annotation with Machine Learning Approach
1A. Anusha, 2R. Srinivas
1Final M.Tech Student, 2Associate Professor
Aditya Institute of Technology And Management, Tekkali, Srikakulam, Andhra Pradesh
089513ca240c6d672c79a46fa94a92cde28bd567RNN Fisher Vectors for Action Recognition and Image Annotation
The Blavatnik School of Computer Science, Tel Aviv University, Israel
2IBM Research, Haifa, Israel
('3004979', 'Guy Lev', 'guy lev')
('2251827', 'Gil Sadeh', 'gil sadeh')
('2205955', 'Benjamin Klein', 'benjamin klein')
('1776343', 'Lior Wolf', 'lior wolf')
089b5e8eb549723020b908e8eb19479ba39812f5A Cross Benchmark Assessment of A Deep Convolutional Neural
Network for Face Recognition
National Institute of Standards and Technology
Gaithersburg, MD 20899 USA
('32028519', 'P. Jonathon Phillips', 'p. jonathon phillips')
080c204edff49bf85b335d3d416c5e734a861151CLAD: A Complex and Long Activities
Dataset with Rich Crowdsourced
Annotations
Journal Title
XX(X):1–6
© The Author(s) 2016
Reprints and permission:
sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/ToBeAssigned
www.sagepub.com/
('3280554', 'Jawad Tayyub', 'jawad tayyub')
('2762811', 'Majd Hawasly', 'majd hawasly')
('1967104', 'David C. Hogg', 'david c. hogg')
('1703235', 'Anthony G. Cohn', 'anthony g. cohn')
08f4832507259ded9700de81f5fd462caf0d5be8International Journal of Computer Applications (0975 – 8887)
Volume 118 – No.14, May 2015
Geometric Approach for Human Emotion
Recognition using Facial Expression
S. S. Bavkar
Assistant Professor
J. S. Rangole
Assistant Professor
V. U. Deshmukh
Assistant Professor
08a1fc55d03e4a73cad447e5c9ec79a6630f3e2dBERG, BELHUMEUR: TOM-VS-PETE CLASSIFIERS AND IDENTITY-PRESERVING ALIGNMENT
Tom-vs-Pete Classifiers and Identity-Preserving
Alignment for Face Verification
Columbia University
New York, NY
('1778562', 'Thomas Berg', 'thomas berg')
('1767767', 'Peter N. Belhumeur', 'peter n. belhumeur')
tberg@cs.columbia.edu
belhumeur@cs.columbia.edu
08d40ee6e1c0060d3b706b6b627e03d4b123377aHuman Action Localization
with Sparse Spatial Supervision
('2492127', 'Philippe Weinzaepfel', 'philippe weinzaepfel')
('3269403', 'Xavier Martin', 'xavier martin')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
08c1f8f0e69c0e2692a2d51040ef6364fb263a40
088aabe3da627432fdccf5077969e3f6402f0a80Under review as a conference paper at ICLR 2018
CLASSIFIER-TO-GENERATOR ATTACK: ESTIMATION
OF TRAINING DATA DISTRIBUTION FROM CLASSIFIER
Anonymous authors
Paper under double-blind review
087002ab569e35432cdeb8e63b2c94f1abc53ea9Looking at People
CVPRW 2015
Spatio-temporal Analysis of RGB-D-T Facial
Images for Multimodal Pain Level
Recognition
Visual Analysis of People Lab, Aalborg University, Denmark
Computer Vision Center, UAB, Barcelona, Spain
Aalborg University, Denmark
('37541412', 'Ramin Irani', 'ramin irani')
('1803459', 'Kamal Nasrollahi', 'kamal nasrollahi')
('3321700', 'Ciprian A. Corneanu', 'ciprian a. corneanu')
('7855312', 'Sergio Escalera', 'sergio escalera')
('40526933', 'Tanja L. Pedersen', 'tanja l. pedersen')
('31627926', 'Maria-Louise Klitgaard', 'maria-louise klitgaard')
('35675498', 'Laura Petrini', 'laura petrini')
08903bf161a1e8dec29250a752ce9e2a508a711cJoint Dimensionality Reduction and Metric Learning: A Geometric Take
('2862871', 'Mathieu Salzmann', 'mathieu salzmann')
08cb294a08365e36dd7ed4167b1fd04f847651a9EXAMINING VISIBLE ARTICULATORY FEATURES IN CLEAR AND
CONVERSATIONAL SPEECH
Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Canada
Language and Brain Lab, Simon Fraser University, Canada
KU Phonetics and Psycholinguistics Lab, University of Kansas
('2664514', 'Lisa Tang', 'lisa tang')
('26839551', 'Beverly Hannah', 'beverly hannah')
('3200950', 'Allard Jongman', 'allard jongman')
('1723309', 'Yue Wang', 'yue wang')
('3049056', 'Ghassan Hamarneh', 'ghassan hamarneh')
lisat@sfu.ca, beverlyw@sfu.ca, jongman@ku.edu, sereno@ku.edu, yuew@sfu.ca, hamarneh@sfu.ca
081286ede247c5789081502a700b378b6223f94bORIGINAL RESEARCH
published: 06 February 2018
doi: 10.3389/fpsyg.2018.00052
Neural Correlates of Facial Mimicry:
Simultaneous Measurements of EMG
and BOLD Responses during
Perception of Dynamic Compared to
Static Facial Expressions
Institute of Cognitive and Behavioural Neuroscience, SWPS University of Social
Sciences and Humanities, Warsaw, Poland, 2 Laboratory of Psychophysiology, Department of Neurophysiology, Nencki
Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
Facial mimicry (FM) is an automatic response to imitate the facial expressions of others.
However, neural correlates of the phenomenon are as yet not well established. We
investigated this issue using simultaneously recorded EMG and BOLD signals during
perception of dynamic and static emotional facial expressions of happiness and anger.
During display presentations, BOLD signals and zygomaticus major (ZM), corrugator
supercilii (CS) and orbicularis oculi (OO) EMG responses were recorded simultaneously
from 46 healthy individuals. Subjects reacted spontaneously to happy facial expressions
with increased EMG activity in ZM and OO muscles and decreased CS activity, which
was interpreted as FM. Facial muscle responses correlated with BOLD activity in regions
associated with motor simulation of facial expressions [i.e., inferior frontal gyrus, a
classical Mirror Neuron System (MNS)]. Further, we also found correlations for regions
associated with emotional processing (i.e., insula, part of the extended MNS). It is
concluded that FM involves both motor and emotional brain structures, especially during
perception of natural emotional expressions.
Keywords: facial mimicry, EMG, fMRI, mirror neuron system, emotional expressions, dynamic, happiness, anger
INTRODUCTION
Facial mimicry (FM) is an unconscious and unintentional automatic response to the facial
expressions of others. Numerous studies have shown that observing the emotional states of others
leads to congruent facial muscle activity. For example, observing angry facial expressions can result
in enhanced activity in the viewer’s muscle responsible for frowning (CS), while viewing happy
images leads to increased activity in the facial muscle involved in smiling (ZM), and decreased
activity of the CS (Hess et al., 1998; Dimberg and Petterson, 2000). However, it has recently been
suggested that FM may not be an exclusive automatic reaction but rather a multifactorial response
dependent on properties such as stimulus modality (e.g., static or dynamic) or interpersonal
characteristics (e.g., emotional contagion susceptibility) (for review see Seibt et al., 2015).
There are two main psychological approaches trying to explain the mechanisms of FM.
One of these is the perception-behavior link model, which assumes perception and
execution of a specific action show a certain overlap (Chartrand and Bargh, 1999).
Edited by:
Alessio Avenanti,
Università di Bologna, Italy
Reviewed by:
Sebastian Korb,
University of Vienna, Austria
Frank A. Russo,
Ryerson University, Canada
*Correspondence:
Łukasz Żurawski
Specialty section:
This article was submitted to
Emotion Science,
a section of the journal
Frontiers in Psychology
Received: 20 July 2017
Accepted: 12 January 2018
Published: 06 February 2018
Citation:
Rymarczyk K, Żurawski Ł,
Jankowiak-Siuda K and Szatkowska I
(2018) Neural Correlates of Facial
Mimicry: Simultaneous Measurements
of EMG and BOLD Responses during
Perception of Dynamic Compared to
Static Facial Expressions.
Front. Psychol. 9:52.
doi: 10.3389/fpsyg.2018.00052
Frontiers in Psychology | www.frontiersin.org
February 2018 | Volume 9 | Article 52
('4079953', 'Krystyna Rymarczyk', 'krystyna rymarczyk')
('4022705', 'Kamila Jankowiak-Siuda', 'kamila jankowiak-siuda')
('4970569', 'Iwona Szatkowska', 'iwona szatkowska')
('4079953', 'Krystyna Rymarczyk', 'krystyna rymarczyk')
krymarczyk@swps.edu.pl
l.zurawski@nencki.gov.pl
08e995c080a566fe59884a527b72e13844b6f176A New KSVM + KFD Model for Improved
Classification and Face Recognition
School of Computer Science, University of Windsor, Windsor, ON, Canada N9B 3P
('1687000', 'Riadh Ksantini', 'riadh ksantini')
Email: {ksantini, boufama, imran}@uwindsor.ca
08e24f9df3d55364290d626b23f3d42b4772efb6ENHANCING FACIAL EXPRESSION CLASSIFICATION BY INFORMATION
FUSION
I. Buciu1, Z. Hammal2, A. Caplier2, N. Nikolaidis1, and I. Pitas1

GR-54124, Thessaloniki, Box 451, Greece
2 Laboratoire des Images et des Signaux / Institut National Polytechnique de Grenoble
web: http://www.aiia.csd.auth.gr
38031 Grenoble, France
web: http://www.lis.inpg.fr
phone: + 30(2310)99.6361, fax: + 30(2310)99.8453, email: {nelu,nikolaid,pitas}@aiia.csd.auth.gr
phone: + 33(0476)574363, fax: + 33(0476)57 47 90, email: alice.caplier@inpg.fr
085ceda1c65caf11762b3452f87660703f914782Large-pose Face Alignment via CNN-based Dense 3D Model Fitting
Department of Computer Science and Engineering
Michigan State University, East Lansing MI
('2357264', 'Amin Jourabloo', 'amin jourabloo')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
{jourablo, liuxm}@msu.edu
0830c9b9f207007d5e07f5269ffba003235e4eff
08d55271589f989d90a7edce3345f78f2468a7e0Quality Aware Network for Set to Set Recognition
SenseTime Group Limited
SenseTime Group Limited
University of Sydney
('1715752', 'Yu Liu', 'yu liu')
('1721677', 'Junjie Yan', 'junjie yan')
('3001348', 'Wanli Ouyang', 'wanli ouyang')
liuyuisanai@gmail.com
yanjunjie@sensetime.com
wanli.ouyang@gmail.com
081fb4e97d6bb357506d1b125153111b673cc128
08a98822739bb8e6b1388c266938e10eaa01d903SensorSift: Balancing Sensor Data Privacy and Utility in
Automated Face Understanding
University of Washington
**Microsoft Research, Redmond WA
('3299424', 'Miro Enev', 'miro enev')
('33481800', 'Jaeyeon Jung', 'jaeyeon jung')
('1766509', 'Liefeng Bo', 'liefeng bo')
('1728501', 'Xiaofeng Ren', 'xiaofeng ren')
('1769675', 'Tadayoshi Kohno', 'tadayoshi kohno')
084bebc5c98872e9307cd8e7f571d39ef9c1b81eA Discriminative Feature Learning Approach
for Deep Face Recognition
1 Shenzhen Key Lab of Computer Vision and Pattern Recognition,
Shenzhen Institutes of Advanced Technology, CAS, Shenzhen, China
The Chinese University of Hong Kong, Sha Tin, Hong Kong
('2512949', 'Yandong Wen', 'yandong wen')
('3393556', 'Kaipeng Zhang', 'kaipeng zhang')
('1911510', 'Zhifeng Li', 'zhifeng li')
('33427555', 'Yu Qiao', 'yu qiao')
yandongw@andrew.cmu.edu, {kp.zhang,zhifeng.li,yu.qiao}@siat.ac.cn
0857281a3b6a5faba1405e2c11f4e17191d3824dChude-Olisah et al. EURASIP Journal on Advances in Signal Processing 2014, 2014:102
http://asp.eurasipjournals.com/content/2014/1/102
R ES EAR CH
Face recognition via edge-based Gabor feature
representation for plastic surgery-altered images
Open Access
('2529988', 'Ghazali Sulong', 'ghazali sulong')
08f1e9e14775757298afd9039f46ec56e80677f9Attentional Push: Augmenting Salience with
Shared Attention Modeling
Centre for Intelligent Machines, Department of Electrical and Computer Engineering,
McGill University
Montreal, Quebec, Canada
('38111179', 'Siavash Gorji', 'siavash gorji')
('1713608', 'James J. Clark', 'james j. clark')
siagorji@cim.mcgill.ca clark@cim.mcgill.ca
08d41d2f68a2bf0091dc373573ca379de9b16385Recursive Chaining of Reversible Image-to-Image
Translators for Face Aging
Aalto University, Espoo, Finland
1 GenMind Ltd, Finland
{ari.heljakka,arno.solin,juho.kannala}@aalto.fi
('2622083', 'Ari Heljakka', 'ari heljakka')
('1768402', 'Arno Solin', 'arno solin')
('1776374', 'Juho Kannala', 'juho kannala')
08f6745bc6c1b0fb68953ea61054bdcdde6d2fc7Understanding Kin Relationships in a Photo
('2025056', 'Ming Shao', 'ming shao')
('33642939', 'Jiebo Luo', 'jiebo luo')
('1708679', 'Yun Fu', 'yun fu')
082ad50ac59fc694ba4369d0f9b87430553b11db
6d0fe30444c6f4e4db3ad8b02fb2c87e2b33c58dRobust Deep Appearance Models
Concordia University, Montreal, Quebec, Canada
2 CyLab Biometrics Center and the Department of Electrical and Computer Engineering,
Carnegie Mellon University, Pittsburgh, PA, USA
('2687827', 'Kha Gia Quach', 'kha gia quach')
('1876581', 'Chi Nhan Duong', 'chi nhan duong')
('1769788', 'Khoa Luu', 'khoa luu')
('1699922', 'Tien D. Bui', 'tien d. bui')
Email: {k q, c duon, bui}@encs.concordia.ca
Email: kluu@andrew.cmu.edu
6dbdb07ce2991db0f64c785ad31196dfd4dae721Seeing Small Faces from Robust Anchor’s Perspective
Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh, PA 15213, USA
('47894545', 'Chenchen Zhu', 'chenchen zhu')
('1794486', 'Marios Savvides', 'marios savvides')
('47599820', 'Ran Tao', 'ran tao')
('1769788', 'Khoa Luu', 'khoa luu')
{chenchez, rant, kluu, marioss}@andrew.cmu.edu
6dd052df6b0e89d394192f7f2af4a3e3b8f89875International Journal of Engineering and Advanced Technology (IJEAT)
ISSN: 2249 – 8958, Volume-2, Issue-4, April 2013
A literature survey on Facial Expression
Recognition using Global Features
('9318822', 'Mahesh M. Goyani', 'mahesh m. goyani')
6d7a32f594d46f4087b71e2a2bb66a4b25da5e30Towards Person Authentication by Fusing Visual and Thermal Face
Biometrics
1 Department of Engineering
University of Cambridge
Cambridge, CB2 1TQ
UK
2 Delphi Corporation
Delphi Electronics and Safety
Kokomo, IN 46901-9005
USA
('2214319', 'Riad Hammoud', 'riad hammoud')
('1745672', 'Roberto Cipolla', 'roberto cipolla')
{oa214,cipolla}@eng.cam.ac.uk
riad.hammoud@delphi.com
6dd5dbb6735846b214be72983e323726ef77c7a9Josai Mathematical Monographs
vol. 7 (2014), pp. 25-40
A Survey on Newer Prospective
Biometric Authentication Modalities
('3322335', 'Narishige Abe', 'narishige abe')
('2395689', 'Takashi Shinzaki', 'takashi shinzaki')
6d10beb027fd7213dd4bccf2427e223662e20b7dResearch Article
User Adaptive and Context-Aware Smart Home Using Pervasive and Semantic Technologies
Aggeliki Vlachostergiou,1 Georgios Stratogiannis,1 George Caridakis,1,2 George Siolas,1 and Phivos Mylonas1,3
1 Intelligent Systems Content and Interaction Laboratory, National Technical University of Athens, Iroon Polytexneiou 9, 15780 Zografou, Greece
2 Department of Cultural Technology and Communication, University of the Aegean, Mytilene, Lesvos, Greece
3 Department of Informatics, Ionian University, Corfu, Greece
Correspondence should be addressed to Aggeliki Vlachostergiou; aggelikivl@image.ntua.gr
Received 17 January 2016; Revised 6 July 2016; Accepted 17 July 2016
Academic Editor: John N. Sahalos
Copyright © 2016 Aggeliki Vlachostergiou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Ubiquitous Computing is moving the interaction away from the human-computer paradigm and towards the creation of smart environments that users and things, from the IoT perspective, interact with. User modeling and adaptation is consistently present, having the human user as a constant, but pervasive interaction introduces the need for context incorporation towards context-aware smart environments. The current article discusses both aspects of user modeling and adaptation as well as context awareness and incorporation into the smart home domain. Users are modeled as fuzzy personas and these models are semantically related. Context information is collected via sensors and corresponds to various aspects of the pervasive interaction such as temperature and humidity, but also smart city sensors and services. This context information enhances the smart home environment via the incorporation of user defined home rules. Semantic Web technologies support the knowledge representation of this ecosystem, while the overall architecture has been experimentally verified using input from the SmartSantander smart city and applying it to the SandS smart home within the FIRE and FIWARE frameworks.
1. Introduction
Although in their initial definition and development stages pervasive computing practices did not necessarily rely on the use of the Internet, current trends show the emergence of many convergence points with the Internet of Things (IoT) paradigm, where objects are identified as Internet resources and can be accessed and utilized as such. At the same time, the Human-Computer Interaction (HCI) paradigm in the domain of domotics has widened its scope considerably, placing the human inhabitant in a pervasive environment and in a continuous interaction with smart objects and appliances. Smart homes that additionally adhere to the IoT approach consider that this data continuously produced by appliances, sensors, and humans can be processed and assessed collaboratively, remotely, and even socially. In the present paper, we try to build a new knowledge representation framework where we first place the human user in the center of this interaction. We then propose to break down the multitude of possible user behaviors to a few prototypical user models and then to resynthesize them using fuzzy reasoning. Then, we discuss the ubiquity of context information in relation to the user and the difficulty of proposing a universal formalization framework for the open world. We show that, by restricting user-related context to the smart home environment, we can reliably define simple rule structures that correlate specific sensor input data and user actions that can be used to trigger arbitrary smart home events. This rationale is then evolved to a higher level semantic representation of the domotic ecosystem in which complex home rules can be defined using Semantic Web technologies. It is thus observed that a smart home using pervasive and semantic technologies in which the human user is in the center of the interaction has to be adaptive (its behavior can change in response to a person's actions and environment) and personalized (its behavior can be tailored to the user's
Hindawi Publishing Corporation, Journal of Electrical and Computer Engineering, Volume 2016, Article ID 4789803, 20 pages, http://dx.doi.org/10.1155/2016/4789803
6d2ca1ddacccc8c865112bd1fbf8b931c2ee8e75ROC Speak: Semi-Automated Personalized Feedback on
Nonverbal Behavior from Recorded Videos
Rochester Human-Computer Interaction (ROC HCI), University of Rochester, NY
Figure 1. An overview of our system. Once the user finishes recording, the video is analyzed on the server for objective feedback
and sent to Mechanical Turk for subjective feedback. The objective feedback is then combined with subjective feedback that is
scored based on helpfulness, under which the sentiment is then classified.
('1825866', 'Michelle Fung', 'michelle fung')
('2961433', 'Yina Jin', 'yina jin')
('2171034', 'RuJie Zhao', 'rujie zhao')
{mfung, yjin18, rzhao2, mehoque}@cs.rochester.edu
6dddf1440617bf7acda40d4d75c7fb4bf9517dbbJOURNAL OF LATEX CLASS FILES, VOL. XX, NO. X, MM YY
Beyond Counting: Comparisons of Density Maps for Crowd
Analysis Tasks - Counting, Detection, and Tracking
('41201301', 'Di Kang', 'di kang')
('1730232', 'Zheng Ma', 'zheng ma')
('3651407', 'Antoni B. Chan', 'antoni b. chan')
6de18708218988b0558f6c2f27050bb4659155e4
6d97e69bbba5d1f5c353f9a514d62aff63bc0fb1Semi-Supervised Learning for Facial Expression
Recognition
1HP Labs, Palo Alto, CA, USA
Faculty of Science, University of Amsterdam, The Netherlands
3Escola Polit´ecnica, Universidade de S˜ao Paulo, Brazil
Beckman Institute, University of Illinois at Urbana-Champaign, IL, USA
('1774778', 'Ira Cohen', 'ira cohen')
('1703601', 'Nicu Sebe', 'nicu sebe')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
Ira.cohen@hp.com
nicu@science.uva.nl
fgcozman@usp.br
huang@ifp.uiuc.edu
6d91da37627c05150cb40cac323ca12a91965759
6d07e176c754ac42773690d4b4919a39df85d7ecFace Attribute Prediction Using Off-The-Shelf Deep
Learning Networks
Computer Science and Communication
KTH Royal Institute of Technology
100 44 Stockholm, Sweden
('50262049', 'Yang Zhong', 'yang zhong')
('1736906', 'Josephine Sullivan', 'josephine sullivan')
('40565290', 'Haibo Li', 'haibo li')
{yzhong, sullivan, haiboli}@kth.se
6dd2a0f9ca8a5fee12edec1485c0699770b4cfdfWebly-supervised Video Recognition by Mutually
Voting for Relevant Web Images and Web Video Frames
IIIS, Tsinghua University
2Google Research
3Amazon
CRCV, University of Central Florida
('2551285', 'Chuang Gan', 'chuang gan')
('1726241', 'Chen Sun', 'chen sun')
('2055900', 'Lixin Duan', 'lixin duan')
('40206014', 'Boqing Gong', 'boqing gong')
6d4b5444c45880517213a2fdcdb6f17064b3fa91Journal of Information Engineering and Applications
ISSN 2224-5782 (print) ISSN 2225-0506 (online)
Vol 2, No.3, 2012
www.iiste.org
Harvesting Image Databases from The Web
G.H.Raisoni College of Engg. and Mgmt., Pune, India
G.H.Raisoni College of Engg. and Mgmt., Pune, India
G.H.Raisoni College of Engg. and Mgmt., Pune, India
('2671016', 'Snehal M. Gaikwad', 'snehal m. gaikwad')
('40050646', 'Snehal S. Pathare', 'snehal s. pathare')
*gaikwad.snehal99@gmail.com
*snehalpathare4@gmail.com
*truptijachak311991@gmail.com
6d8c9a1759e7204eacb4eeb06567ad0ef4229f93Face Alignment Robust to Pose, Expressions and
Occlusions
('2232940', 'Vishnu Naresh Boddeti', 'vishnu naresh boddeti')
('1767616', 'Myung-Cheol Roh', 'myung-cheol roh')
('2526145', 'Jongju Shin', 'jongju shin')
('3149566', 'Takaharu Oguri', 'takaharu oguri')
('1733113', 'Takeo Kanade', 'takeo kanade')
6dc1f94b852538d572e4919238ddb10e2ee449a4Objects as context for detecting their semantic parts
University of Edinburgh
('20758701', 'Abel Gonzalez-Garcia', 'abel gonzalez-garcia')
('1996209', 'Davide Modolo', 'davide modolo')
('1749692', 'Vittorio Ferrari', 'vittorio ferrari')
a.gonzalez-garcia@sms.ed.ac.uk
davide.modolo@gmail.com
vferrari@staffmail.ed.ac.uk
6d4e3616d0b27957c4107ae877dc0dd4504b69abShuffle and Learn: Unsupervised Learning using
Temporal Order Verification
The Robotics Institute, Carnegie Mellon University
2 Facebook AI Research
('1806773', 'Ishan Misra', 'ishan misra')
('1699161', 'C. Lawrence Zitnick', 'c. lawrence zitnick')
('1709305', 'Martial Hebert', 'martial hebert')
{imisra, hebert}@cs.cmu.edu, zitnick@fb.com
6d5125c9407c7762620eeea7570af1a8ee7d76f3Video Frame Interpolation by Plug-and-Play
Deep Locally Linear Embedding
Yonsei University
('1886286', 'Anh-Duc Nguyen', 'anh-duc nguyen')
('47902684', 'Woojae Kim', 'woojae kim')
('2078790', 'Jongyoo Kim', 'jongyoo kim')
('39200200', 'Sanghoon Lee', 'sanghoon lee')
6d8e3f3a83514381f890ab7cd2a1f1c5be597b69University of Massachusetts - Amherst
Doctoral Dissertations 2014-current
Dissertations and Theses
2014
Improving Text Recognition in Images of Natural
Scenes
Jacqueline Feild
Follow this and additional works at: http://scholarworks.umass.edu/dissertations_2
Recommended Citation
Feild, Jacqueline, "Improving Text Recognition in Images of Natural Scenes" (2014). Doctoral Dissertations 2014-current. Paper 37.
ScholarWorks@UMass Amherst
University of Massachusetts - Amherst, jacqueline.feild@gmail.com
This Open Access Dissertation is brought to you for free and open access by the Dissertations and Theses at ScholarWorks@UMass Amherst. It has
been accepted for inclusion in Doctoral Dissertations 2014-current by an authorized administrator of ScholarWorks@UMass Amherst. For more
information, please contact scholarworks@library.umass.edu.
6d8eef8f8d6cd8436c55018e6ca5c5907b31ac19Understanding Representations and Reducing
their Redundancy in Deep Networks
Thesis submitted to the Faculty of
Virginia Polytechnic Institute and State University
in partial fulfillment of the requirements for the degree of
Master of Science
in
Computer Science and Applications
Chair
Co-chair
February 18, 2016
Blacksburg, Virginia
Keywords: Computer Vision, Machine Learning, Object Recognition, Overfitting
('3358085', 'Micheal Cogswell', 'micheal cogswell')
('40486307', 'Bert Huang', 'bert huang')
('1746610', 'Dhruv Batra', 'dhruv batra')
('38013066', 'B. Aditya Prakash', 'b. aditya prakash')
Copyright @ 2016 Michael Cogswell
6d618657fa5a584d805b562302fe1090957194baFull Paper
NNGT Int. J. of Artificial Intelligence , Vol. 1, July 2014
Human Facial Expression Recognition based
on Principal Component Analysis and
Artificial Neural Network
Laboratory of Automatic and Signals Annaba (LASA) , Department of electronics, Faculty of Engineering,
Zermi.Narima, Ramdani.M, Saaidia.M
Badji-Mokhtar University, P.O.Box 12, Annaba-23000, Algeria
E-Mail : naili.narima@gmail.com, messaoud.ramdani@univ-annaba.org
6d66c98009018ac1512047e6bdfb525c35683b16IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 25, NO. 9, SEPTEMBER 2003
1063
Face Recognition Based on
Fitting a 3D Morphable Model
('2880906', 'Volker Blanz', 'volker blanz')
('1687079', 'Thomas Vetter', 'thomas vetter')
016cbf0878db5c40566c1fbc237686fbad666a33
016800413ebd1a87730a5cf828e197f43a08f4b3Learning Attributes Equals
Multi-Source Domain Generalization
IIIS, Tsinghua University
University of Iowa
CRCV, U. of Central Florida
('2551285', 'Chuang Gan', 'chuang gan')
('40381920', 'Tianbao Yang', 'tianbao yang')
('40206014', 'Boqing Gong', 'boqing gong')
ganchuang1990@gmail.com
tianbao-yang@uiowa.edu
bgong@crcv.ucf.edu
0172867f4c712b33168d9da79c6d3859b198ed4cExpression and Illumination Invariant Preprocessing
Technique for Face Recognition
Faculty of Engineering, Ain Shams University, Cairo, Egypt
('1726416', 'A. Abbas', 'a. abbas')
('9159923', 'S. Abdel-Hay', 's. abdel-hay')
0145dc4505041bf39efa70ea6d95cf392cfe7f19Human Action Segmentation with Hierarchical Supervoxel Consistency
University of Michigan
Detailed analysis of human action, such as classification, detection and lo-
calization has received increasing attention from the community; datasets
like J-HMDB [1] have made it plausible to conduct studies analyzing the
impact that such deeper information has on the greater action understanding
problem. However, detailed automatic segmentation of human action has
comparatively been unexplored. In this paper, we introduce a hierarchical
MRF model to automatically segment human action boundaries in videos
“in-the-wild” (see Fig. 1).
We first propose a human motion saliency representation which incor-
porates two parts: foreground motion and human appearance information.
For foreground motion estimation, we propose a new motion saliency fea-
ture by using long-term trajectories to build a camera motion model, and
then measure the motion saliency via the deviation from the camera model.
For human appearance information, we use a DPM person detector trained
on PASCAL VOC 2007 and construct a saliency map by averaging the
normalized detection scores over all scales and all components.
Then, to segment the human action, we start by applying hierarchical
graph-based video segmentation [2] to form a hierarchy of supervoxels. On
this hierarchy, we define an MRF model, using our novel human motion
saliency as the unary term. We consider the joint information of temporal
connections in the direction of optical flow and human appearance-aware
spatial neighbors as pairwise potential. We design an innovative high-order
potential between different supervoxels on different levels of the hierar-
chy to alleviate leaks and sustain better semantic information. Given the
graph structure G = (X, E) induced by the supervoxel hierarchy, where E is the
set of edges in the hierarchy, we introduce an energy function over
G that enforces hierarchical supervoxel consistency through higher-order
potentials derived from the supervoxels V.
E(Y) = ∑_{i∈X} Φi(yi) + ∑_{(i,j)∈E} Φi,j(yi, yj) + ∑_{v∈V} Φv(yv)    (1)
where Φi(yi) denotes the unary potential for the supervoxel with index i, Φi,j(yi, yj)
denotes the pairwise potential between two supervoxels joined by an edge, and Φv(yv)
denotes the higher-order potential over supervoxels spanning two layers. Unary
potential: we encode the motion saliency and human saliency features into
supervoxels to obtain the unary potential components:
Φi(yi) = γM Mi(yi) + γP Pi(yi) + γS Si(yi)    (2)
where γM, γP and γS are weights for the unary terms. Mi(yi) reflects the
motion evidence, Pi(yi) and Si(yi) reflect the human evidence. Pairwise
potential: we constrain the edge space with only two types of neighbors:
temporal supervoxel neighbors and human-aware spatial neighbors, so we
define the pairwise potential as:
Φi,j(yi, yj) = γI Ii,j(yi, yj) + γK Ki,j(yi, yj)    (3)
where γI and γK are pairwise potential weights. Ii, j(yi,y j) is the cost be-
tween supervoxel i and supervoxel j with human detection constraints, which
ensures the smoothness spatially. Note that i and j could be determined as
neighbors without pixel-level connection. Ki, j(yi,y j) is the virtual dissim-
ilarity which ensures the smoothness temporally. Higher order potential:
We define the hierarchical supervoxel label consistency potential. We utilize
the connection between different supervoxel hierarchical levels. In practice,
we adopt the Robust Pn model [3] to define the potentials,
Φv(yv) = N(yv) · (1/Q) · γmax(v)  if N(yv) ≤ Q,  and  Φv(yv) = γmax(v)  otherwise,
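The full energy, including the Robust P^n higher-order term, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the unary/pairwise lookup tables, the clique list, and the choice of N(y_v) as the number of supervoxels deviating from the clique's majority label are simplifications for the sketch.

```python
import numpy as np

def robust_pn(clique_labels, gamma_max, Q):
    """Robust P^n potential for one hierarchy clique.

    N(y_v) is taken here as the number of supervoxels in the clique that
    disagree with its majority label; the cost grows linearly up to the
    truncation point Q and then saturates at gamma_max.
    """
    majority = np.bincount(clique_labels).argmax()
    n_disagree = int(np.sum(clique_labels != majority))
    if n_disagree <= Q:
        return n_disagree * gamma_max / Q
    return gamma_max

def energy(labels, unary, edges, pairwise, cliques, gamma_max, Q):
    """Evaluate E(Y): the unary, pairwise, and higher-order sums of Eq. (1).

    labels   : integer label per supervoxel (numpy array)
    unary    : unary[i][label] cost table
    edges    : list of (i, j) supervoxel neighbor pairs
    pairwise : dict mapping (i, j) -> label-by-label cost matrix
    cliques  : list of index lists, one per cross-layer clique v
    """
    e = sum(unary[i][labels[i]] for i in range(len(labels)))
    e += sum(pairwise[(i, j)][labels[i], labels[j]] for (i, j) in edges)
    e += sum(robust_pn(labels[list(v)], gamma_max, Q) for v in cliques)
    return e
```

Minimizing such an energy would normally use graph-cut or move-making inference; this sketch only evaluates a given labeling.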
('8553015', 'Jiasen Lu', 'jiasen lu')
('1856629', 'Ran Xu', 'ran xu')
01bef320b83ac4405b3fc5b1cff788c124109fb9
Translating Head Motion into Attention - Towards
Processing of Student’s Body-Language
CHILI Laboratory
Łukasz Kidziński
École polytechnique fédérale de Lausanne, RLC D1 740, CH-1015 Lausanne
('1850245', 'Mirko Raca', 'mirko raca')
('1799133', 'Pierre Dillenbourg', 'pierre dillenbourg')
mirko.raca@epfl.ch
lukasz.kidzinski@epfl.ch
pierre.dillenbourg@epfl.ch
01c9dc5c677aaa980f92c4680229db482d5860dbTemporal Action Detection using a Statistical Language Model
University of Bonn, Germany
('32774629', 'Alexander Richard', 'alexander richard')
('2946643', 'Juergen Gall', 'juergen gall')
{richard,gall}@iai.uni-bonn.de
013909077ad843eb6df7a3e8e290cfd5575999d2A semi-automatic methodology for facial landmark annotation
Imperial College London, UK
School of Computer Science, University of Lincoln, U.K
EEMCS, University of Twente, The Netherlands
('3320415', 'Christos Sagonas', 'christos sagonas')
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1694605', 'Maja Pantic', 'maja pantic')
{c.sagonas, gt204, s.zafeiriou, m.pantic}@imperial.ac.uk
01c7a778cde86ad1b89909ea809d55230e569390A Supervised Low-rank Method for Learning Invariant Subspaces
West Virginia University
Morgantown, WV 26508
('1803400', 'Farzad Siyahjani', 'farzad siyahjani')
('3360490', 'Ranya Almohsen', 'ranya almohsen')
('36911226', 'Sinan Sabri', 'sinan sabri')
('1736352', 'Gianfranco Doretto', 'gianfranco doretto')
{fsiyahja, ralmohse, sisabri, gidoretto}@mix.wvu.edu
01c8d7a3460422412fba04e7ee14c4f6cdff9ad7(IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 4, No. 7, 2013
Rule Based System for Recognizing Emotions Using
Multimodal Approach
Information System
SBM, SVKM’s NMIMS
Mumbai, India
('9575671', 'Preeti Khanna', 'preeti khanna')
0115f260069e2e501850a14845feb400142e2443An On-Line Handwriting Recognizer
with Fisher Matching, Hypotheses
Propagation Network and Context
Constraint Models
By
A dissertation submitted in partial fulfillment of
the requirements for the degree of
Doctor of Philosophy
Department of Computer Science
New York University
May 2001
Davi Geiger
('2034318', 'Jong Oh', 'jong oh')
01cc8a712e67384f9ef9f30580b7415bfd71e98014750 • The Journal of Neuroscience, November 3, 2010 • 30(44):14750 –14758
Behavioral/Systems/Cognitive
Failing to Ignore: Paradoxical Neural Effects of Perceptual
Load on Early Attentional Selection in Normal Aging
2Program in Neuroscience, and 3Rotman Research Institute, University of Toronto, Toronto, Ontario M5S 3G3, Canada
We examined visual selective attention under perceptual load—simultaneous presentation of task-relevant and -irrelevant informa-
tion—in healthy young and older adult human participants to determine whether age differences are observable at early stages of
selection in the visual cortices. Participants viewed 50/50 superimposed face/place images and judged whether the faces were male or
female, rendering places perceptible but task-irrelevant. Each stimulus was repeated, allowing us to index dynamic stimulus-driven
competition from places. Consistent with intact early selection in young adults, we observed no adaptation to unattended places in
parahippocampal place area (PPA) and significant adaptation to attended faces in fusiform face area (FFA). Older adults, however,
exhibited both PPA adaptation to places and weak FFA adaptation to faces. We also probed participants’ associative recognition for
face-place pairs post-task. Older adults with better place recognition memory scores were found to exhibit both the largest magnitudes of
PPA adaptation and the smallest magnitudes of FFA adaptation on the attention task. In a control study, we removed the competing
perceptual information to decrease perceptual load. These data revealed that the initial age-related impairments in selective attention
were not due to a general decline in visual cortical selectivity; both young and older adults exhibited robust FFA adaptation and neither
group exhibited PPA adaptation to repeated faces. Accordingly, distracting information does not merely interfere with attended input in
older adults, but is co-encoded along with the contents of attended input, to the extent that this information can subsequently be
recovered from recognition memory.
Introduction
Age-related changes in selective attention have traditionally been
examined using manipulations of executive attention, e.g., the
capacity to selectively maintain targets and suppress distractors
in working memory (WM) (Hasher and Zacks, 1988; Gazzaley et
al., 2005, 2008; Healey et al., 2008). Under cognitive load from
WM, older adults appear more susceptible to interference from
distracting stimuli compared with young controls.
At the neural level, executive attention appears to reconcile
interference from unattended distractors at stages of processing
after encoding in the perceptual cortices, i.e., late selection, and
relies on prefrontal control mechanisms (de Fockert et al., 2001;
Gehring and Knight, 2002). Experimental tasks that manipulate
executive attention, such as distractor exclusion (de Fockert et al.,
2001; Yi et al., 2004) and attentional blink (Luck et al., 1996;
Marois et al., 2000) have routinely demonstrated late selection of
unattended information.
However, the focus of aging research on executive attention
and distractor interference has left several questions unexplored.
Executive attention appears to be dissociable from the type of
perceptual attention used for reconciling distractor competition
Received May 26, 2010; revised Aug. 28, 2010; accepted Sept. 11, 2010.
This work was supported by Grant MOP102637 from the Canadian Institutes of Health Research to E.D.R. and the
Vanier National Science and Engineering Research Council Scholarship to T.W.S. We also thank Adam K. Anderson
and Daniel H. Lee for helpful editorial input on the manuscript.
DOI:10.1523/JNEUROSCI.2687-10.2010
Copyright © 2010 the authors
0270-6474/10/3014750-09$15.00/0
within the visual field, which is thought to be embedded in pos-
terior cortical subsystems (Treisman, 1969; Desimone and Dun-
can, 1995; Lavie et al., 2004). For instance, Yi et al. (2004)
observed that under perceptual load but not WM load, unat-
tended distractors were suppressed at stages of visual processing
before extrastriate encoding. These finding indicate that percep-
tual attention relies on a distinct early selection mechanism.
In the present study, we therefore explored with functional
magnetic resonance imaging (fMRI) whether perceptual at-
tention is also susceptible to an age-related impairment. We
hypothesized that under perceptually demanding conditions,
when task-relevant and -irrelevant stimuli were simultaneously
presented in the visual field, early competitive perceptual inter-
actions from task-irrelevant sensory information would be suc-
cessfully filtered in younger adults before encoding (Lavie, 1995;
Yi et al., 2004). By contrast, if older adults do exhibit impaired
perceptual attention, then age-differences in distractor encoding
should be observable in extrastriate cortex sensitive to the
unattended stream of input. We were also interested in eluci-
dating the precise neural fate of this unattended information
in older adults. Specifically, do distractors merely interfere
with attended input, or are distractors co-encoded along with
the content of attended input to the extent that they can sub-
sequently be recognized?
To interrogate these hypotheses, we acquired fMRI while a
group of healthy young (mean age = 22.2 years) and older (mean
age = 77.4 years) adults viewed 50/50 threshold superimposed
face and place images (O’Craven et al., 1999; Yi et al., 2006) (Fig.
1a). Participants decided whether faces were male or female, ren-
('4258285', 'Eve De Rosa', 'eve de rosa')
('4258285', 'Eve De Rosa', 'eve de rosa')
George Street, Toronto, ON M5S 3G3, Canada. E-mail: taylor@aclab.ca or derosa@psych.utoronto.ca.
01e12be4097fa8c94cabeef0ad61498c8e7762f2
0163d847307fae508d8f40ad193ee542c1e051b4JOURNAL OF LATEX CLASS FILES, VOL. 6, NO. 1, JANUARY 2007
Classemes and Other Classifier-based
Features for Efficient Object Categorization
- Supplementary material -
1 LOW-LEVEL FEATURES
We extract the SIFT [1] features for our descriptor
according to the following pipeline. We first convert
each image to gray-scale, then we normalize the con-
trast by forcing the 0.01% of lightest and darkest pixels
to be mapped to white and black respectively, and
linearly rescaling the values in between. All images
exceeding 786,432 pixels of resolution are downsized
to this maximum value while keeping the aspect ratio.
The 128-dimensional SIFT descriptors are computed
from the interest points returned by a DoG detec-
tor [2]. We finally compute a Bag-Of-Word histogram
of these descriptors, using a K-means vocabulary of
500 words.
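The preprocessing and quantization steps of this pipeline can be sketched in NumPy. This is a hedged illustration only: SIFT/DoG extraction itself is assumed to come from an external library, and `normalize_contrast` and `bow_histogram` are hypothetical helper names, not the authors' code.

```python
import numpy as np

def normalize_contrast(gray, clip=0.0001):
    """Map the 0.01% lightest/darkest pixels to white/black and
    linearly rescale the values in between, as described above."""
    lo, hi = np.quantile(gray, [clip, 1.0 - clip])
    out = (gray.astype(np.float64) - lo) / max(hi - lo, 1e-12)
    return np.clip(out, 0.0, 1.0) * 255.0

def bow_histogram(descriptors, vocab):
    """Quantize 128-d SIFT descriptors against a K-means vocabulary
    (K = 500 in the text) and return an L1-normalized histogram."""
    # squared distance from every descriptor to every visual word
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)          # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocab)).astype(np.float64)
    return hist / max(hist.sum(), 1.0)
```

The brute-force distance computation is fine for a 500-word vocabulary; a KD-tree would be the usual choice at larger scales.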
2 CLASSEMES
The LSCOM categories were developed specifically
for multimedia annotation and retrieval, and have
been used in the TRECVID video retrieval series.
We took the LSCOM CYC ontology dated 2006-06-30,
which contains 2832 unique categories. We removed
('34338883', 'Alessandro Bergamo', 'alessandro bergamo')
('1732879', 'Lorenzo Torresani', 'lorenzo torresani')
01dc1e03f39901e212bdf291209b7686266aeb13Actionness Estimation Using Hybrid Fully Convolutional Networks
Shenzhen key lab of Comp. Vis. and Pat. Rec., Shenzhen Institutes of Advanced Technology, CAS, China
The Chinese University of Hong Kong, Hong Kong
3Computer Vision Lab, ETH Zurich, Switzerland
('33345248', 'Limin Wang', 'limin wang')
('33427555', 'Yu Qiao', 'yu qiao')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
('1681236', 'Luc Van Gool', 'luc van gool')
016f49a54b79ec787e701cc8c7d0280273f9b1efSELF ORGANIZING MAPS FOR REDUCING THE NUMBER OF CLUSTERS BY ONE ON
SIMPLEX SUBSPACES
Aristotle University of Thessaloniki
Box 451, Thessaloniki 541 24, Greece
('1736143', 'Constantine Kotropoulos', 'constantine kotropoulos')
('1762248', 'Vassiliki Moschou', 'vassiliki moschou')
E-mail: {costas, vmoshou}@aiia.csd.auth.gr
01c4cf9c7c08f0ad3f386d88725da564f3c54679Interpretability Beyond Feature Attribution:
Quantitative Testing with Concept Activation Vectors (TCAV)
('3351164', 'Been Kim', 'been kim')
('2217654', 'Rory Sayres', 'rory sayres')
017ce398e1eb9f2eed82d0b22fb1c21d3bcf9637FACE RECOGNITION WITH HARMONIC DE-LIGHTING
2ICT-ISVISION Joint R&D Laboratory for Face Recognition, CAS, Beijing, China, 100080
1Graduate School, CAS, Beijing, China, 100080
Emails: {lyqing, sgshan, wgao}@jdl.ac.cn
('2343895', 'Laiyun Qing', 'laiyun qing')
('1685914', 'Shiguang Shan', 'shiguang shan')
('40049005', 'Wen Gao', 'wen gao')
014e3d0fa5248e6f4634dc237e2398160294edceInt J Comput Vis manuscript No.
(will be inserted by the editor)
What does 2D geometric information really tell us about
3D face shape?
Received: date / Accepted: date
('39180407', 'Anil Bas', 'anil bas')
01125e3c68edb420b8d884ff53fb38d9fbe4f2b8Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric
CNN Regression
The University of Nottingham, UK
Kingston University, UK
Figure 1: A few results from our VRN - Guided method, on a full range of pose, including large expressions
('34596685', 'Aaron S. Jackson', 'aaron s. jackson')
('3458121', 'Adrian Bulat', 'adrian bulat')
('1689047', 'Vasileios Argyriou', 'vasileios argyriou')
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
1{aaron.jackson, adrian.bulat, yorgos.tzimiropoulos}@nottingham.ac.uk
2 vasileios.argyriou@kingston.ac.uk
01c09acf0c046296643de4c8b55a9330e9c8a419MANIFOLD LEARNING USING EUCLIDEAN
-NEAREST NEIGHBOR GRAPHS
Department of Electrical Engineering and Computer Science
University of Michigan, Ann Arbor, MI
('1759109', 'Jose A. Costa', 'jose a. costa')
('35806564', 'Alfred O. Hero', 'alfred o. hero')
Email: jcosta@umich.edu, hero@eecs.umich.edu
01d23cbac762b0e46251f5dbde08f49f2d13b9f8Combining Face Verification Experts
Telecommunication laboratory, Université catholique de Louvain, B-1348 Belgium
Center for Vision, Speech and Signal Processing,
University of Surrey, Guildford, Surrey GU2 7XH, UK
('34964585', 'Jacek Czyz', 'jacek czyz')
('1748684', 'Josef Kittler', 'josef kittler')
('1698047', 'Luc Vandendorpe', 'luc vandendorpe')
czyz@tele.ucl.ac.be
014143aa16604ec3f334c1407ceaa496d2ed726eLarge-Scale Manifold Learning
Courant Institute
New York, NY
Google Research
New York, NY
Henry Rowley
Google Research
Mountain View, CA
('8395559', 'Ameet Talwalkar', 'ameet talwalkar')
('2794322', 'Sanjiv Kumar', 'sanjiv kumar')
ameet@cs.nyu.edu
sanjivk@google.com
har@google.com
011e6146995d5d63c852bd776f782cc6f6e11b7bFast Training of Triplet-based Deep Binary Embedding Networks
The University of Adelaide; and Australian Centre for Robotic Vision
('3194022', 'Bohan Zhuang', 'bohan zhuang')
('2604251', 'Guosheng Lin', 'guosheng lin')
('1780381', 'Chunhua Shen', 'chunhua shen')
0182d090478be67241392df90212d6cd0fb659e6Discovering Localized Attributes for Fine-grained Recognition
Indiana University
Bloomington, IN
TTI-Chicago
Chicago, IL
David Crandall
Indiana University
Bloomington, IN
University of Texas
Austin, TX
('2481141', 'Kun Duan', 'kun duan')
('1713589', 'Devi Parikh', 'devi parikh')
('1794409', 'Kristen Grauman', 'kristen grauman')
kduan@indiana.edu
dparikh@ttic.edu
djcran@indiana.edu
grauman@cs.utexas.edu
016a8ed8f6ba49bc669dbd44de4ff31a799630781Graduate School, CAS, Beijing, 100039, China,
2ICT-ISVISION Joint R&D Laboratory for Face Recognition, CAS, Beijing, China, 100080
Harbin Institute of Technology, Harbin, China
FACE RELIGHTING FOR FACE RECOGNTION UNDER GENERIC ILLUMINATION
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
01beab8f8293a30cf48f52caea6ca0fb721c8489
0178929595f505ef7655272cc2c339d7ed0b9507
0181fec8e42d82bfb03dc8b82381bb329de00631Discriminative Subspace Clustering
CVL, Linköping University, Linköping, Sweden
VSI Lab, Goethe University, Frankfurt, Germany
('1797883', 'Vasileios Zografos', 'vasileios zografos')
('34824636', 'Rudolf Mester', 'rudolf mester')
01b4b32c5ef945426b0396d32d2a12c69c282e29
0113b302a49de15a1d41ca4750191979ad756d2f1-4244-0367-7/06/$20.00 ©2006 IEEE
ICME 2006
019e471667c72b5b3728b4a9ba9fe301a7426fb2Cross-Age Face Verification by Coordinating with Cross-Face Age Verification
Temple University, Philadelphia, USA
('38909760', 'Liang Du', 'liang du')
('1805398', 'Haibin Ling', 'haibin ling')
{liang.du, hbling}@temple.edu
0601416ade6707c689b44a5bb67dab58d5c27814Feature Selection in Face Recognition: A Sparse
Representation Perspective
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2007-99
http://www.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-99.html
August 14, 2007
('2223304', 'Allan Y. Yang', 'allan y. yang')
('1738310', 'John Wright', 'john wright')
('7777470', 'Yi Ma', 'yi ma')
('1717598', 'S. Shankar Sastry', 's. shankar sastry')
064b797aa1da2000640e437cacb97256444dee82Coarse-to-fine Face Alignment with Multi-Scale Local Patch Regression
Megvii Inc.
Megvii Inc.
Megvii Inc.
('18036051', 'Zhiao Huang', 'zhiao huang')
('1848243', 'Erjin Zhou', 'erjin zhou')
('2695115', 'Zhimin Cao', 'zhimin cao')
hza@megvii.com
zej@megvii.com
czm@megvii.com
06f146dfcde10915d6284981b6b84b85da75acd4Scalable Face Image Retrieval using
Attribute-Enhanced Sparse Codewords
('33970300', 'Bor-Chun Chen', 'bor-chun chen')
('35081710', 'Yan-Ying Chen', 'yan-ying chen')
('1692811', 'Yin-Hsi Kuo', 'yin-hsi kuo')
('1716836', 'Winston H. Hsu', 'winston h. hsu')
067126ce1f1a205f98e33db7a3b77b7aec7fb45aOn Improving Dissimilarity-Based Classifications Using
a Statistical Similarity Measure
Myongji University
Yongin, 449-728 South Korea
2 Faculty of Electrical Engineering, Mathematics and Computer Science,
Delft University of Technology, The Netherlands
('34959719', 'Sang-Woon Kim', 'sang-woon kim')
kimsw@mju.ac.kr
r.p.w.duin@tudelft.nl
06466276c4955257b15eff78ebc576662100f740Where Is Who: Large-Scale Photo Retrieval by Facial
Attributes and Canvas Layout
National Taiwan University, Taipei, Taiwan
('2476032', 'Yu-Heng Lei', 'yu-heng lei')
('35081710', 'Yan-Ying Chen', 'yan-ying chen')
('33970300', 'Bor-Chun Chen', 'bor-chun chen')
('2817570', 'Lime Iida', 'lime iida')
('1716836', 'Winston H. Hsu', 'winston h. hsu')
{siriushpa, limeiida}@gmail.com, winston@csie.ntu.edu.tw
{ryanlei, yanying}@cmlab.csie.ntu.edu.tw,
0697bd81844d54064d992d3229162fe8afcd82cbUser-driven mobile robot storyboarding: Learning image interest and
saliency from pairwise image comparisons
('1699287', 'Michael Burke', 'michael burke')
06262d6beeccf2784e4e36a995d5ee2ff73c8d11Recognize Actions by Disentangling Components of Dynamics
CUHK - SenseTime Joint Lab, The Chinese University of Hong Kong 2Amazon Rekognition
('47827548', 'Yue Zhao', 'yue zhao')
('3331521', 'Yuanjun Xiong', 'yuanjun xiong')
('1807606', 'Dahua Lin', 'dahua lin')
{zy317,dhlin}@ie.cuhk.edu.hk {yuanjx}@amazon.com
06f585a3a05dd3371cd600a40dc35500e2f82f9bBetter and Faster: Knowledge Transfer from Multiple Self-supervised Learning
Tasks via Graph Distillation for Video Classification
Institute of Computer Science and Technology, Peking University
Beijing 100871, China
('2439211', 'Chenrui Zhang', 'chenrui zhang')
('1704081', 'Yuxin Peng', 'yuxin peng')
pengyuxin@pku.edu.cn
06f8aa1f436a33014e9883153b93581eea8c5c70Leaving Some Stones Unturned:
Dynamic Feature Prioritization
for Activity Detection in Streaming Video
The University of Texas at Austin
Current approaches for activity recognition often ignore con-
straints on computational resources: 1) they rely on extensive
feature computation to obtain rich descriptors on all frames,
and 2) they assume batch-mode access to the entire test video at
once. We propose a new active approach to activity recognition
that prioritizes “what to compute when” in order to make timely
predictions. The main idea is to learn a policy that dynamically
schedules the sequence of features to compute on selected frames
of a given test video. In contrast to traditional static feature
selection, our approach continually re-prioritizes computation
based on the accumulated history of observations and accounts
for the transience of those observations in ongoing video. We
develop variants to handle both the batch and streaming settings.
On two challenging datasets, our method provides significantly
better accuracy than alternative techniques for a wide range of
computational budgets.
I. INTRODUCTION
Activity recognition in video is a core vision challenge. It
has applications in surveillance, autonomous driving, human-
robot interaction, and automatic tagging for large-scale video
retrieval. In any such setting, a system that can both categorize
and temporally localize activities would be of great value.
Activity recognition has attracted a steady stream of in-
teresting research [1]. Recent methods are largely learning-
based, and tackle realistic everyday activities (e.g., making
tea, riding a bike). Due to the complexity of the problem,
as well as the density of raw data comprising even short
videos, useful video representations are often computationally
intensive—whether dense trajectories, interest points, object
detectors, or convolutional neural network (CNN) features run
on each frame [2]–[8]. In fact, the expectation is that the more
features one extracts from the video, the better for accuracy.
For a practitioner wanting reliable activity recognition, then,
the message is to “leave no stone unturned”, ideally extracting
complementary descriptors from all video frames.
However, the “no stone unturned” strategy is problematic.
Not only does it assume virtually unbounded computational
resources, it also assumes that an entire video is available
at once for batch processing. In reality, a recognition system
will have some computational budget. Further, it may need
to perform in a streaming manner, with access to only a short
buffer of recent frames. Together, these considerations suggest
some form of feature triage is needed.
Yet prioritizing features for activity in video is challenging,
for two key reasons. First, the most informative features
may depend critically on what has been observed so far in
the specific test video, making traditional fixed/static feature
selection methods inadequate. In other words, the recognition
system’s belief state must evolve over time, and its priorities of
which features to extract next must evolve too. Second, when
processing streaming video, the entire video is never available
to the algorithm at once. This puts limits on what features can
even be considered each time step, and requires accounting
for the feature extractors’ framerates when allocating compu-
tation.
In light of these challenges, we propose a dynamic approach
to prioritize which features to compute when for activity
recognition. We formulate the problem as policy learning in a
Markov decision process. In particular, we learn a non-myopic
policy that maps the accumulated feature history (state) to the
subsequent feature and space-time location (action) that, once
extracted, is most expected to improve recognition accuracy
(reward) over a sequence of such actions. We develop two
variants of our approach: one for batch processing, where
we are free to “jump” around the video to get
the next
desired feature, and one for streaming video, where we are
confined to a buffer of newly received frames. By dynamically
allocating feature extraction effort, our method wisely leaves
some stones unturned—that is, some features unextracted—in
order to meet real computational budget constraints.
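A budgeted rollout of such a policy might look like the following sketch. The names `run_episode`, `q_value`, `extract`, and the per-action costs are assumptions for illustration; the paper's policy is learned within a Markov decision process and is non-myopic, whereas this loop simply picks the highest-value affordable action at each step.

```python
def run_episode(q_value, extract, actions, costs, budget, state0):
    """Greedily schedule feature extractions until the budget runs out.

    q_value(state, a) -> learned expected reward of taking action a
    extract(state, a) -> updated belief state after computing feature a
    costs[a]          -> compute cost of action a, charged to the budget
    """
    state, spent, history = state0, 0.0, []
    while True:
        # only actions that still fit in the remaining budget
        affordable = [a for a in actions if spent + costs[a] <= budget]
        if not affordable:
            break
        best = max(affordable, key=lambda a: q_value(state, a))
        state = extract(state, best)
        spent += costs[best]
        history.append(best)
    return state, history
```

In a streaming variant, `actions` would additionally be restricted each step to features computable within the current frame buffer.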
To our knowledge, our work is the first to actively triage
feature computation for streaming activity recognition.1 While
recent work explores ways to intelligently order feature com-
putation in a static image for the sake of object or scene
recognition [10]–[17] or offline batch activity detection [18],
streaming video presents unique challenges, as we explain in
detail below. While methods for “early” detection can fire on
an action prior to its completion [19]–[21], they nonetheless
passively extract all features in each incoming frame.
We validate our approach on two public datasets consist-
ing of third- and first-person video from over 120 activity
categories. We show its impact in both the streaming and
batch settings, and we further consider scenarios where the test
video is “untrimmed”. Comparisons with status quo passive
feature extraction, traditional feature selection approaches, and
a state-of-the-art early event detector demonstrate the clear
advantages of our approach.
1This paper extends our earlier technical report [9].
Yu-Chuan Su
Kristen Grauman
DEEP-CARVING: Discovering Visual Attributes by Carving Deep Neural Nets
Machine Intelligence Lab (MIL), Cambridge University
∗Computer Science & Artificial Intelligence Lab (CSAIL), MIT
Most of the approaches for discovering visual attributes in images de-
mand significant supervision, which is cumbersome to obtain. In this paper,
we aim to discover visual attributes in a weakly supervised setting that is
commonly encountered with contemporary image search engines.
For instance, given a noun (say forest) and its associated attributes (say
dense, sunlit, autumn), search engines can now generate many valid im-
ages for any attribute-noun pair (dense forests, autumn forests, etc). How-
ever, images for an attribute-noun pair do not contain any information about
other attributes (like which forests in the autumn are dense too). Thus, a
weakly supervised scenario occurs. Let A = {a1, . . . ,aM} be the set of
M attributes under consideration. We have a weakly supervised training
set, S = {(x1,y1), . . . , (xN,yN )} of N images x1, . . . ,xN ∈ X having labels
y1, . . . ,yN ∈ A respectively. Equivalently, segregating the training images
based on their label, we obtain M sets Sm = Xm × am, where Xm = {x ∈
X | (x, am) ∈ S} denotes the set of Nm = |Xm| images each having the (single)
positive training label am, m ∈ {1, . . . , M}. For a test image xt, the task
is to predict yt ⊆ A, i.e. all the attributes present in xt. The aforemen-
tioned weakly supervised problem setting is more challenging for attributes
as compared to object and scene detection, because attributes can highly co-occur.
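The weakly supervised set-up above can be sketched by grouping the labeled pairs of S into the per-attribute sets Xm. This is a minimal illustration with invented file names; it only shows the data organization, not the deep-carving method itself.

```python
from collections import defaultdict

def split_by_attribute(samples):
    """Group the weakly supervised training set S = {(x_i, y_i)} into the
    sets X_m = {x : (x, a_m) in S}. Each image carries only its single
    positive attribute label a_m, so nothing is recorded about the other
    attributes it may also exhibit."""
    groups = defaultdict(list)
    for image, attribute in samples:
        groups[attribute].append(image)
    return dict(groups)

# Hypothetical attribute-noun search results ("dense forests", ...).
S = [("img0.jpg", "dense"), ("img1.jpg", "autumn"),
     ("img2.jpg", "dense"), ("img3.jpg", "sunlit")]
X = split_by_attribute(S)   # X["dense"] holds the N_m = 2 "dense" images
```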
Sukrit Shankar
Vikas K. Garg
Roberto Cipolla
Learning from Candidate Labeling Sets
Luo Jie
Idiap Research Institute and EPF Lausanne
jluo@idiap.ch
Francesco Orabona
DSI, Università degli Studi di Milano
orabona@dsi.unimi.it
Enhancing Expression Recognition in the Wild
with Unlabeled Reference Data
Key Lab of Intelligent Information Processing of Chinese Academy of Sciences
(CAS), Institute of Computing Technology, CAS, Beijing 100190, China
Graduate University of Chinese Academy of Sciences, Beijing 100049, China
Mengyi Liu
Shaoxin Li
Shiguang Shan
Xilin Chen
{mengyi.liu, shaoxin.li, shiguang.shan, xilin.chen}@vipl.ict.ac.cn
A SURVEY OF THE TRENDS IN FACIAL AND
EXPRESSION RECOGNITION DATABASES AND
METHODS
University of Washington, Bothell
Sohini Roychowdhury
Michelle Emmons
roych@uw.edu
memmons1442@gmail.com
WhittleSearch: Image Search with Relative Attribute Feedback
(Supplementary Material)
1 Comparative Qualitative Search Results
We present three qualitative search results for human-generated feedback, in addition to those
shown in the paper. Each example shows one search iteration, where the 20 reference images are
randomly selected (rather than ones that match a keyword search, as the image examples in the
main paper illustrate). For each result, the first figure shows our method and the second figure
shows the binary feedback result for the corresponding target image. Note that for our method,
“more/less X” (where X is an attribute) means that the target image is more/less X than the
reference image which is shown.
Figures 1 and 2 show results for human-generated relative attribute and binary feedback, re-
spectively, when both methods are used to target the same “mental image” of a shoe shown in the
top left bubble. The top right grid of 20 images are the reference images displayed to the user, and
those outlined and annotated with constraints are the ones chosen by the user to give feedback.
The bottom row of images in either figure shows the top-ranked images after integrating the user’s
feedback into the scoring function, revealing the two methods’ respective performance. We see that
while both methods retrieve high-heeled shoes, only our method retrieves images that are as “open”
as the target image. This is because using the proposed approach, the user was able to comment
explicitly on the desired openness property.
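The ranking behavior described above can be sketched with a toy scorer. This is an invented illustration, not the WhittleSearch implementation: the per-image attribute scores and reference values are made up, and the real system learns a ranking function per attribute rather than using fixed numbers.

```python
def rank_by_feedback(db_scores, constraints):
    """Order images by how many relative-attribute constraints they satisfy.
    Each constraint (attr, relation, ref_score) encodes feedback of the
    form "the target is more/less `attr` than this reference image"."""
    def n_satisfied(img):
        n = 0
        for attr, relation, ref in constraints:
            v = db_scores[img][attr]
            if (relation == "more" and v > ref) or \
               (relation == "less" and v < ref):
                n += 1
        return n
    return sorted(db_scores, key=n_satisfied, reverse=True)

# Invented attribute scores for three shoe images.
db = {"shoe_a": {"open": 0.9, "heel": 0.8},
      "shoe_b": {"open": 0.2, "heel": 0.9},
      "shoe_c": {"open": 0.7, "heel": 0.3}}
# Feedback: the target is more "open" and more "heel" than reference
# images scoring 0.5 on those attributes.
ranking = rank_by_feedback(db, [("open", "more", 0.5),
                                ("heel", "more", 0.5)])
```

With this feedback, only the image satisfying both constraints (high-heeled and open) rises to the top, mirroring the qualitative result described above.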
Adriana Kovashka
Devi Parikh
Kristen Grauman
Active Image Clustering: Seeking Constraints from Humans to Complement
Algorithms
Computer Science Department
University of Maryland, College Park
Arijit Biswas
arijitbiswas87@gmail.com, djacobs@umiacs.umd.edu
Iranian Face Database
and Evaluation with a New Detection Algorithm
Melika Abbasian Nik, Mohammad Mahdi Dehshibi, Dr. Azam Bastanfard
Islamic Azad University of Karaj, Rajaie Shahr, Moazen Blvd., Karaj, IRAN. Email: melikabasian@yahoo.com
Abstract: This paper introduces a database of over 3,600 color images we
collected from 616 different human faces in ages between 2
and 85. We named this database the Iranian Face Database (IFDB). To evaluate the database, the experimental result of a new facial
feature detection algorithm is reported.
Keywords: Face Image Database, Facial Feature Detection Algorithms, Age Classification.
1 Introduction
Human face is the most common and useful key to a
person's identity. As humans, we are able to categorize a
person's age group from a person's face image and are often
able to be quite precise in this estimation [1]. In recent years,
face recognition and related works have received substantial
attention from researchers in the biometrics, pattern recognition,
and computer vision communities [5]. These
common interests among researchers motivated us to collect a
database of facial images from people of different ages. The
database is intended for distribution to researchers.
There are many publicly available databases for face
recognition and facial expression analysis. Besides the above
applications, the Iranian Face Database (IFDB) can be used for age
classification, facial surgery, race detection (besides other
databases), studying the influence of career and kind of skin on
aging, and other similar research.
In the remaining parts, details of the existing face image
databases and the Iranian Face Database are given. The
database is also evaluated by applying a new facial feature
detection algorithm.
2 Existing Face Image Databases
Many face databases are recorded under a variety of
conditions and with various applications in mind. Along with
the development of face recognition and facial expression
analysis algorithms, a comparatively large number of face
databases have been collected, e.g., Yale [8] and MIT [12]. Here FERET [13] and FG-NET [14] are reviewed.
2.1 FERET Database
The Facial Recognition Technology (FERET) database
was collected at George Mason University and the US Army
Research Laboratory facilities as part of the FERET program
[13]. In the FERET database, images of 1199 subjects exist in 9-20
different poses, 2
facial expressions and 2 different
illuminations at 2 different times. There are 14,051 images,
256×384 pixels in size. Images were collected at the following
poses: right and left profile, right and left quarter profile, and
right and left half profile. In these categories, images were
recorded for 508 to 980 subjects.
2.2 FG-NET Aging Database
The FG-NET Aging Database was generated as part of the
European Union project FG-NET
(Face and Gesture
Recognition Research Network). This database contains
1002 scanned face images showing 82 subjects at different
ages. Images have varying resolution, approximately 400×500
pixels. The database was developed in an attempt to assist
researchers who investigate the effects of aging on facial
appearance [14].
2.3 Need For A New Database
In order to build, train and reliably test age classification
algorithms, databases with controlled variations of factors such
as age, face pose, facial expression, occlusion, facial hair, and
illumination are needed. In spite of the various databases, there is
not an appropriate one for age classification. Most current
databases don't have images of people at different ages, and if they
have, they do not mention their ages. The FG-NET database
contains scanned images of persons with mention of their
ages, but different lighting conditions, backgrounds, poses and
expressions. By studying other databases, it was concluded to
provide a database meeting the conditions of an age classification
project. Age, enough resolution for wrinkle analysis, and
frontal poses are basic needs for this field.
3 Description Of Iranian Face Database
The Iranian Face Database, the first image database in the
Middle East, contains color facial imagery of a large number of
Iranian subjects between 2 and 85 years old.
IFDB is a large database that can support studies of the age
(cid:27)(cid:5)(cid:8)(cid:11)(cid:11)(cid:6)(cid:25)(cid:6)(cid:27)(cid:8)(cid:24)(cid:6)(cid:15)(cid:12)(cid:1) (cid:11)(cid:30)(cid:11)(cid:24)(cid:4)(cid:17)(cid:11)(cid:21)(cid:1) (cid:26)(cid:24)(cid:1) (cid:27)(cid:15)(cid:12)(cid:24)(cid:8)(cid:6)(cid:12)(cid:11)(cid:1) (cid:15)(cid:29)(cid:4)(cid:20)(cid:1) *(cid:14)+,,(cid:1) (cid:27)(cid:15)(cid:5)(cid:15)(cid:20)(cid:1) (cid:6)(cid:17)(cid:8)-(cid:4)(cid:11)(cid:1)
(cid:13)(cid:15)(cid:1)(cid:20)(cid:4)(cid:11)(cid:24)(cid:20)(cid:6)(cid:27)(cid:24)(cid:6)(cid:15)(cid:12)(cid:11)(cid:1)(cid:15)(cid:12)(cid:1).(cid:4)(cid:8)(cid:20)(cid:1)4(cid:27)(cid:5)(cid:15)(cid:24)(cid:16)(cid:4)(cid:11)(cid:14)(cid:1)-(cid:5)(cid:8)(cid:11)(cid:11)(cid:4)(cid:11)(cid:14)(cid:1)(cid:4)(cid:24)(cid:27)(cid:21)5(cid:14)(cid:1) (cid:17)(cid:8)(cid:7)(cid:4)E)((cid:14)(cid:1)(cid:16)(cid:8)(cid:6)(cid:20)(cid:1)
(cid:11)(cid:24)(cid:30)(cid:5)(cid:4)(cid:14)(cid:1) (cid:25)(cid:8)(cid:27)(cid:6)(cid:8)(cid:5)(cid:1) (cid:16)(cid:8)(cid:6)(cid:20)(cid:1) .(cid:4)(cid:20)(cid:4)(cid:1) (cid:6)(cid:17)((cid:15)(cid:11)(cid:4)(cid:18)(cid:1) (cid:24)(cid:15)(cid:1) ((cid:8)(cid:20)(cid:24)(cid:6)(cid:27)(cid:6)((cid:8)(cid:12)(cid:24)(cid:11)(cid:21)(cid:1) D(cid:20)(cid:15))(cid:12)(cid:18)E(cid:24)(cid:20))(cid:24)(cid:16)(cid:1)
(cid:6)(cid:12)(cid:25)(cid:15)(cid:20)(cid:17)(cid:8)(cid:24)(cid:6)(cid:15)(cid:12)(cid:14)(cid:1)(cid:6)(cid:12)(cid:27)(cid:5))(cid:18)(cid:6)(cid:12)-(cid:1)(cid:26)(cid:19)(cid:14)(cid:1)(cid:8)-(cid:4)(cid:14)(cid:1)(cid:7)(cid:6)(cid:12)(cid:18)(cid:1)(cid:15)(cid:25) ((cid:15)(cid:11)(cid:4)(cid:1)(cid:15)(cid:20)(cid:1)(cid:4)6((cid:20)(cid:4)(cid:11)(cid:11)(cid:6)(cid:15)(cid:12)(cid:1)(cid:8)(cid:12)(cid:18)(cid:1)
(cid:6)(cid:25)(cid:1) (cid:24)(cid:16)(cid:4)(cid:1) (cid:11))(cid:10) (cid:4)(cid:27)(cid:24)(cid:1) (cid:16)(cid:8)(cid:11)(cid:1) -(cid:5)(cid:8)(cid:11)(cid:11)(cid:4)(cid:11)(cid:1) (cid:6)(cid:11)(cid:1) ((cid:20)(cid:15)(cid:29)(cid:6)(cid:18)(cid:4)(cid:18)(cid:21)(cid:1) #6((cid:4)(cid:20)(cid:6)(cid:17)(cid:4)(cid:12)(cid:24)(cid:8)(cid:5)(cid:1) (cid:11))(cid:10) (cid:4)(cid:27)(cid:24)(cid:11)(cid:1)
.(cid:4)(cid:20)(cid:4)(cid:1)((cid:16)(cid:15)(cid:24)(cid:15)-(cid:20)(cid:8)((cid:16)(cid:4)(cid:18)(cid:1).(cid:6)(cid:24)(cid:16)(cid:1)(cid:8)(cid:1)(cid:25)(cid:6)(cid:12)(cid:4)E(cid:20)(cid:4)(cid:11)(cid:15)(cid:5))(cid:24)(cid:6)(cid:15)(cid:12)(cid:1)(cid:27)(cid:15)(cid:5)(cid:15)(cid:20)(cid:1)(cid:18)(cid:6)-(cid:6)(cid:24)(cid:8)(cid:5)(cid:1)(cid:27)(cid:8)(cid:17)(cid:4)(cid:20)(cid:8)(cid:1)
(cid:6)(cid:12)(cid:1)(cid:18)(cid:8)(cid:30)(cid:5)(cid:6)-(cid:16)(cid:24)(cid:21)(cid:1)’(cid:16)(cid:4)(cid:1)(cid:11))(cid:10) (cid:4)(cid:27)(cid:24)(cid:11)(cid:1).(cid:4)(cid:20)(cid:4)(cid:1)(cid:11)(cid:4)(cid:8)(cid:24)(cid:4)(cid:18)(cid:1)(cid:15)(cid:12)(cid:1)(cid:8)(cid:1)(cid:11)(cid:24)(cid:15)(cid:15)(cid:5)(cid:1)(cid:8)(cid:12)(cid:18)(cid:1)(cid:6)(cid:12)(cid:11)(cid:24)(cid:20))(cid:27)(cid:24)(cid:4)(cid:18)(cid:1)
(cid:24)(cid:15)(cid:1) (cid:17)(cid:8)(cid:6)(cid:12)(cid:24)(cid:8)(cid:6)(cid:12)(cid:1) (cid:8)(cid:1) (cid:27)(cid:15)(cid:12)(cid:11)(cid:24)(cid:8)(cid:12)(cid:24)(cid:1) (cid:16)(cid:4)(cid:8)(cid:18)(cid:1) ((cid:15)(cid:11)(cid:6)(cid:24)(cid:6)(cid:15)(cid:12)(cid:1) 4(cid:8)(cid:5)(cid:24)(cid:16)(cid:15))-(cid:16)(cid:1) (cid:11)(cid:5)(cid:6)-(cid:16)(cid:24)(cid:1)
(cid:17)(cid:15)(cid:29)(cid:4)(cid:17)(cid:4)(cid:12)(cid:24)(cid:11)(cid:1).(cid:4)(cid:20)(cid:4)(cid:1))(cid:12)(cid:8)(cid:29)(cid:15)(cid:6)(cid:18)(cid:8)(cid:10)(cid:5)(cid:4)5(cid:21)
’(cid:16)(cid:4)(cid:1)(cid:6)(cid:17)(cid:8)-(cid:4)(cid:11)(cid:1)(cid:8)(cid:20)(cid:4)(cid:1)(cid:6)(cid:12)(cid:1)>0,G+>,(cid:1)((cid:6)6(cid:4)(cid:5)(cid:11)(cid:1)(cid:20)(cid:4)(cid:11)(cid:15)(cid:5))(cid:24)(cid:6)(cid:15)(cid:12)(cid:14)(cid:1)/>(cid:1)(cid:10)(cid:6)(cid:24)(cid:1)(cid:18)(cid:4)((cid:24)(cid:16) (cid:14)(cid:1)
(cid:8)(cid:10)(cid:15))(cid:24)(cid:1)>,(cid:1)(cid:31)(cid:10)(cid:30)(cid:24)(cid:4)(cid:11)(cid:1)(cid:11)(cid:6)(cid:22)(cid:4)(cid:1)(cid:8)(cid:12)(cid:18)(cid:1)CHD(cid:1)(cid:25)(cid:15)(cid:20)(cid:17)(cid:8)(cid:24) (cid:21)(cid:1)
#(cid:12)(cid:15))-(cid:16)(cid:1) (cid:5))(cid:17)(cid:6)(cid:12)(cid:15)(cid:11)(cid:6)(cid:24)(cid:30)(cid:1) (cid:25)(cid:15)(cid:20)(cid:1) .(cid:20)(cid:6)(cid:12)(cid:7)(cid:5)(cid:4)(cid:1) ((cid:20)(cid:15)(cid:27)(cid:4)(cid:11)(cid:11)(cid:6)(cid:12)-(cid:1) (cid:8)(cid:12)(cid:18)(cid:1) (cid:25)(cid:8)(cid:27)(cid:6)(cid:8)(cid:5)(cid:1)
(cid:25)(cid:4)(cid:8)(cid:24))(cid:20)(cid:4)(cid:11)(cid:1) .(cid:6)(cid:24)(cid:16)(cid:15))(cid:24)(cid:1) (cid:11)(cid:16)(cid:8)(cid:18)(cid:15).(cid:11)(cid:1) (cid:6)(cid:11)(cid:1) (cid:12)(cid:4)(cid:4)(cid:18)(cid:4)(cid:18)(cid:1) 4(cid:6)(cid:12)(cid:1) (cid:8)-(cid:4)(cid:1) (cid:27)(cid:5)(cid:8)(cid:11)(cid:11)(cid:6)(cid:25)(cid:6)(cid:27)(cid:8)(cid:24)(cid:6)(cid:15)(cid:12)(cid:1)
.(cid:20)(cid:6)(cid:12)(cid:7)(cid:5)(cid:4)(cid:1) (cid:18)(cid:4)(cid:24)(cid:4)(cid:27)(cid:24)(cid:6)(cid:15)(cid:12)(cid:1) (cid:8)(cid:12)(cid:18)(cid:1) (cid:8)(cid:12)(cid:8)(cid:5)(cid:30)(cid:11)(cid:6)(cid:11)(cid:1)
(cid:24)(cid:16)(cid:4)(cid:1)
(cid:18)(cid:6)(cid:11)(cid:24)(cid:6)(cid:12)-)(cid:6)(cid:11)(cid:16)(cid:6)(cid:12)-(cid:1)(cid:15)(cid:25)(cid:1)(cid:11)(cid:4)(cid:12)(cid:6)(cid:15)(cid:20)(cid:11)(cid:1)(cid:25)(cid:20)(cid:15)(cid:17)(cid:1)(cid:24)(cid:16)(cid:15)(cid:11)(cid:4)(cid:1)(cid:6)(cid:12)(cid:1)(cid:24)(cid:16)(cid:4)(cid:1)(cid:30)(cid:15))(cid:12)-(cid:4)(cid:20)(cid:1)(cid:27)(cid:8)(cid:24)(cid:4)-(cid:15)(cid:20)(cid:6)(cid:4)(cid:11)(cid:1)
<(cid:2)=5(cid:21)(cid:1) ")(cid:10) (cid:4)(cid:27)(cid:24)(cid:11)(cid:1) .(cid:4)(cid:20)(cid:4)(cid:1) ((cid:16)(cid:15)(cid:24)(cid:15)-(cid:20)(cid:8)((cid:16)(cid:4)(cid:18)(cid:1) .(cid:6)(cid:24)(cid:16)(cid:15))(cid:24)(cid:1) (cid:8)(cid:12)(cid:30)(cid:1) ((cid:20)(cid:15) (cid:4)(cid:27)(cid:24)(cid:15)(cid:20)(cid:11)(cid:1) (cid:15)(cid:20)(cid:1)
(cid:6)(cid:17)((cid:15)(cid:20)(cid:24)(cid:8)(cid:12)(cid:24)(cid:1)
(cid:25)(cid:15)(cid:20)(cid:1)
(cid:6)(cid:11)(cid:1)
(cid:18)(cid:8)(cid:24)(cid:8)(cid:10)(cid:8)(cid:11)(cid:4)(cid:11)(cid:1) (cid:16)(cid:8)(cid:29)(cid:4)(cid:1) (cid:10)(cid:4)(cid:4)(cid:12)(cid:1) (cid:27)(cid:15)(cid:5)(cid:5)(cid:4)(cid:27)(cid:24)(cid:4)(cid:18)?(cid:1) (cid:11))(cid:27)(cid:16)(cid:1) (cid:8)(cid:11)(cid:1) (cid:9)!(cid:1) <+=(cid:14)(cid:1) (cid:23)(cid:9)(cid:13)7(cid:9)(cid:1) <@=(cid:14)(cid:1)
(cid:27)(cid:15)(cid:20)(cid:20)(cid:4)(cid:11)((cid:15)(cid:12)(cid:18)(cid:6)(cid:12)-(cid:1) (cid:24)(cid:15)(cid:1) +(cid:2)+(cid:1) ((cid:4)(cid:15)((cid:5)(cid:4)9(cid:11)(cid:1) (cid:25)(cid:8)(cid:27)(cid:4)(cid:11)(cid:1) 4>0@(cid:1) (cid:17)(cid:4)(cid:12)(cid:14)(cid:1) (cid:2)/B(cid:1) .(cid:15)(cid:17)(cid:4)(cid:12)5(cid:21)(cid:1)
06560d5721ecc487a4d70905a485e22c9542a522SUN, YU: DEEP FACIAL ATTRIBUTE DETECTION IN THE WILD
Deep Facial Attribute Detection in the Wild:
From General to Specific
Department of Automation
University of Science and Technology
of China
Hefei, China
('4364455', 'Yuechuan Sun', 'yuechuan sun')
('1720236', 'Jun Yu', 'jun yu')
ycsun@mail.ustc.edu.cn
harryjun@ustc.edu.cn
06526c52a999fdb0a9fd76e84f9795a69480cecf
06bad0cdda63e3fd054e7b334a5d8a46d8542817Sharing Features Between Objects and Their Attributes
1Department of Computer Science
University of Texas at Austin
2Computer Science Department
University of Southern California
('35788904', 'Sung Ju Hwang', 'sung ju hwang')
('1693054', 'Fei Sha', 'fei sha')
('1794409', 'Kristen Grauman', 'kristen grauman')
{sjhwang,grauman}@cs.utexas.edu
feisha@usc.edu
06fe63b34fcc8ff68b72b5835c4245d3f9b8a016Mach Learn
DOI 10.1007/s10994-013-5336-9
Learning semantic representations of objects
and their parts
Received: 24 May 2012 / Accepted: 26 February 2013
© The Author(s) 2013
('1935910', 'Grégoire Mesnil', 'grégoire mesnil')
('1732280', 'Gal Chechik', 'gal chechik')
06aab105d55c88bd2baa058dc51fa54580746424Image Set based Collaborative Representation for
Face Recognition
('2873638', 'Pengfei Zhu', 'pengfei zhu')
('1724520', 'Wangmeng Zuo', 'wangmeng zuo')
('36685537', 'Lei Zhang', 'lei zhang')
('1698371', 'David Zhang', 'david zhang')
0641dbee7202d07b6c78a39eecd312c17607412e
978-1-4799-5751-4/14/$31.00 ©2014 IEEE
ICIP 2014
NULL SPACE CLUSTERING
WITH APPLICATIONS TO MOTION SEGMENTATION AND FACE CLUSTERING
Australian National University, Canberra
2NICTA, Canberra
('2744345', 'Pan Ji', 'pan ji')
('2015152', 'Yiran Zhong', 'yiran zhong')
('40124570', 'Hongdong Li', 'hongdong li')
('2862871', 'Mathieu Salzmann', 'mathieu salzmann')
{pan.ji,hongdong.li}@anu.edu.au, mathieu.salzmann@nicta.com.au
06262d14323f9e499b7c6e2a3dec76ad9877ba04Real-Time Pose Estimation Piggybacked on Object Detection
Brno, Czech Republic
('1785162', 'Adam Herout', 'adam herout')
Graph@FIT, Brno University of Technology
{ijuranek,herout,idubska,zemcik}@fit.vutbr.cz
062c41dad67bb68fefd9ff0c5c4d296e796004dcTemporal Generative Adversarial Nets with Singular Value Clipping
Preferred Networks inc., Japan
('49160719', 'Masaki Saito', 'masaki saito')
('8252749', 'Eiichi Matsumoto', 'eiichi matsumoto')
('3083107', 'Shunta Saito', 'shunta saito')
{msaito, matsumoto, shunta}@preferred.jp
06400a24526dd9d131dfc1459fce5e5189b7baecEvent Recognition in Photo Collections with a Stopwatch HMM
1Computer Vision Lab
ETH Zürich, Switzerland
2ESAT, PSI-VISICS
K.U. Leuven, Belgium
('1696393', 'Lukas Bossard', 'lukas bossard')
('2737253', 'Matthieu Guillaumin', 'matthieu guillaumin')
('1681236', 'Luc Van Gool', 'luc van gool')
lastname@vision.ee.ethz.ch
vangool@esat.kuleuven.be
062d67af7677db086ef35186dc936b4511f155d7They Are Not Equally Reliable: Semantic Event Search
using Differentiated Concept Classifiers
Centre for Quantum Computation and Intelligent Systems, University of Technology Sydney
Carnegie Mellon University
('1729163', 'Xiaojun Chang', 'xiaojun chang')
('1698559', 'Yi Yang', 'yi yang')
('1752601', 'Eric P. Xing', 'eric p. xing')
cxj273@gmail.com, yaoliang@cs.cmu.edu, yi.yang@uts.edu.au, epxing@cs.cmu.edu
06c2086f7f72536bf970ca629151b16927104df3PALMERO ET AL.: MULTI-MODAL RECURRENT CNN FOR 3D GAZE ESTIMATION
Recurrent CNN for 3D Gaze Estimation
using Appearance and Shape Cues
1 Dept. Mathematics and Informatics
Universitat de Barcelona, Spain
2 Computer Vision Center
Campus UAB, Bellaterra, Spain
3 Dept. Electrical and Computer Eng.
University of Calgary, Canada
4 Dept. Engineering
University of Larestan, Iran
('3413560', 'Cristina Palmero', 'cristina palmero')
('38081877', 'Javier Selva', 'javier selva')
('1921285', 'Mohammad Ali Bagheri', 'mohammad ali bagheri')
('7855312', 'Sergio Escalera', 'sergio escalera')
crpalmec7@alumnes.ub.edu
javier.selva.castello@est.fib.upc.edu
mohammadali.bagheri@ucalgary.ca
sergio@maia.ub.es
0694b05cbc3ef5d1c5069a4bfb932a5a7b4d5ff0Iosifidis, A., Tefas, A., & Pitas, I. (2014). Exploiting Local Class Information
in Extreme Learning Machine. Paper presented at International Joint
Conference on Computational Intelligence (IJCCI), Rome, Italy.
Peer reviewed version
Link to publication record in Explore Bristol Research
PDF-document
University of Bristol - Explore Bristol Research
General rights
This document is made available in accordance with publisher policies. Please cite only the published
version using the reference above. Full terms of use are available:
http://www.bristol.ac.uk/pure/about/ebr-terms
060034b59275c13746413ca9c67d6304cba50da6Ordered Trajectories for Large Scale Human Action Recognition
1Vision & Sensing, HCC Lab,
ESTeM, University of Canberra
2IHCC, RSCS, CECS,
Australian National University
('1793720', 'O. V. Ramana Murthy', 'o. v. ramana murthy')
('1717204', 'Roland Goecke', 'roland goecke')
O.V.RamanaMurthy@ieee.org
roland.goecke@ieee.org
060820f110a72cbf02c14a6d1085bd6e1d994f6aFine-Grained Classification of Pedestrians in Video: Benchmark and State of the Art
California Institute of Technology
The dataset was labelled with bounding boxes, tracks, pose and fine-grained labels. To achieve this, crowdsourcing using workers from Amazon's Mechanical Turk (MTURK) was employed. A summary of the dataset's statistics can be found in Table 1.
Number of Frames Sent to MTURK: 38,708
Number of Frames with at least 1 Pedestrian: 20,994
Number of Bounding Box Labels: 32,457
Number of Pose Labels: 27,454
Number of Tracks: 4,222
Table 1: Dataset Statistics
A state-of-the-art algorithm for fine-grained classification was tested using the dataset. The results are reported as a useful performance baseline. The dataset is split into a training/validation set containing 4 videos, with the remaining 3 videos forming the test set. Since each video was collected on a unique day, different images of the same person do not appear in both the training and testing sets.
The fine-grained categorisation benchmark uses 'pose normalised deep convolutional nets' as proposed by Branson et al. [1]. In this framework, features are extracted by applying deep convolutional nets to image regions that are normalised by pose. It has state-of-the-art performance on bird species categorisation and we believe that it will generalise to the CRP dataset. Results can be found in Figure 2.
Figure 2: Fine-grained classification results. We report the mean average
accuracy across 10 different train/test splits, for each of the subcategories
in CRP, using the method of [1]. Average accuracy is computed assuming
that there is a uniform prior across the classes. The reference value for
each subcategory corresponds to chance. The results suggest that CRP is a
challenging dataset.
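The caption's evaluation protocol (per-class accuracy averaged under a uniform prior, so chance equals one over the number of classes) can be sketched as follows; this is a minimal illustration with made-up toy labels, not the authors' evaluation code:

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class accuracies, i.e. accuracy under a uniform
    prior across classes (chance level = 1 / num_classes)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    per_class = [correct[c] / total[c] for c in total]
    return sum(per_class) / len(per_class)

# Toy imbalanced example (hypothetical 'sex' subcategory labels):
y_true = ["m", "m", "m", "f"]
y_pred = ["m", "m", "f", "f"]
print(balanced_accuracy(y_true, y_pred))  # (2/3 + 1/1) / 2 ≈ 0.833
```

Averaging per-class accuracies rather than raw accuracy prevents a majority class from dominating the score on an imbalanced test set.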
A novel feature of our dataset is the occlusion labelling of the keypoints. Exploiting this information may be the first step towards improving performance for fine-grained classification. Using temporal information is another alternative. Most pedestrians in CRP appear multiple times over large intervals of time. We are planning on adding an identity label for each individual, to make our dataset useful for studying individual re-identification from a moving camera.
[1] Branson et al. Improved Bird Species Recognition Using Pose Normalized Deep Convolutional Nets. In BMVC, 2014.
Figure 1: Three examples from the CRP dataset. Annotations include a
bounding box, tracks, parts, occlusion, sex, age, weight and clothing style.
People are an important component of a machine's environment. Detecting, tracking, and recognising people, interpreting their behaviour and interacting with them is a valuable capability for machines. Using vision to estimate human attributes such as age, sex, activity, social status, health, pose and motion patterns is useful for interpreting and predicting behaviour. This motivates our interest in fine-grained categorisation of people.
In this work, we introduce a public video dataset—Caltech Roadside Pedestrians (CRP)—to further advance the state-of-the-art in fine-grained categorisation of people using the entire human body. This dataset is also useful for benchmarking tracking, detection and pose estimation of pedestrians.
Its novel and distinctive features are:
1. Size (27,454 bounding box and pose labels) – making it suitable for training deep networks.
2. Natural behaviour – subjects are recorded "in-the-wild", so they are unaware and behave naturally.
3. Viewpoint – Pedestrians are viewed from front, profile, back and everything in between.
4. Moving camera – More general and challenging than surveillance
video with static background.
5. Realism – There is a variety of outdoor background and lighting conditions.
6. Multi-class subcategories – age, clothing style and body shape.
7. Detailed annotation – bounding boxes, tracks and 14 keypoints with
occlusion information; examples can be found in Figure 1. Each
bounding box is also labelled with the fine-grained categories of age
(5 classes), sex (2 classes), weight (3 classes) and clothing type (4
classes).
8. Availability – All videos and annotations are publicly available.
CRP contains seven twenty-one-minute videos. Each video is captured
by mounting a rightwards-pointing, GoPro Hero3 camera to the roof of a
car. The car then completes three laps of a ring road within a park where
there are many walkers and joggers. Each video was recorded on a different
day.
('1990633', 'David Hall', 'david hall')
('1690922', 'Pietro Perona', 'pietro perona')
0653dcdff992ad980cd5ea5bc557efb6e2a53ba1
063a3be18cc27ba825bdfb821772f9f59038c207This is a repository copy of The development of spontaneous facial responses to others’
emotions in infancy. An EMG study.
White Rose Research Online URL for this paper:
http://eprints.whiterose.ac.uk/125231/
Version: Published Version
Article:
Kaiser, Jakob, Crespo-Llado, Maria Magdalena, Turati, Chiara et al. (1 more author)
(2017) The development of spontaneous facial responses to others’ emotions in infancy.
An EMG study. Scientific Reports. ISSN 2045-2322
https://doi.org/10.1038/s41598-017-17556-y
Reuse
This article is distributed under the terms of the Creative Commons Attribution (CC BY) licence. This licence
allows you to distribute, remix, tweak, and build upon the work, even commercially, as long as you credit the
authors for the original work. More information and the full terms of the licence here:
https://creativecommons.org/licenses/
064cd41d323441209ce1484a9bba02a22b625088Selective Transfer Machine for Personalized Facial Action Unit Detection
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA
University of Pittsburgh, Pittsburgh, PA
('39336289', 'Wen-Sheng Chu', 'wen-sheng chu')
06c2dfe1568266ad99368fc75edf79585e29095fBayesian Active Appearance Models
Imperial College London, United Kingdom
('2575567', 'Joan Alabort-i-Medina', 'joan alabort-i-medina')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
{ja310,s.zafeiriou}@imperial.ac.uk
06f39834e870278243dda826658319be2d5d8dedRECOGNIZING UNSEEN ACTIONS IN A DOMAIN-ADAPTED EMBEDDING SPACE
Arizona State University
('2180892', 'Yikang Li', 'yikang li')
('8060096', 'Sheng-hung Hu', 'sheng-hung hu')
('2913552', 'Baoxin Li', 'baoxin li')
06d7ef72fae1be206070b9119fb6b61ce4699587On One-Shot Similarity Kernels: explicit feature maps and properties
†Department of Computing
Imperial College London
∗Electronics Laboratory, Department of Physics,
University of Patras, Greece
(cid:2)School of Science and Technology,
Middlesex University, London
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1754270', 'Irene Kotsia', 'irene kotsia')
s.zafeiriou@imperial.ac.uk
062d0813815c2b9864cd9bb4f5a1dc2c580e0d90Encouraging LSTMs to Anticipate Actions Very Early
Australian National University, 2CVLab, EPFL, Switzerland, 3Smart Vision Systems, CSIRO
('2862871', 'Mathieu Salzmann', 'mathieu salzmann')
('1688071', 'Basura Fernando', 'basura fernando')
('2370776', 'Lars Petersson', 'lars petersson')
('34234277', 'Lars Andersson', 'lars andersson')
firstname.lastname@data61.csiro.au, mathieu.salzmann@epfl.ch, basura.fernando@anu.edu.au
06a9ed612c8da85cb0ebb17fbe87f5a137541603Deep Learning of Player Trajectory Representations for Team
Activity Analysis
('10386960', 'Nazanin Mehrasa', 'nazanin mehrasa')
('19198359', 'Yatao Zhong', 'yatao zhong')
('2123865', 'Frederick Tung', 'frederick tung')
('3004771', 'Luke Bornn', 'luke bornn')
('10771328', 'Greg Mori', 'greg mori')
('2190580', 'Simon Fraser', 'simon fraser')
{nmehrasa, yataoz, ftung, lbornn}@sfu.ca, mori@cs.sfu.ca
06ad99f19cf9cb4a40741a789e4acbf4433c19aeSenTion: A framework for Sensing Facial
Expressions
('31623038', 'Rahul Islam', 'rahul islam')
('3451315', 'Karan Ahuja', 'karan ahuja')
('1784438', 'Sandip Karmakar', 'sandip karmakar')
{rahul.islam, karan.ahuja, sandip, ferdous}@iiitg.ac.in
6c27eccf8c4b22510395baf9f0d0acc3ee547862Using CMU PIE Human Face Database to a
Convolutional Neural Network - Neocognitron
Rodovia Washington Luis, Km 235, São Carlos – SP - Brazil
Systems and Telematics - Neurolab
Via Opera Pia, 13 – I-16145 – Genoa - Italy
('2231336', 'José Hiroki Saito', 'josé hiroki saito')
('3261775', 'Marcelo Hirakuri', 'marcelo hirakuri')
('2558289', 'André Saunite', 'andré saunite')
('36243877', 'Alessandro Noriaki Ide', 'alessandro noriaki ide')
('40209065', 'Sandra Abib', 'sandra abib')
{saito,hirakuri,sabib}@dc.ufscar.br, tiagocarvalho@uol.com.br, saunite@fai.com.br
noriaki@dist.unige.it
6c66ae815e7e508e852ecb122fb796abbcda16a8International Journal of Computer Science & Engineering Survey (IJCSES) Vol.6, No.5, October 2015
A SURVEY OF THE TRENDS IN FACIAL AND
EXPRESSION RECOGNITION DATABASES AND
METHODS
University of Washington, Bothell, USA
('2971095', 'Sohini Roychowdhury', 'sohini roychowdhury')
('33073434', 'Michelle Emmons', 'michelle emmons')
6ca2c5ff41e91c34696f84291a458d1312d15bf2LIPNET: SENTENCE-LEVEL LIPREADING
University of Oxford, Oxford, UK
Google DeepMind, London, UK 2
CIFAR, Canada 3
('3365565', 'Yannis M. Assael', 'yannis m. assael')
('3144580', 'Brendan Shillingford', 'brendan shillingford')
('1766767', 'Shimon Whiteson', 'shimon whiteson')
{yannis.assael,brendan.shillingford,
shimon.whiteson,nando.de.freitas}@cs.ox.ac.uk
6cefb70f4668ee6c0bf0c18ea36fd49dd60e8365Privacy-Preserving Deep Inference for Rich User
Data on The Cloud
Sharif University of Technology
Queen Mary University of London
Nokia Bell Labs and University of Oxford
('9920557', 'Ali Shahin Shamsabadi', 'ali shahin shamsabadi')
('2251846', 'Ali Taheri', 'ali taheri')
('2226725', 'Kleomenis Katevas', 'kleomenis katevas')
('1688652', 'Hamid R. Rabiee', 'hamid r. rabiee')
('2772904', 'Nicholas D. Lane', 'nicholas d. lane')
('1763096', 'Hamed Haddadi', 'hamed haddadi')
6c690af9701f35cd3c2f6c8d160b8891ad85822aMulti-Task Learning with Low Rank Attribute Embedding for Person
Re-identification
Peking University
University of Maryland College Park
University of Texas at San Antonio
('20798990', 'Chi Su', 'chi su')
('1752128', 'Fan Yang', 'fan yang')
('1776581', 'Shiliang Zhang', 'shiliang zhang')
('1693428', 'Larry S. Davis', 'larry s. davis')
6c5fbf156ef9fc782be0089309074cc52617b868Controllable Video Generation with Sparse Trajectories
Cornell University
('19235216', 'Zekun Hao', 'zekun hao')
('47932904', 'Xun Huang', 'xun huang')
('50172592', 'Serge Belongie', 'serge belongie')
{hz472,xh258,sjb344}@cornell.edu
6c304f3b9c3a711a0cca5c62ce221fb098dccff0Attentive Semantic Video Generation using Captions
IIT Hyderabad
IIT Hyderabad
('8268761', 'Tanya Marwah', 'tanya marwah')
('47351893', 'Gaurav Mittal', 'gaurav mittal')
('1699429', 'Vineeth N. Balasubramanian', 'vineeth n. balasubramanian')
ee13b1044@iith.ac.in
gaurav.mittal.191013@gmail.com
vineethnb@iith.ac.in
6ce23cf4f440021b7b05aa3c1c2700cc7560b557Learning Local Convolutional Features for Face
Recognition with 2D-Warping
Human Language Technology and Pattern Recognition Group,
RWTH Aachen University
('1804963', 'Harald Hanselmann', 'harald hanselmann')
('1685956', 'Hermann Ney', 'hermann ney')
surname@cs.rwth-aachen.de
6c80c834d426f0bc4acd6355b1946b71b50cbc0bPose-Based Two-Stream Relational Networks for
Action Recognition in Videos
1Center for Research on Intelligent Perception and Computing (CRIPAC),
National Laboratory of Pattern Recognition (NLPR)
2Center for Excellence in Brain Science and Intelligence Technology (CEBSIT),
Institute of Automation, Chinese Academy of Sciences (CASIA)
University of Chinese Academy of Sciences (UCAS)
('47824598', 'Wei Wang', 'wei wang')
('47539600', 'Jinjin Zhang', 'jinjin zhang')
('39927579', 'Chenyang Si', 'chenyang si')
('1693997', 'Liang Wang', 'liang wang')
{wangwei, wangliang}@nlpr.ia.ac.cn, {jinjin.zhang,
chenyang.si}@cripac.ia.ac.cn
6cb7648465ba7757ecc9c222ac1ab6402933d983Visual Forecasting by Imitating Dynamics in Natural Sequences
Stanford University National Tsing Hua University
('32970572', 'Kuo-Hao Zeng', 'kuo-hao zeng')
{khzeng, bshen88, dahuang, jniebles}@cs.stanford.edu sunmin@ee.nthu.edu.tw
6c2b392b32b2fd0fe364b20c496fcf869eac0a98DOI 10.1007/s00138-012-0423-7
ORIGINAL PAPER
Fully automatic face recognition framework based
on local and global features
Received: 30 May 2011 / Revised: 21 February 2012 / Accepted: 29 February 2012 / Published online: 22 March 2012
© Springer-Verlag 2012
('36048866', 'Cong Geng', 'cong geng')
6c6bb85a08b0bdc50cf8f98408d790ccdb418798Recognition of facial expressions in presence of
partial occlusion
AIIA Laboratory
Computer Vision and Image Processing Group
Department of Informatics
Aristotle University of Thessaloniki, GR-54124 Thessaloniki, Greece
Phone: +30 2310 996361
Fax: +30 2310 998453
Web: http://poseidon.csd.auth.gr
('2336758', 'Ioan Buciu', 'ioan buciu')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
E-mail: {nelu,ekotsia,pitas}@zeus.csd.auth.gr
6c705285c554985ecfe1117e854e1fe1323f8c21DIY Human Action Data Set Generation
Illya Zharkov
Simon Fraser University
Microsoft
Microsoft
Microsoft
('1916516', 'Mehran Khodabandeh', 'mehran khodabandeh')
('3227254', 'Hamid Reza Vaezi Joze', 'hamid reza vaezi joze')
('3811436', 'Vivek Pradeep', 'vivek pradeep')
mkhodaba@sfu.ca
hava@microsoft.com
zharkov@microsoft.com
vpradeep@microsoft.com
6cddc7e24c0581c50adef92d01bb3c73d8b80b41Face Verification Using the LARK
Representation
('3326805', 'Hae Jong Seo', 'hae jong seo')
('1718280', 'Peyman Milanfar', 'peyman milanfar')
6cfc337069868568148f65732c52cbcef963f79dAudio-Visual Speaker Localization via Weighted
Clustering
To cite this version:
Xavier Alameda-Pineda, Radu Horaud, Florence Forbes. Audio-Visual Speaker Localization via Weighted Clustering. IEEE Workshop on Machine Learning for Signal Processing, Sep 2014, Reims, France. pp. 1-6, <10.1109/MLSP.2014.6958874>.
HAL Id: hal-01053732
https://hal.archives-ouvertes.fr/hal-01053732
Submitted on 11 Aug 2014
HAL is a multi-disciplinary open access
archive for the deposit and dissemination of sci-
entific research documents, whether they are pub-
lished or not. The documents may come from
teaching and research institutions in France or
abroad, or from public or private research centers
('1780201', 'Xavier Alameda-Pineda', 'xavier alameda-pineda')
('1794229', 'Radu Horaud', 'radu horaud')
('1785817', 'Florence Forbes', 'florence forbes')
6cd96f2b63c6b6f33f15c0ea366e6003f512a951A New Approach in Solving Illumination and Facial Expression Problems
for Face Recognition
a The University of Nottingham Malaysia Campus
Tel : 03-89248358, Fax : 03-89248017
Jalan Broga
43500 Semenyih, Selangor
('1968167', 'Yee Wan Wong', 'yee wan wong')
('9273662', 'Kah Phooi Seng', 'kah phooi seng')
('2808528', 'Li-Minn Ang', 'li-minn ang')
E-mail : yeewan.wong@nottingham.edu.my
6c8c7065d1041146a3604cbe15c6207f486021baAttention Modeling for Face Recognition via Deep Learning
Department of Computing, Hung Hom, Kowloon
Hong Kong, 999077 CHINA
Department of Computing, Hung Hom, Kowloon
Hong Kong, 99907 CHINA
Department of Computing, Hung Hom, Kowloon
Hong Kong, 99907 CHINA
Department of Computing, Hung Hom, Kowloon
Hong Kong, 99907 CHINA
Sheng-hua Zhong (csshzhong@comp.polyu.edu.hk)
Yan Liu (csyliu@comp.polyu.edu.hk)
Yao Zhang (csyaozhang@comp.polyu.edu.hk)
Fu-lai Chung (cskchung@comp.polyu.edu.hk)
390f3d7cdf1ce127ecca65afa2e24c563e9db93bLearning Deep Representation for Face
Alignment with Auxiliary Attributes
('3152448', 'Zhanpeng Zhang', 'zhanpeng zhang')
('1693209', 'Ping Luo', 'ping luo')
('1717179', 'Chen Change Loy', 'chen change loy')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
39ed31ced75e6151dde41944a47b4bdf324f922bPose-Guided Photorealistic Face Rotation
CRIPAC and NLPR and CEBSIT, CASIA 2University of Chinese Academy of Sciences
3Noah’s Ark Laboratory, Huawei Technologies Co., Ltd.
('49995036', 'Yibo Hu', 'yibo hu')
('47150161', 'Xiang Wu', 'xiang wu')
('46806278', 'Bing Yu', 'bing yu')
('50361927', 'Ran He', 'ran he')
('1757186', 'Zhenan Sun', 'zhenan sun')
{yibo.hu, xiang.wu}@cripac.ia.ac.cn, yubing5@huawei.com, {rhe, znsun}@nlpr.ia.ac.cn
3918b425bb9259ddff9eca33e5d47bde46bd40aaCopyright
by
David Lieh-Chiang Chen
2012
39ce143238ea1066edf0389d284208431b53b802
39ce2232452c0cd459e32a19c1abe2a2648d0c3f
3998c5aa6be58cce8cb65a64cb168864093a9a3e
39dc2ce4cce737e78010642048b6ed1b71e8ac2fRecognition of Six Basic Facial Expressions by Feature-Points Tracking using
RBF Neural Network and Fuzzy Inference System
Islamic Azad University of AHAR
Elect. Eng. Faculty, Tabriz University, Tabriz, Iran
('3210269', 'Hadi Seyedarabi', 'hadi seyedarabi')
('2488201', 'Ali Aghagolzadeh', 'ali aghagolzadeh')
('1766050', 'Sohrab Khanmohammadi', 'sohrab khanmohammadi')
seyedarabi@tabrizu.ac.ir , aghagol@tabrizu.ac.ir , khan@tabrizu.ac.ir
397aeaea61ecdaa005b09198942381a7a11cd129
3991223b1dc3b87883cec7af97cf56534178f74aA Unified Framework for Context Assisted Face Clustering
Department of Computer Science
University of California, Irvine
('3338094', 'Liyan Zhang', 'liyan zhang')
('1818681', 'Dmitri V. Kalashnikov', 'dmitri v. kalashnikov')
('1686199', 'Sharad Mehrotra', 'sharad mehrotra')
39b22bcbd452d5fea02a9ee63a56c16400af2b83
399a2c23bd2592ebe20aa35a8ea37d07c14199da
396a19e29853f31736ca171a3f40c506ef418a9fReal World Real-time Automatic Recognition of Facial Expressions
Exploratory Computer Vision Group, IBM T. J. Watson Research Center
PO Box 704, Yorktown Heights, NY 10598
('8193125', 'Ying-li Tian', 'ying-li tian')
('1773140', 'Ruud Bolle', 'ruud bolle')
{yltian,lisabr,arunh,sharat,aws,bolle}@us.ibm.com
392d35bb359a3b61cca1360272a65690a97a2b3fYAN, YAP, MORI: ONE-SHOT MULTI-TASK LEARNING FOR VIDEO EVENT DETECTION 1
Multi-Task Transfer Methods to Improve
One-Shot Learning for Multimedia Event
Detection
School of Computing Science
Simon Fraser University
Burnaby, BC, CANADA
('34289418', 'Wang Yan', 'wang yan')
('32874186', 'Jordan Yap', 'jordan yap')
('10771328', 'Greg Mori', 'greg mori')
wyan@sfu.ca
jjyap@sfu.ca
mori@cs.sfu.ca
397085122a5cade71ef6c19f657c609f0a4f7473GHIASI, FOWLKES: USING SEGMENTATION TO DETECT OCCLUSION
Using Segmentation to Predict the Absence
of Occluded Parts
Dept. of Computer Science
University of California
Irvine, CA
('1898210', 'Golnaz Ghiasi', 'golnaz ghiasi')
('3157443', 'Charless C. Fowlkes', 'charless c. fowlkes')
gghiasi@ics.uci.edu
fowlkes@ics.uci.edu
39c48309b930396a5a8903fdfe781d3e40d415d0Learning Spatial and Temporal Cues for Multi-label Facial Action Unit Detection
Robotics Institute, Carnegie Mellon University, Pittsburgh PA
University of Pittsburgh, Pittsburgh PA
('39336289', 'Wen-Sheng Chu', 'wen-sheng chu')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
39c8b34c1b678235b60b648d0b11d241a34c8e32Learning to Deblur Images with Exemplars ('9416825', 'Jinshan Pan', 'jinshan pan')
('2776845', 'Wenqi Ren', 'wenqi ren')
('1786024', 'Zhe Hu', 'zhe hu')
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
3986161c20c08fb4b9b791b57198b012519ea58bInternational Journal of Soft Computing and Engineering (IJSCE)
ISSN: 2231-2307, Volume-4 Issue-4, September 2014
An Efficient Method for Face Recognition based on
Fusion of Global and Local Feature Extraction
('9218646', 'E. Gomathi', 'e. gomathi')
('1873007', 'K. Baskaran', 'k. baskaran')
392425be1c9d9c2ee6da45de9df7bef0d278e85f
392c3cabe516c0108b478152902a9eee94f4c81eComputer Science and Artificial Intelligence Laboratory
Technical Report
MIT-CSAIL-TR-2007-024
April 23, 2007
Tiny images
massachusetts institute of technology, cambridge, ma 02139 usa — www.csail.mit.edu
('34943293', 'Antonio Torralba', 'antonio torralba')
('2276554', 'Rob Fergus', 'rob fergus')
('1768236', 'William T. Freeman', 'william t. freeman')
39f525f3a0475e6bbfbe781ae3a74aca5b401125Deep Joint Face Hallucination and Recognition
Sun Yat-sen University
Sun Yat-sen University
Sun Yat-sen University
Sun Yat-sen University
November 28, 2016
('4080607', 'Junyu Wu', 'junyu wu')
('2442939', 'Shengyong Ding', 'shengyong ding')
('1723992', 'Wei Xu', 'wei xu')
('38255852', 'Hongyang Chao', 'hongyang chao')
wujunyu2@mail2.sysu.edu.cn
1633615231@qq.com
xuwei1993@qq.com
isschhy@mail.sysu.edu.cn
3946b8f862ecae64582ef0912ca2aa6d3f6f84dcWho and Where: People and Location Co-Clustering
Electrical Engineering
Stanford University
('8491578', 'Zixuan Wang', 'zixuan wang')zxwang@stanford.edu
3933416f88c36023a0cba63940eb92f5cef8001aLearning Robust Subspace Clustering
Department of Electrical and Computer Engineering
Duke University
Durham, NC, 27708
May 11, 2014
('2077648', 'Qiang Qiu', 'qiang qiu')
('1699339', 'Guillermo Sapiro', 'guillermo sapiro')
{qiang.qiu, guillermo.sapiro}@duke.edu
39150acac6ce7fba56d54248f9c0badbfaeef0eaProceedings, Digital Signal Processing for in-Vehicle and mobile systems, Istanbul, Turkey, June 2007.
Sabanci University
Faculty of
Engineering and Natural Sciences
Orhanli, Istanbul
('40322754', 'Esra Vural', 'esra vural')
('21691177', 'Mujdat Cetin', 'mujdat cetin')
('31849282', 'Aytul Ercil', 'aytul ercil')
('2724380', 'Gwen Littlewort', 'gwen littlewort')
('1858421', 'Marian Bartlett', 'marian bartlett')
('29794862', 'Javier Movellan', 'javier movellan')
3947b64dcac5bcc1d3c8e9dcb50558efbb8770f1
3965d61c4f3b72044f43609c808f8760af8781a2
39f03d1dfd94e6f06c1565d7d1bb14ab0eee03bcSimultaneous Local Binary Feature Learning and Encoding for Face Recognition
Tsinghua University, Beijing, China
2Rapid-Rich Object Search (ROSE) Lab, Interdisciplinary Graduate School,
Nanyang Technological University, Singapore
('1697700', 'Jiwen Lu', 'jiwen lu')
('1754854', 'Venice Erin Liong', 'venice erin liong')
('39491387', 'Jie Zhou', 'jie zhou')
elujiwen@gmail.com; veniceer001@e.ntu.edu.sg; jzhou@tsinghua.edu.cn
395bf182983e0917f33b9701e385290b64e22f9a
3983637022992a329f1d721bed246ae76bc934f7Wide-Baseline Stereo for Face Recognition with Large Pose Variation
Computer Science Department
University of Maryland, College Park
('38171682', 'Carlos D. Castillo', 'carlos d. castillo')
('34734622', 'David W. Jacobs', 'david w. jacobs')
{carlos,djacobs}@cs.umd.edu
3933e323653ff27e68c3458d245b47e3e37f52fdEvaluation of a 3D-aided Pose Invariant 2D Face Recognition System
Computational Biomedicine Lab
4800 Calhoun Rd. Houston, TX, USA
('26401746', 'Ha A. Le', 'ha a. le')
('39634395', 'Pengfei Dou', 'pengfei dou')
('2461369', 'Yuhang Wu', 'yuhang wu')
('1706204', 'Ioannis A. Kakadiaris', 'ioannis a. kakadiaris')
{xxu18, hale4, pdou, ywu35, ikakadia}@central.uh.edu
39b452453bea9ce398613d8dd627984fd3a0d53c
3958db5769c927cfc2a9e4d1ee33ecfba86fe054Describable Visual Attributes for
Face Verification and Image Search
('40631426', 'Neeraj Kumar', 'neeraj kumar')
('39668247', 'Alexander C. Berg', 'alexander c. berg')
('1767767', 'Peter N. Belhumeur', 'peter n. belhumeur')
('1750470', 'Shree K. Nayar', 'shree k. nayar')
39ecdbad173e45964ffe589b9ced9f1ebfe2d44eAutomatic Recognition of Lower Facial Action Units
Joint Research Group on Audio Visual Signal Processing (AVSP),
Vrije Universiteit Brussel
Pleinlaan 2, 1050 Brussels
('1802474', 'Werner Verhelst', 'werner verhelst')
('34068333', 'Isabel Gonzalez', 'isabel gonzalez')
('1970907', 'Hichem Sahli', 'hichem sahli')
igonzale@etro.vub.ac.be
hichem.sahli@etro.vub.ac.be
wverhels@etro.vub.ac.be
39b5f6d6f8d8127b2b97ea1a4987732c0db6f9df
99ced8f36d66dce20d121f3a29f52d8b27a1da6cOrganizing Multimedia Data in Video
Surveillance Systems Based on Face Verification
with Convolutional Neural Networks
National Research University Higher School of Economics, Nizhny Novgorod, Russian
Federation
('26376584', 'Anastasiia D. Sokolova', 'anastasiia d. sokolova')
('26427828', 'Angelina S. Kharchevnikova', 'angelina s. kharchevnikova')
('35153729', 'Andrey V. Savchenko', 'andrey v. savchenko')
adsokolova96@mail.ru
994f7c469219ccce59c89badf93c0661aae342641
Model Based Face Recognition Across Facial
Expressions

screens, embedded into mobiles and installed into everyday
living and working environments, they become valuable tools
for human-system interaction. A particularly important aspect of
this interaction is the detection and recognition of faces and the
interpretation of facial expressions. These capabilities are
deeply rooted in the human visual system and are a crucial
building block for social interaction. Consequently, these
capabilities are an important step towards the acceptance of
many technical systems.
('1725709', 'Zahid Riaz', 'zahid riaz')
('50565622', 'Christoph Mayer', 'christoph mayer')
('32131501', 'Matthias Wimmer', 'matthias wimmer')
('1699132', 'Bernd Radig', 'bernd radig')
('31311898', 'Senior Member', 'senior member')
9949ac42f39aeb7534b3478a21a31bc37fe2ffe3Parametric Stereo for Multi-Pose Face Recognition and
3D-Face Modeling
PSI ESAT-KUL
Leuven, Belgium
('2733505', 'Rik Fransens', 'rik fransens')
('2404667', 'Christoph Strecha', 'christoph strecha')
('1681236', 'Luc Van Gool', 'luc van gool')
999289b0ef76c4c6daa16a4f42df056bf3d68377The Role of Color and Contrast in Facial Age Estimation
Intelligent Systems Lab Amsterdam, University of Amsterdam, The Netherlands
Pattern Recognition and Bioinformatics Group, Delft University of Technology, The Netherlands
Boğaziçi University, Istanbul, Turkey
('1695527', 'Theo Gevers', 'theo gevers')
('1764521', 'Albert Ali Salah', 'albert ali salah')
{h.dibeklioglu,th.gevers,m.p.Lucassen}@uva.nl
salah@boun.edu.tr
9958942a0b7832e0774708a832d8b7d1a5d287aeThe Sparse Matrix Transform for Covariance
Estimation and Analysis of High Dimensional
Signals
('1696925', 'Guangzhi Cao', 'guangzhi cao')
('1709256', 'Leonardo R. Bachega', 'leonardo r. bachega')
('1745655', 'Charles A. Bouman', 'charles a. bouman')
995d55fdf5b6fe7fb630c93a424700d4bc566104The One Triangle Three Parallelograms Sampling Strategy and Its Application
in Shape Regression
Centre of Mathematical Sciences
Lund University, Lund, Sweden
('38481779', 'Mikael Nilsson', 'mikael nilsson')mikael.nilsson@math.lth.se
99726ad232cef837f37914b63de70d8c5101f4e2International Journal of Scientific & Engineering Research, Volume 5, Issue 5, May-2014 570
ISSN 2229-5518
Facial Expression Recognition Using PCA & Distance Classifier
Dept. of Electronics & Telecomm. Engg.
Ph.D Scholar,VSSUT
BURLA, ODISHA, INDIA
Nilamani Bhoi
Reader in Dept. of Electronics & Telecomm. Engg.
VEER SURENDRA SAI UNIVERSITY OF
TECHNOLOGY
BURLA, ODISHA, INDIA
alpesh.d123@gmail.com
nilamanib@gmail.com
993d189548e8702b1cb0b02603ef02656802c92bHighly-Economized Multi-View Binary
Compression for Scalable Image Clustering
Harbin Institute of Technology (Shenzhen), China
The University of Queensland, Australia
Inception Institute of Artificial Intelligence, UAE
4 Computer Vision Laboratory, ETH Zurich, Switzerland
University of Electronic Science and Technology of China, China
('38448016', 'Zheng Zhang', 'zheng zhang')
('40241836', 'Li Liu', 'li liu')
('1747229', 'Jie Qin', 'jie qin')
('39986542', 'Fan Zhu', 'fan zhu')
('2731972', 'Fumin Shen', 'fumin shen')
('1725160', 'Yong Xu', 'yong xu')
('40799321', 'Ling Shao', 'ling shao')
('1724393', 'Heng Tao Shen', 'heng tao shen')
9931c6b050e723f5b2a189dd38c81322ac0511de
994b52bf884c71a28b4f5be4eda6baaacad1beeeCategorizing Big Video Data on the Web:
Challenges and Opportunities
School of Computer Science
Fudan University
Shanghai, China
http://www.yugangjiang.info
('1717861', 'Yu-Gang Jiang', 'yu-gang jiang')
99001ac9fdaf7649c0d0bd8d2078719bafd216d9> TPAMI-0571-1005<
General Tensor Discriminant Analysis and
Gabor Features for Gait Recognition
School of Computer Science and Information Systems, Birkbeck College, University of London
University of Vermont, 33 Colchester Avenue, Burlington
Malet Street, London WC1E 7HX, United Kingdom.
Vermont 05405, United States of America.
('1692693', 'Dacheng Tao', 'dacheng tao')
('1720243', 'Xuelong Li', 'xuelong li')
('1748808', 'Xindong Wu', 'xindong wu')
('1740503', 'Stephen J. Maybank', 'stephen j. maybank')
{dacheng, xuelong, sjmaybank}@dcs.bbk.ac.uk; xwu@cs.uvm.edu.
9993f1a7cfb5b0078f339b9a6bfa341da76a3168JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015
A Simple, Fast and Highly-Accurate Algorithm to
Recover 3D Shape from 2D Landmarks on a Single
Image
('39071836', 'Ruiqi Zhao', 'ruiqi zhao')
('1678691', 'Yan Wang', 'yan wang')
9901f473aeea177a55e58bac8fd4f1b086e575a4Human and Sheep Facial Landmarks Localisation
by Triplet Interpolated Features
University of Cambridge
('2966679', 'Heng Yang', 'heng yang')
('2271111', 'Renqiao Zhang', 'renqiao zhang')
('39626495', 'Peter Robinson', 'peter robinson')
hy306, rz264, pr10@cam.ac.uk
992ebd81eb448d1eef846bfc416fc929beb7d28bExemplar-Based Face Parsing
Supplementary Material
University of Wisconsin Madison
Adobe Research
http://www.cs.wisc.edu/~lizhang/projects/face-parsing/
1. Additional Selected Results
Figures 1 and 2 supplement Figure 4 in our paper. In all cases, the input images come from our Helen [1] test set. We note
that our algorithm generally produces accurate results, as shown in Figure 1. However, our algorithm is not perfect and makes
mistakes on especially challenging input images, as shown in Figure 2.
In our view, the mouth is the most challenging region of the face to segment: the shape and appearance of the lips vary
widely from subject to subject, mouths deform significantly, and the overall appearance of the mouth region changes depending
on whether the inside of the mouth is visible or not. Unusual mouth expressions, like those shown in Figure 2, are not represented
well in the exemplar images, which results in poor label transfer from the top exemplars to the test image. Despite these
challenges, our algorithm generally performs well on the mouth, with large segmentation errors occurring infrequently.
2. Comparisons with Liu et al. [2]
The scene parsing approach by Liu et al. [2] shares several similarities with our work. Like our approach, they propose a
nonparametric system that transfers labels from exemplars in a database to annotate a test image. This raises the question: why
not simply apply the approach of Liu et al. to face images?
To help answer this question, we used the code provided by Liu et al. on our Helen [1] images; our exemplar set is used for
training their system, and our test set is used for testing. Please see Section 4.3 in our paper for more details. Figure 3 shows
several selected results for qualitative comparison. In general, our algorithm performs much better than Liu et al.’s algorithm.
References
[1] V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang. Interactive facial feature localization. In ECCV, 2012.
[2] C. Liu, J. Yuen, and A. Torralba. Nonparametric scene parsing via label transfer. In PAMI, December 2011.
('2721523', 'Brandon M. Smith', 'brandon m. smith')
('1721019', 'Jonathan Brandt', 'jonathan brandt')
99c20eb5433ed27e70881d026d1dbe378a12b342ISCA Archive
http://www.isca-speech.org/archive
First Workshop on Speech, Language
and Audio in Multimedia
Marseille, France
August 22-23, 2013
Proceedings of the First Workshop on Speech, Language and Audio in Multimedia (SLAM), Marseille, France, August 22-23, 2013.
99facca6fc50cc30f13b7b6dd49ace24bc94f702Front.Comput.Sci.
DOI
RESEARCH ARTICLE
VIPLFaceNet: An Open Source Deep Face Recognition SDK
1 Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing, 100190, China
University of Chinese Academy of Sciences, Beijing 100049, China
© Higher Education Press and Springer-Verlag Berlin Heidelberg 2016
('46522348', 'Xin Liu', 'xin liu')
('1693589', 'Meina Kan', 'meina kan')
('3468240', 'Wanglong Wu', 'wanglong wu')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
9990e0b05f34b586ffccdc89de2f8b0e5d427067International Journal of Modeling and Optimization, Vol. 3, No. 2, April 2013
Auto-Optimized Multimodal Expression Recognition
Framework Using 3D Kinect Data for ASD Therapeutic
Aid

('25833279', 'Amira E. Youssef', 'amira e. youssef')
('1720250', 'Ahmed S. Ibrahim', 'ahmed s. ibrahim')
('1731164', 'A. Lynn Abbott', 'a. lynn abbott')
99d7678039ad96ee29ab520ff114bb8021222a91Political image analysis with deep neural
networks
November 28, 2017
('41096358', 'L. Jason Anastasopoulos', 'l. jason anastasopoulos')
('2361255', 'Shiry Ginosar', 'shiry ginosar')
('2007721', 'Dhruvil Badani', 'dhruvil badani')
('2459453', 'Jake Ryland Williams', 'jake ryland williams')
('50521070', 'Crystal Lee', 'crystal lee')
52012b4ecb78f6b4b9ea496be98bcfe0944353cd
JOURNAL OF COMPUTATION IN BIOSCIENCES AND ENGINEERING

Journal homepage: http://scienceq.org/Journals/JCLS.php

Research Article
Using Support Vector Machine and Local Binary Pattern for Facial Expression
Recognition
Open Access
Federal University Technology Akure, PMB 704, Akure, Nigeria
2. Department of computer science, Kwara state polytechnic Ilorin, Kwara-State, Nigeria.
Received: September 22, 2015, Accepted: December 14, 2015, Published: December 14, 2015.
('10698338', 'Alese Boniface Kayode', 'alese boniface kayode'). *Corresponding author: Ayeni Olaniyi Abiodun Mail Id: oaayeni@futa.edu.ng
523854a7d8755e944bd50217c14481fe1329a969A Differentially Private Kernel Two-Sample Test
MPI-IS
University Of Oxford
University Of Oxford
MPI-IS
April 17, 2018
('39565862', 'Anant Raj', 'anant raj')
('35142231', 'Ho Chung Leon Law', 'ho chung leon law')
('1698032', 'Dino Sejdinovic', 'dino sejdinovic')
('37292171', 'Mijung Park', 'mijung park')
anant.raj@tuebingen.mpg.de
ho.law@stats.ox.ac.uk
dino.sejdinovic@stats.ox.ac.uk
mijung.park@tuebingen.mpg.de
521cfbc1949289a7ffc3ff90af7c55adeb43db2aAction Recognition with Coarse-to-Fine Deep Feature Integration and
Asynchronous Fusion
Shanghai Jiao Tong University, China
National Key Laboratory for Novel Software Technology, Nanjing University, China
University of Chinese Academy of Sciences, China
('8131625', 'Weiyao Lin', 'weiyao lin')
('1926641', 'Yang Mi', 'yang mi')
('1808816', 'Jianxin Wu', 'jianxin wu')
('1875882', 'Ke Lu', 'ke lu')
('37028145', 'Hongkai Xiong', 'hongkai xiong')
{wylin, deyangmiyang, xionghongkai}@sjtu.edu.cn, wujx2001@nju.edu.cn, luk@ucas.ac.cn
529e2ce6fb362bfce02d6d9a9e5de635bde81191This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication.
> TIP-05732-2009<
Normalization of Face Illumination Based
on Large- and Small- Scale Features
('2002129', 'Xiaohua Xie', 'xiaohua xie')
('3333315', 'Wei-Shi Zheng', 'wei-shi zheng')
('1768574', 'Pong C. Yuen', 'pong c. yuen')
('1713795', 'Ching Y. Suen', 'ching y. suen')
52887969107956d59e1218abb84a1f834a3145781283
Travel Recommendation by Mining People
Attributes and Travel Group Types From
Community-Contributed Photos
('35081710', 'Yan-Ying Chen', 'yan-ying chen')
('2363522', 'An-Jung Cheng', 'an-jung cheng')
('1716836', 'Winston H. Hsu', 'winston h. hsu')
521482c2089c62a59996425603d8264832998403
521b625eebea73b5deb171a350e3709a4910eebf
52258ec5ec73ce30ca8bc215539c017d279517cfRecognizing Faces with Expressions: Within-class Space and Between-class Space
Zhejang University, Hangzhou 310027, P.R.China
Yu Bing Chen Ping Jin Lianfu
Email: BingbingYu@21cn.com Pchen@mail.hz.zj.cn Lfjin@mail.hz.zj.cn
5253c94f955146ba7d3566196e49fe2edea1c8f4Internet-based Morphable Model
University of Washington
Figure 1. Overview of the method. We construct a morphable
model directly from Internet photos; the model is then used for
single-view reconstruction from any new input image (Face
Analyzer) and further for shape modification (Face Modifier), e.g.,
from neutral to smile in 3D.
('2419955', 'Ira Kemelmacher-Shlizerman', 'ira kemelmacher-shlizerman')kemelmi@cs.washington.edu
527dda77a3864d88b35e017d542cb612f275a4ec
529b1f33aed49dbe025a99ac1d211c777ad881ecFAST AND EXACT BI-DIRECTIONAL FITTING OF ACTIVE APPEARANCE MODELS
Jean Kossaifi*
* Imperial College London, UK
University of Nottingham, UK, School of Computer Science
University of Twente, The Netherlands
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
('1694605', 'Maja Pantic', 'maja pantic')
523b2cbc48decfabffb66ecaeced4fe6a6f2ac78Photorealistic Facial Expression Synthesis by the Conditional Difference Adversarial
Autoencoder
Department of Electronic and Computer Engineering
The Hong Kong University of Science and Technology
HKSAR, China
('1698743', 'Yuqian Zhou', 'yuqian zhou')yzhouas@ust.hk, eebert@ust.hk
52472ec859131844f38fc7d57944778f01d109acImproving speaker turn embedding by
crossmodal transfer learning from face embedding
Idiap Research Institute, Martigny, Switzerland
2 École Polytechnique Fédérale de Lausanne, Switzerland
('39560344', 'Nam Le', 'nam le')
('1719610', 'Jean-Marc Odobez', 'jean-marc odobez')
{nle, odobez}@idiap.ch
5287d8fef49b80b8d500583c07e935c7f9798933Generative Adversarial Text to Image Synthesis
University of Michigan, Ann Arbor, MI, USA (UMICH.EDU)
Max Planck Institute for Informatics, Saarbrücken, Germany (MPI-INF.MPG.DE)
REEDSCOT1, AKATA2, XCYAN1, LLAJAN1
HONGLAK1, SCHIELE2
('2893664', 'Zeynep Akata', 'zeynep akata')
('3084614', 'Xinchen Yan', 'xinchen yan')
('2876316', 'Lajanugen Logeswaran', 'lajanugen logeswaran')
('1697141', 'Honglak Lee', 'honglak lee')
('1697100', 'Bernt Schiele', 'bernt schiele')
52c59f9f4993c8248dd3d2d28a4946f1068bcbbeStructural Similarity and Distance in Learning
Dept. of Electrical and
Computer Engineering
Boston University
Boston, MA 02215
Dept. of Electrical and
Computer Engineering
Boston University
Boston, MA 02215
David A. Castañón
Dept. of Electrical and
Computer Engineering
Boston University
Boston, MA 02215
('1928419', 'Joseph Wang', 'joseph wang')
('1699322', 'Venkatesh Saligrama', 'venkatesh saligrama')
Email: joewang@bu.edu
Email: srv@bu.edu
Email: dac@bu.edu
52bf00df3b970e017e4e2f8079202460f1c0e1bdLearning High-level Prior with Convolutional Neural Networks
for Semantic Segmentation
University of Science and Technology of China
Hefei, China
Tsinghua University
Beijing, China
The Hong Kong University of Science and Technology
HongKong, China
('2743695', 'Haitian Zheng', 'haitian zheng')
('1697194', 'Feng Wu', 'feng wu')
('39987643', 'Lu Fang', 'lu fang')
('1680777', 'Yebin Liu', 'yebin liu')
('1916870', 'Mengqi Ji', 'mengqi ji')
{zhenght,fengwu,fanglu}@mail.ustc.edu.cn
liuyebin@mail.tsinghua.edu.cn
mji@ust.hk
52c91fcf996af72d191520d659af44e310f86ef9Interactive Image Search with Attribute-based Guidance and Personalization
The University of Texas at Austin
('1770205', 'Adriana Kovashka', 'adriana kovashka')
('1794409', 'Kristen Grauman', 'kristen grauman')
{adriana, grauman}@cs.utexas.edu
52885fa403efbab5ef21274282edd98b9ca70cbfDiscriminant Graph Structures for Facial
Expression Recognition
Aristotle University of Thessaloniki
Department of Informatics
Box 451
54124 Thessaloniki, Greece
Address for correspondence :
Aristotle University of Thessaloniki
54124 Thessaloniki
GREECE
Tel. ++ 30 231 099 63 04
Fax ++ 30 231 099 63 04
April 2, 2008
DRAFT
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
email: pitas@zeus.csd.auth.gr
52f23e1a386c87b0dab8bfdf9694c781cd0a3984
52d7eb0fbc3522434c13cc247549f74bb9609c5dWIDER FACE: A Face Detection Benchmark
The Chinese University of Hong Kong
('1692609', 'Shuo Yang', 'shuo yang')
('47571885', 'Ping Luo', 'ping luo')
('1717179', 'Chen Change Loy', 'chen change loy')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
{ys014, pluo, ccloy, xtang}@ie.cuhk.edu.hk
528069963f0bd0861f380f53270c96c269a3ea1cCardiff University
School of Computer Science and Informatics
Visual Computing Group
4D (3D Dynamic) Statistical Models of
Conversational Expressions and the
Synthesis of Highly-Realistic 4D Facial
Expression Sequences
Submitted in part fulfilment of the requirements for the degree of
Doctor of Philosophy in Computer Science at Cardiff University, July 24th
('1812779', 'Jason Vandeventer', 'jason vandeventer')
529baf1a79cca813f8c9966ceaa9b3e42748c058Triangle Wise Mapping Technique to Transform one Face Image into Another Face Image

International Journal of Computer Applications
© 2014 by IJCA Journal
Volume 87 - Number 6
Year of Publication: 2014
Authors:
Bhogeswar Borah
10.5120/15209-3714
5239001571bc64de3e61be0be8985860f08d7e7eSUBMITTED TO IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, JUNE 2016
Deep Appearance Models: A Deep Boltzmann
Machine Approach for Face Modeling
('1876581', 'Chi Nhan Duong', 'chi nhan duong')
('1769788', 'Khoa Luu', 'khoa luu')
('2687827', 'Kha Gia Quach', 'kha gia quach')
('1699922', 'Tien D. Bui', 'tien d. bui')
556b9aaf1bc15c928718bc46322d70c691111158Exploiting Qualitative Domain Knowledge for Learning Bayesian
Network Parameters with Incomplete Data
Thomson-Reuters Corporation
Rensselaer Polytechnic Institute
('2460793', 'Wenhui Liao', 'wenhui liao')
('1726583', 'Qiang Ji', 'qiang ji')
wenhui.liao@thomsonreuters.com
qji@ecse.rpi.edu
55ea0c775b25d9d04b5886e322db852e86a556cdDOCK: Detecting Objects
by transferring Common-sense Knowledge
University of California, Davis 2University of Washington 3Allen Institute for AI
https://dock-project.github.io
('2270286', 'Ali Farhadi', 'ali farhadi')
('19553871', 'Krishna Kumar Singh', 'krishna kumar singh')
('1883898', 'Yong Jae Lee', 'yong jae lee')
550858b7f5efaca2ebed8f3969cb89017bdb739f
554b9478fd285f2317214396e0ccd81309963efdSpatio-Temporal Action Localization For Human Action
Recognition in Large Dataset
1 L2TI, Institut Galilée, Université Paris 13, France;
2 SERCOM, Ecole Polytechnique de Tunisie
('3240115', 'Sameh MEGRHI', 'sameh megrhi')
('2504338', 'Marwa JMAL', 'marwa jmal')
('1731553', 'Azeddine BEGHDADI', 'azeddine beghdadi')
('14521102', 'Wided Mseddi', 'wided mseddi')
55c68c1237166679d2cb65f266f496d1ecd4bec6Learning to Score Figure Skating Sport Videos ('2708397', 'Chengming Xu', 'chengming xu')
('35782003', 'Yanwei Fu', 'yanwei fu')
('10110775', 'Zitian Chen', 'zitian chen')
('40379722', 'Bing Zhang', 'bing zhang')
('1717861', 'Yu-Gang Jiang', 'yu-gang jiang')
('1713721', 'Xiangyang Xue', 'xiangyang xue')
558fc9a2bce3d3993a9c1f41b6c7f290cefcf92fDEPARTMENT OF INFORMATION ENGINEERING AND COMPUTER SCIENCE
ICT International Doctoral School
Efficient and Effective Solutions
for Video Classification
Advisor:
Prof. Nicu Sebe
University of Trento
Co-Advisor:
Prof. Bogdan Ionescu
University Politehnica of Bucharest
November 2017
('28957796', 'Ionut Cosmin Duta', 'ionut cosmin duta')
55138c2b127ebdcc508503112bf1d1eeb5395604Ensemble Nystr¨om Method
Google Research
New York, NY
Courant Institute and Google Research
New York, NY
Courant Institute of Mathematical Sciences
New York, NY
('2794322', 'Sanjiv Kumar', 'sanjiv kumar')
('1709415', 'Mehryar Mohri', 'mehryar mohri')
('8395559', 'Ameet Talwalkar', 'ameet talwalkar')
sanjivk@google.com
mohri@cs.nyu.edu
ameet@cs.nyu.edu
5502dfe47ac26e60e0fb25fc0f810cae6f5173c0Affordance Prediction via Learned Object Attributes ('2749326', 'Tucker Hermans', 'tucker hermans')
('1692956', 'James M. Rehg', 'james m. rehg')
('1688328', 'Aaron Bobick', 'aaron bobick')
55e18e0dde592258882134d2dceeb86122b366abJournal of Artificial Intelligence Research 37 (2010) 397-435
Submitted 11/09; published 03/10
Training a Multilingual Sportscaster:
Using Perceptual Context to Learn Language
Department of Computer Science
The University of Texas at Austin
University Station C0500, Austin TX 78712, USA
('39230960', 'David L. Chen', 'david l. chen')
('1765656', 'Joohyun Kim', 'joohyun kim')
('1797655', 'Raymond J. Mooney', 'raymond j. mooney')
DLCC@CS.UTEXAS.EDU
SCIMITAR@CS.UTEXAS.EDU
MOONEY@CS.UTEXAS.EDU
55a158f4e7c38fe281d06ae45eb456e05516af50The 22nd International Conference on Computer Graphics and Vision
GraphiCon’2012
5506a1a1e1255353fde05d9188cb2adc20553af5
55966926e7c28b1eee1c7eb7a0b11b10605a1af0Surpassing Human-Level Face Verification Performance on LFW with
GaussianFace
The Chinese University of Hong Kong
('2312486', 'Chaochao Lu', 'chaochao lu')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
{lc013, xtang}@ie.cuhk.edu.hk
552c55c71bccfc6de7ce1343a1cd12208e9a63b3Accurate Eye Center Location and Tracking Using Isophote Curvature
Intelligent Systems Lab Amsterdam
University of Amsterdam, The Netherlands
('9301018', 'Roberto Valenti', 'roberto valenti')
('1695527', 'Theo Gevers', 'theo gevers')
{rvalenti,gevers}@science.uva.nl
5517b28795d7a68777c9f3b2b46845dcdb425b2cDeep video gesture recognition using illumination invariants
Massachusetts Institute of Technology
Figure 1: Automated facial gesture recognition is a fundamental problem in human-computer interaction. While tackling real-world
expression recognition tasks, sudden changes in illumination from multiple sources can be expected. We show how to build a robust
system that detects human emotions while remaining invariant to illumination.
('37381309', 'Otkrist Gupta', 'otkrist gupta')
('2283049', 'Dan Raviv', 'dan raviv')
('1717566', 'Ramesh Raskar', 'ramesh raskar')
55c81f15c89dc8f6eedab124ba4ccab18cf38327
5550a6df1b118a80c00a2459bae216a7e8e3966cISSN: 0974-2115
www.jchps.com Journal of Chemical and Pharmaceutical Sciences
A perusal on Facial Emotion Recognition System (FERS)
School of Information Technology and Engineering, VIT University, Vellore, 632014, India
*Corresponding author: E-Mail: krithika.lb@vit.ac.in
55e87050b998eb0a8f0b16163ef5a28f984b01faCAN YOU FIND A FACE IN A HEVC BITSTREAM?
School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
('3393216', 'Saeed Ranjbar Alvar', 'saeed ranjbar alvar')
('3320198', 'Hyomin Choi', 'hyomin choi')
55bc7abcef8266d76667896bbc652d081d00f797Impact of Facial Cosmetics on Automatic Gender and Age Estimation
Algorithms
Computer Science and Electrical Engineering, West Virginia University, Morgantown, USA
Computer Science and Engineering, Michigan State University, East Lansing, USA
Keywords:
Biometrics, Face Recognition, Facial Cosmetics, Makeup, Gender Spoofing, Age Alteration, Automatic
Gender Estimation, Automatic Age Estimation
('1751335', 'Cunjian Chen', 'cunjian chen')
('3299530', 'Antitza Dantcheva', 'antitza dantcheva')
('1698707', 'Arun Ross', 'arun ross')
cchen10@csee.wvu.edu, {antitza, rossarun}@msu.edu
55b4b1168c734eeb42882082bd131206dbfedd5bLearning to Align from Scratch
University of Massachusetts, Amherst, MA
University of Michigan, Ann Arbor, MI
('3219900', 'Gary B. Huang', 'gary b. huang'){gbhuang,mmattar,elm}@cs.umass.edu
honglak@eecs.umich.edu
55079a93b7d1eb789193d7fcdcf614e6829fad0fEfficient and Robust Inverse Lighting of a Single Face Image using Compressive
Sensing
Center for Sensor Systems (ZESS) and Institute for Vision and Graphics#, University of Siegen
57076 Siegen, Germany
('1747804', 'Miguel Heredia Conde', 'miguel heredia conde')
('1967283', 'Davoud Shahlaei', 'davoud shahlaei')
('2880906', 'Volker Blanz', 'volker blanz')
('1698728', 'Otmar Loffeld', 'otmar loffeld')
heredia@zess.uni-siegen.de
55804f85613b8584d5002a5b0ddfe86b0d0e3325Data Complexity in Machine Learning
Learning Systems Group, California Institute of Technology
('37715538', 'Ling Li', 'ling li')
('1817975', 'Yaser S. Abu-Mostafa', 'yaser s. abu-mostafa')
551fa37e8d6d03b89d195a5c00c74cc52ff1c67aGeThR-Net: A Generalized Temporally Hybrid
Recurrent Neural Network for Multimodal
Information Fusion
1 Xerox Research Centre India; 2 Amazon Development Center India
('2757149', 'Ankit Gandhi', 'ankit gandhi')
('34751361', 'Arjun Sharma', 'arjun sharma')
('2221075', 'Arijit Biswas', 'arijit biswas')
('2116262', 'Om Deshmukh', 'om deshmukh')
{ankit.g1290,arjunsharma.iitg,arijitbiswas87}@gmail.com;
om.deshmukh@xerox.com (*-equal contribution)
55eb7ec9b9740f6c69d6e62062a24bfa091bbb0cCAS(ME)2: A Database of Spontaneous
Macro-expressions and Micro-expressions
State Key Laboratory of Brain and Cognitive Science, Institute of Psychology
Chinese Academy of Sciences, Beijing, China
University of Chinese Academy of Sciences, Beijing, China
Key Laboratory of Behavior Sciences, Institute of Psychology
Chinese Academy of Sciences, Beijing, China
Institute of Psychology and Behavioral Sciences
Wenzhou University, Wenzhou, China
('34495371', 'Fangbing Qu', 'fangbing qu')
('9185305', 'Wen-Jing Yan', 'wen-jing yan')
('1684007', 'Xiaolan Fu', 'xiaolan fu')
{qufb,fuxl}@psych.ac.cn
wangsujing@psych.ac.cn
yanwj@wzu.edu.cn
55b9b1c1c5487f5f62b44340104a9c4cc2ed7c96The Color of the Cat is Gray:
1 Million Full-Sentences Visual Question Answering (FSVQA)
The University of Tokyo
7 Chome-3-1 Hongo, Bunkyo
Tokyo 113-8654, Japan
('2518695', 'Andrew Shin', 'andrew shin')
('3250559', 'Yoshitaka Ushiku', 'yoshitaka ushiku')
('1790553', 'Tatsuya Harada', 'tatsuya harada')
55c40cbcf49a0225e72d911d762c27bb1c2d14aaIndian Face Age Database: A Database for Face Recognition with Age Variation
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 126 - Number 5
Year of Publication: 2015
10.5120/ijca2015906055
('2029759', 'Reecha Sharma', 'reecha sharma')
9788b491ddc188941dadf441fc143a4075bff764LOGAN: Membership Inference Attacks Against Generative Models∗
University College London
('9200194', 'Jamie Hayes', 'jamie hayes')
('2008164', 'Luca Melis', 'luca melis')
('1722262', 'George Danezis', 'george danezis')
('1728207', 'Emiliano De Cristofaro', 'emiliano de cristofaro')
{j.hayes, l.melis, g.danezis, e.decristofaro}@cs.ucl.ac.uk
973e3d9bc0879210c9fad145a902afca07370b86(IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 7, No. 7, 2016
From Emotion Recognition to Website
Customizations
O.B. Efremides
School of Web Media
Bahrain Polytechnic
Isa Town, Kingdom of Bahrain
970c0d6c0fd2ebe7c5921a45bc70f6345c844ff3Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)
Discriminative Log-Euclidean Feature Learning for Sparse
Representation-Based Recognition of Faces from Videos
Center for Automation Research, University of Maryland
College Park, MD
{mefathy, azadeh, rama} (at) umiacs.umd.edu
('4570075', 'Mohammed E. Fathy', 'mohammed e. fathy')
('2943431', 'Azadeh Alavi', 'azadeh alavi')
('9215658', 'Rama Chellappa', 'rama chellappa')
97b8249914e6b4f8757d22da51e8347995a40637
Large-Scale Vehicle Detection, Indexing,
and Search in Urban Surveillance Videos
('1832513', 'Behjat Siddiquie', 'behjat siddiquie')
('3151405', 'James Petterson', 'james petterson')
('2029646', 'Yun Zhai', 'yun zhai')
('3233207', 'Ankur Datta', 'ankur datta')
('34609371', 'Lisa M. Brown', 'lisa m. brown')
('1767897', 'Sharath Pankanti', 'sharath pankanti')
972ef9ddd9059079bdec17abc8b33039ed25c99cInternational Journal of Innovations in Engineering and Technology (IJIET)
A Novel on understanding How IRIS
Recognition works
Dept. of Comp. Science
M.P.M. College, Bhopal, India
Asst. Professor CSE
M.P.M. College, Bhopal, India
('37930830', 'Vijay Shinde', 'vijay shinde')
('9345591', 'Prakash Tanwar', 'prakash tanwar')
97032b13f1371c8a813802ade7558e816d25c73fTotal Recall Final Report
Supervisor: Professor Duncan Gillies
January 11, 2006
('2561350', 'Peter Collingbourne', 'peter collingbourne')
('3036326', 'Khilan Gudka', 'khilan gudka')
('15490561', 'Steve Lovegrove', 'steve lovegrove')
('35260800', 'Jiefei Ma', 'jiefei ma')
97137d5154a9f22a5d9ecc32e8e2b95d07a5a571The final publication is available at Springer via http://dx.doi.org/10.1007/s11042-016-3418-y
Facial Expression Recognition based on Local Region
Specific Features and Support Vector Machines
Park
Korea Electronics Technology Institute, Jeonju-si, Jeollabuk-do 561-844, Rep. of Korea; E-Mails: {deepak, shjeong, shpark}@keti.re.kr
Division of Computer Engineering, Jeonbuk National University, Jeonju-si, Jeollabuk-do 561-756, Rep. of Korea; E-Mail: chlee@jbnu.ac.kr
Tel.: +82-63-270-2406; Fax: +82-63-270-2394.
('32322842', 'Deepak Ghimire', 'deepak ghimire')
('31984909', 'SungHwan Jeong', 'sunghwan jeong')
('2034182', 'Joonwhoan Lee', 'joonwhoan lee')
♣ Corresponding Author; E-Mail: chlee@jbnu.ac.kr
9730b9cd998c0a549601c554221a596deda8af5bSpatio-temporal Person Retrieval via Natural Language Queries
Graduate School of Information Science and Technology, The University of Tokyo
('3369734', 'Masataka Yamaguchi', 'masataka yamaguchi')
('8915348', 'Kuniaki Saito', 'kuniaki saito')
('3250559', 'Yoshitaka Ushiku', 'yoshitaka ushiku')
('1790553', 'Tatsuya Harada', 'tatsuya harada')
{yamaguchi, ksaito, ushiku, harada}@mi.t.u-tokyo.ac.jp
978a219e07daa046244821b341631c41f91daccdEmotional Intelligence: Giving Computers
Effective Emotional Skills to Aid Interaction
School of Computer Science, University of Birmingham, UK
1 Introduction
Why do computers need emotional intelligence? Science fiction often portrays emotional computers as dangerous and frightening, and as a serious threat to human life. One of the most famous examples is HAL, the supercomputer onboard the spaceship Discovery in the movie 2001: A Space Odyssey. HAL could express, recognize and respond to human emotion, and generally had strong emotional skills – the consequences of which were catastrophic. However, since the movie's release almost 40 years ago, the traditional view of emotions as contributing to irrational and unpredictable behaviour has changed. Recent research has suggested that emotions play an essential role in important areas such as learning, memory, motivation, attention, creativity, and decision making. These findings have prompted a large number of research groups around the world to start examining the role of emotions and emotional intelligence in human-computer interaction (HCI).

For almost half a century, computer scientists have been attempting to build machines that can interact intelligently with us, and despite initial optimism, they are still struggling to do so. For much of this time, the role of emotion in developing intelligent computers was largely overlooked, and it is only recently that interest in this area has risen dramatically. This increased interest can largely be attributed to the work of [6] and [85], who were amongst the first to bring emotion to the attention of computer scientists. The former highlighted emotion as a fundamental component required in building believable agents, while the latter further raised the awareness of emotion and its potential importance in HCI. Since these publications, the literature on emotions and computing has grown considerably, with progress being made on a number of different fronts.

The concept of designing computers to have emotional intelligence may seem strange, but equipping computers with this type of intelligence may provide a number of important advantages. For example, in spite of a computer's
('3134697', 'Chris Creed', 'chris creed')
('2282865', 'Russell Beale', 'russell beale')
cpc@cs.bham.ac.uk
r.beale@cs.bham.ac.uk
976e0264bb57786952a987d4456850e274714fb8Improving Semantic Concept Detection through the
Dictionary of Visually-distinct Elements
Center for Research in Computer Vision, University of Central Florida
('1707795', 'Afshin Dehghan', 'afshin dehghan')
('1803711', 'Haroon Idrees', 'haroon idrees')
('1745480', 'Mubarak Shah', 'mubarak shah')
{adehghan, haroon, shah}@cs.ucf.edu
9758f3fd94239a8d974217fe12599f88fb413f3dUC-HCC Submission to Thumos 2014
Vision and Sensing, HCC, ESTeM, University of Canberra
('1793720', 'O. V. Ramana Murthy', 'o. v. ramana murthy')
('1717204', 'Roland Goecke', 'roland goecke')
97f9c3bdb4668f3e140ded2da33fe704fc81f3eaAn Experimental Comparison of Appearance
and Geometric Model Based Recognition
J. Mundy, A. Liu, N. Pillow, A. Zisserman, S. Abdallah, S. Utcke,
S. Nayar and C. Rothwell
General Electric Corporate Research and Development, Schenectady, NY, USA
Robotics Research Group, University of Oxford, Oxford, UK
Columbia University, NY, USA
INRIA, Sophia Antipolis, France
97e569159d5658760eb00ca9cb662e6882d2ab0eCorrelation Filters for Object Alignment
Carnegie Mellon University
Carnegie Mellon University
B.V.K. Vijaya Kumar
Carnegie Mellon University
('2232940', 'Vishnu Naresh Boddeti', 'vishnu naresh boddeti')
('1733113', 'Takeo Kanade', 'takeo kanade')
naresh@cmu.edu
tk@cs.cmu.edu
kumar@ece.cmu.edu
97cf04eaf1fc0ac4de0f5ad4a510d57ce12544f5
Deep Affect Prediction in-the-wild: Aff-Wild Database and Challenge,
Deep Architectures, and Beyond
Zafeiriou
('1811396', 'Dimitrios Kollias', 'dimitrios kollias')
('1757287', 'Guoying Zhao', 'guoying zhao')
97d1d561362a8b6beb0fdbee28f3862fb48f1380
Age Synthesis and Estimation via Faces:
A Survey
('1708679', 'Yun Fu', 'yun fu')
('1822413', 'Guodong Guo', 'guodong guo')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
97540905e4a9fdf425989a794f024776f28a3fa9
97865d31b5e771cf4162bc9eae7de6991ceb8bbfFace and Gender Classification in Crowd Video
IIIT-D-MTech-CS-GEN-13-100
July 16, 2015
Indraprastha Institute of Information Technology
New Delhi
Thesis Advisors
Dr. Richa Singh
Submitted in partial fulfillment of the requirements
for the Degree of M.Tech. in Computer Science
© Verma, 2015
Keywords : Face Recognition, Gender Classification, Crowd database
('2578160', 'Priyanka Verma', 'priyanka verma')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
975978ee6a32383d6f4f026b944099e7739e5890Privacy-Preserving Age Estimation
for Content Rating
Linwei Ye, University of Manitoba, Winnipeg, Canada
Binglin Li∗, Simon Fraser University, Burnaby, Canada
Noman Mohammed, University of Manitoba, Winnipeg, Canada
Yang Wang, University of Manitoba, Winnipeg, Canada
Jie Liang, Simon Fraser University, Burnaby, Canada
('2373631', 'Linwei Ye', 'linwei ye')
yel3@cs.umanitoba.ca
binglinl@sfu.ca
noman@cs.umanitoba.ca
ywang@cs.umanitoba.ca
jiel@sfu.ca
9755554b13103df634f9b1ef50a147dd02eab02fHow Transferable are CNN-based Features for
Age and Gender Classification?
('2850086', 'Gökhan Özbulak', 'gökhan özbulak')
('3152281', 'Yusuf Aytar', 'yusuf aytar')
635158d2da146e9de559d2742a2fa234e06b52db
63d8110ac76f57b3ba8a5947bc6bdbb86f25a342On Modeling Variations for Face Authentication
Carnegie Mellon University, Pittsburgh, PA
('1759169', 'Xiaoming Liu', 'xiaoming liu')
('1746230', 'Tsuhan Chen', 'tsuhan chen')
xiaoming@andrew.cmu.edu tsuhan@cmu.edu kumar@ece.cmu.edu
63cf5fc2ee05eb9c6613043f585dba48c5561192Prototype Selection for
Classification in Standard
and Generalized
Dissimilarity Spaces
632b24ddd42fda4aebc5a8af3ec44f7fd3ecdc6cReal-Time Facial Segmentation
and Performance Capture from RGB Input
Pinscreen
University of Southern California
('2059597', 'Shunsuke Saito', 'shunsuke saito')
('50290121', 'Tianye Li', 'tianye li')
('1706574', 'Hao Li', 'hao li')
6324fada2fb00bd55e7ff594cf1c41c918813030Uncertainty Reduction For Active Image Clustering
via a Hybrid Global-Local Uncertainty Model
State University of New York at Buffalo
Department of Computer Science and Engineering
338 Davis Hall, Buffalo, NY, 14260-2500
('2228109', 'Caiming Xiong', 'caiming xiong')
('34187462', 'David M. Johnson', 'david m. johnson')
('3587688', 'Jason J. Corso', 'jason j. corso')
{cxiong,davidjoh,jcorso}@buffalo.edu
6308e9c991125ee6734baa3ec93c697211237df8LEARNING THE SPARSE REPRESENTATION FOR CLASSIFICATION
Beckman Institute, University of Illinois at Urbana-Champaign, USA
('1706007', 'Jianchao Yang', 'jianchao yang')
('7898154', 'Jiangping Wang', 'jiangping wang')
{jyang29, jwang63, huang}@ifp.illinois.edu
6342a4c54835c1e14159495373ab18b4233d2d9bTOWARDS POSE-ROBUST
FACE RECOGNITION ON VIDEO
Submitted as a requirement of the degree
of doctor of philosophy
at the
Science and Engineering Faculty
Queensland University of Technology
September, 2014
('23168868', 'Moh Edi Wibowo', 'moh edi wibowo')
63d8d69e90e79806a062cb8654ad78327c8957bb
63c109946ffd401ee1195ed28f2fb87c2159e63d
MVA2011 IAPR Conference on Machine Vision Applications, June 13-15, 2011, Nara, JAPAN
Robust Facial Feature Localization using Improved Active Shape
Model and Gabor Filter
Engineering, National Formosa University
Taiwan
('1711364', 'Hui-Yu Huang', 'hui-yu huang')
E-mail: hyhuang@nfu.edu.tw
63b29886577a37032c7e32d8899a6f69b11a90deImage-set based Face Recognition Using Boosted Global
and Local Principal Angles
Xi'an Jiaotong University, China
University of Tsukuba, Japan
('6916241', 'Xi Li', 'xi li')
('1770128', 'Kazuhiro Fukui', 'kazuhiro fukui')
('1715389', 'Nanning Zheng', 'nanning zheng')
lxaccv09@yahoo.com,
znn@xjtu.edu.cn
kf@cs.tsukuba.ac.jp
631483c15641c3652377f66c8380ff684f3e365cSync-DRAW: Automatic Video Generation using Deep Recurrent
Attentive Architectures
Gaurav Mittal∗
IIT Hyderabad
Vineeth N Balasubramanian
IIT Hyderabad
('8268761', 'Tanya Marwah', 'tanya marwah')
gaurav.mittal.191013@gmail.com
ee13b1044@iith.ac.in
vineethnb@iith.ac.in
63a6c256ec2cf2e0e0c9a43a085f5bc94af84265Complexity of Multiverse Networks and
their Multilayer Generalization
The Blavatnik School of Computer Science
Tel Aviv University
('1762320', 'Etai Littwin', 'etai littwin')
('1776343', 'Lior Wolf', 'lior wolf')
63213d080a43660ac59ea12e3c35e6953f6d7ce8ActionVLAD: Learning spatio-temporal aggregation for action classification
Robotics Institute, Carnegie Mellon University
2Adobe Research
3INRIA
http://rohitgirdhar.github.io/ActionVLAD
('3102850', 'Rohit Girdhar', 'rohit girdhar')
('1770537', 'Deva Ramanan', 'deva ramanan')
('1782755', 'Josef Sivic', 'josef sivic')
('2015670', 'Bryan Russell', 'bryan russell')
630d1728435a529d0b0bfecb0e7e335f8ea2596dFacial Action Unit Detection by Cascade of Tasks
School of Information Science and Engineering, Southeast University, Nanjing, China
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA
University of Pittsburgh, Pittsburgh, PA
('2499751', 'Xiaoyu Ding', 'xiaoyu ding')
('18870591', 'Qiao Wang', 'qiao wang')
63eefc775bcd8ccad343433fc7a1dd8e1e5ee796
632fa986bed53862d83918c2b71ab953fd70d6ccGÜNEL ET AL.: WHAT FACE AND BODY SHAPES CAN TELL ABOUT HEIGHT
What Face and Body Shapes Can Tell
About Height
CVLab
EPFL,
Lausanne, Switzerland
('46211822', 'Semih Günel', 'semih günel')
('2933543', 'Helge Rhodin', 'helge rhodin')
('1717736', 'Pascal Fua', 'pascal fua')
semih.gunel@epfl.ch
helge.rhodin@epfl.ch
pascal.fua@epfl.ch
63340c00896d76f4b728dbef85674d7ea8d5ab26
Discriminant Subspace Analysis:
A Fukunaga-Koontz Approach
('40404906', 'Sheng Zhang', 'sheng zhang')
('1715286', 'Terence Sim', 'terence sim')
633101e794d7b80f55f466fd2941ea24595e10e6In submission to IEEE conference
Face Attribute Prediction with classification CNN
Computer Science and Communication
KTH Royal Institute of Technology
100 44 Stockholm, Sweden
('50262049', 'Yang Zhong', 'yang zhong')
('1736906', 'Josephine Sullivan', 'josephine sullivan')
('40565290', 'Haibo Li', 'haibo li')
{yzhong, sullivan, haiboli}@kth.se
63a2e2155193dc2da9764ae7380cdbd044ff2b94A Dense SURF and Triangulation based
Spatio-Temporal Feature for Action Recognition
The University of Electro-Communications
Chofu, Tokyo 182-8585 JAPAN
('2274625', 'Do Hang Nga', 'do hang nga')
('1681659', 'Keiji Yanai', 'keiji yanai')
{dohang,yanai}@mm.cs.uec.ac.jp
63d865c66faaba68018defee0daf201db8ca79edDeep Regression for Face Alignment
1Dept. of Electronics and Information Engineering, Huazhong Univ. of Science and Technology, China
2Microsoft Research, Beijing, China
('2276155', 'Baoguang Shi', 'baoguang shi')
('1688516', 'Jingdong Wang', 'jingdong wang')
shibaoguang@gmail.com,{xbai,liuwy}@hust.edu.cn,jingdw@microsoft.com
63cff99eff0c38b633c8a3a2fec8269869f81850Feature Correlation Filter for Face Recognition
Center for Biometrics and Security Research & National Laboratory of Pattern
Recognition,
Institute of Automation, Chinese Academy of Sciences
95 Zhongguancun East Road, 100080 Beijing, China
http://www.cbsr.ia.ac.cn
('32015491', 'XiangXin Zhu', 'xiangxin zhu')
('40397682', 'Shengcai Liao', 'shengcai liao')
('1718623', 'Zhen Lei', 'zhen lei')
('3168566', 'Rong Liu', 'rong liu')
('34679741', 'Stan Z. Li', 'stan z. li')
{xxzhu,scliao,zlei,rliu,szli}@nlpr.ia.ac.cn
634541661d976c4b82d590ef6d1f3457d2857b19Alma Mater Studiorum – Università di Bologna
in joint supervision with Università di Sassari
PHD PROGRAM IN ELECTRONIC, COMPUTER SCIENCE AND
TELECOMMUNICATIONS ENGINEERING
Cycle XXVI
Competition sector: 09/H1
Scientific-disciplinary sector: ING-INF/05
ADVANCED TECHNIQUES FOR FACE RECOGNITION
UNDER CHALLENGING ENVIRONMENTS
Presented by:
PhD Coordinator
ALESSANDRO VANELLI-CORALLI
Advisor
DAVIDE MALTONI
Advisor
MASSIMO TISTARELLI
Final examination year: 2014
('2384894', 'Yunlian Sun', 'yunlian sun')
6332a99e1680db72ae1145d65fa0cccb37256828MASTER IN COMPUTER VISION AND ARTIFICIAL INTELLIGENCE
REPORT OF THE RESEARCH PROJECT
OPTION: COMPUTER VISION
Pose and Face Recovery via
Spatio-temporal GrabCut Human
Segmentation
Date: 13/07/2010
('4765407', 'Antonio Hernández Vela', 'antonio hernández vela')
('10722928', 'Sergio Escalera Guerrero', 'sergio escalera guerrero')
63488398f397b55552f484409b86d812dacde99aLearning Universal Multi-view Age Estimator by Video Contexts
2 School of Computing, National University of Singapore
3 Advanced Digital Sciences Center, Singapore; 4 Facebook
('1964516', 'Zheng Song', 'zheng song')
('5796401', 'Bingbing Ni', 'bingbing ni')
('39034731', 'Dong Guo', 'dong guo')
('1715286', 'Terence Sim', 'terence sim')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
{zheng.s, eleyans}@nus.edu.sg, bingbing.ni@adsc.com.sg, dnguo@fb.com, tsim@comp.nus.edu.sg
6341274aca0c2977c3e1575378f4f2126aa9b050A Multi-Scale Cascade Fully Convolutional
Network Face Detector
Institute for Robotics and Intelligent Systems
University of Southern California
Los Angeles, California 90089
('3469030', 'Zhenheng Yang', 'zhenheng yang')
('1694832', 'Ramakant Nevatia', 'ramakant nevatia')
Email: {zhenheny, nevatia}@usc.edu
63c022198cf9f084fe4a94aa6b240687f21d8b41
632441c9324cd29489cee3da773a9064a46ae26bVideo-based Cardiac Physiological Measurements Using
Joint Blind Source Separation Approaches
by
B. Eng., Zhejiang University
A THESIS SUBMITTED IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
Master of Applied Science
in
THE FACULTY OF GRADUATE AND POSTDOCTORAL
STUDIES
(Electrical and Computer Engineering)
The University of British Columbia
(Vancouver)
July 2015
('33064881', 'Huan Qi', 'huan qi')
('33064881', 'Huan Qi', 'huan qi')
0f65c91d0ed218eaa7137a0f6ad2f2d731cf8dabMulti-Directional Multi-Level Dual-Cross
Patterns for Robust Face Recognition
('37990555', 'Changxing Ding', 'changxing ding')
('3826759', 'Jonghyun Choi', 'jonghyun choi')
('1692693', 'Dacheng Tao', 'dacheng tao')
('1693428', 'Larry S. Davis', 'larry s. davis')
0f112e49240f67a2bd5aaf46f74a924129f03912
Age-Invariant Face Recognition
('2222919', 'Unsang Park', 'unsang park')
('3225345', 'Yiying Tong', 'yiying tong')
('6680444', 'Anil K. Jain', 'anil k. jain')
0fc254272db096a9305c760164520ad9914f4c9eUNSUPERVISED CONVOLUTIONAL NEURAL NETWORKS FOR MOTION ESTIMATION
School of Electronic Engineering and Computer Science
Queen Mary University of London
Mile End road, E1 4NS, London, UK
('29946980', 'Aria Ahmadi', 'aria ahmadi')
('1744405', 'Ioannis Patras', 'ioannis patras')
0fae5d9d2764a8d6ea691b9835d497dd680bbccdFace Recognition using Canonical Correlation Analysis
Department of Electrical Engineering
Indian Institute of Technology, Madras
Department of Electrical Engineering
Indian Institute of Technology, Madras
('37274547', 'Amit C. Kale', 'amit c. kale')
('4436239', 'R. Aravind', 'r. aravind')
ee04s043@ee.iitm.ac.in
aravind@tenet.res.in
0f4cfcaca8d61b1f895aa8c508d34ad89456948eLOCAL APPEARANCE BASED FACE RECOGNITION USING
DISCRETE COSINE TRANSFORM (WedPmPO4)
Author(s) :
0fdcfb4197136ced766d538b9f505729a15f0dafMultiple Pattern Classification by Sparse Subspace Decomposition
Institute of Media and Information Technology, Chiba University
1-33 Yayoi, Inage, Chiba, Japan
('1688743', 'Tomoya Sakai', 'tomoya sakai')
tsakai@faculty.chiba-u.jp
0fad544edfc2cd2a127436a2126bab7ad31ec333Decorrelating Semantic Visual Attributes by Resisting the Urge to Share
UT Austin
USC
UT Austin
('2228235', 'Dinesh Jayaraman', 'dinesh jayaraman')
('1693054', 'Fei Sha', 'fei sha')
('1794409', 'Kristen Grauman', 'kristen grauman')
dineshj@cs.utexas.edu
feisha@usc.edu
grauman@cs.utexas.edu
0f32df6ae76402b98b0823339bd115d33d3ec0a0Emotion recognition from embedded bodily
expressions and speech during dyadic interactions
('40404576', 'Sikandar Amin', 'sikandar amin')
('2766593', 'Prateek Verma', 'prateek verma')
('1906895', 'Mykhaylo Andriluka', 'mykhaylo andriluka')
('3194727', 'Andreas Bulling', 'andreas bulling')
∗Max Planck Institute for Informatics, Germany, {pmueller,andriluk,bulling}@mpi-inf.mpg.de
†Stanford University, USA, prateekv@stanford.edu
‡Technical University of Munich, Germany, sikandar.amin@in.tum.de
0fd1715da386d454b3d6571cf6d06477479f54fcJ Intell Robot Syst (2016) 82:101–133
DOI 10.1007/s10846-015-0259-2
A Survey of Autonomous Human Affect Detection Methods
for Social Robots Engaged in Natural HRI
Received: 10 December 2014 / Accepted: 11 August 2015 / Published online: 23 August 2015
© Springer Science+Business Media Dordrecht 2015
('2929516', 'Derek McColl', 'derek mccoll')
('31839336', 'Naoaki Hatakeyama', 'naoaki hatakeyama')
('1719617', 'Beno Benhabib', 'beno benhabib')
0f9bf5d8f9087fcba419379600b86ae9e9940013
0f829fee12e86f980a581480a9e0cefccb59e2c5Bird Part Localization Using Exemplar-Based Models with Enforced
Pose and Subcategory Consistency
Columbia University
Problem
The goal of our work is to localize the parts automatically and accurately for fine-grained categories. We evaluate our method on bird images in the CUB-200-2011 [1] dataset.
Pipeline
(1) Sliding-window detection. (2) Matching and ranking exemplars. (3) Predicting the final part configuration.

Approach
Does X_{k,t} match the image I? ⟺ P(X_{k,t} | I) = ?

P(X_{k,t} | I) = P(X_{k,t} | D_p)^α · P(X_{k,t} | D_s)^(1−α)   (1)
P(X_{k,t} | D_p) = Gavg{ P(x^i_{k,t} | d^i_p[c^i_k, s^i_{k,t}]) }   (2)
P(X_{k,t} | D_s) = max_l P(X_{k,t} | l, D_s)   (3)
P(X_{k,t} | l, D_s) = Gavg{ P(x^i_{k,t} | d^i_s[l, s^i_{k,t}, θ^i_{k,t}]) }   (4)

We use the most likely models M to predict the part locations of the testing sample:

x̂^i = arg max_{x^i} Σ_{X_{k,t} ∈ M} P(Δx^i_{k,t}) · P(x^i | d^i_p[c^i_k, s^i_{k,t}])   (5)

Subcategory Detectors
Subcategory clusters of Back: Species 1, Species 2, Species 3. For each species l of part i, we build a detector after aligning the samples. Assuming the detector scans the image over scales and orientations, the response map of this detector at a particular scale s^i and orientation θ^i is denoted as d^i_s[l, s^i, θ^i].

Pose Detectors
Pose clusters of Back: Pose 1, Pose 2, Pose 3. For each pose cluster c^i of part i, we build a detector. The detector scans the image over scales, and the response map of this detector at a particular scale s^i is denoted as d^i_p[c^i, s^i].

Enforcing Consistency
P(x^i_{k,t} | d^i_p[c^i_k, s^i_{k,t}]) and P(x^i_{k,t} | d^i_s[l, s^i_{k,t}, θ^i_{k,t}])

Localization Examples
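As a minimal sketch (not the authors' implementation) of how the two detector streams combine under Eqs. (1)–(4): the names `gavg` and `exemplar_score` and the toy probability values below are illustrative assumptions, not part of the original method's code.

```python
import math

def gavg(probs):
    """Geometric average of per-part probabilities (the Gavg of Eqs. (2) and (4))."""
    return math.exp(sum(math.log(max(p, 1e-12)) for p in probs) / len(probs))

def exemplar_score(pose_probs, subcat_probs_per_species, alpha=0.5):
    """Score an exemplar X_{k,t} against an image, following Eq. (1).

    pose_probs: per-part responses from the pose detectors (D_p).
    subcat_probs_per_species: per-part responses from the subcategory
        detectors (D_s), one list per species l; the max over l is Eq. (3).
    alpha: weight balancing the two streams.
    """
    p_pose = gavg(pose_probs)                                    # Eq. (2)
    p_subcat = max(gavg(ps) for ps in subcat_probs_per_species)  # Eqs. (3)-(4)
    return (p_pose ** alpha) * (p_subcat ** (1.0 - alpha))

# Toy example: three parts, two candidate species.
score = exemplar_score(
    pose_probs=[0.8, 0.6, 0.7],
    subcat_probs_per_species=[[0.5, 0.4, 0.6], [0.7, 0.8, 0.6]],
)
```

The geometric average (rather than a sum) means a single near-zero part response vetoes the whole exemplar, which is what makes the per-part consistency constraint effective.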
References
[1] C. Wah, S. Branson, P. Welinder, P. Perona, S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Computation & Neural Systems Technical Report, CNS-TR-2011-001, 2011.
[2] P. N. Belhumeur, D. W. Jacobs, D. J. Kriegman, N. Kumar. Localizing Parts of Faces Using a Consensus of Exemplars. In CVPR '11.
Comparisons

PCP         CoE [2]   Ours
Back        46.29     62.08
Beak        43.08     49.02
Belly       54.44     69.02
Breast      54.19     66.98
Crown       64.69     72.85
Forehead    51.48     58.46
Left Eye    47.53     55.78
Left Leg    29.67     40.94
Left Wing   59.58     71.57
Nape        58.91     70.78
Right Eye   46.50     55.51
Right Leg   29.03     40.52
Right Wing  58.47     71.56
Tail        27.77     40.16
Throat      58.89     70.83
Average     48.70     59.74
mAP           Birdlets   Template bagging   Pose pooling   Ours
200 species   –          –                  28.18          44.13
14 species    40.25      44.73              57.44          62.42
('2454675', 'Jiongxin Liu', 'jiongxin liu')
('1767767', 'Peter N. Belhumeur', 'peter n. belhumeur')
{liujx09, belhumeur}@cs.columbia.edu
0faee699eccb2da6cf4307ded67ba8434368257bMultiple One-Shots for Utilizing Class Label
Information
1 The Blavatnik School of Computer
Science,
Tel-Aviv University, Israel
2 Computer Science Division,
The Open University of Israel
3 face.com
Tel-Aviv, Israel
('2188620', 'Yaniv Taigman', 'yaniv taigman')
('1776343', 'Lior Wolf', 'lior wolf')
('1756099', 'Tal Hassner', 'tal hassner')
yaniv@face.com
wolf@cs.tau.ac.il
hassner@openu.ac.il
0fabb4a40f2e3a2502cd935e54e090a304006c1cRegularized Robust Coding for Face Recognition
The Hong Kong Polytechnic University, Hong Kong, China
bSchool of Computer Science and Technology, Nanjing Univ. of Science and Technology, Nanjing, China
('5828998', 'Meng Yang', 'meng yang')
('36685537', 'Lei Zhang', 'lei zhang')
('37081450', 'Jian Yang', 'jian yang')
('1698371', 'David Zhang', 'david zhang')
0f92e9121e9c0addc35eedbbd25d0a1faf3ab529MORPH-II: A Proposed Subsetting Scheme
NSF-REU Site at UNC Wilmington, Summer 2017
('1940145', 'K. Park', 'k. park')
('11134292', 'Y. Wang', 'y. wang')
('1693283', 'C. Chen', 'c. chen')
('3369885', 'T. Kling', 't. kling')
0f0366070b46972fcb2976775b45681e62a94a26Reliable Posterior Probability Estimation for Streaming Face Recognition
University of Colorado at Colorado Springs
Terrance Boult
University of Colorado at Colorado Springs
('3274223', 'Abhijit Bendale', 'abhijit bendale')
abendale@vast.uccs.edu
tboult@vast.uccs.edu
0ff23392e1cb62a600d10bb462d7a1f171f579d0Toward Sparse Coding on Cosine
Distance
Jonghyun Choi, Hyunjong Cho, Jungsuk Kwak#,
Larry S. Davis
UMIACS | University of Maryland, College Park
Stanford University
0fd3a7ee228bbc3dd4a111dae04952a1ee58a8cdHair Style Retrieval by Semantic Mapping on
Informative Patches
Tsinghua University, Beijing, China
('38081719', 'Nan Wang', 'nan wang')
('1679380', 'Haizhou Ai', 'haizhou ai')
wang-n04@mails.tsinghua.edu.cn, ahz@mail.tsinghua.edu.cn
0f533bc9fdfb75a3680d71c84f906bbd59ee48f1Illumination Invariant Feature Extraction Based on Natural Images Statistics –
Taking Face Images as An Example
Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
National Taiwan University, Taipei, Taiwan
('2314709', 'Lu-Hung Chen', 'lu-hung chen')
('1934873', 'Yao-Hsiang Yang', 'yao-hsiang yang')
('1720473', 'Chu-Song Chen', 'chu-song chen')
('2809590', 'Ming-Yen Cheng', 'ming-yen cheng')
luhung.chen,yhyang@statistics.twbbs.org; song@iis.sinica.edu.tw
cheng@math.ntu.edu.tw
0f4eb63402a4f3bae8f396e12133684fb760def1LONG, LIU, SHAO: ATTRIBUTE EMBEDDING WITH VSAR FOR ZERO-SHOT LEARNING 1
Attribute Embedding with Visual-Semantic
Ambiguity Removal for Zero-shot Learning
1 Department of Electronic and Electrical
Engineering
The University of Sheffield
Sheffield, UK
2 Department of Computer and
Information Sciences
Northumbria University
Newcastle upon Tyne, UK
('39650869', 'Yang Long', 'yang long')
('40017778', 'Li Liu', 'li liu')
('40799321', 'Ling Shao', 'ling shao')
ylong2@sheffield.ac.uk
li2.liu@northumbria.ac.uk
ling.shao@ieee.org
0fba39bf12486c7684fd3d51322e3f0577d3e4e8Task Specific Local Region Matching
Department of Computer Science and Engineering
University of California, San Diego
('2490700', 'Boris Babenko', 'boris babenko')
{bbabenko,pdollar,sjb}@cs.ucsd.edu
0f395a49ff6cbc7e796656040dbf446a40e300aaORIGINAL RESEARCH
published: 22 December 2015
doi: 10.3389/fpsyg.2015.01937
The Change of Expression Configuration Affects Identity-Dependent Expression Aftereffect but Not Identity-Independent Expression Aftereffect
1 College of Information Engineering, Shanghai Maritime University, Shanghai, China; 2 School of Information, Kochi University of Technology, Kochi, Japan; 3 Yunnan Key Laboratory of Computer Technology Applications, Kunming University of Science and Technology, Kunming, China
The present study examined the influence of expression configuration on the cross-identity expression aftereffect. The expression configuration refers to the spatial arrangement of facial features in a face for conveying an emotion, e.g., an open-mouth smile vs. a closed-mouth smile. In the first of two experiments, the expression aftereffect was measured using a cross-identity/cross-expression-configuration factorial design. The facial identities of test faces were the same as or different from the adaptor, while, orthogonally, the expression configurations of those facial identities were also the same or different. The results show that the change of expression configuration impaired the expression aftereffect when the facial identities of adaptor and tests were the same; however, the impairment disappeared when the facial identities were different, indicating that the identity-independent expression representation is more robust to a change of expression configuration than the identity-dependent expression representation. In the second experiment, we used schematic line faces as adaptors and real faces as tests to minimize the similarity between the adaptor and tests, which is expected to exclude the contribution of the identity-dependent expression representation to the expression aftereffect. The second experiment yields a result similar to the identity-independent expression aftereffect observed in Experiment 1. The findings indicate different neural sensitivities to expression configuration for the identity-dependent and identity-independent expression systems.
Keywords: facial expression, adaptation, aftereffect, visual representation, vision
INTRODUCTION
One key issue in face study is to understand how emotional expression is represented in the human visual system. According to the classical cognitive model (Bruce and Young, 1986) and neural model (Haxby et al., 2000), emotional expression is considered to be represented and processed independently of facial identity. This view is supported by several lines of evidence.
Edited by: Wenfeng Chen, Institute of Psychology, Chinese Academy of Sciences, China
Reviewed by: Marianne Latinus, Aix Marseille Université, France; Jan Van den Stock, KU Leuven, Belgium
*Correspondence:
Specialty section:
This article was submitted to
Emotion Science,
a section of the journal
Frontiers in Psychology
Received: 03 January 2015
Accepted: 02 December 2015
Published: 22 December 2015
Citation:
Song M, Shinomori K, Qian Q, Yin J and Zeng W (2015) The Change of Expression Configuration Affects Identity-Dependent Expression Aftereffect but Not Identity-Independent Expression Aftereffect. Front. Psychol. 6:1937. doi: 10.3389/fpsyg.2015.01937
Frontiers in Psychology | www.frontiersin.org
December 2015 | Volume 6 | Article 1937
('1692572', 'Miao Song', 'miao song')
('1970678', 'Keizo Shinomori', 'keizo shinomori')
('2431558', 'Qian Qian', 'qian qian')
('40596849', 'Jun Yin', 'jun yin')
('2161630', 'Weiming Zeng', 'weiming zeng')
('1692572', 'Miao Song', 'miao song')
songmiaolm@gmail.com
0fb8317a8bf5feaf297af8e9b94c50c5ed0e8277Detecting Hands in Egocentric Videos: Towards
Action Recognition
Gran Via de les Corts Catalanes, 585, 08007 Barcelona, Spain
University of Barcelona
2 Computer Vision Centre,
Campus UAB, 08193 Cerdanyola del Valls, Barcelona, Spain
('1901010', 'Alejandro Cartas', 'alejandro cartas')
('2837527', 'Mariella Dimiccoli', 'mariella dimiccoli')
('1724155', 'Petia Radeva', 'petia radeva')
alejandro.cartas@ub.edu
0fe96806c009e8d095205e8f954d41b2b9fd5dcfOn-the-Job Learning with Bayesian Decision Theory
Department of Computer Science
Stanford University
Arun Chaganty
Department of Computer Science
Stanford University
Department of Computer Science
Stanford University
Department of Computer Science
Stanford University
('2795219', 'Keenon Werling', 'keenon werling')
('40085065', 'Percy Liang', 'percy liang')
('1812612', 'Christopher D. Manning', 'christopher d. manning')
keenon@cs.stanford.edu
chaganty@cs.stanford.edu
pliang@cs.stanford.edu
manning@cs.stanford.edu
0f940d2cdfefc78c92ec6e533a6098985f47a377A Hierarchical Framework for Simultaneous Facial Activity Tracking
Department of Electrical,Computer and System Engineering
Rensselaer Polytechnic Institute
Troy, NY 12180
('1713712', 'Jixu Chen', 'jixu chen')
('1726583', 'Qiang Ji', 'qiang ji')
chenj4@rpi.edu
qji@ecse.rpi.edu
0f21a39fa4c0a19c4a5b4733579e393cb1d04f71Evaluation of optimization
components of a 3D to 2D
landmark fitting algorithm for
head pose estimation
11029668
Bachelor thesis
Credits: 18 EC
Bachelor Opleiding Kunstmatige Intelligentie
University of Amsterdam
Faculty of Science
Science Park 904
1098 XH Amsterdam
Supervisors
dr. Sezer Karaoglu
MSc. Minh Ngo
Informatics Institute
Faculty of Science
University of Amsterdam
Science Park 904
1090 GH Amsterdam
June 29th, 2018
0fd1bffb171699a968c700f206665b2f8837d953Weakly Supervised Object Localization with
Multi-fold Multiple Instance Learning
('1939006', 'Ramazan Gokberk Cinbis', 'ramazan gokberk cinbis')
('34602236', 'Jakob Verbeek', 'jakob verbeek')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
0faeec0d1c51623a511adb779dabb1e721a6309bSeeing is Worse than Believing: Reading
People’s Minds Better than Computer-Vision
Methods Recognize Actions
1 MIT, Cambridge, MA, USA
Purdue University, West Lafayette, IN, USA
3 SUNY Buffalo, Buffalo, NY, USA
Stanford University, Stanford, CA, USA
University of California at Los Angeles, Los Angeles, CA, USA
University of Michigan, Ann Arbor, MI, USA
Princeton University, Princeton, NJ, USA
Rutgers University, Newark, NJ, USA
University of Texas at Arlington, Arlington, TX, USA
National University of Ireland Maynooth, Co. Kildare, Ireland
('21570451', 'Andrei Barbu', 'andrei barbu')
('1728624', 'Wei Chen', 'wei chen')
('2228109', 'Caiming Xiong', 'caiming xiong')
('3587688', 'Jason J. Corso', 'jason j. corso')
('2663295', 'Christiane D. Fellbaum', 'christiane d. fellbaum')
('32218165', 'Catherine Hanson', 'catherine hanson')
('20009336', 'Evguenia Malaia', 'evguenia malaia')
('1700974', 'Barak A. Pearlmutter', 'barak a. pearlmutter')
('2465833', 'Ronnie B. Wilbur', 'ronnie b. wilbur')
andrei@0xab.com
{dpbarret,shelie,qobi,tmt,wilbur}@purdue.edu
wchen23@buffalo.edu
nsid@stanford.edu
caimingxiong@ucla.edu
jjcorso@eecs.umich.edu
fellbaum@princeton.edu
{cat,jose}@psychology.rutgers.edu
malaia@uta.edu
barak@cs.nuim.ie
0f81b0fa8df5bf3fcfa10f20120540342a0c92e5Mirror, mirror on the wall, tell me, is the error small?
Queen Mary University of London
Queen Mary University of London
('2966679', 'Heng Yang', 'heng yang')
('1744405', 'Ioannis Patras', 'ioannis patras')
heng.yang@qmul.ac.uk
i.patras@qmul.ac.uk
0f0241124d6092a0bb56259ac091467c2c6938caAssociating Faces and Names in Japanese Photo News Articles on the Web
The University of Electro-Communications, JAPAN
('32572703', 'Akio Kitahara', 'akio kitahara')
('2558848', 'Taichi Joutou', 'taichi joutou')
('1681659', 'Keiji Yanai', 'keiji yanai')
0a6d344112b5af7d1abbd712f83c0d70105211d0Constrained Local Neural Fields for robust facial landmark detection in the wild
Tadas Baltrušaitis
University of Cambridge Computer Laboratory
USC Institute for Creative Technologies
15 JJ Thomson Avenue
12015 Waterfront Drive
('40609287', 'Peter Robinson', 'peter robinson')
('1767184', 'Louis-Philippe Morency', 'louis-philippe morency')
tb346@cl.cam.ac.uk
pr10@cl.cam.ac.uk
morency@ict.usc.edu
0a64f4fec592662316764283575d05913eb2135bJoint Pixel and Feature-level Domain Adaptation in the Wild
Michigan State University
2NEC Labs America
3UC San Diego
('1849929', 'Luan Tran', 'luan tran')
0a0321785c8beac1cbaaec4d8ad0cfd4a0d6d457Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)
Learning Invariant Deep Representation
for NIR-VIS Face Recognition
National Laboratory of Pattern Recognition, CASIA
Center for Research on Intelligent Perception and Computing, CASIA
Center for Excellence in Brain Science and Intelligence Technology, CAS
University of Chinese Academy of Sciences, Beijing 100190, China
('1705643', 'Ran He', 'ran he')
('2225749', 'Xiang Wu', 'xiang wu')
('1757186', 'Zhenan Sun', 'zhenan sun')
('1688870', 'Tieniu Tan', 'tieniu tan')
{rhe,znsun,tnt}@nlpr.ia.ac.cn, alfredxiangwu@gmail.com
0a2ddf88bd1a6c093aad87a8c7f4150bfcf27112Patch-based Models For Visual Object Classes
A dissertation submitted in partial fulfilment
of the requirements for the degree of
Doctor of Philosophy
at
University College London
Department of Computer Science
University College London
February 24, 2011
('1904148', 'Jania Aghajanian', 'jania aghajanian')
0a5ffc55b584da7918c2650f9d8602675d256023Efficient Face Alignment via Locality-constrained Representation for Robust
Recognition
School of Electronic and Information Engineering, South China University of Technology
School of Electronic and Computer Engineering, Peking University
School of Computer Science and Software Engineering, Shenzhen University
4SIAT, Chinese Academy of Sciences
('36326884', 'Weiyang Liu', 'weiyang liu')
0aeb5020003e0c89219031b51bd30ff1bceea363Sparsifying Neural Network Connections for Face Recognition
1SenseTime Group
The Chinese University of Hong Kong
The Chinese University of Hong Kong
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
('1681656', 'Yi Sun', 'yi sun')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
sunyi@sensetime.com
xgwang@ee.cuhk.edu.hk
xtang@ie.cuhk.edu.hk
0a511058edae582e8327e8b9d469588c25152dc6
0a4f3a423a37588fde9a2db71f114b293fc09c50
0aa74ad36064906e165ac4b79dec298911a7a4dbVariational Inference for the Indian Buffet Process
Engineering Department
Cambridge University
Cambridge, UK
Engineering Department
Cambridge University
Cambridge, UK
Gatsby Unit
University College London
London, UK
Kurt T. Miller∗
Computer Science Division
University of California, Berkeley
Berkeley, CA
('2292194', 'Finale Doshi-Velez', 'finale doshi-velez')
('1689857', 'Jurgen Van Gael', 'jurgen van gael')
('1725303', 'Yee Whye Teh', 'yee whye teh')
0abf67e7bd470d9eb656ea2508beae13ca173198Going Deeper into First-Person Activity Recognition
Carnegie Mellon University
Pittsburgh, PA 15213, USA
('2238622', 'Minghuang Ma', 'minghuang ma')
('2681569', 'Haoqi Fan', 'haoqi fan')
('37991449', 'Kris M. Kitani', 'kris m. kitani')
minghuam@andrew.cmu.edu haoqif@andrew.cmu.edu kkitani@cs.cmu.edu
0af33f6b5fcbc5e718f24591b030250c6eec027aText Analysis for Automatic Image Annotation
Interdisciplinary Centre for Law & IT
Department of Computer Science
Katholieke Universiteit Leuven
Tiensestraat 41, 3000 Leuven, Belgium
('1797588', 'Koen Deschacht', 'koen deschacht')
('1802161', 'Marie-Francine Moens', 'marie-francine moens')
{koen.deschacht,marie-france.moens}@law.kuleuven.ac.be
0a3863a0915256082aee613ba6dab6ede962cdcdEarly and Reliable Event Detection Using Proximity Space Representation
LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, 75013, Paris, France
Jérôme Gauthier
LADIS, CEA, LIST, 91191, Gif-sur-Yvette, France
Normandie Université, UR, LITIS EA 4108, Avenue de l'université, 76801, Saint-Étienne-du-Rouvray, France
('2527457', 'Maxime Sangnier', 'maxime sangnier')
('1792962', 'Alain Rakotomamonjy', 'alain rakotomamonjy')
MAXIME.SANGNIER@TELECOM-PARISTECH.FR
JEROME.GAUTHIER@CEA.FR
ALAIN.RAKOTO@INSA-ROUEN.FR
0a60d9d62620e4f9bb3596ab7bb37afef0a90a4fChimpanzee Faces in the Wild: Log-Euclidean CNNs for Predicting Identities and Attributes of Primates. GCPR 2016
© Copyright by Springer. The final publication will be available at link.springer.com
A. Freytag, E. Rodner, M. Simon, A. Loos, H. Kühl and J. Denzler
Chimpanzee Faces in the Wild:
Log-Euclidean CNNs for Predicting Identities
and Attributes of Primates
Computer Vision Group, Friedrich Schiller University Jena, Germany
2Michael Stifel Center Jena, Germany
Fraunhofer Institute for Digital Media Technology, Germany
Max Planck Institute for Evolutionary Anthropology, Germany
5German Centre for Integrative Biodiversity Research (iDiv), Germany
('1720839', 'Alexander Freytag', 'alexander freytag')
('1679449', 'Erik Rodner', 'erik rodner')
('49675890', 'Marcel Simon', 'marcel simon')
('4572597', 'Alexander Loos', 'alexander loos')
('1728382', 'Joachim Denzler', 'joachim denzler')
0a34fe39e9938ae8c813a81ae6d2d3a325600e5cFacePoseNet: Making a Case for Landmark-Free Face Alignment
Institute for Robotics and Intelligent Systems, USC, CA, USA
Information Sciences Institute, USC, CA, USA
The Open University of Israel, Israel
('1752756', 'Feng-Ju Chang', 'feng-ju chang')
('46634688', 'Anh Tuan Tran', 'anh tuan tran')
('1756099', 'Tal Hassner', 'tal hassner')
('11269472', 'Iacopo Masi', 'iacopo masi')
{fengjuch,anhttran,iacopoma,nevatia,medioni}@usc.edu, hassner@isi.edu
0ad8149318912b5449085187eb3521786a37bc78CP-mtML: Coupled Projection multi-task Metric Learning
for Large Scale Face Retrieval
Frederic Jurie1,∗
University of Caen, France
2MPI for Informatics, Germany
3IIT Kanpur, India
('2078892', 'Binod Bhattarai', 'binod bhattarai')
('2515597', 'Gaurav Sharma', 'gaurav sharma')
0a9d204db13d395f024067cf70ac19c2eeb5f942Viewpoint-aware Video Summarization
The University of Tokyo, 2RIKEN, 3ETH Zürich, 4KU Leuven
('2551640', 'Atsushi Kanehira', 'atsushi kanehira')
('1681236', 'Luc Van Gool', 'luc van gool')
('3250559', 'Yoshitaka Ushiku', 'yoshitaka ushiku')
('1790553', 'Tatsuya Harada', 'tatsuya harada')
0aa9872daf2876db8d8e5d6197c1ce0f8efee4b7Imperial College of Science, Technology and Medicine
Department of Computing
Timing is everything
A spatio-temporal approach to the analysis of facial
actions
Michel François Valstar
Submitted in part fulfilment of the requirements for the degree of
Doctor of Philosophy in Computing of Imperial College, February
0aae88cf63090ea5b2c80cd014ef4837bcbaadd83D Face Structure Extraction from Images at Arbitrary Poses and under
Arbitrary Illumination Conditions
A Thesis
Submitted to the Faculty
Of
Drexel University
By
In partial fulfillment of the
Requirements for the degree
Of
Doctor of Philosophy
October 2006
('40531119', 'Cuiping Zhang', 'cuiping zhang')
0a87d781fe2ae2e700237ddd00314dbc10b1429cDistribution Statement A: Approved for public release; distribution unlimited.
Multi-scale HOG Prescreening Algorithm for Detection of Buried
Explosive Hazards in FL-IR and FL-GPR Data
University of Missouri, Columbia, MO
('2741325', 'K. Stone', 'k. stone')
('9187168', 'J. M. Keller', 'j. m. keller')
0ad90118b4c91637ee165f53d557da7141c3fde0
0a82860d11fcbf12628724333f1e7ada8f3cd255Action Temporal Localization in Untrimmed Videos via Multi-stage CNNs
Columbia University
New York, NY, USA
('2195345', 'Zheng Shou', 'zheng shou')
('2704179', 'Dongang Wang', 'dongang wang')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
{zs2262,dw2648,sc250}@columbia.edu
0a4fc9016aacae9cdf40663a75045b71e64a70c9JOURNAL OF INFORMATION SCIENCE AND ENGINEERING XX, XXX-XXX (201X)
Illumination Normalization Based on
Homomorphic Wavelet Filtering for Face Recognition
1School of Electronic and Information Engineering
Beijing Jiaotong University
No. 3 Shang Yuan Cun, Hai Dian District
Beijing 100044, China
2School of Physics and Electrical Information Engineering
Ningxia University
Yinchuan, Ningxia 750021, China
Phone number: 086-010-51688165
The performance of face recognition techniques is greatly challenged by variations in pose,
expression and illumination. For most existing systems, the recognition rate
decreases under changes in environmental illumination. In this paper, a
Homomorphic Wavelet-based Illumination Normalization (HWIN) method is proposed
to normalize the uneven illumination of facial images.
The image is analyzed in the logarithm domain with a wavelet transform: the
approximation coefficients of the image are mapped according to a reference
illumination map, normalizing the distribution of illumination energy resulting
from different lighting effects, while the detail components are enhanced to
emphasize detail information. A Difference of Gaussians (DoG) filter is then applied to
reduce the lighting-induced noise that remains in the detail components.
The proposed methods are evaluated on the Yale B and Extended Yale B
facial databases. The experimental results show that the methods described in this study
effectively eliminate the effects of uneven illumination, greatly
improve the recognition rate, and are therefore more effective than other popular
methods.
Keywords: face recognition; homomorphic filtering; wavelet transform; illumination mapping
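The pipeline described in the abstract (log domain → wavelet decomposition → remapping of approximation coefficients → detail enhancement → DoG-style de-noising) can be sketched as follows. This is a simplified, numpy-only illustration under our own assumptions (a single-level Haar wavelet, the global mean as the reference illumination map, and box blurs approximating the Gaussian filters); it is not the authors' implementation.

```python
import numpy as np

def haar2(x):
    """Single-level 2D Haar transform (unnormalized, exactly invertible)."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row low-pass
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row high-pass
    cA = (lo[0::2] + lo[1::2]) / 2.0       # approximation
    cV = (lo[0::2] - lo[1::2]) / 2.0       # vertical detail
    cH = (hi[0::2] + hi[1::2]) / 2.0       # horizontal detail
    cD = (hi[0::2] - hi[1::2]) / 2.0       # diagonal detail
    return cA, (cH, cV, cD)

def ihaar2(cA, details):
    """Inverse of haar2."""
    cH, cV, cD = details
    lo = np.empty((2 * cA.shape[0], cA.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = cA + cV, cA - cV
    hi[0::2], hi[1::2] = cH + cD, cH - cD
    x = np.empty((lo.shape[0], 2 * lo.shape[1]))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def box_blur(x, k=9):
    """Separable box blur; stands in for a Gaussian low-pass here."""
    ker = np.ones(k) / k
    x = np.apply_along_axis(np.convolve, 1, x, ker, mode="same")
    x = np.apply_along_axis(np.convolve, 0, x, ker, mode="same")
    return x

def hwin_normalize(image, detail_gain=1.5):
    log_img = np.log1p(image.astype(np.float64))       # logarithm domain
    cA, details = haar2(log_img)
    # Map approximation coefficients toward a flat reference illumination:
    # remove the local low-frequency trend, keep the global mean.
    cA = cA - box_blur(cA) + cA.mean()
    details = tuple(detail_gain * d for d in details)  # emphasise detail
    recon = ihaar2(cA, details)
    # DoG-style filtering (difference of two box blurs) to suppress
    # lighting-induced noise remaining after reconstruction.
    recon = recon - (box_blur(recon, 3) - box_blur(recon, 7))
    return np.expm1(np.clip(recon, 0.0, None))         # back from log domain
```

`hwin_normalize(img)` returns an illumination-flattened image of the same shape as `img`; parameter values such as `detail_gain` are illustrative, not the paper's.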
1. INTRODUCTION
Automatic face recognition has received significant attention over the past several
decades due to its numerous potential applications, such as human-computer interfaces,
access control, security and surveillance, e-commerce, entertainment, and so on. Related
research performed in recent years has made great progress, and a number of face
recognition systems have achieved strong results, as shown in the latest report of the Face
Recognition Vendor Test (FRVT 2006). Despite this remarkable progress, face
recognition still faces a challenging problem: its sensitivity to dramatic
variations among images of the same face. For example, facial expression, pose, ageing,
make-up, background and illumination variations may all result in
significant variations [1-26].
Illumination variation is one of the most significant factors limiting the performance
of face recognition. Since several images of the same person appear to be dramatically
('2613621', 'Xue Yuan', 'xue yuan')
('47884608', 'Yifei Meng', 'yifei meng')
E-mail: 10111045@bjtu.edu.cn
0a85afebaa19c80fddb660110a4352fd22eb2801Neural Animation and Reenactment of Human Actor Videos
Fig. 1. We propose a novel learning-based approach for the animation and reenactment of human actor videos. The top row shows some frames of the video
We propose a method for generating (near) video-realistic animations of
real humans under user control. In contrast to conventional human char-
acter rendering, we do not require the availability of a production-quality
photo-realistic 3D model of the human, but instead rely on a video sequence
in conjunction with a (medium-quality) controllable 3D template model
of the person. With that, our approach significantly reduces production
cost compared to conventional rendering approaches based on production-
quality 3D models, and can also be used to realistically edit existing videos.
Technically, this is achieved by training a neural network that translates
simple synthetic images of a human character into realistic imagery. For
training our networks, we first track the 3D motion of the person in the
video using the template model, and subsequently generate a synthetically
rendered version of the video. These images are then used to train a con-
ditional generative adversarial network that translates synthetic images of
the 3D model into realistic imagery of the human. We evaluate our method
for the reenactment of another person that is tracked in order to obtain the
motion data, and show video results generated from artist-designed skeleton
motion. Our results outperform the state-of-the-art in learning-based human
image synthesis.
CCS Concepts: • Computing methodologies → Computer graphics;
Neural networks; Appearance and texture representations; Animation; Ren-
dering;
Additional Key Words and Phrases: Video-based Characters, Deep Learning,
Conditional GAN, Rendering-to-Video Translation
ACM Reference Format:
Animation and Reenactment of Human Actor Videos. 1, 1, Article 282
(September 2018), 13 pages. https://doi.org/10.475/123_4
INTRODUCTION
The creation of realistically rendered and controllable animations
of human characters is a crucial task in many computer graphics
applications. Virtual actors play a key role in games and visual ef-
fects, in telepresence, or in virtual and augmented reality. Today, the
plausible rendition of video-realistic characters—i.e., animations in-
distinguishable from a video of a human—under user control is also
('46458089', 'Lingjie Liu', 'lingjie liu')
('9765909', 'Weipeng Xu', 'weipeng xu')
('1699058', 'Michael Zollhöfer', 'michael zollhöfer')
('3022958', 'Hyeongwoo Kim', 'hyeongwoo kim')
('39600032', 'Florian Bernard', 'florian bernard')
('14210288', 'Marc Habermann', 'marc habermann')
('1698520', 'Wenping Wang', 'wenping wang')
('1680185', 'Christian Theobalt', 'christian theobalt')
Authors’ addresses: Lingjie Liu, liulingjie0206@gmail.com, University of Hong Kong,
Max Planck Institute for Informatics; Weipeng Xu, wxu@mpi-inf.mpg.de, Max Planck
Institute for Informatics; Michael Zollhöfer, zollhoefer@cs.stanford.edu, Stanford University; Hyeongwoo Kim,
kim@mpi-inf.mpg.de; Florian Bernard, fbernard@mpi-inf.mpg.de; Marc Habermann,
mhaberma@mpi-inf.mpg.de, Max Planck Institute for Informatics; Wenping Wang,
wenping@cs.hku.hk, University of Hong Kong; Christian Theobalt, theobalt@mpi-inf.mpg.de, Max Planck Institute for Informatics.
0ac442bb570b086d04c4d51a8410fcbfd0b1779dWarpNet: Weakly Supervised Matching for Single-view Reconstruction
University of Maryland, College Park
Manmohan Chandraker
NEC Labs America
('20615377', 'Angjoo Kanazawa', 'angjoo kanazawa')
('34734622', 'David W. Jacobs', 'david w. jacobs')
0af48a45e723f99b712a8ce97d7826002fe4d5a5
Toward Wide-Angle Microvision Sensors
Todd Zickler, Member, IEEE
('2724462', 'Sanjeev J. Koppal', 'sanjeev j. koppal')
('2407724', 'Ioannis Gkioulekas', 'ioannis gkioulekas')
('3091204', 'Travis Young', 'travis young')
('2070262', 'Hyunsung Park', 'hyunsung park')
('2140759', 'Kenneth B. Crozier', 'kenneth b. crozier')
('40431923', 'Geoffrey L. Barrows', 'geoffrey l. barrows')
0aa8a0203e5f406feb1815f9b3dd49907f5fd05bMixture subclass discriminant analysis
('1827419', 'Nikolaos Gkalelis', 'nikolaos gkalelis')
('1737436', 'Vasileios Mezaris', 'vasileios mezaris')
0ac664519b2b8abfb8966dafe60d093037275573Facial Action Unit Detection Using Kernel Partial Least Squares -
Supplemental Material
Facial Image Processing and Analysis Group, Institute for Anthropomatics
Karlsruhe Institute of Technology
D-76131 Karlsruhe, P.O. Box 6980 Germany
1. Introduction
In this document we present additional results corre-
sponding to the experiments shown in [1].
A. ROC Curves
The ROC curves for the AU estimates are shown in this
section.
A.1. Evaluation on a Single Dataset
A.1.1 Experiment on the CK+ Dataset with Eye Labels
See Figure 1.
A.1.2 Experiment on the CK+ Dataset with Automatic
Eye Detection
See Figure 2.
A.1.3 Experiment on the GEMEP-FERA Dataset
See Figure 3.
A.2. Evaluation across Datasets
A.2.1 Generalization from Constrained to less Con-
strained Condition
See Figure 4.
A.2.2 Generalization from less Constrained to Con-
strained Condition
See Figure 5.
B. F1-Score
The F1-Scores for the AU estimates are shown in this
section. If no threshold optimization is performed, the
thresholds are set to 0.5 for the PLS-based approaches and
0.0 for the SVM-based approaches. Otherwise, thresholds
are optimized using equal error rate (EER) or F1 score as
metrics [2], on either the training folds of the LOSO scheme
or the whole training data in the case of the cross-dataset tests.
Table 1. F1 scores in % on CK+ using eye labels. AVG is the
weighted average over the individual results, weighted by the
number of positive samples given in column N.
AU    N    linear PLS   RBF PLS   linear SVM   RBF SVM
1     176     78.1        77.5       69.6        71.5
2     117     80.4        76.2       78.9        76.7
4     193     74.2        75.9       72.8        68.0
5     102     77.5        76.2       74.3        73.8
6     123     72.8        68.2       67.0        65.7
7     120     64.0        51.0       51.9        42.3
9      75     84.3        84.2       84.5        83.0
11     34     15.0         5.7       14.6         0.0
12    131     84.7        81.9       78.3        80.0
15     94     60.3        51.5       52.6        49.6
17    201     77.4        78.3       73.6        76.8
20     79     64.8        57.1       49.6        28.0
23     60     35.2        28.6       28.9        14.3
24     58     38.2        26.7       14.1         9.0
25    324     85.4        86.5       86.5        86.1
26     50     15.6         7.4        5.9         0.0
27     81     85.9        83.0       84.6        77.7
AVG           72.3        69.5       67.4        64.4
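The F1-based threshold optimization described above can be sketched as a grid search over candidate thresholds. The function below is our illustrative assumption (a simple exhaustive search over the observed score values), not the code evaluated in [1]:

```python
import numpy as np

def best_f1_threshold(scores, labels):
    """Pick the decision threshold maximizing F1 on training data.

    scores: continuous classifier outputs; labels: 0/1 ground truth.
    """
    best_t, best_f1 = None, -1.0
    for t in np.unique(scores):          # every observed score is a candidate
        pred = scores >= t
        tp = int(np.sum(pred & (labels == 1)))
        fp = int(np.sum(pred & (labels == 0)))
        fn = int(np.sum(~pred & (labels == 1)))
        denom = 2 * tp + fp + fn
        f1 = 2.0 * tp / denom if denom else 0.0
        if f1 > best_f1:
            best_t, best_f1 = float(t), f1
    return best_t, best_f1
```

In the cross-dataset setting this search would be run on the whole training set; in the LOSO setting, on the training folds only.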
B.1. Evaluation on a Single Dataset
B.1.1 Experiment on the CK+ Dataset with Eye Labels
See Table 1 for F1 scores without threshold optimization,
Table 2 for F1 scores using threshold optimization based on
EER and Table 3 for F1 scores using threshold optimization
based on F1 score.
B.1.2 Experiment on the CK+ Dataset with Automatic
Eye Detection
See Table 4 for F1 scores without threshold optimization,
Table 5 for F1 scores using threshold optimization based on
EER and Table 6 for F1 scores using threshold optimization
based on F1 score.
('40303076', 'Tobias Gehrig', 'tobias gehrig')
{tobias.gehrig, ekenel}@kit.edu
0a9345ea6e488fb936e26a9ba70b0640d3730ba7Deep Bi-directional Cross-triplet Embedding for
Cross-Domain Clothing Retrieval
Northeastern University, Boston, USA
College of Computer and Information Science, Northeastern University, Boston, USA
('3343578', 'Shuhui Jiang', 'shuhui jiang')
('1746738', 'Yue Wu', 'yue wu')
('37771688', 'Yun Fu', 'yun fu')
{shjiang, yuewu, yunfu}@ece.neu.edu
0a79d0ba1a4876086e64fc0041ece5f0de90fbeaFACE ILLUMINATION NORMALIZATION
WITH SHADOW CONSIDERATION
By
SUBMITTED IN PARTIAL FULFILLMENT OF THE
REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE
AT
CARNEGIE MELLON UNIVERSITY
5000 FORBES AVENUE PITTSBURGH PA 15213-3890
MAY 2004
('3039721', 'Avinash B. Baliga', 'avinash b. baliga')
('3039721', 'Avinash B. Baliga', 'avinash b. baliga')
0a7309147d777c2f20f780a696efe743520aa2dbStories for Images-in-Sequence by using Visual
and Narrative Components
Ss. Cyril and Methodius University, Skopje, Macedonia
2 Pendulibrium, Skopje, Macedonia
3 Elevate Global, Skopje, Macedonia
('46205557', 'Marko Smilevski', 'marko smilevski')
('46242132', 'Ilija Lalkovski', 'ilija lalkovski')
{marko.smilevski,ilija}@webfactory.mk, gjorgji.madjarov@finki.ukim.mk
0a11b82aa207d43d1b4c0452007e9388a786be12Feature Level Multiple Model Fusion Using Multilinear
Subspace Analysis with Incomplete Training Set
and Its Application to Face Image Analysis
School of IoT Engineering, Jiangnan University, Wuxi, 214122, China
Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, GU2 7XH
United Kingdom
('2976854', 'Zhen-Hua Feng', 'zhen-hua feng')
('1748684', 'Josef Kittler', 'josef kittler')
xiaojun wu jnu@163.com
{Z.Feng,J.Kittler,W.Christmas}@surrey.ac.uk
0a1138276c52c734b67b30de0bf3f76b0351f097This is the author's version of an article that has been published in this journal. Changes were made to this version by the publisher prior to publication.
The final version of record is available at
http://dx.doi.org/10.1109/TIP.2016.2539502
Discriminant Incoherent Component Analysis
('2812961', 'Christos Georgakis', 'christos georgakis')
('1780393', 'Yannis Panagakis', 'yannis panagakis')
('1694605', 'Maja Pantic', 'maja pantic')
0a6a25ee84fc0bf7284f41eaa6fefaa58b5b329a('1802883', 'Soufiane Belharbi', 'soufiane belharbi')
0ae9cc6a06cfd03d95eee4eca9ed77b818b59cb7
Multi-task, multi-label and multi-domain learning with
residual convolutional networks for emotion recognition
('10157512', 'Gerard Pons', 'gerard pons')
0acf23485ded5cb9cd249d1e4972119239227ddbDual coordinate solvers for large-scale structural SVMs
UC Irvine
This manuscript describes a method for training linear SVMs (including binary SVMs, SVM regression,
and structural SVMs) from large, out-of-core training datasets. Current strategies for large-scale learning fall
into one of two camps: batch algorithms, which solve the learning problem given a finite dataset, and online
algorithms, which can process out-of-core datasets. The former typically require datasets small enough to fit
in memory. The latter are often phrased as stochastic optimization problems [4, 15]; such algorithms enjoy
strong theoretical properties but often require manually tuned annealing schedules, and may converge slowly
for problems with large output spaces (e.g., structural SVMs). We discuss an algorithm for an "intermediate"
regime in which the data is too large to fit in memory, but the active constraints (support vectors) are small
enough to remain in memory. In this case, one can design rather efficient learning algorithms that are
as stable as batch algorithms, but capable of processing out-of-core datasets. We have developed such a
MATLAB-based solver and used it to train a series of recognition systems [19, 7, 21, 12] for articulated pose
estimation, facial analysis, 3D object recognition, and action classification, all with publicly-available code.
This writeup describes the solver in detail.
Approach: Our approach is closely based on data-subsampling algorithms for collecting hard examples
[9, 10, 6], combined with the dual coordinate quadratic programming (QP) solver described in liblinear
[8]. The latter appears to be the current fastest method for learning linear SVMs. We make two extensions: (1)
we show how to generalize the solver to other types of SVM problems such as (latent) structural SVMs; (2)
we show how to modify it to behave as a partially-online algorithm, which only requires access to small
amounts of data at a time.
Overview: Sec. 1 describes a general formulation of an SVM problem that encompasses many standard
tasks such as multi-class classification and (latent) structural prediction. Sec. 2 derives its dual QP, and Sec. 3
describes a dual coordinate descent optimization algorithm. Sec. 4 describes modifications for optimizing
in an online fashion, allowing one to learn near-optimal models with a single pass over large, out-of-core
datasets. Sec. 5 briefly touches on some theoretical issues that are necessary to ensure convergence. Finally,
Sec. 6 and Sec. 7 describe modifications to our basic formulation to accommodate non-negativity constraints
and flexible regularization schemes during learning.
1 Generalized SVMs
We first describe a general formulation of an SVM which encompasses various common problems such as
binary classification, regression, and structured prediction. Assume we are given training data where the ith
example is described by a set of Ni vectors {xij} and a set of Ni scalars {lij}, where j varies from 1 to Ni.
We wish to solve the following optimization problem:
$$\operatorname*{argmin}_{w}\; L(w) \;=\; \|w\|^2 \;+\; \sum_i \max_{j \in N_i}\!\left(0,\; l_{ij} - w^T x_{ij}\right) \tag{1}$$
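The generalized objective L(w) = ||w||² + Σ_i max_{j∈N_i}(0, l_ij − wᵀx_ij) can be evaluated directly; the sketch below is our own illustration, not the manuscript's MATLAB solver. Binary classification is recovered by taking N_i = {1}, x_i1 = y_i x_i and l_i1 = 1, which reduces the per-example term to the usual hinge loss.

```python
import numpy as np

def generalized_svm_loss(w, X_sets, l_sets):
    """Evaluate L(w) = ||w||^2 + sum_i max_{j in N_i}(0, l_ij - w^T x_ij).

    X_sets[i]: (N_i, d) array of vectors x_ij for example i.
    l_sets[i]: (N_i,) array of scalars l_ij for example i.
    The max runs jointly over 0 and all j in N_i, as in objective (1).
    """
    loss = float(w @ w)                                   # ||w||^2
    for X_i, l_i in zip(X_sets, l_sets):
        loss += max(0.0, float(np.max(l_i - X_i @ w)))    # slack of example i
    return loss
```

For a binary example (x, y) with y ∈ {−1, +1}, passing `X_i = [y * x]` and `l_i = [1.0]` yields the hinge term max(0, 1 − y wᵀx).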
('1770537', 'Deva Ramanan', 'deva ramanan')
0ad4a814b30e096ad0e027e458981f812c835aa0
6448d23f317babb8d5a327f92e199aaa45f0efdc
6412d8bbcc01f595a2982d6141e4b93e7e982d0fDeep Convolutional Neural Network using Triplets of Faces, Deep Ensemble, and
Score-level Fusion for Face Recognition
1Department of Creative IT Engineering, POSTECH, Korea
2Department of Computer Science and Engineering, POSTECH, Korea
('2794366', 'Bong-Nam Kang', 'bong-nam kang')
('1804861', 'Yonghyun Kim', 'yonghyun kim')
('1695669', 'Daijin Kim', 'daijin kim')
{bnkang, gkyh0805, dkim}@postech.ac.kr
641f0989b87bf7db67a64900dcc9568767b7b50fReconstructing Faces from their Signatures using RBF
Regression
To cite this version:
Reconstructing Faces from their Signatures using RBF Regression. British Machine Vision Conference 2013, Sep 2013, Bristol, United Kingdom. pp. 103.1–103.12, 2013. DOI: 10.5244/C.27.103.
HAL Id: hal-00943426
https://hal.archives-ouvertes.fr/hal-00943426
Submitted on 13 Feb 2014
('34723309', 'Alexis Mignon', 'alexis mignon')
('34723309', 'Alexis Mignon', 'alexis mignon')
6409b8879c7e61acf3ca17bcc62f49edca627d4cLearning Finite Beta-Liouville Mixture Models via
Variational Bayes for Proportional Data Clustering
Electrical and Computer Engineering
Institute for Information Systems Engineering
Concordia University, Canada
Concordia University, Canada
('2038786', 'Wentao Fan', 'wentao fan')
('1729109', 'Nizar Bouguila', 'nizar bouguila')
wenta_fa@encs.concordia.ca
nizar.bouguila@concordia.ca
64153df77fe137b7c6f820a58f0bdb4b3b1a879bShape Invariant Recognition of Segmented Human
Faces using Eigenfaces
Department of Informatics
Technical University of Munich, Germany
('1725709', 'Zahid Riaz', 'zahid riaz')
('1746229', 'Michael Beetz', 'michael beetz')
('1699132', 'Bernd Radig', 'bernd radig')
{riaz,beetz,radig}@in.tum.de
649eb674fc963ce25e4e8ce53ac7ee20500fb0e3
64ec0c53dd1aa51eb15e8c2a577701e165b8517bOnline Regression with Feature Selection in
Stochastic Data Streams
Florida State University
Florida State University
('5517409', 'Lizhe Sun', 'lizhe sun')
('2455529', 'Adrian Barbu', 'adrian barbu')
lizhe.sun@stat.fsu.edu
abarbu@stat.fsu.edu
642c66df8d0085d97dc5179f735eed82abf110d0
6459f1e67e1ea701b8f96177214583b0349ed964GENERALIZED SUBSPACE BASED HIGH DIMENSIONAL DENSITY ESTIMATION
University of California Santa Barbara
University of California Santa Barbara
('3231876', 'Karthikeyan Shanmuga Vadivel', 'karthikeyan shanmuga vadivel')
{karthikeyan,msargin,sjoshi,manj}@ece.ucsb.edu
grafton@psych.ucsb.edu
64cf86ba3b23d3074961b485c16ecb99584401deSingle Image 3D Interpreter Network
Massachusetts Institute of Technology
Stanford University
3Facebook AI Research
4Google Research
('3045089', 'Jiajun Wu', 'jiajun wu')
('3222730', 'Tianfan Xue', 'tianfan xue')
('35198686', 'Joseph J. Lim', 'joseph j. lim')
('39402399', 'Yuandong Tian', 'yuandong tian')
('1763295', 'Joshua B. Tenenbaum', 'joshua b. tenenbaum')
('1690178', 'Antonio Torralba', 'antonio torralba')
('1768236', 'William T. Freeman', 'william t. freeman')
6424b69f3ff4d35249c0bb7ef912fbc2c86f4ff4Deep Learning Face Attributes in the Wild∗
The Chinese University of Hong Kong
The Chinese University of Hong Kong
('3243969', 'Ziwei Liu', 'ziwei liu')
('1693209', 'Ping Luo', 'ping luo')
{lz013,pluo,xtang}@ie.cuhk.edu.hk, xgwang@ee.cuhk.edu.hk
6479b61ea89e9d474ffdefa71f068fbcde22cc44University of Exeter
Department of Computer Science
Some Topics on Similarity Metric Learning
June 2015
Supervised by Dr. Yiming Ying
Philosophy in Computer Science, June 2015.
This thesis is available for Library use on the understanding that it is copyright material
and that no quotation from the thesis may be published without proper acknowledgement.
I certify that all material in this thesis which is not my own work has been identified and
that no material has previously been submitted and approved for the award of a degree by this or
any other University
(signature) .................................................................................................
('1954340', 'Qiong Cao', 'qiong cao')
('1954340', 'Qiong Cao', 'qiong cao')
64e75f53ff3991099c3fb72ceca55b76544374e5Simultaneous Feature Selection and Classifier Training via Linear
Programming: A Case Study for Face Expression Recognition
Computer Sciences Department
University of Wisconsin-Madison
Madison, WI 53706
('1822413', 'Guodong Guo', 'guodong guo')
('1724754', 'Charles R. Dyer', 'charles r. dyer')
{gdguo, dyer}@cs.wisc.edu
64f9519f20acdf703984f02e05fd23f5e2451977Learning Temporal Alignment Uncertainty for
Efficient Event Detection
Image and Video Laboratory, Queensland University of Technology (QUT), Brisbane, QLD, Australia
The Robotics Institute, Carnegie Mellon University, 5000 Forbes Ave, PA, USA
('2838646', 'Iman Abbasnejad', 'iman abbasnejad')
('1729760', 'Sridha Sridharan', 'sridha sridharan')
('1980700', 'Simon Denman', 'simon denman')
('3140440', 'Clinton Fookes', 'clinton fookes')
('1820249', 'Simon Lucey', 'simon lucey')
Email:{i.abbasnejad, s.sridharan, s.denman, c.fookes}@qut.edu.au, slucey@cs.cmu.edu
641f34deb3bdd123c6b6e7b917519c3e56010cb7
64782a2bc5da11b1b18ca20cecf7bdc26a538d68JOURNAL OF INFORMATION SCIENCE AND ENGINEERING XX, XXX-XXX (2011)
Facial Expression Recognition using
Spectral Supervised Canonical Correlation Analysis*
Institute of Information Science
Beijing Jiaotong University
Beijing, 100044 P.R. China
Feature extraction plays an important role in facial expression recognition. Canonical
correlation analysis (CCA), which studies the correlation between two random vectors,
is a major linear feature extraction method based on feature fusion. Recent studies
have shown that facial expression images often reside on a latent nonlinear manifold.
However, neither CCA nor its kernel version KCCA, which are globally linear and nonlinear
respectively, can effectively exploit local structure information to discover the low-dimensional
manifold embedded in the original data. Inspired by the successful application of spectral
graph theory in classification, we propose spectral supervised canonical correlation
analysis (SSCCA) to overcome these shortcomings of CCA and KCCA. In SSCCA, we
construct an affinity matrix, which incorporates both the class information and the local
structure information of the data points, as the supervised matrix. The spectral feature of
the covariance matrices is used to extract a new combined feature with more discriminative
information, which can reveal the nonlinear manifold structure of the data. Furthermore,
we propose a unified framework for CCA that enables an effective, non-empirical
structural comparison of different forms of CCA and provides a
way to extend the CCA algorithm. The correlation feature extraction power is then pro-
posed to evaluate the effectiveness of our method. Experimental results on two facial ex-
pression databases validate the effectiveness of our method.
Keywords: spectral supervised canonical correlation analysis, spectral classification, fea-
ture fusion, feature extraction, facial expression recognition
1. INTRODUCTION
Facial expression conveys visual human emotions, which makes facial expression recognition (FER) play an important role in human–computer interaction, image retrieval, synthetic face animation, video conferencing, and human emotion analysis [1, 2]. Due to its wide range of applications, FER has attracted much attention in recent years. Generally speaking, a FER system consists of three major components: face detection, facial expression feature extraction, and facial expression classification [1, 2]. Since an appropriate facial expression representation can effectively reduce the complexity of classifier design and improve the performance of the FER system, most research currently concentrates on how to extract effective facial expression features.
A variety of methods have been proposed for facial expression feature extraction [3-7], and there are generally two common approaches: single feature extraction and feature fusion. Single feature extraction is based on a particular method, e.g., principal component analysis (PCA) [3], Fisher's linear discriminant (FLD) [4], locality preserving
*This paper was supported by the National Natural Science Foundation of China (Grant No.60973060), Spe-
cialized Research Fund for the Doctoral Program of Higher Education (Grant No. 200800040008), Beijing
Program (Grant No. YB20081000401) and the Fundamental Research Funds for the Central Universities
(Grant No. 2011JBM022).
('1701978', 'Song Guo', 'song guo')
('1738408', 'Qiuqi Ruan', 'qiuqi ruan')
('1718667', 'Zhan Wang', 'zhan wang')
('1702894', 'Shuai Liu', 'shuai liu')
645de797f936cb19c1b8dba3b862543645510544Deep Temporal Linear Encoding Networks
1ESAT-PSI, KU Leuven, 2CVL, ETH Z¨urich
('3310120', 'Ali Diba', 'ali diba')
('50633941', 'Vivek Sharma', 'vivek sharma')
('1681236', 'Luc Van Gool', 'luc van gool')
{firstname.lastname}@esat.kuleuven.be
6462ef39ca88f538405616239471a8ea17d76259
64d5772f44efe32eb24c9968a3085bc0786bfca7Morphable Displacement Field Based Image
Matching for Face Recognition across Pose
1 Key Lab of Intelligent Information Processing of Chinese Academy of Sciences
CAS), Institute of Computing Technology, CAS, Beijing, 100190, China
Graduate University of Chinese Academy of Sciences, Beijing 100049, China
3 Omron Social Solutions Co., LTD., Kyoto, Japan
('1688086', 'Shaoxin Li', 'shaoxin li')
('1731144', 'Xin Liu', 'xin liu')
('1695600', 'Xiujuan Chai', 'xiujuan chai')
('1705483', 'Haihong Zhang', 'haihong zhang')
('1710195', 'Shihong Lao', 'shihong lao')
('1685914', 'Shiguang Shan', 'shiguang shan')
{shaoxin.li,xiujuan.chai,xin.liu,shiguang.shan}@vipl.ict.ac.cn,
lao@ari.ncl.omron.co.jp, angelazhang@ssb.kusatsu.omron.co.jp
64d7e62f46813b5ad08289aed5dc4825d7ec5cffYAMAGUCHI et al.: MIX AND MATCH
Mix and Match: Joint Model for Clothing and
Attribute Recognition
http://vision.is.tohoku.ac.jp/~kyamagu
Tohoku University
Sendai, Japan
2 NTT
Yokosuka, Japan
Tokyo University of Science
Tokyo, Japan
('1721910', 'Kota Yamaguchi', 'kota yamaguchi')
('1718872', 'Takayuki Okatani', 'takayuki okatani')
('1745497', 'Kyoko Sudo', 'kyoko sudo')
('2023568', 'Kazuhiko Murasaki', 'kazuhiko murasaki')
('2113938', 'Yukinobu Taniguchi', 'yukinobu taniguchi')
okatani@vision.is.tohoku.ac.jp
sudo.kyoko@lab.ntt.co.jp
murasaki.kazuhiko@lab.ntt.co.jp
ytaniguti@ms.kagu.tus.ac.jp
90ac0f32c0c29aa4545ed3d5070af17f195d015f
90d735cffd84e8f2ae4d0c9493590f3a7d99daf1Original Research Paper
American Journal of Engineering and Applied Sciences
Recognition of Faces using Efficient Multiscale Local Binary
Pattern and Kernel Discriminant Analysis in Varying
Environment
V.H. Mankar
Priyadarshini College of Engg, Nagpur, India
2Department of Electronics Engg, Government Polytechnic, Nagpur, India
Article history
Received: 20-06-2017
Revised: 18-07-2017
Accepted: 21-08-2017
Corresponding Author:
Department of Electronics
Engg, Priyadarshini College of
Engg, Nagpur, India
('9128944', 'Sujata G. Bhele', 'sujata g. bhele')
Email: sujata_bhele@yahoo.co.in
90298f9f80ebe03cb8b158fd724551ad711d4e71A Pursuit of Temporal Accuracy in General Activity Detection
The Chinese University of Hong Kong
2Computer Vision Laboratory, ETH Zurich, Switzerland
('3331521', 'Yuanjun Xiong', 'yuanjun xiong')
('1695765', 'Yue Zhao', 'yue zhao')
('33345248', 'Limin Wang', 'limin wang')
('1807606', 'Dahua Lin', 'dahua lin')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
900207b3bc3a4e5244cae9838643a9685a84fee0Reconstructing Geometry from Its Latent Structures
A Thesis
Submitted to the Faculty
of
Drexel University
by
Geoffrey Oxholm
in partial fulfillment of the
requirements for the degree
of
Doctor of Philosophy
June 2014
90498b95fe8b299ce65d5cafaef942aa58bd68b7Face Recognition: Primates in the Wild∗
Michigan State University, East Lansing, MI, USA
University of Chester, UK, 3Conservation Biologist
('32623642', 'Debayan Deb', 'debayan deb')
('46516859', 'Susan Wiper', 'susan wiper')
('9658130', 'Sixue Gong', 'sixue gong')
('9644181', 'Yichun Shi', 'yichun shi')
('41022894', 'Cori Tymoszek', 'cori tymoszek')
E-mail: 1{debdebay, gongsixu, shiyichu, tymoszek, jain}@cse.msu.edu,
2s.wiper@chester.ac.uk, 3alexandra.h.russo@gmail.com
90cc2f08a6c2f0c41a9dd1786bae097f9292105eTop-down Attention Recurrent VLAD Encoding
for Action Recognition in Videos
1 Fondazione Bruno Kessler, Trento, Italy
University of Trento, Trento, Italy
('1756362', 'Swathikiran Sudhakaran', 'swathikiran sudhakaran')
('1717522', 'Oswald Lanz', 'oswald lanz')
{sudhakaran,lanz}@fbk.eu
90fb58eeb32f15f795030c112f5a9b1655ba3624INTERNATIONAL JOURNAL OF RESEARCH IN COMPUTER APPLICATIONS AND ROBOTICS
www.ijrcar.com
Vol.4 Issue 6, Pg.: 12-27
June 2016
INTERNATIONAL JOURNAL OF
RESEARCH IN COMPUTER
APPLICATIONS AND ROBOTICS
ISSN 2320-7345
FACE AND IRIS RECOGNITION IN A
VIDEO SEQUENCE USING DBPNN AND
ADAPTIVE HAMMING DISTANCE
PG Scholar, Hindusthan College of Engineering and Technology, Coimbatore, India
Hindusthan College of Engineering and Technology, Coimbatore, India
('3406423', 'S. Revathy', 's. revathy')
Email id: revathysreeni14@gmail.com
90c4f15f1203a3a8a5bf307f8641ba54172ead30A 2D Morphable Model of Craniofacial Profile
and Its Application to Craniosynostosis
University of York, York, UK
2 Alder Hey Craniofacial Unit, Liverpool, UK
https://www-users.cs.york.ac.uk/~nep/research/LYHM/
('1694260', 'Hang Dai', 'hang dai')
('1737428', 'Nick Pears', 'nick pears')
('14154312', 'Christian Duncan', 'christian duncan')
{hd816,nick.pears}@york.ac.uk
Christian.Duncan@alderhey.nhs.uk
902114feaf33deac209225c210bbdecbd9ef33b1KAN et al.: SIDE-INFORMATION BASED LDA FOR FACE RECOGNITION
Side-Information based Linear
Discriminant Analysis for Face
Recognition
Digital Media Research Center
Institute of Computing
Technology, CAS, Beijing, China
2 Key Laboratory of Intelligent
Information Processing, Chinese
Academy of Sciences, Beijing,
China
3 School of Computer Engineering,
Nanyang Technological
University, Singapore
('1693589', 'Meina Kan', 'meina kan')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1714390', 'Dong Xu', 'dong xu')
('1710220', 'Xilin Chen', 'xilin chen')
mnkan@jdl.ac.cn
sgshan@jdl.ac.cn
dongxu@ntu.edu.sg
xlchen@jdl.ac.cn
90ad0daa279c3e30b360f9fe9371293d68f4cebfSPATIO-TEMPORAL FRAMEWORK AND
ALGORITHMS FOR VIDEO-BASED FACE
RECOGNITION
DOCTOR OF PHILOSOPHY
MULTIMEDIA UNIVERSITY
MAY 2014
('2339975', 'JOHN SEE', 'john see')
90a754f597958a2717862fbaa313f67b25083bf9REVIEW
published: 16 November 2015
doi: 10.3389/frobt.2015.00028
A Review of Human Activity
Recognition Methods
University of Ioannina, Ioannina, Greece, 2 Computational Biomedicine
Laboratory, University of Houston, Houston, TX, USA
Recognizing human activities from video sequences or still images is a challenging task due to problems such as background clutter, partial occlusion, and changes in scale, viewpoint, lighting, and appearance. Many applications, including video surveillance systems, human-computer interaction, and robotics for human behavior characterization, require
a multiple activity recognition system. In this work, we provide a detailed review of recent
and state-of-the-art research advances in the field of human activity classification. We
propose a categorization of human activity methodologies and discuss their advantages
and limitations. In particular, we divide human activity classification methods into two large
categories according to whether they use data from different modalities or not. Then, each
of these categories is further analyzed into sub-categories, which reflect how they model
human activities and what type of activities they are interested in. Moreover, we provide
a comprehensive analysis of the existing, publicly available human activity classification
datasets and examine the requirements for an ideal human activity recognition dataset.
Finally, we report the characteristics of future research directions and present some open
issues on human activity recognition.
Keywords: human activity recognition, activity categorization, activity datasets, action representation,
review, survey
1. INTRODUCTION
Human activity recognition plays a significant role in human-to-human interaction and interpersonal relations. Although activity provides information about the identity of a person, their personality, and their psychological state, this information is difficult to extract. The human ability to recognize another person's activities is one of the main subjects of study of the scientific areas of computer vision and machine learning. As a result of this research, many applications, including video surveillance systems, human-computer interaction, and robotics for human behavior characterization, require a multiple activity recognition system.
Among various classification techniques, two main questions arise: “What action?” (i.e., the
recognition problem) and “Where in the video?” (i.e., the localization problem). When attempting to
recognize human activities, one must determine the kinetic states of a person, so that the computer
can efficiently recognize this activity. Human activities, such as “walking” and “running,” arise very
naturally in daily life and are relatively easy to recognize. On the other hand, more complex activities,
such as “peeling an apple,” are more difficult to identify. Complex activities may be decomposed into
other simpler activities, which are generally easier to recognize. Usually, the detection of objects in
a scene may help to better understand human activities as it may provide useful information about
the ongoing event (Gupta and Davis, 2007).
Edited by:
Venkatesh Babu Radhakrishnan,
Indian Institute of Science, India
Reviewed by:
Stefano Berretti,
University of Florence, Italy
Xinlei Chen,
Carnegie Mellon University, USA
*Correspondence:
Specialty section:
This article was submitted to Vision
Systems Theory, Tools and
Applications, a section of the
journal Frontiers in Robotics and AI
Received: 09 July 2015
Accepted: 29 October 2015
Published: 16 November 2015
Citation:
Vrigkas M, Nikou C and Kakadiaris IA
(2015) A Review of Human Activity
Recognition Methods.
Front. Robot. AI 2:28.
doi: 10.3389/frobt.2015.00028
Frontiers in Robotics and AI | www.frontiersin.org
November 2015 | Volume 2 | Article 28
('2045915', 'Michalis Vrigkas', 'michalis vrigkas')
('1727495', 'Christophoros Nikou', 'christophoros nikou')
('1706204', 'Ioannis A. Kakadiaris', 'ioannis a. kakadiaris')
('1727495', 'Christophoros Nikou', 'christophoros nikou')
cnikou@cs.uoi.gr
90d9209d5dd679b159051a8315423a7f796d704dTemporal Sequence Distillation: Towards Few-Frame Action
Recognition in Videos
Wuhan University
SenseTime Research
SenseTime Research
The Chinese University of Hong Kong
SenseTime Research
SenseTime Research
('40192003', 'Zhaoyang Zhang', 'zhaoyang zhang')
('1874900', 'Zhanghui Kuang', 'zhanghui kuang')
('47571885', 'Ping Luo', 'ping luo')
('1739512', 'Litong Feng', 'litong feng')
('1726357', 'Wei Zhang', 'wei zhang')
zhangzhaoyang@whu.edu.cn
kuangzhanghui@sensetime.com
pluo@ie.cuhk.edu.hk
fenglitong@sensetime.com
wayne.zhang@sensetime.com
90dd2a53236b058c79763459b9d8a7ba5e58c4f1Capturing Correlations Among Facial Parts for
Facial Expression Analysis
Department of Computer Science
Queen Mary, University of London
Mile End Road, London E1 4NS, UK
('10795229', 'Caifeng Shan', 'caifeng shan')
('2073354', 'Shaogang Gong', 'shaogang gong')
('2803283', 'Peter W. McOwan', 'peter w. mcowan')
{cfshan, sgg, pmco}@dcs.qmul.ac.uk
90cb074a19c5e7d92a1c0d328a1ade1295f4f311MIT. Media Laboratory Affective Computing Technical Report #571
Appears in IEEE International Workshop on Analysis and Modeling of Faces and Gestures , Oct 2003
Fully Automatic Upper Facial Action Recognition
MIT Media Laboratory
Cambridge, MA 02139
('2189118', 'Ashish Kapoor', 'ashish kapoor')
90b11e095c807a23f517d94523a4da6ae6b12c76
90c2d4d9569866a0b930e91713ad1da01c2a6846528
The Open Automation and Control Systems Journal, 2014, 6, 528-534
Dimensionality Reduction Based on Low Rank Representation
Open Access
School of Electronic and Information Engineering, Tongji University, Shanghai, China
('40328872', 'Cheng Luo', 'cheng luo')
('40174994', 'Yang Xiang', 'yang xiang')
Send Orders for Reprints to reprints@benthamscience.ae
907475a4febf3f1d4089a3e775ea018fbec895feSTATISTICAL MODELING FOR FACIAL EXPRESSION ANALYSIS AND SYNTHESIS
Heudiasyc Laboratory, CNRS, University of Technology of Compi`egne
BP 20529, 60205 COMPIEGNE Cedex, FRANCE.
('2371236', 'Bouchra Abboud', 'bouchra abboud')
('1742818', 'Franck Davoine', 'franck davoine')
E-mail: Franck.Davoine@hds.utc.fr
9028fbbd1727215010a5e09bc5758492211dec19Solving the Uncalibrated Photometric Stereo
Problem using Total Variation
1 IRIT, UMR CNRS 5505, Toulouse, France
2 Dept. of Computer Science, Univ. of Copenhagen, Denmark
('2233590', 'Jean-Denis Durou', 'jean-denis durou')
yvain.queau@enseeiht.fr
durou@irit.fr
francois@diku.dk
bff77a3b80f40cefe79550bf9e220fb82a74c084Facial Expression Recognition Based on Local Binary Patterns and
Local Fisher Discriminant Analysis
1School of Physics and Electronic Engineering
Taizhou University
Taizhou 318000
CHINA
2Department of Computer Science
Taizhou University
Taizhou 318000
CHINA
('1695589', 'SHIQING ZHANG', 'shiqing zhang')
('1730594', 'XIAOMING ZHAO', 'xiaoming zhao')
('38909691', 'BICHENG LEI', 'bicheng lei')
tzczsq@163.com, leibicheng@163.com
tzxyzxm@163.com
bf03f0fe8f3ba5b118bdcbb935bacb62989ecb11EFFECT OF FACIAL EXPRESSIONS ON FEATURE-BASED
LANDMARK LOCALIZATION IN STATIC GREY SCALE
IMAGES
Research Group for Emotions, Sociality, and Computing, Tampere Unit for Computer-Human Interaction (TAUCHI)
University of Tampere, Kanslerinnrinne 1, 33014, Tampere, Finland
Keywords:
Image processing and computer vision, segmentation, edge detection, facial landmark localization, facial
expressions, action units.
('2935367', 'Yulia Gizatdinova', 'yulia gizatdinova')
('1718377', 'Veikko Surakka', 'veikko surakka')
{yulia.gizatdinova, veikko.surakka}@cs.uta.fi
bf961e4a57a8f7e9d792e6c2513ee1fb293658e9EURASIP Journal on Applied Signal Processing 2004:16, 2533–2543
© 2004 Hindawi Publishing Corporation
Robust Face Image Matching under
Illumination Variations
National Tsing Hua University, 101 Kuang Fu Road, Section 2, Hsinchu 300, Taiwan
National Tsing Hua University, 101 Kuang Fu Road, Section 2, Hsinchu 300, Taiwan
National Tsing Hua University, 101 Kuang Fu Road, Section 2, Hsinchu 300, Taiwan
Received 1 September 2003; Revised 21 September 2004
Face image matching is an essential step for face recognition and face verification. It is difficult to achieve robust face matching under various image acquisition conditions. In this paper, a novel face image matching algorithm robust against illumination variations is proposed. The proposed image matching algorithm is motivated by the characteristics of high image gradient along the face contours. We define a new consistency measure as the inner product between two normalized gradient vectors at the corresponding locations in two images. The normalized gradient is obtained by dividing the computed gradient vector by the corresponding locally maximal gradient magnitude. Then we compute the average consistency measure over all pairs of corresponding face contour pixels as the robust matching measure between two face images. To alleviate problems due to shadow and intensity saturation, we introduce an intensity weighting function for each individual consistency measure to form a weighted average of the consistency measures. This robust consistency measure is further extended to integrate multiple face images of the same person captured under different illumination conditions, thus making our face matching algorithm robust. Experimental results of applying the proposed face image matching algorithm on some well-known face datasets are given in comparison with some existing face recognition methods. The results show that the proposed algorithm consistently outperforms other methods and achieves higher than 93% recognition rate with three reference images for different datasets under different lighting conditions.
Keywords and phrases: robust image matching, face recognition, illumination variations, normalized gradient.
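The consistency measure sketched in the abstract (inner product of gradient vectors, each normalized by the locally maximal gradient magnitude, averaged over contour pixels) can be illustrated in NumPy. This is a simplified sketch, not the authors' implementation: it uses a naive square window for the local maximum and omits the intensity weighting function and the multi-image integration.

```python
import numpy as np

def _local_max(mag, r=2):
    # Locally maximal gradient magnitude in a (2r+1)x(2r+1) window (naive loop).
    p = np.pad(mag, r, mode="edge")
    H, W = mag.shape
    out = np.zeros_like(mag)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out = np.maximum(out, p[dy:dy + H, dx:dx + W])
    return out

def normalized_gradient(img, r=2, eps=1e-8):
    # Gradient vector divided by the locally maximal gradient magnitude,
    # so every normalized vector has length at most 1.
    gy, gx = np.gradient(img.astype(float))
    m = _local_max(np.hypot(gx, gy), r) + eps
    return gx / m, gy / m

def consistency(img1, img2, contour_mask, r=2):
    # Average inner product of normalized gradients over contour pixels.
    gx1, gy1 = normalized_gradient(img1, r)
    gx2, gy2 = normalized_gradient(img2, r)
    return float((gx1 * gx2 + gy1 * gy2)[contour_mask].mean())
```

A matched pair of images yields a measure near its self-consistency ceiling of 1, while an unrelated pair averages close to zero, which is what makes the score usable as a matching measure.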
1. INTRODUCTION
Face recognition has attracted the attention of a number of researchers from academia and industry because of its challenges and related applications, such as security access control, personal ID verification, e-commerce, video surveillance, and so forth. The details of these applications are referred to in the surveys [1, 2, 3]. Face matching is the most important and crucial component in face recognition. Although there have been many efforts in previous works to achieve robust face matching under a wide variety of different image capturing conditions, such as lighting changes, head pose or view angle variations, expression variations, and so forth, these problems are still difficult to overcome. It is a great challenge to achieve robust face matching under all kinds of different face imaging variations. A practical face recognition system needs to work under different imaging conditions, such as different face poses or different illumination conditions. Therefore, a robust face matching method is essential to the development of an illumination-insensitive face recognition system. In this paper, we particularly focus on robust face matching under different illumination conditions.
Many researchers have proposed face recognition methods or face verification systems under different illumination conditions. Some of these methods extracted representative features from face images to compute the distance between these features. In general, these methods can be categorized into the feature-based approach [4, 5, 6, 7, 8, 9, 10, 11], the appearance-based approach [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], and the hybrid approach [22, 24].
('2393568', 'Chyuan-Huei Thomas Yang', 'chyuan-huei thomas yang')
('1696527', 'Shang-Hong Lai', 'shang-hong lai')
('39505245', 'Long-Wen Chang', 'long-wen chang')
Email: chyang@cs.nthu.edu.tw
Email: lai@cs.nthu.edu.tw
Email: lchang@cs.nthu.edu.tw
bf54b5586cdb0b32f6eed35798ff91592b03fbc4Journal of Signal and Information Processing, 2017, 8, 78-98
http://www.scirp.org/journal/jsip
ISSN Online: 2159-4481
ISSN Print: 2159-4465
Methodical Analysis of Western-Caucasian and
East-Asian Basic Facial Expressions of Emotions
Based on Specific Facial Regions
The University of Electro-Communications, Tokyo, Japan
How to cite this paper: Benitez-Garcia, G.,
Nakamura, T. and Kaneko, M. (2017) Me-
thodical Analysis of Western-Caucasian and
East-Asian Basic Facial Expressions of Emo-
tions Based on Specific Facial Regions. Jour-
nal of Signal and Information Processing, 8,
78-98.
https://doi.org/10.4236/jsip.2017.82006
Received: March 30, 2017
Accepted: May 15, 2017
Published: May 18, 2017
Copyright © 2017 by authors and
Scientific Research Publishing Inc.
This work is licensed under the Creative
Commons Attribution International
License (CC BY 4.0).
http://creativecommons.org/licenses/by/4.0/
Open Access
('2567776', 'Gibran Benitez-Garcia', 'gibran benitez-garcia')
('1693821', 'Tomoaki Nakamura', 'tomoaki nakamura')
('49061848', 'Masahide Kaneko', 'masahide kaneko')
bf1e0279a13903e1d43f8562aaf41444afca4fdc International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 04 Issue: 10 | Oct -2017 www.irjet.net p-ISSN: 2395-0072
Different Viewpoints of Recognizing Fleeting Facial Expressions with
DWT
('1848141', 'SANJEEV SHRIVASTAVA', 'sanjeev shrivastava')
('34417227', 'MOHIT GANGWAR', 'mohit gangwar')
bf0f0eb0fb31ee498da4ae2ca9b467f730ea9103Brain Sci. 2015, 5, 369-386; doi:10.3390/brainsci5030369
OPEN ACCESS
brain sciences
ISSN 2076-3425
www.mdpi.com/journal/brainsci/
Article
Emotion Regulation in Adolescent Males with Attention-Deficit
Hyperactivity Disorder: Testing the Effects of Comorbid
Conduct Disorder
School of Psychology, Cardiff University, Cardiff, CF10 3AT, UK
MRC Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff
Tel.: +44-2920-874630; Fax: +44-2920-874545.
Received: 17 July 2015 / Accepted: 25 August 2015 / Published: 7 September 2015
('5383377', 'Clare Northover', 'clare northover')
('4094135', 'Anita Thapar', 'anita thapar')
('39373878', 'Kate Langley', 'kate langley')
('4552820', 'Stephanie van Goozen', 'stephanie van goozen')
('2928107', 'Derek G.V. Mitchell', 'derek g.v. mitchell')
E-Mails: NorthoverC@cardiff.ac.uk (C.N.); LangleyK@cardiff.ac.uk (K.L.)
CF24 4HQ, UK; E-Mail: Thapar@cardiff.ac.uk
* Author to whom correspondence should be addressed; E-Mail: vangoozens@cardiff.ac.uk;
bf3f8726f2121f58b99b9e7287f7fbbb7ab6b5f5Visual face scanning and emotion
perception analysis between Autistic
and Typically Developing children
University of Dhaka
University of Dhaka
Dhaka, Bangladesh
Dhaka, Bangladesh
('24613724', 'Uzma Haque Syeda', 'uzma haque syeda')
('24572640', 'Syed Mahir Tazwar', 'syed mahir tazwar')
bf4825474673246ae855979034c8ffdb12c80a98UNIVERSITY OF CALIFORNIA
RIVERSIDE
Active Learning in Multi-Camera Networks, With Applications in Person
Re-Identification
A Dissertation submitted in partial satisfaction
of the requirements for the degree of
Doctor of Philosophy
in
Electrical Engineering
by
December 2015
Dissertation Committee:
('40521893', 'Abir Das', 'abir das')
('1688416', 'Amit K. Roy-Chowdhury', 'amit k. roy-chowdhury')
('1751869', 'Anastasios Mourikis', 'anastasios mourikis')
('1778860', 'Walid Najjar', 'walid najjar')
bf8a520533f401347e2f55da17383a3e567ef6d8Bounded-Distortion Metric Learning
The Chinese University of Hong Kong
University of Chinese Academy of Sciences
Tsinghua University
The Chinese University of Hong Kong
('2246396', 'Renjie Liao', 'renjie liao')
('1788070', 'Jianping Shi', 'jianping shi')
('2376789', 'Ziyang Ma', 'ziyang ma')
('37670465', 'Jun Zhu', 'jun zhu')
('1729056', 'Jiaya Jia', 'jiaya jia')
rjliao,jpshi@cse.cuhk.edu.hk
maziyang08@gmail.com
dcszj@mail.tsinghua.edu.cn
leojia@cse.cuhk.edu.hk
bf5940d57f97ed20c50278a81e901ae4656f0f2cQuery-free Clothing Retrieval via Implicit
Relevance Feedback
('26331884', 'Zhuoxiang Chen', 'zhuoxiang chen')
('1691461', 'Zhe Xu', 'zhe xu')
('48380192', 'Ya Zhang', 'ya zhang')
('48531192', 'Xiao Gu', 'xiao gu')
bff567c58db554858c7f39870cff7c306523dfeeNeural Task Graphs: Generalizing to Unseen
Tasks from a Single Video Demonstration
Stanford University
('38485317', 'De-An Huang', 'de-an huang')
('4734949', 'Suraj Nair', 'suraj nair')
('2068265', 'Danfei Xu', 'danfei xu')
('2117748', 'Yuke Zhu', 'yuke zhu')
('1873736', 'Animesh Garg', 'animesh garg')
('3216322', 'Li Fei-Fei', 'li fei-fei')
('1702137', 'Silvio Savarese', 'silvio savarese')
('9200530', 'Juan Carlos Niebles', 'juan carlos niebles')
bfb98423941e51e3cd067cb085ebfa3087f3bfbeSparseness helps: Sparsity Augmented
Collaborative Representation for Classification
('2941543', 'Naveed Akhtar', 'naveed akhtar')
('1688013', 'Faisal Shafait', 'faisal shafait')
bffbd04ee5c837cd919b946fecf01897b2d2d432Boston University Computer Science Technical Report No
Facial Feature Tracking and Occlusion
Recovery in American Sign Language
1 Department of Computer Science, 2 Department of Modern Foreign Languages
Boston University
Facial features play an important role in expressing grammatical information
in signed languages, including American Sign Language (ASL). Gestures such
as raising or furrowing the eyebrows are key indicators of constructions such
as yes-no questions. Periodic head movements (nods and shakes) are also an
essential part of the expression of syntactic information, such as negation
(associated with a side-to-side headshake). Therefore, identification of these
facial gestures is essential to sign language recognition. One problem with
detection of such grammatical indicators is occlusion recovery. If the signer’s
hand blocks his/her eyebrows during production of a sign, it becomes difficult
to track the eyebrows. We have developed a system to detect such grammatical
markers in ASL that recovers promptly from occlusion.
Our system detects and tracks evolving templates of facial features, which
are based on an anthropometric face model, and interprets the geometric
relationships of these templates to identify grammatical markers. It was tested
on a variety of ASL sentences signed by various Deaf1 native signers and
detected facial gestures used to express grammatical information, such as
raised and furrowed eyebrows as well as headshakes.
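The system above interprets geometric relationships between tracked facial-feature templates to flag grammatical markers. As a toy illustration only (the rules, coordinates, and thresholds here are invented, not the paper's), a raised-eyebrow check and a headshake check might look like:

```python
def eyebrow_raised(brow_y, eye_y, neutral_gap, ratio=1.15):
    """Toy rule: flag a raised brow when the brow-eye vertical gap exceeds
    the signer's neutral gap by a relative threshold. Image y grows
    downward, so the brow's y-coordinate is smaller than the eye's."""
    return (eye_y - brow_y) > ratio * neutral_gap

def headshake(head_x_track, min_reversals=2):
    """Toy rule: call a side-to-side headshake when the horizontal head
    position reverses direction at least `min_reversals` times."""
    deltas = [b - a for a, b in zip(head_x_track, head_x_track[1:])]
    moves = [d for d in deltas if d != 0]
    reversals = sum(1 for a, b in zip(moves, moves[1:]) if a * b < 0)
    return reversals >= min_reversals
```

A real detector would calibrate the neutral gap per signer from anthropometric proportions and smooth the head track before counting reversals.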
1 Introduction
A computer-based translator of American Sign Language (ASL) would be useful in enabling people who do not know ASL to communicate with Deaf1 individuals. Facial gesture interpretation would be an essential part of an interface that eliminates the language barrier between Deaf and hearing people. Our work focuses on facial feature detection and tracking in ASL, specifically in occlusion processing and recovery.
1 The word “Deaf” is capitalized to designate those individuals who are linguistically and culturally deaf and who use ASL as their primary language, whereas “deaf” refers to the status of those who cannot hear [25].
('2313369', 'Thomas J. Castelli', 'thomas j. castelli')
('1723703', 'Margrit Betke', 'margrit betke')
('1732359', 'Carol Neidle', 'carol neidle')
d35534f3f59631951011539da2fe83f2844ca245Published as a conference paper at ICLR 2018
SEMANTICALLY DECOMPOSING THE LATENT SPACES
OF GENERATIVE ADVERSARIAL NETWORKS
Department of Music
University of California, San Diego
Department of Genetics
Stanford University
Zachary C. Lipton
Carnegie Mellon University
Amazon AI
Department of Computer Science
University of California, San Diego
('1872307', 'Chris Donahue', 'chris donahue')
('1693411', 'Akshay Balsubramani', 'akshay balsubramani')
('1814008', 'Julian McAuley', 'julian mcauley')
cdonahue@ucsd.edu
abalsubr@stanford.edu
zlipton@cmu.edu
jmcauley@eng.ucsd.edu
d3edbfe18610ce63f83db83f7fbc7634dde1eb40Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)
Large Graph Hashing with Spectral Rotation
School of Computer Science and Center for OPTical IMagery Analysis and Learning (OPTIMAL),
Northwestern Polytechnical University
Xi’an 710072, Shaanxi, P. R. China
('1720243', 'Xuelong Li', 'xuelong li')
('48080389', 'Di Hu', 'di hu')
('1688370', 'Feiping Nie', 'feiping nie')
xuelong li@opt.ac.cn, hdui831@mail.nwpu.edu.cn, feipingnie@gmail.com
d3424761e06a8f5f3c1f042f1f1163a469872129Pose-invariant, model-based object
recognition, using linear combination of views
and Bayesian statistics.
A dissertation submitted in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
of the
University of London
Department of Computer Science
University College London
2009
('1797883', 'Vasileios Zografos', 'vasileios zografos')
d33b26794ea6d744bba7110d2d4365b752d7246fTransfer Feature Representation via Multiple Kernel Learning
1. Science and Technology on Integrated Information System Laboratory
2. State Key Laboratory of Computer Science
Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
('40451597', 'Wei Wang', 'wei wang')
('39483391', 'Hao Wang', 'hao wang')
('1783918', 'Chen Zhang', 'chen zhang')
('34532334', 'Fanjiang Xu', 'fanjiang xu')
weiwangpenny@gmail.com
d3b73e06d19da6b457924269bb208878160059daProceedings of the 5th International Conference on Computing and Informatics, ICOCI 2015
11-13 August, 2015 Istanbul, Turkey. Universiti Utara Malaysia (http://www.uum.edu.my )
Paper No.
065
IMPLEMENTATION OF AN AUTOMATED SMART HOME
CONTROL FOR DETECTING HUMAN EMOTIONS VIA FACIAL
DETECTION
Osman4
('9164797', 'Lim Teck Boon', 'lim teck boon')
('2229534', 'Mohd Heikal Husin', 'mohd heikal husin')
('1881455', 'Zarul Fitri Zaaba', 'zarul fitri zaaba')
1Universiti Sains Malaysia, Malaysia, ltboon.ucom10@student.usm.my
2Universiti Sains Malaysia, Malaysia, heikal@usm.my
3Universiti Sains Malaysia, Malaysia, zarulfitri@usm.my
4Universiti Sains Malaysia, Malaysia, azam@usm.my
d3d5d86afec84c0713ec868cf5ed41661fc96edcA Comprehensive Analysis of Deep Learning Based Representation
for Face Recognition
Mostafa Mehdipour Ghazi
Faculty of Engineering and Natural Sciences
Sabanci University, Istanbul, Turkey
Hazım Kemal Ekenel
Department of Computer Engineering
Istanbul Technical University, Istanbul, Turkey
mehdipour@sabanciuniv.edu
ekenel@itu.edu.tr
d3e04963ff42284c721f2bc6a90b7a9e20f0242fOn Forensic Use of Biometrics
University of Southampton, UK, 2University of Warwick, UK
This chapter discusses the use of biometrics techniques within forensic science. It outlines the
historic connections between the subjects and then examines face and ear biometrics as two
case studies to demonstrate the application, the challenges and the acceptability of biometric
features and techniques in forensics. The detailed examination starts with one of the most
common and familiar biometric features, face, and then examines an emerging biometric
feature, ear.
1.1 Introduction
Forensic science largely concerns the analysis of crime: its existence, the perpetrator(s) and
the modus operandi. The science of biometrics has been developing approaches that can
be used to automatically identify individuals by personal characteristics. The relationship
of biometrics and forensics centers primarily on identifying people: the central question is
whether a perpetrator can reliably be identified from scene-of-crime data or can reliably
be excluded, wherein the reliability concerns reasonable doubt. The personal characteristics
which can be used as biometrics include face, finger, iris, gait, ear, electroencephalogram
(EEG), handwriting, voice and palm. Those which are suited to forensic use concern traces
left at a scene-of-crime, such as latent fingerprints, palmprints or earprints, or traces which
have been recorded, such as face, gait or ear in surveillance video.
Biometrics is generally concerned with the recognition of individuals based on their
physical or behavioral attributes. So far, biometric techniques have primarily been used to
assure identity (in immigration and commerce etc.). These techniques are largely automatic
or semi-automatic approaches steeped in pattern recognition and computer vision. The main
steps of a biometric recognition approach are: (1) acquisition of the biometric data; (2)
localization and alignment of the data; (3) feature extraction; and (4) matching. Feature
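As a minimal illustration of the four main steps of a biometric recognition approach listed above, the pipeline can be sketched in Python (all function names and toy data below are hypothetical stand-ins, not drawn from the chapter or any particular system):

```python
import math
import random

# Sketch of the four steps: (1) acquisition, (2) localization/alignment,
# (3) feature extraction, (4) matching. Toy data, not a real biometric API.

_RNG = random.Random(0)

ENROLLED = {  # idealized per-subject trait patterns (hypothetical)
    "alice": [0.9, 0.1, 0.4, 0.7],
    "bob":   [0.2, 0.8, 0.6, 0.1],
}

def acquire(subject_id, noise=0.02):
    # (1) Acquisition: a noisy sensor reading of the subject's trait.
    return [x + _RNG.gauss(0, noise) for x in ENROLLED[subject_id]]

def align(sample):
    # (2) Localization/alignment: toy normalization (mean-centering).
    m = sum(sample) / len(sample)
    return [x - m for x in sample]

def extract(sample):
    # (3) Feature extraction: scale the aligned sample to unit length.
    n = math.sqrt(sum(x * x for x in sample)) or 1.0
    return [x / n for x in sample]

def match(probe, gallery):
    # (4) Matching: nearest enrolled identity by Euclidean distance.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(gallery, key=lambda sid: dist(probe, gallery[sid]))

gallery = {sid: extract(align(acquire(sid))) for sid in ENROLLED}
probe = extract(align(acquire("alice")))
print(match(probe, gallery))  # prints "alice"
```

In the forensic setting discussed here, the probe would be a degraded scene-of-crime trace rather than a clean enrollment sample, which is what makes steps (2) and (3) the hard part.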
('2804800', 'Banafshe Arbab-Zavar', 'banafshe arbab-zavar')
('40655450', 'Xingjie Wei', 'xingjie wei')
('2365596', 'John D. Bustard', 'john d. bustard')
('1727698', 'Mark S. Nixon', 'mark s. nixon')
('1799504', 'Chang-Tsun Li', 'chang-tsun li')
1{baz10v,jdb,msn}@ecs.soton.ac.uk, 2{x.wei, c-t.li}@warwick.ac.uk
d3d71a110f26872c69cf25df70043f7615edcf92
Learning Compact Feature Descriptor and Adaptive
Matching Framework for Face Recognition
('1911510', 'Zhifeng Li', 'zhifeng li')
('2856494', 'Dihong Gong', 'dihong gong')
('1720243', 'Xuelong Li', 'xuelong li')
('1692693', 'Dacheng Tao', 'dacheng tao')
d35c82588645b94ce3f629a0b98f6a531e4022a3Scalable Online Annotation &
Object Localisation
For Broadcast Media Production
Submitted for the Degree of
Master of Philosophy
from the
University of Surrey
Centre for Vision, Speech and Signal Processing
Faculty of Engineering and Physical Sciences
University of Surrey
Guildford, Surrey GU2 7XH, U.K.
August 2016
('39222045', 'Charles Gray', 'charles gray')
d3b18ba0d9b247bfa2fb95543d172ef888dfff95Learning and Using the Arrow of Time
1Harvard University 2University of Southern California
3University of Oxford 4Massachusetts Institute of Technology 5Google Research
Figure 1: Seeing these ordered frames from videos, can you tell whether each video is playing forward or backward? (answer
below1). Depending on the video, solving the task may require (a) low-level understanding (e.g. physics), (b) high-level
reasoning (e.g. semantics), or (c) familiarity with very subtle effects or with (d) camera conventions. In this work, we learn
and exploit several types of knowledge to predict the arrow of time automatically with neural network models trained on
large-scale video datasets.
('1766333', 'Donglai Wei', 'donglai wei')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
('1768236', 'William T. Freeman', 'william t. freeman')
donglai@seas.harvard.edu, limjj@usc.edu, az@robots.ox.ac.uk, billf@mit.edu
d309e414f0d6e56e7ba45736d28ee58ae2bad478Efficient Two-Stream Motion and Appearance 3D CNNs for
Video Classification
Ali Diba
ESAT-KU Leuven
Ali Pazandeh
Sharif UTech
Luc Van Gool
ESAT-KU Leuven, ETH Zurich
ali.diba@esat.kuleuven.be
pazandeh@ee.sharif.ir
luc.vangool@esat.kuleuven.be
d394bd9fbaad1f421df8a49347d4b3fca307db83Recognizing Facial Expressions at Low Resolution
Deparment of Computer Science, Queen Mary, University of London, London, E1 4NS, UK
('10795229', 'Caifeng Shan', 'caifeng shan')
('2073354', 'Shaogang Gong', 'shaogang gong')
('2803283', 'Peter W. McOwan', 'peter w. mcowan')
{cfshan, sgg, pmco}@dcs.qmul.ac.uk
d3f5a1848b0028d8ab51d0b0673732cad2e3c8c9
d3b550e587379c481392fb07f2cbbe11728cf7a6Small Sample Size Face Recognition using Random Quad-Tree based
Ensemble Algorithm
Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
('7923772', 'Cuicui Zhang', 'cuicui zhang')
('2735528', 'Xuefeng Liang', 'xuefeng liang')
('1731351', 'Takashi Matsuyama', 'takashi matsuyama')
zhang@vision.kuee.kyoto-u.ac.jp, fxliang, tmg@i.kyoto-u.ac.jp
d307a766cc9c728a24422313d4c3dcfdb0d16dd5Deep Keyframe Detection in Human Action Videos
School of Physics and Optoelectronic Engineering, Xidian University, China
School of Computer Science and Software Engineering, University of Western Australia
College of Electrical and Information Engineering, Hunan University, China
School of Software, Xidian University, China
('46580760', 'Xiang Yan', 'xiang yan')
('1746166', 'Syed Zulqarnain Gilani', 'syed zulqarnain gilani')
('2404621', 'Hanlin Qin', 'hanlin qin')
('3446916', 'Mingtao Feng', 'mingtao feng')
('48570713', 'Liang Zhang', 'liang zhang')
('46332747', 'Ajmal Mian', 'ajmal mian')
xyan@stu.xidian.edu.cn, hlqin@mail.xidian.edu.cn
{zulqarnain.gilani, ajmal.mian}@uwa.edu.au
mintfeng@hnu.edu.cn
liangzhang@xidian.edu.cn
d31af74425719a3840b496b7932e0887b35e9e0dArticle
A Multimodal Deep Log-Based User Experience (UX)
Platform for UX Evaluation
Ubiquitous Computing Lab, Kyung Hee University
College of Electronics and Information Engineering, Sejong University
Received: 16 March 2018; Accepted: 15 May 2018; Published: 18 May 2018
('33081617', 'Jamil Hussain', 'jamil hussain')
('2794241', 'Wajahat Ali Khan', 'wajahat ali khan')
('27531310', 'Anees Ul Hassan', 'anees ul hassan')
('1765947', 'Muhammad Afzal', 'muhammad afzal')
('1700806', 'Sungyoung Lee', 'sungyoung lee')
Giheung-gu, Yongin-si, Gyeonggi-do, Seoul 446-701, Korea; jamil@oslab.khu.ac.kr (J.H.);
wajahat.alikhan@oslab.khu.ac.kr (W.A.K.); hth@oslab.khu.ac.kr (T.H.); bilalrizvi@oslab.khu.ac.kr (H.S.M.B.);
jhb@oslab.khu.ac.kr (J.B.); anees@oslab.khu.ac.kr (A.U.H.)
Seoul 05006, Korea; mafzal@sejong.ac.kr
* Correspondence: sylee@oslab.khu.ac.kr; Tel.: +82-31-201-2514
d3b0839324d0091e70ce34f44c979b9366547327Precise Box Score: Extract More Information from Datasets to Improve the
Performance of Face Detection
1School of Information and Communication Engineering
2Beijing Key Laboratory of Network System and Network Culture
Beijing University of Posts and Telecommunications, Beijing, China
('49712251', 'Ce Qi', 'ce qi')
('1684263', 'Fei Su', 'fei su')
('8120542', 'Pingyu Wang', 'pingyu wang')
d30050cfd16b29e43ed2024ae74787ac0bbcf2f7Facial Expression Classification Using
Convolutional Neural Network and Support Vector
Machine
Graduate Program in Electrical and Computer Engineering
Federal University of Technology - Paraná
Department of Electrical and Computer Engineering
Opus College of Engineering
Marquette University
('11857183', 'Cristian Bortolini', 'cristian bortolini')
('2357308', 'Humberto R. Gamba', 'humberto r. gamba')
('2432946', 'Gustavo Benvenutti Borba', 'gustavo benvenutti borba')
('2767912', 'Henry Medeiros', 'henry medeiros')
Email: vpillajr@mail.com
d3faed04712b4634b47e1de0340070653546deb2Neural Best-Buddies: Sparse Cross-Domain Correspondence
Fig. 1. Top 5 Neural Best-Buddies for two cross-domain image pairs. Using deep features of a pre-trained neural network, our coarse-to-fine sparse
correspondence algorithm first finds high-level, low resolution, semantically matching areas (indicated by the large blue circles), then narrows down the search
area to intermediate levels (middle green circles), until precise localization on well-defined edges in the pixel space (colored in corresponding unique colors).
Correspondence between images is a fundamental problem in computer
vision, with a variety of graphics applications. This paper presents a novel
method for sparse cross-domain correspondence. Our method is designed for
pairs of images where the main objects of interest may belong to different
semantic categories and differ drastically in shape and appearance, yet still
contain semantically related or geometrically similar parts. Our approach
operates on hierarchies of deep features, extracted from the input images
by a pre-trained CNN. Specifically, starting from the coarsest layer in both
hierarchies, we search for Neural Best Buddies (NBB): pairs of neurons
that are mutual nearest neighbors. The key idea is then to percolate NBBs
through the hierarchy, while narrowing down the search regions at each
level and retaining only NBBs with significant activations. Furthermore, in
order to overcome differences in appearance, each pair of search regions is
transformed into a common appearance.
We evaluate our method via a user study, in addition to comparisons
with alternative correspondence approaches. The usefulness of our method
is demonstrated using a variety of graphics applications, including cross
domain image alignment, creation of hybrid images, automatic image mor-
phing, and more.
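The core pairing step described above, finding mutual nearest neighbors ("best buddies") between two sets of feature vectors at one level of the hierarchy, can be sketched as follows (a simplified illustration with toy 2-D points, not the paper's implementation, which operates on CNN activations with coarse-to-fine region narrowing):

```python
import math

# Mutual nearest-neighbor ("best buddies") criterion at a single level.

def nearest(idx, src, dst):
    # Index in dst of the point closest (Euclidean) to src[idx].
    p = src[idx]
    return min(range(len(dst)), key=lambda j: math.dist(p, dst[j]))

def best_buddies(feats_a, feats_b):
    # Pairs (i, j) such that j is i's nearest neighbor in B
    # AND i is j's nearest neighbor in A (mutuality).
    pairs = []
    for i in range(len(feats_a)):
        j = nearest(i, feats_a, feats_b)
        if nearest(j, feats_b, feats_a) == i:
            pairs.append((i, j))
    return pairs

A = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
B = [(0.1, 0.1), (5.2, 4.9)]
print(best_buddies(A, B))  # [(0, 0), (2, 1)]
```

Note that point A[1] pairs with nothing: its nearest neighbor in B prefers A[0], so the mutuality test rejects it. This asymmetry filtering is what makes the resulting correspondences sparse and reliable.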
CCS Concepts: • Computing methodologies → Interest point and salient
region detections; Matching; Image manipulation;
© 2018 Association for Computing Machinery.
This is the author’s version of the work. It is posted here for your personal use. Not for
redistribution. The definitive Version of Record was published in ACM Transactions on
Graphics, https://doi.org/10.1145/3197517.3201332.
Additional Key Words and Phrases: cross-domain correspondence, image
hybrids, image morphing
ACM Reference Format:
Kfir Aberman, Jing Liao, Mingyi Shi, Dani Lischinski, Baoquan Chen, and Daniel Cohen-Or. 2018. Neural Best-Buddies: Sparse Cross-Domain Correspondence. ACM Trans. Graph. 37, 4, Article 69. https://doi.org/10.1145/3197517.3201332
INTRODUCTION
Finding correspondences between a pair of images has been a long
standing problem, with a multitude of applications in computer
vision and graphics. In particular, sparse sets of corresponding point
pairs may be used for tasks such as template matching, image align-
ment, and image morphing, to name a few. Over the years, a variety
of dense and sparse correspondence methods have been developed,
most of which assume that the input images depict the same scene
or object (with differences in viewpoint, lighting, object pose, etc.),
or a pair of objects from the same class.
In this work, we are concerned with sparse cross-domain corre-
spondence: a more general and challenging version of the sparse
correspondence problem, where the object of interest in the two
input images can differ more drastically in their shape and appear-
ance, such as objects belonging to different semantic categories
(domains). It is, however, assumed that the objects contain at least
some semantically related parts or geometrically similar regions, oth-
erwise the correspondence task cannot be considered well-defined.
Two examples of cross-domain scenarios and the results of our ap-
proach are shown in Figure 1. We focus on sparse correspondence,
since in many cross-domain image pairs, dense correspondence
ACM Transactions on Graphics, Vol. 37, No. 4, Article 69. Publication date: August 2018.
('3451442', 'Kfir Aberman', 'kfir aberman')
('39768043', 'Jing Liao', 'jing liao')
('5807605', 'Mingyi Shi', 'mingyi shi')
('1684384', 'Dani Lischinski', 'dani lischinski')
('1748939', 'Baoquan Chen', 'baoquan chen')
('1701009', 'Daniel Cohen-Or', 'daniel cohen-or')
d3c004125c71942846a9b32ae565c5216c068d1eRESEARCH ARTICLE
Recognizing Age-Separated Face Images:
Humans and Machines
West Virginia University, Morgantown, West Virginia, United States of America, 2. IIIT Delhi, New Delhi
Delhi, India
('3017294', 'Daksha Yadav', 'daksha yadav')
('39129417', 'Richa Singh', 'richa singh')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
('2487227', 'Afzel Noore', 'afzel noore')
*mayank@iiitd.ac.in
d350a9390f0818703f886138da27bf8967fe8f51LIGHTING DESIGN FOR PORTRAITS WITH A VIRTUAL LIGHT STAGE
Institute for Vision and Graphics, University of Siegen, Germany
('1967283', 'Davoud Shahlaei', 'davoud shahlaei')
('2712313', 'Marcel Piotraschke', 'marcel piotraschke')
('2880906', 'Volker Blanz', 'volker blanz')
d33fcdaf2c0bd0100ec94b2c437dccdacec66476Neurons with Paraboloid Decision Boundaries for
Improved Neural Network Classification
Performance
('2320550', 'Nikolaos Tsapanos', 'nikolaos tsapanos')
('1737071', 'Anastasios Tefas', 'anastasios tefas')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
d4a5eaf2e9f2fd3e264940039e2cbbf08880a090An Occluded Stacked Hourglass Approach to Facial
Landmark Localization and Occlusion Estimation
University of California San Diego
('2812409', 'Kevan Yuen', 'kevan yuen')
kcyuen@eng.ucsd.edu, mtrivedi@eng.ucsd.edu
d46b790d22cb59df87f9486da28386b0f99339d3Learning Face Deblurring Fast and Wide
University of Bern
Switzerland
Amazon Research
Germany
University of Bern
Switzerland
('39866194', 'Meiguang Jin', 'meiguang jin')
('36266446', 'Michael Hirsch', 'michael hirsch')
('1739080', 'Paolo Favaro', 'paolo favaro')
jin@inf.unibe.ch
hirsch@amazon.com
favaro@inf.unibe.ch
d41c11ebcb06c82b7055e2964914b9af417abfb2CDI-Type I: Unsupervised and Weakly-Supervised
1 Introduction
Discovery of Facial Events
The face is one of the most powerful channels of nonverbal communication. Facial expression has been a
focus of emotion research for over a hundred years [12]. It is central to several leading theories of emotion
[18, 31, 54] and has been the focus of at times heated debate about issues in emotion science [19, 24, 50].
Facial expression figures prominently in research on almost every aspect of emotion, including psychophysiology [40], neural correlates [20], development [11], perception [4], addiction [26], social processes [30],
depression [49] and other emotion disorders [55], to name a few. In general, facial expression provides cues
about emotional response, regulates interpersonal behavior, and communicates aspects of psychopathology.
Because of its importance to behavioral science and the emerging fields of computational behavior
science, perceptual computing, and human-robot interaction, significant efforts have been applied toward
developing algorithms that automatically detect facial expression. With few exceptions, previous work on
facial expression relies on supervised approaches to learning (i.e. event categories are defined in advance
in labeled training data). While supervised learning has important advantages, two critical limitations may
be noted. First, because labeling facial expression is highly labor intensive, progress in automated facial
expression recognition and analysis is slowed. For the most detailed and comprehensive labeling or coding
systems, such as Facial Action Coding System (FACS), three to four months is typically required to train
a coder (’coding’ refers to the labeling of video using behavioral descriptors). Once trained, each minute
of video may require 1 hour or more to code [9]. No wonder relatively few databases are yet available,
especially those of real-world rather than posed behavior [61]. Second, research has been limited to the
perceptual categories used by human observers. Those categories were operationalized in large part based
on technology available in the past [36]. While a worthy goal of computer vision and machine learning
is to efficiently replicate human-based measurement, should that be our only goal? New measurement
approaches make possible new scientific discoveries. Two in particular, unsupervised and weakly-supervised
learning have the potential to inform new ways of perceiving and modeling human behavior, to impact the
infrastructure of science, and contribute to the design of perceptual computing applications.
We propose that unsupervised and weakly-supervised approaches to automatic facial expression analysis
can increase the efficiency of current measurement approaches in behavioral science, demonstrate conver-
gent validity with supervised approaches, and lead to new knowledge in clinical and developmental science.
Specifically, we will:
• Develop two novel non-parametric algorithms for unsupervised and weakly-supervised time-series
analysis. The proposed approaches are general and can be applied to a myriad of problems in behav-
ioral science and computer vision (e.g., gesture or activity recognition).
• Exploit the potential of these algorithms in four applications:
1) New tools to improve the reliability and utility of human FACS coding. Using unsupervised learn-
ing, we will develop and validate a computer-assisted approach to FACS coding that doubles the
efficiency of human FACS coding.
2) At present, taxonomies of facial expression are based on FACS or other observer-based schemes.
Consequently, approaches to automatic facial expression recognition are dependent on access to cor-
puses of FACS or similarly labeled video. In the proposed work we raise the question of whether
d444e010049944c1b3438c9a25ae09b292b17371Structure Preserving Video Prediction
Shanghai Institute for Advanced Communication and Data Science
Shanghai Key Laboratory of Digital Media Processing and Transmission
Shanghai Jiao Tong University, Shanghai 200240, China
('47882735', 'Jingwei Xu', 'jingwei xu')
('47889348', 'Shuo Cheng', 'shuo cheng')
{xjwxjw,nibingbing,Leezf,xkyang}@sjtu.edu.cn, acccheng94@gmail.com
d46fda4b49bbc219e37ef6191053d4327e66c74bFacial Expression Recognition Based on Complexity Perception Classification
Algorithm
School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
('36047279', 'Tianyuan Chang', 'tianyuan chang')
('9725901', 'Guihua Wen', 'guihua wen')
('39946628', 'Yang Hu', 'yang hu')
('35847383', 'JiaJiong Ma', 'jiajiong ma')
tianyuan_chang@163.com, crghwen@scut.edu.cn
d448d67c6371f9abf533ea0f894ef2f022b12503Weakly Supervised Collective Feature Learning from Curated Media
1. NTT Communication Science Laboratories, Japan.
University of Cambridge, United Kingdom
The University of Tokyo, Japan
Technical University of Munich, Germany
5. Uber AI Labs, USA.
('2374364', 'Yusuke Mukuta', 'yusuke mukuta')
('34454585', 'Akisato Kimura', 'akisato kimura')
('2584289', 'David B. Adrian', 'david b. adrian')
('1983575', 'Zoubin Ghahramani', 'zoubin ghahramani')
mukuta@mi.t.u-tokyo.ac.jp, akisato@ieee.org, david.adrian@tum.de, zoubin@eng.cam.ac.uk
d492dbfaa42b4f8b8a74786d7343b3be6a3e9a1dDeep Cost-Sensitive and Order-Preserving Feature Learning for
Cross-Population Age Estimation
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
University of Chinese Academy of Sciences
3 KingSoft Ltd.
4 CAS Center for Excellence in Brain Science and Intelligence Technology
5 Vimicro AI Chip Technology Corporation
Birkbeck University of London
('2168945', 'Kai Li', 'kai li')
('1757173', 'Junliang Xing', 'junliang xing')
('49734675', 'Chi Su', 'chi su')
('40506509', 'Weiming Hu', 'weiming hu')
('2373307', 'Yundong Zhang', 'yundong zhang')
{kai.li,jlxing,wmhu}@nlpr.ia.ac.cn suchi@kingsoft.com raymond@vimicro.com sjmaybank@dcs.bbk.ac.uk
d444368421f456baf8c3cb089244e017f8d32c41CNN for IMU Assisted Odometry Estimation using Velodyne LiDAR
('3414588', 'Martin Velas', 'martin velas')
('2131298', 'Michal Spanel', 'michal spanel')
('1700956', 'Michal Hradis', 'michal hradis')
('1785162', 'Adam Herout', 'adam herout')
d4885ca24189b4414031ca048a8b7eb2c9ac646cEfficient Facial Representations for Age, Gender
and Identity Recognition in Organizing Photo
Albums using Multi-output CNN
Samsung-PDMI Joint AI Center
Mathematics
National Research University Higher School of Economics
Nizhny Novgorod, Russia
('35153729', 'Andrey V. Savchenko', 'andrey v. savchenko')
d4c7d1a7a03adb2338704d2be7467495f2eb6c7b
d4001826cc6171c821281e2771af3a36dd01ffc0Modélisation de contextes pour l'annotation sémantique de vidéos (Context Modeling for Semantic Video Annotation)
To cite this version:
Ecole Nationale Supérieure des Mines de Paris, 2013. French. <pastel-00958135>
HAL Id: pastel-00958135
https://pastel.archives-ouvertes.fr/pastel-00958135
Submitted on 11 Mar 2014
HAL is a multi-disciplinary open access
archive for the deposit and dissemination of sci-
entific research documents, whether they are pub-
lished or not. The documents may come from
teaching and research institutions in France or
abroad, or from public or private research centers
('2482072', 'Nicolas Ballas', 'nicolas ballas')
('2482072', 'Nicolas Ballas', 'nicolas ballas')
d46b4e6871fc9974542215f001e92e3035aa08d9A Gabor Quotient Image for Face Recognition
under Varying Illumination
Mahanakorn University of Technology
51 Cheum-Sampan Rd., Nong Chok, Bangkok, THAILAND 10530
('1805935', 'Sanun Srisuk', 'sanun srisuk')
('2337544', 'Amnart Petpon', 'amnart petpon')
sanun@mut.ac.th, amnartpe@dtac.co.th
d458c49a5e34263c95b3393386b5d76ba770e497Middle-East Journal of Scientific Research 20 (1): 01-13, 2014
ISSN 1990-9233
© IDOSI Publications, 2014
DOI: 10.5829/idosi.mejsr.2014.20.01.11434
A Comparative Analysis of Gender Classification Techniques
Shaheed Zulfikar Ali Bhutto Institute of Science and Technology, Islamabad, Pakistan
('46883468', 'Sajid Ali Khan', 'sajid ali khan')
('48767110', 'Maqsood Ahmad', 'maqsood ahmad')
('2521631', 'Naveed Riaz', 'naveed riaz')
d454ad60b061c1a1450810a0f335fafbfeceecccDeep Regression Forests for Age Estimation
1 Key Laboratory of Specialty Fiber Optics and Optical Access Networks,
Shanghai Institute for Advanced Communication and Data Science
School of Communication and Information Engineering, Shanghai University
Johns Hopkins University
College of Computer and Control Engineering, Nankai University 4 Hikvision Research
('41187410', 'Wei Shen', 'wei shen')
('9544564', 'Yilu Guo', 'yilu guo')
('47906413', 'Yan Wang', 'yan wang')
('1681247', 'Kai Zhao', 'kai zhao')
('49292319', 'Bo Wang', 'bo wang')
{shenwei1231,gyl.luan0,wyanny.9,zhaok1206,wangbo.yunze,alan.l.yuille}@gmail.com
d40cd10f0f3e64fd9b0c2728089e10e72bea9616Article
Enhancing Face Identification Using Local Binary
Patterns and K-Nearest Neighbors
School of Communication Engineering, Hangzhou Dianzi University, Xiasha Higher Education Zone
Received: 21 March 2017; Accepted: 29 August 2017; Published: 5 September 2017
('11249315', 'Idelette Laure Kambi Beli', 'idelette laure kambi beli')
('2826297', 'Chunsheng Guo', 'chunsheng guo')
Hangzhou 310018, China; guo.chsh@gmail.com
* Correspondence: kblaure@yahoo.fr
d4ebf0a4f48275ecd8dbc2840b2a31cc07bd676d
d4e669d5d35fa0ca9f8d9a193c82d4153f5ffc4eA Lightened CNN for Deep Face Representation
School of Computer and Communication Engineering
University of Science and Technology Beijing, Beijing, China
National Laboratory of Pattern Recognition
Institute of Automation Chinese Academy of Sciences, Beijing, China
('2225749', 'Xiang Wu', 'xiang wu')
('1705643', 'Ran He', 'ran he')
('1757186', 'Zhenan Sun', 'zhenan sun')
aflredxiangwu@gmail.com
{rhe, znsun}@nlpr.ia.ac.cn
d46e793b945c4f391031656357625e902c4405e8Face-off: Automatic Alteration of Facial Features
Department of Information Management
National Taiwan University of Science and Technology
No. 43, Sec. 4, Keelung Road
Taipei, 106, Taiwan, ROC
('40119465', 'Jia-Kai Chou', 'jia-kai chou')
('2241272', 'Chuan-Kai Yang', 'chuan-kai yang')
('2553196', 'Sing-Dong Gong', 'sing-dong gong')
A9409004@mail.ntust.edu.tw,ckyang@cs.ntust.edu.tw,hgznrn@uj.com.tw
d44a93027208816b9e871101693b05adab576d89
d4c2d26523f577e2d72fc80109e2540c887255c8Face-space Action Recognition by Face-Object Interactions
Weizmann Institute of Science
Rehovot, 7610001, Israel
('32928116', 'Amir Rosenfeld', 'amir rosenfeld')
('1743045', 'Shimon Ullman', 'shimon ullman')
{amir.rosenfeld,shimon.ullman}@weizmann.ac.il
d4b88be6ce77164f5eea1ed2b16b985c0670463aTECHNICAL REPORT JAN.15.2016
A Survey of Different 3D Face Reconstruction
Methods
Department of Computer Science and Engineering
('2357264', 'Amin Jourabloo', 'amin jourabloo')
jourablo@msu.edu
d44ca9e7690b88e813021e67b855d871cdb5022fQUT Digital Repository:
http://eprints.qut.edu.au/
Zhang, Ligang and Tjondronegoro, Dian W. (2009) Selecting, optimizing and
fusing ‘salient’ Gabor features for facial expression recognition. In: Neural
Information Processing (Lecture Notes in Computer Science), 1-5 December
2009, Hotel Windsor Suites Bangkok, Bangkok.

© Copyright 2009 Springer-Verlag GmbH Berlin Heidelberg
baaaf73ec28226d60d923bc639f3c7d507345635Stanford University
CS229 : Machine Learning techniques
Project report
Emotion Classification on face images
Authors:
Instructor
December 12, 2015
('40503018', 'Mikael Jorda', 'mikael jorda')
('2765850', 'Nina Miolane', 'nina miolane')
('34699434', 'Andrew Ng', 'andrew ng')
ba2bbef34f05551291410103e3de9e82fdf9ddddA Study on Cross-Population Age Estimation
LCSEE, West Virginia University
LCSEE, West Virginia University
('1822413', 'Guodong Guo', 'guodong guo')
('1720735', 'Chao Zhang', 'chao zhang')
guodong.guo@mai1.wvu.edu
cazhang@mix.wvu.edu
bafb8812817db7445fe0e1362410a372578ec1fc
Image-Quality-Based Adaptive Face Recognition
('2284264', 'Harin Sellahewa', 'harin sellahewa')
baa0fe4d0ac0c7b664d4c4dd00b318b6d4e09143International Journal of Signal Processing, Image Processing and Pattern Recognition
Vol. 8, No. 1 (2015), pp. 9-22
http://dx.doi.org/10.14257/ijsip.2015.8.1.02
Facial Expression Analysis using Active Shape Model
School of Engineering, University of Portsmouth, United Kingdom
('2226048', 'Reda Shbib', 'reda shbib')
('32991189', 'Shikun Zhou', 'shikun zhou')
reda.shbib@port.ac.uk, Shikun.zhou@port.ac.uk
ba99c37a9220e08e1186f21cab11956d3f4fccc2A Fast Factorization-based Approach to Robust PCA
Southern Illinois University, Carbondale, IL 62901 USA
('33048613', 'Chong Peng', 'chong peng')
('1686710', 'Zhao Kang', 'zhao kang')
('39951979', 'Qiang Cheng', 'qiang cheng')
Email: {pchong,zhao.kang,qcheng}@siu.edu
ba816806adad2030e1939450226c8647105e101cMindLAB at the THUMOS Challenge
Fabián Páez
Fabio A. González
MindLAB Research Group
Bogotá, Colombia
('1939861', 'Jorge A. Vanegas', 'jorge a. vanegas')
fmpaezri@unal.edu.co
javanegasr@unal.edu.co
fagonzalezo@unal.edu.co
badcd992266c6813063c153c41b87babc0ba36a3Recent Advances in Object Detection in the Age
of Deep Convolutional Neural Networks
Frédéric Jurie(1)
(∗) equal contribution
(1)Normandie Univ, UNICAEN, ENSICAEN, CNRS
(2)Safran Electronics and Defense
September 11, 2018
('51443250', 'Shivang Agarwal', 'shivang agarwal')
('35527701', 'Jean Ogier Du Terrail', 'jean ogier du terrail')
ba788365d70fa6c907b71a01d846532ba3110e31
badcfb7d4e2ef0d3e332a19a3f93d59b4f85668eThe Application of Extended Geodesic Distance
in Head Poses Estimation
Institute of Computing Technology
Chinese Academy of Sciences, Beijing 100080, China
2 Department of Computer Science and Engineering,
Harbin Institute of Technology, Harbin, China
3 Graduate School of the Chinese Academy of Sciences, Beijing 100039, China
('1798982', 'Bingpeng Ma', 'bingpeng ma')
('1684164', 'Fei Yang', 'fei yang')
('1698902', 'Wen Gao', 'wen gao')
('1740430', 'Baochang Zhang', 'baochang zhang')
ba8a99d35aee2c4e5e8a40abfdd37813bfdd0906ELEKTROTEHNIŠKI VESTNIK 78(1-2): 12–17, 2011
EXISTING SEPARATE ENGLISH EDITION
The Use of Affective Computing in Recommender Systems
Marko Tkalčič, Andrej Košir, Jurij Tasič
1University of Ljubljana, Faculty of Electrical Engineering, Tržaška 25, 1000 Ljubljana, Slovenia
2University of Ljubljana, Faculty of Computer and Information Science, Tržaška 25, 1000 Ljubljana, Slovenia
Abstract. This paper presents the results of three studies on improving the performance of multimedia recommender systems using methods from affective computing.
We improved a content-based recommender system with metadata describing users' emotional responses.
In a collaborative recommender system we achieved a significant improvement in the cold-start region by introducing a new similarity measure based on the five-factor personality model. We also developed a system for non-invasive tagging of content with affective parameters, which, however, is not yet mature enough for use in recommender systems.
Keywords: recommender systems, affective computing, machine learning, user profile, emotions
The Use of Affective Computing in Recommender Systems
In this paper we present the results of three investigations of
our broad research on the usage of affect and personality in
recommender systems. We improved the accuracy of content-
based recommender system with the inclusion of affective
parameters of user and item modeling. We improved the
accuracy of a content filtering recommender system under the
cold start conditions with the introduction of a personality
based user similarity measure. Furthermore we developed a
system for implicit tagging of content with affective metadata.
1 INTRODUCTION
Users (consumers) of multimedia (MM) content are in an increasingly difficult position: within the large quantity of content available, it is hard for them to find what suits them. They rely on recommender systems, which use users' personal preferences to select a smaller set of relevant MM content from which the user can choose more easily. No recommender system known today fully meets users' needs, as the selection of recommended content is usually of unsatisfactory quality [10]. The goal of this paper is to present methods from affective computing (see [12]) for improving the quality of recommender systems and to consolidate the corresponding terminology for the Slovenian context.
1.1 Problem Description
There are two ways to improve the quality of recommender systems: (i) optimizing the algorithms, or (ii) using better features that explain more of the unknown variance [8].
Received 13 October 2010
Accepted 1 February 2011
In this paper we present improvements to recommender systems based on new features derived from users' emotional responses and personality traits. These features explain a large part of users' preferences, which are expressed as ratings of individual items (e.g., a Likert scale, binary ratings, etc.). Recommender systems collect ratings explicitly (as ratings) or implicitly, inferring them from observations (e.g., viewing time as an indicator of liking) [7].
We approached improving the effectiveness of recommender systems in three areas: (i) affective user modeling in a content-based recommender system, (ii) non-invasive (implicit) emotion detection for affective modeling, and (iii) a personality-based similarity measure in a collaborative recommender system. Figure 1 shows the architecture of the affective recommender system and the points where we introduced the described improvements.
The remainder of the paper is structured as follows: Section 2 presents the data acquisition. Section 3 presents the content-based recommender system with affective metadata. Section 4 presents the collaborative recommender system using the personality-based similarity measure, and Section 5 the emotion recognition algorithm. Each of these sections describes the experiment and presents the results. Section 6 presents the conclusions.
1.2 Related Work
The coarsest classification of recommender systems distinguishes content-based, collaborative, and hybrid systems [1]. Apart from the content-based recommender systems developed by Arapakis [2] and Tkalčič [14], there is virtually no related work on affect-aware recommender systems. Pantić and
E-mail: avtor@naslov.com
bac11ce0fb3e12c466f7ebfb6d036a9fe62628eaWeakly Supervised Learning of Heterogeneous
Concepts in Videos
Larry Davis1
University of Maryland, College Park; 2Arizona State University; 3Xerox Research Centre
India
('36861219', 'Sohil Shah', 'sohil shah')
('40222634', 'Kuldeep Kulkarni', 'kuldeep kulkarni')
('2221075', 'Arijit Biswas', 'arijit biswas')
('2757149', 'Ankit Gandhi', 'ankit gandhi')
('2116262', 'Om Deshmukh', 'om deshmukh')
ba29ba8ec180690fca702ad5d516c3e43a7f0bb8
ba7b12c8e2ff3c5e4e0f70b58215b41b18ff8febNatural and Effective Obfuscation by Head Inpainting
Max Planck Institute for Informatics, Saarland Informatics Campus
2KU-Leuven/PSI, Toyota Motor Europe (TRACE)
3ETH Zurich
('32222907', 'Qianru Sun', 'qianru sun')
('1681236', 'Luc Van Gool', 'luc van gool')
('1697100', 'Bernt Schiele', 'bernt schiele')
{qsun, joon, schiele, mfritz}@mpi-inf.mpg.de
{liqian.ma, luc.vangool}@esat.kuleuven.be
vangool@vision.ee.ethz.ch
bab88235a30e179a6804f506004468aa8c28ce4f
badd371a49d2c4126df95120902a34f4bee01b00GONDA, WEI, PARAG, PFISTER: PARALLEL SEPARABLE 3D CONVOLUTION
Parallel Separable 3D Convolution for Video
and Volumetric Data Understanding
Harvard John A. Paulson School of
Engineering and Applied Sciences
Cambridge, MA, USA
Toufiq Parag
Hanspeter Pfister
('49147616', 'Felix Gonda', 'felix gonda')
('1766333', 'Donglai Wei', 'donglai wei')
fgonda@g.harvard.edu
donglai@seas.harvard.edu
paragt@seas.harvard.edu
pfister@g.harvard.edu
a065080353d18809b2597246bb0b48316234c29aFHEDN: A based on context modeling Feature Hierarchy
Encoder-Decoder Network for face detection
College of Computer Science, Chongqing University, Chongqing, China
College of Medical Informatics, Chongqing Medical University, Chongqing, China
Sichuan Fine Arts Institute, Chongqing, China
('6030130', 'Zexun Zhou', 'zexun zhou')
('7686690', 'Zhongshi He', 'zhongshi he')
('2685579', 'Ziyu Chen', 'ziyu chen')
('33458882', 'Yuanyuan Jia', 'yuanyuan jia')
('1768826', 'Haiyan Wang', 'haiyan wang')
('8784203', 'Jinglong Du', 'jinglong du')
('2961485', 'Dingding Chen', 'dingding chen')
{zexunzhou,zshe,chenziyu,yyjia,jldu,dingding}@cqu.edu.cn;{why}@scfai.edu.cn
a0f94e9400938cbd05c4b60b06d9ed58c3458303
Value-Directed Human Behavior Analysis
from Video Using Partially Observable
Markov Decision Processes
('1773895', 'Jesse Hoey', 'jesse hoey')
('1710980', 'James J. Little', 'james j. little')
a022eff5470c3446aca683eae9c18319fd2406d5
2017-ENST-0071
EDITE - ED 130
ParisTech Doctorate
THESIS
submitted to obtain the degree of Doctor awarded by
TÉLÉCOM ParisTech
Specialty: "SIGNAL and IMAGES"
publicly presented and defended on 15 December 2017
Deep Learning for the Semantic Description of Human Visual Traits
(Apprentissage Profond pour la Description Sémantique des Traits Visuels Humains)
Thesis director: Jean-Luc DUGELAY
Thesis co-supervisor: Moez BACCOUCHE
Jury
Mme Bernadette DORIZZI, Professor, Télécom SudParis (President)
Mme Jenny BENOIS-PINEAU, Professor, Université de Bordeaux (Reviewer)
M. Christian WOLF, Associate Professor (HDR), INSA de Lyon (Reviewer)
M. Patrick PEREZ, Researcher (HDR), Technicolor Rennes (Examiner)
M. Moez BACCOUCHE, Researcher (PhD), Orange Labs Rennes (Supervisor)
M. Jean-Luc DUGELAY, Professor, Eurecom Sophia Antipolis (Thesis Director)
M. Sid-Ahmed BERRANI, Director of Innovation (HDR), Algérie Télécom (Invited)
TÉLÉCOM ParisTech
a school of Institut Télécom - member of ParisTech
('3116433', 'Grigory Antipov', 'grigory antipov')
a0f193c86e3dd7e0020c0de3ec1e24eaff343ce4JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 21, 819-828 (2005)
Short Paper_________________________________________________
A New Classification Approach using
Discriminant Functions
Department of Computer Engineering
+Department of Electrical and Electronic Engineering
Sakarya University
54187 Sakarya, Turkey
In this study, an approach involving new types of cost functions is given for the
construction of discriminant functions. Feature vectors are clustered around centers of
mass, not specified a priori, using the cost function. Thus, the algorithms yield both the
centers of mass and the distinct classes.
Keywords: classification, feature vectors, linear discriminant function, Fisher’s LDF,
dimension reduction
1. INTRODUCTION
There are many algorithms for, and many applications of, classification and
discrimination (grouping of a set of objects into subsets of similar objects, where the
objects in different subsets are different) in several diverse fields [2-15, 23, 24], ranging from
engineering to medicine, to econometrics, etc. Some examples are automatic target rec-
ognition (ATR), fault and maintenance-time recognition, optical character recognition
(OCR), speech and speaker recognition, etc.
In this study, a new approach and algorithm to the classification problem are de-
scribed with the goal of finding a single (possibly vector-valued) linear discriminant
function. This approach is in terms of some optimal centers of mass for the transformed
feature vectors of each class, the transforms being performed via the discriminant func-
tions. As such, it follows the same philosophy which is behind the approaches such as
principal component analysis (PCA), Fisher’s linear discriminant functions (LDF), and
minimum total covariance (MTC) [1-16, 22, 25-28], providing alternatives which extend
this work.
Linear discriminant functions (LDF) are often used in pattern recognition to classify
a given object or pattern, based on its features, into one of several given classes. For sim-
plicity, consider the discrimination problem for two classes. Let x = [x1, x2, …, xm] be the
Received April 28, 2003; revised March 1 and March 29, 2004; accepted May 3, 2004.
Communicated by H. Y. Mark Liao.
('7605725', 'Zafer Demir', 'zafer demir')
('2279264', 'Erol Emre', 'erol emre')
E-mail: {askind, zdemir, eemre}@sakarya.edu.tr
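The two-class linear discriminant setting introduced above can be made concrete with Fisher's LDF, one of the baselines the paper names. Below is a minimal numpy sketch on synthetic data; the two Gaussian classes and the midpoint threshold are illustrative assumptions, not the paper's new cost-function approach.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic classes in m = 2 dimensions with separated means.
X1 = rng.standard_normal((100, 2)) + np.array([2.0, 0.0])
X2 = rng.standard_normal((100, 2)) + np.array([-2.0, 0.0])

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
# Within-class scatter matrix S_w = S_1 + S_2.
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
w = np.linalg.solve(Sw, m1 - m2)   # Fisher discriminant direction
b = w @ (m1 + m2) / 2.0            # threshold at the projected midpoint

def classify(x):
    """Assign x to class 1 or class 2 by thresholding the projection w.x."""
    return 1 if w @ x > b else 2

acc = (sum(classify(x) == 1 for x in X1)
       + sum(classify(x) == 2 for x in X2)) / 200.0
print(f"training accuracy: {acc:.2f}")
```

The single vector w here plays the role of the (possibly vector-valued) linear discriminant function discussed in the text; the paper's contribution is to obtain such functions through clustered centers of mass rather than the Fisher criterion.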
a0c37f07710184597befaa7e6cf2f0893ff440e9
a0dc68c546e0fc72eb0d9ca822cf0c9ccb4b4c4fFusing with Context: a Bayesian Approach to Combining Descriptive Attributes
University of Colorado at Colorado Springs and Securics, Inc., Colorado Springs, CO, USA
Columbia University, New York, NY, USA
University of North Carolina Wilmington, Wilmington, NC, USA
('2613438', 'Walter J. Scheirer', 'walter j. scheirer')
('1767767', 'Peter N. Belhumeur', 'peter n. belhumeur')
('1760117', 'Terrance E. Boult', 'terrance e. boult')
a0021e3bbf942a88e13b67d83db7cf52e013abfdHuman concerned object detecting in video
School of Computer Science and Technology, Shandong Institute of Business and Technology
Yantai, Shandong, 264005, China
School of Computer Science and Technology, Shandong University
Jinan, Shandong, 250101, China
Received 11 December 2014
('2525711', 'Jinjiang LI', 'jinjiang li')
('1733582', 'Jie GUO', 'jie guo')
('9242942', 'Hui FAN', 'hui fan')
E-mail: lijinjiang@gmail.com
a0d6390dd28d802152f207940c7716fe5fae8760Bayesian Face Revisited: A Joint Formulation
University of Science and Technology of China
The Chinese University of Hong Kong
3 Microsoft Research Asia, Beijing, China
('39447786', 'Dong Chen', 'dong chen')
('2032273', 'Xudong Cao', 'xudong cao')
('34508239', 'Liwei Wang', 'liwei wang')
('1716835', 'Fang Wen', 'fang wen')
('40055995', 'Jian Sun', 'jian sun')
chendong@mail.ustc.edu.cn
lwwang@cse.cuhk.edu.hk
{xudongca,fangwen,jiansun}@microsoft.com
a0fb5b079dd1ee5ac6ac575fe29f4418fdb0e670
a0fd85b3400c7b3e11122f44dc5870ae2de9009aLearning Deep Representation for Face
Alignment with Auxiliary Attributes
('3152448', 'Zhanpeng Zhang', 'zhanpeng zhang')
('47571885', 'Ping Luo', 'ping luo')
('1717179', 'Chen Change Loy', 'chen change loy')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
a0dfb8aae58bd757b801e2dcb717a094013bc178Reconocimiento de expresiones faciales con base
en la dinámica de puntos de referencia faciales
Instituto Nacional de Astrofísica, Óptica y Electrónica,
División de Ciencias Computacionales, Tonantzintla, Puebla,
México
Abstract. Facial expressions allow people to communicate emotions, and they are practically the first thing we observe when interacting with someone. In computing, facial expression recognition is important because its analysis has direct application in areas such as psychology, medicine and education, among others. This article presents the design process of a system for recognizing facial expressions using the dynamics of landmark points located on the face, its implementation, the experiments performed, and some of the results obtained so far.
Keywords: facial expressions, classification, support vector machines, active appearance models.
Facial Expressions Recognition Based on Facial
Landmarks Dynamics
('40452660', 'E. Morales-Vargas', 'e. morales-vargas')
('2737777', 'Hayde Peregrina-Barreto', 'hayde peregrina-barreto')
emoralesv@inaoep.mx, kargaxxi@inaoep.mx, hperegrina@inaoep.mx
a0aa32bb7f406693217fba6dcd4aeb6c4d5a479bCascaded Regressor based 3D Face Reconstruction
from a Single Arbitrary View Image
College of Computer Science, Sichuan University, Chengdu, China
('50207647', 'Feng Liu', 'feng liu')
('39422721', 'Dan Zeng', 'dan zeng')
('1723081', 'Jing Li', 'jing li')
('7345195', 'Qijun Zhao', 'qijun zhao')
qjzhao@scu.edu.cn
a03cfd5c0059825c87d51f5dbf12f8a76fe9ff60Simultaneous Learning and Alignment:
Multi-Instance and Multi-Pose Learning?
1 Comp. Science & Eng.
Univ. of CA, San Diego
2 Electrical Engineering
California Inst. of Tech.
3 Lab of Neuro Imaging
Univ. of CA, Los Angeles
('2490700', 'Boris Babenko', 'boris babenko')
('1736745', 'Zhuowen Tu', 'zhuowen tu')
('1769406', 'Serge Belongie', 'serge belongie')
{bbabenko,sjb}@cs.ucsd.edu
pdollar@caltech.edu
zhuowen.tu@loni.ucla.edu
a06b6d30e2b31dc600f622ab15afe5e2929581a7Robust Joint and Individual Variance Explained
Imperial College London, UK
2Onfido, UK
Middlesex University London, UK
('3320415', 'Christos Sagonas', 'christos sagonas')
('1780393', 'Yannis Panagakis', 'yannis panagakis')
('28943361', 'Alina Leidinger', 'alina leidinger')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
christos.sagonas@onfido.com, {i.panagakis, s.zafeiriou}@imperial.ac.uk
a0b1990dd2b4cd87e4fd60912cc1552c34792770Deep Constrained Local Models for Facial Landmark Detection
Carnegie Mellon University
Tadas Baltrušaitis
Carnegie Mellon University
5000 Forbes Ave, Pittsburgh, PA 15213, USA
5000 Forbes Ave, Pittsburgh, PA 15213, USA
Carnegie Mellon University
5000 Forbes Ave, Pittsburgh, PA 15213, USA
('1783029', 'Amir Zadeh', 'amir zadeh')
('1767184', 'Louis-Philippe Morency', 'louis-philippe morency')
abagherz@cs.cmu.edu
tbaltrus@cs.cmu.edu
morency@cs.cmu.edu
a090d61bfb2c3f380c01c0774ea17929998e0c96On the Dimensionality of Video Bricks under Varying Illumination
Beijing Lab of Intelligent Information Technology, School of Computer Science,
Beijing Institute of Technology, Beijing 100081, PR China
('2852150', 'Youdong Zhao', 'youdong zhao')
('38150687', 'Xi Song', 'xi song')
('7415267', 'Yunde Jia', 'yunde jia')
{zyd458, songxi, jiayunde}@bit.edu.cn
a0e7f8771c7d83e502d52c276748a33bae3d5f81Ensemble Nyström
A common problem in many areas of large-scale machine learning involves manipulation of a large matrix. This matrix may be a kernel matrix arising in Support Vector Machines [9, 15], Kernel Principal Component Analysis [47] or manifold learning [43, 51]. Large matrices also naturally arise in other applications, e.g., clustering, collaborative filtering, matrix completion, and robust PCA. For these large-scale problems, the number of matrix entries can easily be in the order of billions or more, making them hard to process or even store. An attractive solution to this problem involves the Nyström method, in which one samples a small number of columns from the original matrix and generates its low-rank approximation using the sampled columns [53]. The accuracy of the Nyström method depends on the number of columns sampled from the original matrix: the larger the number of samples, the higher the accuracy, but the slower the method.
In the Nyström method, one needs to perform SVD on an l × l matrix, where l is the number of columns sampled from the original matrix. This SVD operation is typically carried out on a single machine, so the maximum value of l used for an application is limited by the capacity of the machine. That is why in practice one restricts l to less than 20K or 30K, even when the size of the matrix is in the millions. This restricts the accuracy of the Nyström method in very large-scale settings.
This chapter describes a family of algorithms based on mixtures of Nyström approximations, called Ensemble Nyström algorithms, which yield more accurate low-rank approximations than the standard Nyström method. The core idea of Ensemble Nyström is to sample many subsets of columns from the original matrix, each containing a relatively small number of columns. Then, the Nyström method is
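The sampling-and-mixture scheme described above can be sketched in a few lines of numpy. This is a minimal illustration rather than the chapter's algorithm as such: it uses uniform mixture weights (one of several weighting choices the ensemble approach admits), a smooth synthetic RBF kernel, and illustrative sample sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic PSD kernel matrix K: a smooth RBF kernel on random 2-D points.
X = rng.standard_normal((500, 2))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 10.0)

def nystrom(K, idx):
    """Standard Nystrom: approximate K from the sampled columns `idx`."""
    C = K[:, idx]            # n x l block of sampled columns
    W = K[np.ix_(idx, idx)]  # l x l intersection block (the SVD/pinv happens here)
    return C @ np.linalg.pinv(W) @ C.T

def ensemble_nystrom(K, n_experts=5, l=20):
    """Uniform mixture of several small Nystrom approximations ("experts")."""
    n = K.shape[0]
    approx = np.zeros_like(K)
    for _ in range(n_experts):
        idx = rng.choice(n, size=l, replace=False)
        approx += nystrom(K, idx)
    return approx / n_experts

K_single = nystrom(K, rng.choice(K.shape[0], size=50, replace=False))
K_ens = ensemble_nystrom(K)
err = np.linalg.norm(K - K_ens, "fro") / np.linalg.norm(K, "fro")
print(f"ensemble relative Frobenius error: {err:.3f}")
```

Averaging several small experts keeps each SVD at l × l (here 20 × 20), which is the point of the ensemble construction: the per-machine capacity limit on l applies to each expert, not to the mixture as a whole.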
Division of Computer Science, University of California, Berkeley, CA, USA e-mail
('2794322', 'Sanjiv Kumar', 'sanjiv kumar')
('1709415', 'Mehryar Mohri', 'mehryar mohri')
('8395559', 'Ameet Talwalkar', 'ameet talwalkar')
('2794322', 'Sanjiv Kumar', 'sanjiv kumar')
('1709415', 'Mehryar Mohri', 'mehryar mohri')
('8395559', 'Ameet Talwalkar', 'ameet talwalkar')
Google Research, New York, NY, USA e-mail: sanjivk@google.com
Courant Institute, New York, NY, USA e-mail: mohri@cs.nyu.edu
ameet@eecs.berkeley.edu
a0061dae94d916f60a5a5373088f665a1b54f673Research Article
Lensless computational imaging through deep
learning
Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA
Institute for Medical Engineering Science, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA
3Singapore-MIT Alliance for Research and Technology (SMART) Centre, One Create Way, Singapore 117543, Singapore
†These authors contributed equally
Compiled March 1, 2017
Deep learning has been proven to yield reliably generalizable answers to numerous classification and
decision tasks. Here, we demonstrate for the first time, to our knowledge, that deep neural networks
(DNNs) can be trained to solve inverse problems in computational imaging. We experimentally demon-
strate a lens-less imaging system where a DNN was trained to recover a phase object given a raw
intensity image recorded some distance away.
OCIS codes: (100.3190) Inverse problems; (100.4996) Pattern recognition, neural networks; (100.5070) Phase retrieval; (110.1758) Computational imaging.
http://dx.doi.org/10.1364/optica.XX.XXXXXX
1. INTRODUCTION
Neural network training can be thought of as generic function approxi-
mation, as follows: given a training set (i.e., examples of matched input
and output data obtained from a hitherto-unknown model), generate
the computational architecture that most accurately maps all inputs in
In this paper, we propose that deep neural networks may “learn” to
approximate solutions to inverse problems in computational imaging.
A general computational imaging system consists of a physical part
where light propagates through one or more objects of interest as well
as optical elements such as lenses, prisms, etc. finally producing a
raw intensity image on a digital camera. The raw intensity image is
then computationally processed to yield object attributes, e.g. a spatial
map of light attenuation and/or phase delay through the object—what
we call traditionally “intensity image” and “quantitative phase image,”
respectively. The computational part of the system is then said to solve
the inverse problem.
The study of inverse problems traces back at least a century, to Tikhonov [1] and Wiener [2]. A good introductory book with rigorous but not overwhelming discussion of the underlying mathematical concepts, especially regularization, is [3]. During the past decade, the
field experienced a renaissance due to the almost simultaneous matura-
tion of two related mathematics disciplines: convex optimization and
harmonic analysis, especially sparse representations. A light technical
introduction to these fascinating developments is in [4].
Neural networks have their own history of legendary ups-and-downs
[5] culminating with an even more recent renaissance. This was driven
by Hinton’s insight that multi-layer architectures with numerous layers,
dubbed as “deep networks,” DNNs, can generalize better than had been
previously thought after some simple but ingenious changes in the
nonlinearity and training algorithms [6]. Even more recently developed
architectures [7–9] have enabled neural networks to “learn deeper;”
and modern DNNs have shown spectacular success at solving “hard”
computational problems, such as: playing complex games like Atari
[17] and Go [18], object detection [19], and image restoration (e.g.,
colorization [20], deblurring [21–23], in-painting [24]).
The idea of using neural networks to clean up images isn’t exactly
new: for example, Hopfield’s associative memory network [25] was
capable of retrieving entire faces from partially obscured inputs, and
was implemented in an all-optical architecture [26] when computers
weren’t nearly as powerful as they are now. Recently, Horisaki et al.
[27] used support-vector machines, a form of bi-layer neural network
with nonlinear discriminant functions, also to recover face images
when the obscuration is caused by scattering media.
The hypothesis that we set out to test in this paper is whether
a neural network can be trained by being presented pairs of known
objects and their raw intensity image representations on the digital
camera of a computational imaging system; and then be used to produce
object estimates given raw intensity images from hitherto unknown
test objects, thus solving the inverse problem. This is a rather general
question and may take several flavors, depending on the nature of the
object, the physical design of the imaging system, etc. We chose to
('3365480', 'Ayan Sinha', 'ayan sinha')
('2371140', 'Justin Lee', 'justin lee')
('1804684', 'Shuai Li', 'shuai li')
('2455899', 'George Barbastathis', 'george barbastathis')
a0848d7b1bb43f4b4f1b4016e58c830f40944817Face matching for post-disaster family reunification
Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health
8600 Rockville Pike, Bethesda, MD USA
('1744255', 'Eugene Borovikov', 'eugene borovikov')
('2075836', 'Girish Lingappa', 'girish lingappa')
FaceMatch@NIH.gov
a000149e83b09d17e18ed9184155be140ae1266eChapter 9
Action Recognition in Realistic
Sports Videos
('1799979', 'Khurram Soomro', 'khurram soomro')
('40029556', 'Amir R. Zamir', 'amir r. zamir')
a01f9461bc8cf8fe40c26d223ab1abea5d8e2812Facial Age Estimation Through the Fusion of Texture
and local appearance Descriptors
DPDCE, University IUAV, Santa Croce 1957, 30135 Venice, Italy
2 Herta Security, Pau Claris 165 4-B, 08037 Barcelona, Spain
('1733945', 'Andrea Prati', 'andrea prati')
huertacasado@iuav.it, aprati@iuav.it
carles.fernandez@hertasecurity.com
a702fc36f0644a958c08de169b763b9927c175ebFACIAL EXPRESSION RECOGNITION USING HOUGH FOREST
National Tsing-Hua University, Hsin-Chu, Taiwan
Asia University, Taichung, Taiwan
('2867389', 'Chi-Ting Hsu', 'chi-ting hsu')
('2790846', 'Shih-Chung Hsu', 'shih-chung hsu')
('1793389', 'Chung-Lin Huang', 'chung-lin huang')
Email: s9961601@m99.nthu.edu.tw, d9761817@oz.nthu.edu.tw, clhuang@ee.nthu.edu.tw
a7267bc781a4e3e79213bb9c4925dd551ea1f5c4Proceedings of eNTERFACE’15
The 11th Summer Workshop
on Multimodal Interfaces
August 10th - September 4th, 2015
Numediart Institute, University of Mons
Mons, Belgium
a784a0d1cea26f18626682ab108ce2c9221d1e53Anchored Regression Networks applied to Age Estimation and Super Resolution
D-ITET, ETH Zurich
Switzerland
D-ITET, ETH Zurich
Merantix GmbH
D-ITET, ETH Zurich
ESAT, KU Leuven
('2794259', 'Eirikur Agustsson', 'eirikur agustsson')
('1732855', 'Radu Timofte', 'radu timofte')
('1681236', 'Luc Van Gool', 'luc van gool')
aeirikur@vision.ee.ethz.ch
timofter@vision.ee.ethz.ch
vangool@vision.ee.ethz.ch
a77e9f0bd205a7733431a6d1028f09f57f9f73b0Multimodal feature fusion for CNN-based gait recognition: an
empirical comparison
F.M. Castro, M.J. Marín-Jiménez, N. Guil, N. Pérez de la Blanca
University of Malaga, Spain
University of Cordoba, Spain
University of Granada, Spain
a74251efa970b92925b89eeef50a5e37d9281ad0
a7d23c699a5ae4ad9b8a5cbb8c38e5c3b5f5fb51Postgraduate Annual Research Seminar 2007 (3-4 July 2007)
A Summary of literature review : Face Recognition
Faculty of Computer Science & Information System,
University Technology of Malaysia, 81310 Skudai, Johor, Malaysia
kittmee@yahoo.com; dzulkifli@fsksm.utm.my
a70e36daf934092f40a338d61e0fe27be633f577Enhanced Facial Feature Tracking of Spontaneous and Continuous Expressions
A.Goneid and R. El Kaliouby
The American University in Cairo, Egypt
goneid@aucegypt.edu, ranak@aucegypt.edu
a7664247a37a89c74d0e1a1606a99119cffc41d4Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)
a7191958e806fce2505a057196ccb01ea763b6eaConvolutional Neural Network based
Age Estimation from Facial Image and
Depth Prediction from Single Image
B. Eng. (Honours)
Australian National University
January 2016
A thesis submitted for the degree of Master of Philosophy
at The Australian National University
Computer Vision Group
Research School of Engineering
College of Engineering and Computer Science
The Australian National University
('2124180', 'Jiayan Qiu', 'jiayan qiu')
a7e1327bd76945a315f2869bfae1ce55bb94d165Kernel Fisher Discriminant Analysis with Locality Preserving for Feature Extraction and
Recognition
School of Information Engineering, Guangdong Medical College, Song Shan Hu
Dongguan, Guangdong, China
Shaoguan University, Da Tang Lu
Shaoguan, Guangdong, China
School of Information Engineering, Guangdong Medical College, Song Shan Hu
Dongguan, Guangdong, China
('2588058', 'Di Zhang', 'di zhang')
('2007270', 'Jiazhong He', 'jiazhong he')
('20374749', 'Yun Zhao', 'yun zhao')
E-mail: changnuode@163.com
E-mail: hejiazhong@126.com
E-mail: zyun@gdmc.edu.cn
a7a6eb53bee5e2224f2ecd56a14e3a5a717e55b911th International Symposium of Robotics Research (ISRR2003), pp.192-201, 2003
Face Recognition Using Multi-viewpoint Patterns for
Robot Vision
Corporate Research and Development Center, TOSHIBA Corporation
1, Komukai Toshiba-cho, Saiwai-ku, Kawasaki 212-8582, Japan
('1770128', 'Kazuhiro Fukui', 'kazuhiro fukui')
('1708862', 'Osamu Yamaguchi', 'osamu yamaguchi')
kazuhiro.fukui@toshiba.co.jp / osamu1.yamaguchi@toshiba.co.jp
a758b744a6d6962f1ddce6f0d04292a0b5cf8e07
ISSN XXXX XXXX © 2017 IJESC
Research Article Volume 7 Issue No.4
Study on Human Face Recognition under Invariant Pose, Illumination
and Expression using LBP, LoG and SVM
Amrutha
Department of Computer Science & Engineering
Mangalore Institute of Technology and Engineering, Moodabidri, Mangalore, India
Abstract:
A face recognition system uses the human face for the identification of the user. Face recognition is a difficult task: no single method provides an accurate and efficient solution in all situations, such as face images with different pose, illumination and expression. This work uses the Local Binary Pattern (LBP) and Laplacian of Gaussian (LoG) operators, with a Support Vector Machine (SVM) classifier to recognize the human face. The LoG algorithm preprocesses the image to detect the edges of the face and extract image information. The LBP operator divides the face image into several blocks and generates pixel-level feature information by creating LBP labels for all the blocks; the descriptor for the image is obtained by concatenating all the individual local histograms. The SVM classifier is then used to classify the image. The algorithm's performance is verified under illumination, expression and pose variation.
Keywords: face recognition, Local Binary Pattern, Laplacian of Gaussian, histogram, illumination, pose angle, expression variations, SVM.
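The descriptor pipeline in the abstract (per-block LBP labels concatenated into one histogram vector) can be sketched in plain numpy. This is a minimal sketch of the LBP stage only: the LoG preprocessing and the SVM classifier are omitted, and the 4x4 block grid, 256-bin histograms and random stand-in image are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: one 8-bit code per interior pixel, comparing
    each of the 8 neighbors against the center pixel."""
    c = img[1:-1, 1:-1]
    # Neighbor offsets in clockwise order; each contributes one bit.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def lbp_descriptor(img, grid=(4, 4)):
    """Concatenate normalized per-block LBP histograms into one vector."""
    codes = lbp_image(img)
    gh, gw = grid
    bh, bw = codes.shape[0] // gh, codes.shape[1] // gw
    hists = []
    for i in range(gh):
        for j in range(gw):
            block = codes[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            hists.append(hist / max(hist.sum(), 1))  # normalize each block
    return np.concatenate(hists)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(66, 66))   # stand-in for a face image
desc = lbp_descriptor(img)
print(desc.shape)   # one feature vector: 4*4 blocks x 256 bins
```

The resulting vector is what would be fed to the SVM classifier; a LoG edge map could simply replace `img` as the input to the descriptor.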
1. INTRODUCTION
The technology used for recognizing faces in security systems works on biometric principles. Many human characteristics can be used for biometric identification, such as the palm, fingerprint, face and iris. Among these biometric methods, face recognition is advantageous because a face can be detected from a much greater distance without the need for scanning devices, which makes it easy to identify individuals within a group of persons. Many military security systems, attendance systems, authentication and criminal-identification applications are performed using this technology. The computer uses this recognition technology to identify a person or to compare the person with the same person or with some other person. The human face is a very important factor in identifying who a person is and in how people recognize his or her face. Face images are captured from a distance, without contact with the person. Verification and identification steps are used for comparison. The first method is verification, which compares a face image with the same person's image already stored in the database; it is one-to-one matching because it tries to match an individual against the same person's stored image. The second method is called one-to-n matching because it matches an individual's face image against every person's face images. If the face images are affected by lighting conditions, different pose angles or different expressions, then it is difficult to identify the human face. Many algorithms are used to extract face features and to match face images, such as Principal Component Analysis (PCA) and Independent Component Analysis (ICA) [1], Elastic Bunch Graph Matching (EBGM) [2], the k-nearest-neighbor classifier and Linear Discriminant Analysis (LDA) [3]. This paper is organized as follows: Section II reviews the related work. Section III describes the proposed system and assumptions. Section IV provides the conclusion of the paper.
2. RELATED WORK
Face recognition has become one of the most widely used biometric authentication techniques over the past few years. Face recognition is an interesting and successful application of pattern recognition and image analysis. It compares a query face image against all image templates in a face database. Face recognition is very important due to its wide range of commercial and law-enforcement applications, which include forensic identification, access control, border surveillance and human interaction, and due to the availability of low-cost recording devices. Existing methods include Principal Component Analysis and Independent Component Analysis [1], Elastic Bunch Graph Matching [2], the k-nearest-neighbor classifier and Linear Discriminant Analysis [3], and Local Derivative Pattern and Local Binary Pattern [4]. These algorithms still have problems recognizing the face under constraints such as variations in pose, expression and illumination. Such variation in the image degrades the recognition rate. Local Binary Pattern (LBP) and Laplacian of Gaussian (LoG) are used to reduce illumination effects by increasing the contrast of the image without affecting the original image, and a differential-excitation pixel is used in preprocessing to make the algorithm invariant to illumination changes [4]. The Local Directional Pattern (LDP) descriptor uses the edge values of the pixels surrounding the center pixel, and Two-Dimensional Principal Component Analysis (2D-PCA) is used for feature extraction, with Euclidean distance measuring the similarity between training database images and test image features; the nearest-neighbor classifier is used to classify the images [5]. To reduce the influence of illumination in an input image, adaptive homomorphic filtering is used in adaptive homomorphic eight-local-directional
International Journal of Engineering Science and Computing, April 2017, http://ijesc.org/
a7c39a4e9977a85673892b714fc9441c959bf078Automated Individualization of Deformable Eye Region Model and Its
Application to Eye Motion Analysis
Dept. of Media and Image Technology,
Tokyo Polytechnic University
1583 Iiyama, Atsugi,
Kanagawa 243-0297, Japan
Robotics Institute
Carnegie Mellon University
5000 Forbes Avenue Pittsburgh,
PA 15213-3891, USA
('1683262', 'Tsuyoshi Moriyama', 'tsuyoshi moriyama')
('1733113', 'Takeo Kanade', 'takeo kanade')
moriyama@mega.t-kougei.ac.jp
tk@cs.cmu.edu
a75edf8124f5b52690c08ff35b0c7eb8355fe950Authentic Emotion Detection in Real-Time Video
School of Computer Science and Engineering, Sichuan University, China
Faculty of Science, University of Amsterdam, The Netherlands
LIACS Media Lab, Leiden University, The Netherlands
('1840164', 'Yafei Sun', 'yafei sun')
('1703601', 'Nicu Sebe', 'nicu sebe')
('1731570', 'Michael S. Lew', 'michael s. lew')
('1695527', 'Theo Gevers', 'theo gevers')
a75ee7f4c4130ef36d21582d5758f953dba03a01DD2427 Final Project Report
DD2427 Final Project Report
Human face attributes prediction with Deep
Learning
moaah@kth.se
a775da3e6e6ea64bffab7f9baf665528644c7ed3International Journal of Computer Applications (0975 – 8887)
Volume 142 – No.9, May 2016
Human Face Pose Estimation based on Feature
Extraction Points
Research scholar,
Department of ECE
SBSSTC, Moga Road,
Ferozepur, Punjab, India
a703d51c200724517f099ee10885286ddbd8b587Fuzzy Neural Networks(FNN)-based Approach for
Personalized Facial Expression Recognition with
Novel Feature Selection Method
Div. of EE, Dept. of EECS, KAIST
373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Korea
Human-friendly Welfare Robotic System Engineering Research Center, KAIST
373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Korea
('1793114', 'Dae-Jin Kim', 'dae-jin kim')
('5960489', 'Kwang-Hyun Park', 'kwang-hyun park')
djkim@mail.kaist.ac.kr, zbien@ee.kaist.ac.kr
akaii@robotian.net
a75dfb5a839f0eb4b613d150f54a418b7812aa90MULTIBIOMETRIC SECURE SYSTEM BASED ON DEEP LEARNING
West Virginia University, Morgantown, USA
('23980155', 'Veeru Talreja', 'veeru talreja')
('1709360', 'Matthew C. Valenti', 'matthew c. valenti')
('8147588', 'Nasser M. Nasrabadi', 'nasser m. nasrabadi')
b88ceded6467e9b286f048bb1b17be5998a077bdSparse Subspace Clustering via Diffusion Process
Curtin University, Perth, Australia
('2191968', 'Qilin Li', 'qilin li')
('1919769', 'Ling Li', 'ling li')
('1713220', 'Wanquan Liu', 'wanquan liu')
kylinlovesummer@gmail.com
b871d1b8495025ff8a6255514ed39f7765415935Application of Completed Local Binary Pattern for Facial Expression
Recognition on Gabor Filtered Facial Images
University of Ulsan, Ulsan, Republic of Korea
('2288674', 'Tanveer Ahsan', 'tanveer ahsan')
1tanveerahsan@gmail.com, 2rsbdce@yahoo.com, *3upchong@ulsan.ac.kr
b8375ff50b8a6f1a10dd809129a18df96888ac8bPublished as a conference paper at ICLR 2017
DECOMPOSING MOTION AND CONTENT FOR
NATURAL VIDEO SEQUENCE PREDICTION
University of Michigan, Ann Arbor, USA
2Adobe Research, San Jose, CA 95110
3POSTECH, Pohang, Korea
Beihang University, Beijing, China
5Google Brain, Mountain View, CA 94043
('2241528', 'Seunghoon Hong', 'seunghoon hong')
('10668384', 'Xunyu Lin', 'xunyu lin')
('1697141', 'Honglak Lee', 'honglak lee')
('1768964', 'Jimei Yang', 'jimei yang')
('1711926', 'Ruben Villegas', 'ruben villegas')
b88d5e12089f6f598b8c72ebeffefc102cad1fc0Robust 2DPCA and Its Application
Xidian University
Xi’an China
Xidian University
Xi’an China
('40326660', 'Qianqian Wang', 'qianqian wang')
('38469552', 'Quanxue Gao', 'quanxue gao')
610887187@qq.com
xd ste pr@163.com
b84b7b035c574727e4c30889e973423fe15560d7Human Age Estimation Using Ranking SVM
HoHai University
2Center for Biometrics and Security Research & National Laboratory of Pattern
Recognition, Institute of Automation, Chinese Academy of Sciences
3China Research and Development Center for Internet of Thing
('40478348', 'Dong Cao', 'dong cao')
('1718623', 'Zhen Lei', 'zhen lei')
('1959072', 'Zhiwei Zhang', 'zhiwei zhang')
('39189280', 'Jun Feng', 'jun feng')
('34679741', 'Stan Z. Li', 'stan z. li')
{dcao,zlei,zwzhang,szli}@cbsr.ia.ac.cn, fengjun@hhu.edu.cn
b8dba0504d6b4b557d51a6cf4de5507141db60cfComparing Performances of Big Data Stream
Processing Platforms with RAM3S
b89862f38fff416d2fcda389f5c59daba56241dbA Web Survey for Facial Expressions Evaluation
Ecole Polytechnique Federale de Lausanne
Signal Processing Institute
Ecublens, 1015 Lausanne, Switzerland
Ecole Polytechnique Federale de Lausanne, Operation Research Group
Ecublens, 1015 Lausanne, Switzerland
June 9, 2008
('2916630', 'Matteo Sorci', 'matteo sorci')
('1794461', 'Gianluca Antonini', 'gianluca antonini')
('1710257', 'Jean-Philippe Thiran', 'jean-philippe thiran')
('1690395', 'Michel Bierlaire', 'michel bierlaire')
{Matteo.Sorci,Gianluca.Antonini,JP.Thiran}@epfl.ch
Michel.Bierlaire@epfl.ch
b8caf1b1bc3d7a26a91574b493c502d2128791f6RESEARCH ARTICLE
As Far as the Eye Can See: Relationship
between Psychopathic Traits and Pupil
Response to Affective Stimuli
Daniel T. Burley1*, Nicola S. Gray2,3, Robert J. Snowden1*
School of Psychology, Cardiff University, Cardiff, United Kingdom, College of
Human and Health Sciences, Swansea University, Swansea, United Kingdom, 3 Abertawe Bro-Morgannwg
University Health Board, Swansea, United Kingdom
* BurleyD2@Cardiff.ac.uk (DTB); Snowden@Cardiff.ac.uk (RJS)
b8084d5e193633462e56f897f3d81b2832b72dffDeepID3: Face Recognition with Very Deep Neural Networks
The Chinese University of Hong Kong
The Chinese University of Hong Kong
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
2SenseTime Group
('1681656', 'Yi Sun', 'yi sun')
('1865674', 'Ding Liang', 'ding liang')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
sy011@ie.cuhk.edu.hk
xgwang@ee.cuhk.edu.hk
liangding@sensetime.com
xtang@ie.cuhk.edu.hk
b8378ab83bc165bc0e3692f2ce593dcc713df34a
b8f3f6d8f188f65ca8ea2725b248397c7d1e662dSelfie Detection by Synergy-Constraint Based
Convolutional Neural Network
Electrical and Electronics Engineering, NITK-Surathkal, India.
('7245071', 'Yashas Annadani', 'yashas annadani')
('8341302', 'Akshay Kumar Jagadish', 'akshay kumar jagadish')
('2139966', 'Krishnan Chemmangat', 'krishnan chemmangat')
b8ebda42e272d3617375118542d4675a0c0e501dDeep Hashing Network for Unsupervised Domain Adaptation
Center for Cognitive Ubiquitous Computing, Arizona State University, Tempe, AZ, USA
('3151995', 'Hemanth Venkateswara', 'hemanth venkateswara')
('30443430', 'Jose Eusebio', 'jose eusebio')
('2471253', 'Shayok Chakraborty', 'shayok chakraborty')
('1743991', 'Sethuraman Panchanathan', 'sethuraman panchanathan')
{hemanthv, jeusebio, shayok.chakraborty, panch}@asu.edu
b85580ff2d8d8be0a2c40863f04269df4cd766d9HCMUS team at the Multimodal Person Discovery in
Broadcast TV Task of MediaEval 2016
Faculty of Information Technology
University of Science, Vietnam National University-Ho Chi Minh city
('34453615', 'Vinh-Tiep Nguyen', 'vinh-tiep nguyen')
('30097677', 'Manh-Tien H. Nguyen', 'manh-tien h. nguyen')
('8176737', 'Quoc-Huu Che', 'quoc-huu che')
('7736164', 'Van-Tu Ninh', 'van-tu ninh')
('38994364', 'Tu-Khiem Le', 'tu-khiem le')
('7213584', 'Thanh-An Nguyen', 'thanh-an nguyen')
('1780348', 'Minh-Triet Tran', 'minh-triet tran')
nvtiep@fit.hcmus.edu.vn, {nhmtien, cqhuu, nvtu, ltkhiem}@apcs.vn,
1312016@student.hcmus.edu.vn, tmtriet@fit.hcmus.edu.vn
b87b0fa1ac0aad0ca563844daecaeecb2df8debfComputational Aesthetics in Graphics, Visualization, and Imaging
EXPRESSIVE 2015
Non-Photorealistic Rendering of Portraits
Cardiff University, UK
Figure 1: President Obama re-rendered in “puppet” style and in the style of Julian Opie.
('1734823', 'Paul L. Rosin', 'paul l. rosin')
('1734823', 'Paul L. Rosin', 'paul l. rosin')
('7827503', 'Yu-Kun Lai', 'yu-kun lai')
b87db5ac17312db60e26394f9e3e1a51647cca66Semi-definite Manifold Alignment
Tsinghua University
Beijing, China
('2066355', 'Liang Xiong', 'liang xiong')
('34410258', 'Fei Wang', 'fei wang')
('1700883', 'Changshui Zhang', 'changshui zhang')
{xiongl,feiwang03}@mails.tsinghua.edu.cn, zcs@mail.tsinghua.edu.cn
b81cae2927598253da37954fb36a2549c5405cdbExperiments on Visual Information Extraction with the Faces of Wikipedia
Département de génie informatique et génie logiciel, Polytechnique Montréal
2500, Chemin de Polytechnique, Université de Montréal, Montréal, Québec, Canada
('2811524', 'Md. Kamrul Hasan', 'md. kamrul hasan')
b8a829b30381106b806066d40dd372045d49178d1872
A Probabilistic Framework for Joint Pedestrian Head
and Body Orientation Estimation
('2869660', 'Fabian Flohr', 'fabian flohr')
('1898318', 'Madalin Dumitru-Guzu', 'madalin dumitru-guzu')
('34846285', 'Julian F. P. Kooij', 'julian f. p. kooij')
b1d89015f9b16515735d4140c84b0bacbbef19acToo Far to See? Not Really!
— Pedestrian Detection with Scale-aware
Localization Policy
('47957574', 'Xiaowei Zhang', 'xiaowei zhang')
('50791064', 'Li Cheng', 'li cheng')
('49729740', 'Bo Li', 'bo li')
('2938403', 'Hai-Miao Hu', 'hai-miao hu')
b191aa2c5b8ece06c221c3a4a0914e8157a16129: DEEP SPATIO-TEMPORAL MANIFOLD NETWORK FOR ACTION RECOGNITION
Deep Spatio-temporal Manifold Network for
Action Recognition
Department of Computer Science
China University of Mining and Technology, Beijing, China
Center for Research in Computer
Vision (CRCV)
University of Central Florida, Orlando
FL, USA
School of Automation Science and
electrical engineering
Beihang University, Beijing, China
University of Chinese Academy of
Sciences
Beijing, China
Northumbria University
Newcastle, UK
Xiamen University
Xiamen, China
('2606761', 'Ce Li', 'ce li')
('9497155', 'Chen Chen', 'chen chen')
('1740430', 'Baochang Zhang', 'baochang zhang')
('1694936', 'Qixiang Ye', 'qixiang ye')
('1783847', 'Jungong Han', 'jungong han')
('1725599', 'Rongrong Ji', 'rongrong ji')
celi@cumtb.edu.cn
chenchen870713@gmail.com
bczhang@139.com
qxye@ucas.ac.cn
jungonghan77@gmail.com
rrji@xmu.edu.cn
b13bf657ca6d34d0df90e7ae739c94a7efc30dc3Attribute and Simile Classifiers for Face Verification (In submission please do
not distribute.)
Columbia University
New York, NY
Columbia University
New York, NY
Columbia University
Columbia University
New York, NY
('3586464', 'Neeraj Kumar', 'neeraj kumar')
('39668247', 'Alexander C. Berg', 'alexander c. berg')
('1767767', 'Peter N. Belhumeur', 'peter n. belhumeur')
('1750470', 'Shree K. Nayar', 'shree k. nayar')
belhumeur@cs.columbia.edu
neeraj@cs.columbia.edu
aberg@cs.columbia.edu
nayar@cs.columbia.edu
b13a882e6168afc4058fe14cc075c7e41434f43eRecognition of Humans and Their Activities Using Video
Center for Automation Research
University of Maryland
College Park, MD
Dept. of Electrical Engineering
University of California
Riverside, CA 92521
Shaohua K. Zhou
Siemens Research
Princeton, NJ 08540
('9215658', 'Rama Chellappa', 'rama chellappa')
('1688416', 'Amit K. Roy-Chowdhury', 'amit k. roy-chowdhury')
b14b672e09b5b2d984295dfafb05604492bfaec5Learning Image Classification and Retrieval Models, Thomas Mensink
b1665e1ddf9253dcaebecb48ac09a7ab4095a83eEMOTION RECOGNITION USING FACIAL EXPRESSIONS WITH ACTIVE
APPEARANCE MODELS
Department of Computer Science
University of North Carolina Wilmington
South College Road
Wilmington, NC, USA
Department of Computer Science
University of North Carolina Wilmington
South College Road
Wilmington, NC, USA
('12675740', 'Matthew S. Ratliff', 'matthew s. ratliff')
('37804931', 'Eric Patterson', 'eric patterson')
msr3520@uncw.edu
pattersone@uncw.edu
b16580d27bbf4e17053f2f91bc1d0be12045e00bPose-invariant Face Recognition with a
Two-Level Dynamic Programming Algorithm
1 Human Language Technology and Pattern Recognition Group
RWTH Aachen University, Aachen, Germany
2 Robert Bosch GmbH, Hildesheim, Germany
('1804963', 'Harald Hanselmann', 'harald hanselmann')
('1685956', 'Hermann Ney', 'hermann ney')
('1967060', 'Philippe Dreuw', 'philippe dreuw')
@cs.rwth-aachen.de
philippe.dreuw@de.bosch.com
b1b993a1fbcc827bcb99c4cc1ba64ae2c5dcc000Deep Variation-structured Reinforcement Learning for Visual Relationship and
Attribute Detection
School of Computer Science, Carnegie Mellon University
('40250403', 'Xiaodan Liang', 'xiaodan liang')
('1752601', 'Eric P. Xing', 'eric p. xing')
('49441821', 'Lisa Lee', 'lisa lee')
{xiaodan1,lslee,epxing}@cs.cmu.edu
b11bb6bd63ee6f246d278dd4edccfbe470263803Joint Voxel and Coordinate Regression for Accurate
3D Facial Landmark Localization
†Center for Research on Intelligent Perception and Computing (CRIPAC)
Institute of Automation, Chinese Academy of Sciences (CASIA)
†National Laboratory of Pattern Recognition (NLPR)
University of Chinese Academy of Sciences (UCAS)
§Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), CAS
('37536613', 'Hongwen Zhang', 'hongwen zhang')
('39763795', 'Qi Li', 'qi li')
('1757186', 'Zhenan Sun', 'zhenan sun')
Email: hongwen.zhang@cripac.ia.ac.cn, {qli, znsun}@nlpr.ia.ac.cn
b171f9e4245b52ff96790cf4f8d23e822c260780
b1a3b19700b8738b4510eecf78a35ff38406df22This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TAFFC.2017.2731763, IEEE
Transactions on Affective Computing
JOURNAL OF LATEX CLASS FILES, VOL. 13, NO. 9, SEPTEMBER 2014
Automatic Analysis of Facial Actions: A Survey
and Maja Pantic, Fellow, IEEE
('1680608', 'Brais Martinez', 'brais martinez')
('1795528', 'Michel F. Valstar', 'michel f. valstar')
('39532631', 'Bihan Jiang', 'bihan jiang')
b166ce267ddb705e6ed855c6b679ec699d62e9cbTurk J Elec Eng & Comp Sci
(2017) 25: 4421–4430
© TÜBİTAK
doi:10.3906/elk-1702-49
Sample group and misplaced atom dictionary learning for face recognition
Faculty of Electronics and Communication, Yanshan University
Faculty of Electronics and Communication, Taishan University
Qinhuangdao, P.R. China
Tai’an, P.R. China
Received: 04.02.2017 • Accepted/Published Online: 01.06.2017 • Final Version: 05.10.2017
('39980529', 'Meng Wang', 'meng wang')
('49576759', 'Zhe Sun', 'zhe sun')
('6410069', 'Mei Zhu', 'mei zhu')
('49632877', 'Mei Sun', 'mei sun')
b13e2e43672e66ba45d1b852a34737e4ce04226bCROWLEY, PARKHI, ZISSERMAN: FACE PAINTING
Face Painting: querying art with photos
Elliot J. Crowley
Visual Geometry Group
Department of Engineering Science
University of Oxford
('3188342', 'Omkar M. Parkhi', 'omkar m. parkhi')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
elliot@robots.ox.ac.uk
omkar@robots.ox.ac.uk
az@robots.ox.ac.uk
b1e4f8c15ff30cc7d35ab25ff3eddaf854e0a87cRESEARCH ARTICLE
Conveying facial expressions to blind and
visually impaired persons through a wearable
vibrotactile device
MIRA Institute, University of Twente, Enschede, The
Netherlands, Donders Institute, Radboud University, Nijmegen, The
Netherlands, 3 VicarVision, Amsterdam, The Netherlands, 4 Department of Media, Communication, &
Organization, University of Twente, Enschede, The Netherlands, HAN
University of Applied Sciences, Arnhem, The Netherlands
('1950480', 'Hendrik P. Buimer', 'hendrik p. buimer')
('25188062', 'Marian Bittner', 'marian bittner')
('3427220', 'Tjerk Kostelijk', 'tjerk kostelijk')
('49432294', 'Abdellatif Nemri', 'abdellatif nemri')
('2968885', 'Richard J. A. van Wezel', 'richard j. a. van wezel')
* h.buimer@donders.ru.nl
b1301c722886b6028d11e4c2084ee96466218be4
b15a06d701f0a7f508e3355a09d0016de3d92a6dRunning head: FACIAL CONTRAST LOOKS HEALTHY
1
Facial contrast is a cue for perceiving health from the face
Mauger2, Frederique Morizot2
Gettysburg College, Gettysburg, PA, USA
2 CHANEL Recherche et Technologie, Chanel PB
3 Université Grenoble Alpes
Author Note
Psychologie et NeuroCognition, Université Grenoble Alpes.
This is a prepublication copy. This article may not exactly replicate the authoritative document
published in the APA journal. It is not the copy of record. The authoritative document can be
found through this DOI: http://psycnet.apa.org/doi/10.1037/xhp0000219
('40482411', 'Richard Russell', 'richard russell')
('4556101', 'Aurélie Porcheron', 'aurélie porcheron')
('40482411', 'Richard Russell', 'richard russell')
('4556101', 'Aurélie Porcheron', 'aurélie porcheron')
('6258499', 'Emmanuelle Mauger', 'emmanuelle mauger')
('4556101', 'Aurélie Porcheron', 'aurélie porcheron')
('40482411', 'Richard Russell', 'richard russell')
College, Gettysburg, PA 17325, USA. Email: rrussell@gettysburg.edu
b1c5581f631dba78927aae4f86a839f43646220c
b18858ad6ec88d8b443dffd3e944e653178bc28bPurdue University
Purdue e-Pubs
Department of Computer Science Technical
Reports
Department of Computer Science
2017
Trojaning Attack on Neural Networks
See next page for additional authors
Report Number:
17-002
Liu, Yingqi; Ma, Shiqing; Aafer, Yousra; Lee, Wen-Chuan; Zhai, Juan; Wang, Weihang; and Zhang, Xiangyu, "Trojaning Attack on
Neural Networks" (2017). Department of Computer Science Technical Reports. Paper 1781.
https://docs.lib.purdue.edu/cstech/1781
('3347155', 'Yingqi Liu', 'yingqi liu')
('40306181', 'Shiqing Ma', 'shiqing ma')
('3216258', 'Yousra Aafer', 'yousra aafer')
('2547748', 'Wen-Chuan Lee', 'wen-chuan lee')
('3293342', 'Juan Zhai', 'juan zhai')
Purdue University, liu1751@purdue.edu
Purdue University, ma229@purdue.edu
Purdue University, yaafer@purdue.edu
Purdue University, lee1938@purdue.edu
Nanjing University, China, zhaijuan@nju.edu.cn
This document has been made available through Purdue e-Pubs, a service of the Purdue University Libraries. Please contact epubs@purdue.edu for additional information.
b1444b3bf15eec84f6d9a2ade7989bb980ea7bd1LOCAL DIRECTIONAL RELATION PATTERN
Local Directional Relation Pattern for
Unconstrained and Robust Face Retrieval
('34992579', 'Shiv Ram Dubey', 'shiv ram dubey')
b133b2d7df9b848253b9d75e2ca5c68e21eba008Kobe University, NICT and University of Siegen
at TRECVID 2017 AVS Task
Graduate School of System Informatics, Kobe University
Center for Information and Neural Networks, National Institute of Information and Communications Technology (NICT
Pattern Recognition Group, University of Siegen
('2240008', 'Zhenying He', 'zhenying he')
('8183718', 'Takashi Shinozaki', 'takashi shinozaki')
('1707938', 'Kimiaki Shirahama', 'kimiaki shirahama')
('1727057', 'Marcin Grzegorzek', 'marcin grzegorzek')
('1711781', 'Kuniaki Uehara', 'kuniaki uehara')
jennyhe@ai.cs.kobe-u.ac.jp, uehara@kobe-u.ac.jp
tshino@nict.go.jp
kimiaki.shirahama@uni-siegen.de, marcin.grzegorzek@uni-siegen.de
b1451721864e836069fa299a64595d1655793757Criteria Sliders: Learning Continuous
Database Criteria via Interactive Ranking
Brown University 2University of Bath
Harvard University 4Max Planck Institute for Informatics
('1854493', 'James Tompkin', 'james tompkin')
('1808255', 'Kwang In Kim', 'kwang in kim')
('1680185', 'Christian Theobalt', 'christian theobalt')
b1df214e0f1c5065f53054195cd15012e660490aSupplementary Material to Sparse Coding and Dictionary Learning with Linear
Dynamical Systems∗
Tsinghua University, State Key Lab. of Intelligent
Technology and Systems, Tsinghua National Lab. for Information Science and Technology (TNList);
Australian National University and NICTA, Australia
In this supplementary material, we present the proofs of Theorems (1-3), the algorithm for learning the transition matrix
of LDSST, and the reconstruction error approach for classification in LDS-SC, LDSST-SC and covLDSST-SC. In addition,
we describe the details of the benchmark datasets that are applied in our experiments. Our dictionary learning algorithm for
anomaly detection is also explored in this supplementary material.
1. Proofs
Theorem 1. Suppose $V_1, V_2, \cdots, V_M \in \mathcal{S}(n, \infty)$ and $y_1, y_2, \cdots, y_M \in \mathbb{R}$; we have
$$\Big\| \sum_{i=1}^{M} y_i \Pi(V_i) \Big\|_F^2 = \sum_{i,j=1}^{M} y_i y_j \big\| V_i^T V_j \big\|_F^2,$$
where $V_i^T V_j = L_i^{-1} O_i^T O_j L_j^{-T}$. Here $O_i^T O_j$ can be computed with the Lyapunov equation defined in Equation (2), and $L_i$ and $L_j$ are the Cholesky decomposition matrices for $O_i^T O_i$ and $O_j^T O_j$, respectively.

Proof. We denote the sub-matrix of the extended observability matrix $O_i$ as $O_i(t) = [C_i^T, (C_i A_i)^T, \cdots, (C_i A_i^{t-1})^T]^T$ by taking the first $t$ rows. We suppose that the Cholesky decomposition matrix for $O_i$ is $L_i$ and denote $V_i(t) = O_i(t) L_i^{-T}$. Then, we derive
$$\begin{aligned}
\Big\| \sum_{i=1}^{M} y_i \Pi(V_i) \Big\|_F^2
&= \lim_{t\to\infty} \Big\| \sum_{i=1}^{M} y_i V_i(t) V_i(t)^T \Big\|_F^2 \\
&= \lim_{t\to\infty} \mathrm{Tr}\Big[ \Big( \sum_{i=1}^{M} y_i V_i(t) V_i(t)^T \Big) \Big( \sum_{j=1}^{M} y_j V_j(t) V_j(t)^T \Big) \Big] \\
&= \lim_{t\to\infty} \sum_{i,j=1}^{M} y_i y_j \mathrm{Tr}\big( V_i(t)^T V_j(t) V_j(t)^T V_i(t) \big) \\
&= \sum_{i,j=1}^{M} y_i y_j \lim_{t\to\infty} \big\| V_i(t)^T V_j(t) \big\|_F^2 \\
&= \sum_{i,j=1}^{M} y_i y_j \lim_{t\to\infty} \big\| L_i^{-1} \big( O_i(t)^T O_j(t) \big) L_j^{-T} \big\|_F^2 \\
&= \sum_{i,j=1}^{M} y_i y_j \big\| L_i^{-1} O_{ij} L_j^{-T} \big\|_F^2, \qquad (13)
\end{aligned}$$
where $O_{ij} = O_i^T O_j$.
∗This work is jointly supported by National Natural Science Foundation of China under Grant No. 61327809, 61210013, 91420302 and 91520201.
('8984539', 'Wenbing Huang', 'wenbing huang')
('40203750', 'Fuchun Sun', 'fuchun sun')
('2507718', 'Lele Cao', 'lele cao')
('1678783', 'Deli Zhao', 'deli zhao')
('31833173', 'Huaping Liu', 'huaping liu')
('23911916', 'Mehrtash Harandi', 'mehrtash harandi')
1{huangwb12@mails, fcsun@mail, caoll12@mails, hpliu@mail}.tsinghua.edu.cn,
zhaodeli@gmail.com,
Mehrtash.Harandi@nicta.com.au,
b185f0a39384ceb3c4923196aeed6d68830a069fDescribing Clothing by Semantic Attributes
Stanford University, Stanford, California
Kodak Research Laboratories, Rochester, New York
Cornell University, Ithaca, New York
('2896700', 'Huizhong Chen', 'huizhong chen')
('1739786', 'Bernd Girod', 'bernd girod')
b19e83eda4a602abc5a8ef57467c5f47f493848dJOURNAL OF LATEX CLASS FILES
Heat Kernel Based Local Binary Pattern for
Face Representation
('38979129', 'Xi Li', 'xi li')
('40506509', 'Weiming Hu', 'weiming hu')
('1720488', 'Zhongfei Zhang', 'zhongfei zhang')
('37414077', 'Hanzi Wang', 'hanzi wang')
b1429e4d3dd3412e92a37d2f9e0721ea719a9b9ePerson Re-identification Using Multiple First-Person-Views on Wearable Devices
Nanyang Technological University, Singapore
Institute for Infocomm Research (I2R), A*STAR, Singapore
Istituto Italiano di Tecnologia (IIT), Genova, 16163, Italy
('37287044', 'Anirban Chakraborty', 'anirban chakraborty')
('1709001', 'Bappaditya Mandal', 'bappaditya mandal')
('2860592', 'Hamed Kiani Galoogahi', 'hamed kiani galoogahi')
a.chakraborty@ntu.edu.sg
bmandal@i2r.a-star.edu.sg
kiani.galoogahi@iit.it
b1fdd4ae17d82612cefd4e78b690847b071379d3Supervised Descent Method
CMU-RI-TR-15-28
September 2015
The Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
Thesis Committee:
Fernando De la Torre, Chair
Srinivasa Narasimhan
Kris Kitani
Aleix Martinez
Submitted in partial fulfillment of the requirements
for the degree of Doctor of Philosophy in Robotics.
('3182065', 'Xuehan Xiong', 'xuehan xiong')
('3182065', 'Xuehan Xiong', 'xuehan xiong')
dde5125baefa1141f1ed50479a3fd67c528a965fSynthesizing Normalized Faces from Facial Identity Features
Google, Inc. 2University of Massachusetts Amherst 3MIT CSAIL
('39578349', 'Forrester Cole', 'forrester cole')
('1707347', 'Dilip Krishnan', 'dilip krishnan')
{fcole, dbelanger, dilipkay, sarna, inbarm, wfreeman}@google.com
dd8084b2878ca95d8f14bae73e1072922f0cc5daModel Distillation with Knowledge Transfer from
Face Classification to Alignment and Verification
Beijing Orion Star Technology Co., Ltd. Beijing, China
('1747751', 'Chong Wang', 'chong wang')
('26403761', 'Xipeng Lan', 'xipeng lan')
{chongwang.nlpr, xipeng.lan, caveman1984}@gmail.com
ddf55fc9cf57dabf4eccbf9daab52108df5b69aaInternational Journal of Grid and Distributed Computing
Vol. 4, No. 3, September, 2011
Methodology and Performance Analysis of 3-D Facial Expression
Recognition Using Statistical Shape Representation
ADSIP Research Centre, University of Central Lancashire
School of Psychology, University of Central Lancashire
('2343120', 'Wei Quan', 'wei quan')
('2647218', 'Bogdan J. Matuszewski', 'bogdan j. matuszewski')
('2550166', 'Lik-Kwan Shark', 'lik-kwan shark')
('2942330', 'Charlie Frowd', 'charlie frowd')
{WQuan, BMatuszewski1, LShark}@uclan.ac.uk
CFrowd@uclan.ac.uk
dd85b6fdc45bf61f2b3d3d92ce5056c47bd8d335Unsupervised Learning and Segmentation of Complex Activities from Video
University of Bonn, Germany
('34678431', 'Fadime Sener', 'fadime sener')
('2569989', 'Angela Yao', 'angela yao')
{sener,yao}@cs.uni-bonn.de
dda35768681f74dafd02a667dac2e6101926a279MULTI-LAYER TEMPORAL GRAPHICAL MODEL
FOR HEAD POSE ESTIMATION IN REAL-WORLD VIDEOS
McGill University
Centre for Intelligent Machines,
('2515930', 'Meltem Demirkus', 'meltem demirkus')
('1724729', 'Doina Precup', 'doina precup')
('1713608', 'James J. Clark', 'james j. clark')
('1699104', 'Tal Arbel', 'tal arbel')
dd0760bda44d4e222c0a54d41681f97b3270122b
ddea3c352f5041fb34433b635399711a90fde0e8Facial Expression Classification using Visual Cues and Language
Department of Computer Science and Engineering, IIT Kanpur
('2094658', 'Abhishek Kar', 'abhishek kar')
('1803835', 'Amitabha Mukerjee', 'amitabha mukerjee')
{akar,amit}@iitk.ac.in
dd033d4886f2e687b82d893a2c14dae02962ea70Electronic Letters on Computer Vision and Image Analysis 11(1):41-54; 2012
Facial Expression Recognition Using New Feature Extraction
Algorithm
National Cheng Kung University, Tainan, Taiwan
Received 10th Oct. 2011; accepted 5th Sep. 2012
('2499819', 'Hung-Fu Huang', 'hung-fu huang')
('1751725', 'Shen-Chuan Tai', 'shen-chuan tai')
ddbd24a73ba3d74028596f393bb07a6b87a469c0Multi-region two-stream R-CNN
for action detection
Inria(cid:63)
('1766837', 'Xiaojiang Peng', 'xiaojiang peng')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
{xiaojiang.peng,cordelia.schmid}@inria.fr
ddf099f0e0631da4a6396a17829160301796151cIEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
Learning Face Image Quality from
Human Assessments
('2180413', 'Lacey Best-Rowden', 'lacey best-rowden')
('40217643', 'Anil K. Jain', 'anil k. jain')
dd0a334b767e0065c730873a95312a89ef7d1c03Eigenexpressions: Emotion Recognition using Multiple
Eigenspaces
Luis Marco-Giménez1, Miguel Arevalillo-Herráez1, and Cristina Cuhna-Pérez2

Burjassot. Valencia 46100, Spain,
2 Universidad Católica San Vicente Mártir de Valencia (UCV),
Burjassot. Valencia. Spain
margi4@alumni.uv.es
dd2f6a1ba3650075245a422319d86002e1e87808
ddaa8add8528857712424fd57179e5db6885df7cMETTES, SNOEK, CHANG: ACTION LOCALIZATION WITH PSEUDO-ANNOTATIONS
Localizing Actions from Video Labels
and Pseudo-Annotations
Cees G.M. Snoek1
University of Amsterdam
Amsterdam, NL
Columbia University
New York, USA
('2606260', 'Pascal Mettes', 'pascal mettes')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
dd8d53e67668067fd290eb500d7dfab5b6f730dd69
A Parameter-Free Framework for General
Supervised Subspace Learning
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('7137861', 'Jianzhuang Liu', 'jianzhuang liu')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
ddbb6e0913ac127004be73e2d4097513a8f02d37264
IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 1, NO. 3, SEPTEMBER 1999
Face Detection Using Quantized Skin Color
Regions Merging and Wavelet Packet Analysis
('34798028', 'Christophe Garcia', 'christophe garcia')
('2441655', 'Georgios Tziritas', 'georgios tziritas')
dd600e7d6e4443ebe87ab864d62e2f4316431293
dc550f361ae82ec6e1a0cf67edf6a0138163382e
ISSN XXXX XXXX © 2018 IJESC
Research Article Volume 8 Issue No.3
Emotion Based Music Player
Professor1, UG Student2, 3, 4, 5, 6
Department of Electronics Engineering
K.D.K. College of Engineering Nagpur, India
('9217928', 'Vijay Chakole', 'vijay chakole')
('48228560', 'Kalyani Trivedi', 'kalyani trivedi')
dcf71245addaf66a868221041aabe23c0a074312S3FD: Single Shot Scale-invariant Face Detector
CBSR and NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China
University of Chinese Academy of Sciences, Beijing, China
('3220556', 'Shifeng Zhang', 'shifeng zhang')
{shifeng.zhang,xiangyu.zhu,zlei,hailin.shi,xiaobo.wang,szli}@nlpr.ia.ac.cn
dcb44fc19c1949b1eda9abe998935d567498467dProceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)
1916
dcc38db6c885444694f515d683bbb50521ff3990Learning to hallucinate face images via Component Generation and Enhancement
City University of Hong Kong
South China University of Technology
3Tencent AI Lab
University of Science and Technology of China
('2255687', 'Yibing Song', 'yibing song')
('1718428', 'Jiawei Zhang', 'jiawei zhang')
('2548483', 'Shengfeng He', 'shengfeng he')
('2780029', 'Linchao Bao', 'linchao bao')
('1777434', 'Qingxiong Yang', 'qingxiong yang')
dc5cde7e4554db012d39fc41ac8580f4f6774045FAKTOR, IRANI: VIDEO SEGMENTATION BY NON-LOCAL CONSENSUS VOTING
Video Segmentation by Non-Local
Consensus Voting
http://www.wisdom.weizmann.ac.il/~alonf/
http://www.wisdom.weizmann.ac.il/~irani/
Dept. of Computer Science and
Applied Math
The Weizmann Institute of Science
ISRAEL
('2859022', 'Alon Faktor', 'alon faktor')
('1696887', 'Michal Irani', 'michal irani')
dc7df544d7c186723d754e2e7b7217d38a12fcf7Facial expression recognition using salient facial patches
MIRACL-ENET’COM
University of Sfax
Tunisia (3018), Sfax
MIRACL-FSS
University of Sfax
Tunisia (3018), Sfax
('2049116', 'Hazar Mliki', 'hazar mliki')
('1749733', 'Mohamed Hammami', 'mohamed hammami')
mliki.hazar@gmail.com
mohamed.hammami@fss.rnu.tn
dc77287bb1fcf64358767dc5b5a8a79ed9abaa53Fashion Conversation Data on Instagram
∗Graduate School of Culture Technology, KAIST, South Korea
†Department of Communication Studies, UCLA, USA
('3459091', 'Yu-i Ha', 'yu-i ha')
('2399803', 'Sejeong Kwon', 'sejeong kwon')
('1775511', 'Meeyoung Cha', 'meeyoung cha')
('1834047', 'Jungseock Joo', 'jungseock joo')
dc2e805d0038f9d1b3d1bc79192f1d90f6091ecb
dced05d28f353be971ea2c14517e85bc457405f3Multimodal Priority Verification of Face and Speech
Using Momentum Back-Propagation Neural Network
1 Image Processing and Intelligent Systems Laboratory, Department of Image Engineering,
Graduate School of Advanced Imaging Science, Multimedia, and Film, Chung-Ang University
221 Huksuk-dong, Tongjak-Ku, Seoul 156-756, Korea,
2 Broadcasting Media Research Group, Digital Broadcasting Research Division, ETRI, 161
Gajeong-dong, Yuseong-Gu, Daejeon 305-700, Korea,
3 Intelligent Image Communication Laboratory, Department of Computer Engineering,
Kwangwoon University, 447-1 Wolge-dong, Nowon-Gu, Seoul 139-701, Korea
('1727735', 'Changhan Park', 'changhan park')
('1722181', 'Myungseok Ki', 'myungseok ki')
('1723542', 'Jaechan Namkung', 'jaechan namkung')
('1684329', 'Joonki Paik', 'joonki paik')
initialchp@wm.cau.ac.kr, http://ipis.cau.ac.kr,
kkim@etri.re.kr, http://www.etri.re.kr,
namjc@daisy.kw.ac.kr, http://vision.kw.ac.kr.
dce5e0a1f2cdc3d4e0e7ca0507592860599b0454Facelet-Bank for Fast Portrait Manipulation
The Chinese University of Hong Kong
2Tencent Youtu Lab
Johns Hopkins University
('2070527', 'Ying-Cong Chen', 'ying-cong chen')
('40898180', 'Yangang Ye', 'yangang ye')
('1729056', 'Jiaya Jia', 'jiaya jia')
{ycchen, linhj, ryli, xtao}@cse.cuhk.edu.hk
goodshenxy@gmail.com
Mshu1@jhu.edu
yangangye@tecent.com
leojia9@gmail.com
dc9d62087ff93a821e6bb8a15a8ae2da3e39dcddLearning with Confident Examples:
Rank Pruning for Robust Classification with Noisy Labels
Massachusetts Institute of Technology
Cambridge, MA 02139
('39972987', 'Curtis G. Northcutt', 'curtis g. northcutt')
('3716141', 'Tailin Wu', 'tailin wu')
('1706040', 'Isaac L. Chuang', 'isaac l. chuang')
{cgn, tailin, ichuang}@mit.edu
dcce3d7e8d59041e84fcdf4418702fb0f8e35043Probabilistic Identity Characterization for Face Recognition∗
Center for Automation Research (CfAR) and
Department of Electrical and Computer Engineering
University of Maryland, College Park, MD
('1682187', 'Shaohua Kevin Zhou', 'shaohua kevin zhou')
('9215658', 'Rama Chellappa', 'rama chellappa')
{shaohua, rama}@cfar.umd.edu
dce3dff9216d63c4a77a2fcb0ec1adf6d2489394Manifold Learning for Gender Classification
from Face Sequences
Machine Vision Group, P.O. Box 4500, FI-90014, University of Oulu, Finland
('1751372', 'Abdenour Hadid', 'abdenour hadid')
dc974c31201b6da32f48ef81ae5a9042512705feAm I done? Predicting Action Progress in Video
1 Media Integration and Communication Center, Univ. of Florence, Italy
2 Department of Mathematics “Tullio Levi-Civita”, Univ. of Padova, Italy
('41172759', 'Federico Becattini', 'federico becattini')
('1789269', 'Tiberio Uricchio', 'tiberio uricchio')
('2831602', 'Lorenzo Seidenari', 'lorenzo seidenari')
('8196487', 'Alberto Del Bimbo', 'alberto del bimbo')
('1795847', 'Lamberto Ballan', 'lamberto ballan')
b6f758be954d34817d4ebaa22b30c63a4b8ddb35A Proximity-Aware Hierarchical Clustering of Faces
University of Maryland, College Park
('3329881', 'Wei-An Lin', 'wei-an lin')
('36407236', 'Jun-Cheng Chen', 'jun-cheng chen')
('9215658', 'Rama Chellappa', 'rama chellappa')
walin@terpmail.umd.edu, pullpull@cs.umd.edu, rama@umiacs.umd.edu
b62571691a23836b35719fc457e093b0db187956 Volume 3, Issue 5, May 2013 ISSN: 2277 128X
International Journal of Advanced Research in
Computer Science and Software Engineering
Research Paper
Available online at: www.ijarcsse.com
A Novel approach for securing biometric template
Dr.Chander Kant
Department of computer Science & applications Department of computer Science & applications
Kurukshetra University, Kurukshetra, India
Kurukshetra University, Kurukshetra, India
('3384880', 'Shweta Malhotra', 'shweta malhotra')
b69b239217d4e9a20fe4fe1417bf26c94ded9af9A Temporally-Aware Interpolation Network for
Video Frame Inpainting
University of Michigan, Ann Arbor, USA
('2582303', 'Ximeng Sun', 'ximeng sun')
('34246012', 'Ryan Szeto', 'ryan szeto')
('3587688', 'Jason J. Corso', 'jason j. corso')
{sunxm,szetor,jjcorso}@umich.edu
b6c047ab10dd86b1443b088029ffe05d79bbe257
b6052dc718c72f2506cfd9d29422642ecf3992efA Survey on Human Motion Analysis from
Depth Data
University of Kentucky, 329 Rose St., Lexington, KY, 40508, U.S.A
2 Microsoft, One Microsoft Way, Redmond, WA, 98052, U.S.A
3 SRI International Sarnoff, 201 Washington Rd, Princeton, NJ, 08540, U.S.A
University of Bonn, Roemerstrasse 164, 53117 Bonn, Germany
('3876303', 'Mao Ye', 'mao ye')
('1681771', 'Qing Zhang', 'qing zhang')
('40476140', 'Liang Wang', 'liang wang')
('2446676', 'Jiejie Zhu', 'jiejie zhu')
('38958903', 'Ruigang Yang', 'ruigang yang')
('2946643', 'Juergen Gall', 'juergen gall')
mao.ye@uky.edu, qing.zhang@uky.edu, ryang@cs.uky.edu
liangwan@microsoft.com
jiejie.zhu@sri.com
gall@iai.uni-bonn.de
b6145d3268032da70edc9cfececa1f9ffa4e3f11c© 2001 Kluwer Academic Publishers. Manufactured in The Netherlands.
Face Recognition Using the Discrete Cosine Transform
Center for Intelligent Machines, McGill University, 3480 University Street, Montreal, Canada H3A 2A
('1693521', 'Ziad M. Hafed', 'ziad m. hafed')
('3631473', 'Martin D. Levine', 'martin d. levine')
zhafed@cim.mcgill.ca
levine@cim.mcgill.ca
b6c53891dff24caa1f2e690552a1a5921554f994
b6ef158d95042f39765df04373c01546524c9ccdIm2vid: Future Video Prediction for Static Image Action
Recognition
Badour Ahmad AlBahar
Thesis submitted to the Faculty of the
Virginia Polytechnic Institute and State University
in partial fulfillment of the requirements for the degree of
Master of Science
in
Computer Engineering
Jia-Bin Huang, Chair
A. Lynn Abbott
Pratap Tokekar
May 9, 2018
Blacksburg, Virginia
Keywords: Human Action Recognition, Static Image Action Recognition, Video Action
Recognition, Future Video Prediction.
Copyright 2018, Badour Ahmad AlBahar
b68150bfdec373ed8e025f448b7a3485c16e3201Adversarial Image Perturbation for Privacy Protection
A Game Theory Perspective
Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany
('2390510', 'Seong Joon Oh', 'seong joon oh')
('1739548', 'Mario Fritz', 'mario fritz')
('1697100', 'Bernt Schiele', 'bernt schiele')
{joon,mfritz,schiele}@mpi-inf.mpg.de
b613b30a7cbe76700855479a8d25164fa7b6b9f11
Identifying User-Specific Facial Affects from
Spontaneous Expressions with Minimal Annotation
('23417737', 'Michael Xuelin Huang', 'michael xuelin huang')
('1706729', 'Grace Ngai', 'grace ngai')
('1730455', 'Kien A. Hua', 'kien a. hua')
('1714454', 'Hong Va Leong', 'hong va leong')
b64cfb39840969b1c769e336a05a30e7f9efcd61ORIGINAL RESEARCH
published: 15 June 2016
doi: 10.3389/fict.2016.00009
CRF-Based Context Modeling for
Person Identification in Broadcast
Videos
LIUM Laboratory, Le Mans, France, 2 Idiap Research Institute, Martigny, Switzerland
We are investigating the problem of speaker and face identification in broadcast videos.
Identification is performed by associating automatically extracted names from overlaid
texts with speaker and face clusters. We aimed at exploiting the structure of news
videos to solve name/cluster association ambiguities and clustering errors. The proposed
approach iteratively combines two conditional random fields (CRF). The first CRF performs
the person diarization (joint temporal segmentation, clustering, and association of voices
and faces) jointly over the speech segments and the face tracks. It benefits from
contextual information extracted from the image backgrounds and the overlaid
texts. The second CRF associates names with person clusters, thanks to co-occurrence
statistics. Experiments conducted on a recent and substantial public dataset containing
reports and debates demonstrate the value and complementarity of the different
modeling steps and information sources: the use of these elements enables us to obtain
better performance in clustering and identification, especially in studio scenes.
Keywords: face identification, speaker identification, broadcast videos, conditional random field, face clustering,
speaker diarization
1. INTRODUCTION
For the last two decades, researchers have been trying to create indexing and fast search and
browsing tools capable of handling the growing amount of available video collections. Among the
associated possibilities, person identification is an important one. Indeed, video contents can often
be browsed through the appearances of their different actors. Moreover, the availability of each
person intervention allows easier access to video structure elements, such as the scene segmentation.
Both motivations are especially verified in the case of news collections. The focus of this paper is,
therefore, to develop a program able to identify persons in broadcast videos. That is, the program
must be able to provide all temporal segments corresponding to each face and speaker.
Person identification can be supervised. A face and/or a speaker model of the queried person is
then learned over manually labeled training data. However, this raises the problem of annotation
cost. An unsupervised and complementary approach consists of using the naming information
already present in the documents. Such resources include overlaid texts, speech transcripts, and
metadata. Motivated by this opportunity, unsupervised identification has been investigated for
15 years from the early work of Satoh et al. (1999) to the development of more complex news-
browsing systems exploiting this paradigm (Jou et al., 2013), or thanks to sponsored competitions
(Giraudel et al., 2012). Whatever the source of naming information, it must tackle two main
obstacles: associate the names to co-occurring speech and face segments and propagate this naming
information from the co-occurring segments to the other segments of this person.
Edited by:
Shin’Ichi Satoh,
National Institute of Informatics, Japan
Reviewed by:
Thanh Duc Ngo,
Vietnam National University Ho Chi
Minh City, Vietnam
Ichiro Ide,
Nagoya University, Japan
*Correspondence:
Specialty section:
This article was submitted to
Computer Image Analysis, a section
of the journal Frontiers in ICT
Received: 16 October 2015
Accepted: 12 May 2016
Published: 15 June 2016
Citation:
Gay P, Meignier S, Deléglise P and
Odobez J-M (2016) CRF-Based
Context Modeling for Person
Identification in Broadcast Videos.
doi: 10.3389/fict.2016.00009
Frontiers in ICT | www.frontiersin.org
June 2016 | Volume 3 | Article 9
('14556501', 'Paul Gay', 'paul gay')
('2446815', 'Sylvain Meignier', 'sylvain meignier')
('1682046', 'Paul Deléglise', 'paul deléglise')
('1719610', 'Jean-Marc Odobez', 'jean-marc odobez')
odobez@idiap.ch
b6f682648418422e992e3ef78a6965773550d36bFebruary 8, 2017
b689d344502419f656d482bd186a5ee6b01408912009, Vol. 9, No. 2, 260 –264
© 2009 American Psychological Association
1528-3542/09/$12.00 DOI: 10.1037/a0014681
CORRECTED JULY 1, 2009; SEE LAST PAGE
BRIEF REPORTS
Christopher P. Said
Princeton University
University of Amsterdam, University of Trento, Italy
Princeton University
People make trait inferences based on facial appearance despite little evidence that these inferences
accurately reflect personality. The authors tested the hypothesis that these inferences are driven in part
neutral faces on a set of trait dimensions. The authors then submitted the face images to a Bayesian
expression. In general, neutral faces that are perceived to have positive valence resemble happiness, faces
that are perceived to have negative valence resemble disgust and fear, and faces that are perceived to be
threatening resemble anger. These results support the idea that trait inferences are in part the result of an
then be misattributed as traits.
People evaluate neutral faces on multiple trait dimensions and
these evaluations have social consequences (Hassin & Trope,
2000). For instance, political candidates whose faces are perceived
as more competent are more likely to win elections (Ballew &
Todorov, 2007; Todorov, Mandisodza, Goren, & Hall, 2005), and
cadets whose faces are perceived as more dominant are more likely
to be promoted to higher military ranks (Mazur, Mazur, & Keating,
1984).
Although inferences about traits based on facial appearance are
made reliably across observers, there is little evidence that these
inferences accurately reflect the personality of the observed face.
Most correlations between perceived traits and actual traits are
weak though positive (Bond, Berry, & Omar, 1994), some are
inconsistent for men and women (Zebrowitz, Voinescu, & Collins,
1996), and some are negative (Zebrowitz, Andreoletti, Collins,
ogy and the Center for the Study of Brain, Mind and Behavior at
University of Amsterdam, Amsterdam and University of Trento
We thank Valerie Loehr for her assistance with the acquisition of trait
ratings, and Nick Oosterhof for helpful discussions. This research was
supported by National Science Foundation Grant BCS-0446846.
Correspondence should be addressed to Christopher P. Said, Department
of Psychology, Princeton University, Princeton, NJ 08540. E-mail
260
Lee, & Blumenthal, 1998). It is therefore puzzling that people
make reliable and rapid trait inferences from faces (Willis &
Todorov, 2006) when only little accurate information, at best, is
provided about personality. One intriguing explanation is that
neutral faces may contain structural properties that cause them to
resemble faces with more accurate and ecologically relevant in-
son, 1996; Montepare & Dobish, 2003).
Under this hypothesis, the adaptive ability to recognize emo-
tions overgeneralizes to neutral faces that merely bear a subtle
faces vary on trait dimensions such as trustworthiness (Engell,
Haxby, & Todorov, 2007). One possibility is that the source of
consensus in judging faces on social dimensions is the similarity of
the face to expressions corresponding to the dimension of trait
judgment (e.g., aggressiveness and anger). When given the task of
could base their judgments on this similarity. Evidence for this
hypothesis comes from research showing that the more a neutral
face is rated as happy by one group of participants the higher it is
rated on dominance and affiliation by another group of partici-
pants, and the more a face is rated as angry the higher it is rated on
dominance and the lower on affiliation (Montepare & Dobish,
2003). One interpretation of these findings is that people misat-
('1703601', 'Nicu Sebe', 'nicu sebe')
('2913698', 'Alexander Todorov', 'alexander todorov')
csaid@princeton.edu
b6d3caccdcb3fbce45ce1a68bb5643f7e68dadb3Learning Spatio-Temporal Representation with Pseudo-3D Residual Networks ∗
University of Science and Technology of China, Hefei, China
‡ Microsoft Research, Beijing, China
('3430743', 'Zhaofan Qiu', 'zhaofan qiu')
('2053452', 'Ting Yao', 'ting yao')
('1724211', 'Tao Mei', 'tao mei')
zhaofanqiu@gmail.com, {tiyao, tmei}@microsoft.com
b6d0e461535116a675a0354e7da65b2c1d2958d4Deep Directional Statistics:
Pose Estimation with
Uncertainty Quantification
Max Planck Institute for Intelligent Systems, Tübingen, Germany
2 Amazon, Tübingen, Germany
3 Microsoft Research, Cambridge, UK
('15968671', 'Sergey Prokudin', 'sergey prokudin')
('2388416', 'Sebastian Nowozin', 'sebastian nowozin')
sergey.prokudin@tuebingen.mpg.de
b656abc4d1e9c8dc699906b70d6fcd609fae8182
b6a01cd4572b5f2f3a82732ef07d7296ab0161d3Kernel-Based Supervised Discrete Hashing for
Image Retrieval
University of Florida, Gainesville, FL, 32611, USA
('2766473', 'Xiaoshuang Shi', 'xiaoshuang shi')
('2082604', 'Fuyong Xing', 'fuyong xing')
('3457945', 'Jinzheng Cai', 'jinzheng cai')
('2476328', 'Zizhao Zhang', 'zizhao zhang')
('1877955', 'Yuanpu Xie', 'yuanpu xie')
('1705066', 'Lin Yang', 'lin yang')
xsshi2015@ufl.edu
a9791544baa14520379d47afd02e2e7353df87e5Technical Note
The Need for Careful Data Collection for Pattern Recognition in
Digital Pathology
Montefiore Institute, University of Liège, 4000 Liège, Belgium
Received: 08 December 2016
Accepted: 15 March 2017
Published: 10 April 2017
('1689882', 'Raphaël Marée', 'raphaël marée')
a9eb6e436cfcbded5a9f4b82f6b914c7f390adbd(IJARAI) International Journal of Advanced Research in Artificial Intelligence,
Vol. 5, No.6, 2016
A Model for Facial Emotion Inference Based on
Planar Dynamic Emotional Surfaces
Ruivo, J. P. P.
Negreiros, T.
Barretto, M. R. P.
Tinen, B.
Escola Politécnica, Universidade de São Paulo
São Paulo, Brazil
a955033ca6716bf9957b362b77092592461664b4 ISSN(Online): 2320-9801
ISSN (Print): 2320-9798
International Journal of Innovative Research in Computer
and Communication Engineering
(An ISO 3297: 2007 Certified Organization)
Vol. 3, Issue 6, June 2015
Video Based Face Recognition Using Artificial Neural Network
Pursuing M.Tech, Caarmel Engineering College, MG University, Kerala, India
Caarmel Engineering College, MG University, Kerala, India
a956ff50ca958a3619b476d16525c6c3d17ca264A Novel Bidirectional Neural Network for Face Recognition
Jalil Mazloum, Ali Jalali and Javad Amiryan
Electrical and Computer Engineering Department
Shahid Beheshti University
Tehran, Iran
J_Mazloum@sbu.ac.ir, A_Jalali@sbu.ac.ir, Amiryan.j@robocyrus.ir
a92adfdd8996ab2bd7cdc910ea1d3db03c66d34f
a98316980b126f90514f33214dde51813693fe0dCollaborations on YouTube: From Unsupervised Detection to the
Impact on Video and Channel Popularity
Multimedia Communications Lab (KOM), Technische Universität Darmstadt, Germany
('49495293', 'Christian Koch', 'christian koch')
('46203604', 'Moritz Lode', 'moritz lode')
('2214486', 'Denny Stohr', 'denny stohr')
('2869441', 'Amr Rizk', 'amr rizk')
('1725298', 'Ralf Steinmetz', 'ralf steinmetz')
E-Mail: {Christian.Koch | Denny.Stohr | Amr.Rizk | Ralf.Steinmetz}@kom.tu-darmstadt.de
a93781e6db8c03668f277676d901905ef44ae49fRecent Datasets on Object Manipulation: A Survey
('3112203', 'Yongqiang Huang', 'yongqiang huang')
('39545911', 'Matteo Bianchi', 'matteo bianchi')
('2646612', 'Minas Liarokapis', 'minas liarokapis')
('1681376', 'Yu Sun', 'yu sun')
a9fc23d612e848250d5b675e064dba98f05ad0d9(IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 9, No. 2, 2018
Face Age Estimation Approach based on Deep
Learning and Principle Component Analysis
Faculty of Computers and
Informatics,
Benha University, Egypt
Faculty of Computers and
Information,
Minia University, Egypt
Faculty of Computers and
Informatics,
Benha University, Egypt
('3488856', 'Essam H. Houssein', 'essam h. houssein')
('33680569', 'Hala H. Zayed', 'hala h. zayed')
a9adb6dcccab2d45828e11a6f152530ba8066de6Aydınlanma Alt-uzaylarına dayalı Gürbüz Yüz Tanıma
Illumination Subspaces based Robust Face Recognition
Interactive Systems Labs, Universität Karlsruhe (TH)
76131 Karlsruhe, Germany
web: http://isl.ira.uka.de/face_recognition
Abstract (translated from Turkish; the extracted text was scrambled by two-column layout, reconstructed here)
This work presents a face recognition system based on illumination subspaces. In this system, the dominant illumination directions are first learned using a clustering algorithm; three dominant illumination directions were observed: frontal, right, and left. Once the dominant illumination directions are determined, the face space is partitioned into these three illumination subspaces, in order to separate illumination-induced variations in facial appearance from identity-induced variations. A face recognition algorithm based on the illumination subspaces is then applied to exploit the additional illumination-direction information. The proposed approach was evaluated on face images from the "illumination" and "lighting" sets of the CMU PIE database. The experimental results show that exploiting the illumination direction together with the illumination-subspace-based recognition algorithm significantly improves face recognition performance.
('1770336', 'D. Kern', 'd. kern')
('1742325', 'R. Stiefelhagen', 'r. stiefelhagen')
ekenel@ira.uka.de
a967426ec9b761a989997d6a213d890fc34c5fe3Relative Ranking of Facial Attractiveness
Department of Computer Science and Engineering
University of California, San Diego
('3079766', 'Hani Altwaijry', 'hani altwaijry')
{haltwaij,sjb}@cs.ucsd.edu
a95dc0c4a9d882a903ce8c70e80399f38d2dcc89 TR-IIS-14-003
Review and Implementation of
High-Dimensional Local Binary
Patterns and Its Application to
Face Recognition
July. 24, 2014 || Technical Report No. TR-IIS-14-003
http://www.iis.sinica.edu.tw/page/library/TechReport/tr2014/tr14.html
('33970300', 'Bor-Chun Chen', 'bor-chun chen')
('1720473', 'Chu-Song Chen', 'chu-song chen')
a9286519e12675302b1d7d2fe0ca3cc4dc7d17f6Learning to Succeed while Teaching to Fail:
Privacy in Closed Machine Learning Systems
('2077648', 'Qiang Qiu', 'qiang qiu')
('4838771', 'Miguel R. D. Rodrigues', 'miguel r. d. rodrigues')
('1699339', 'Guillermo Sapiro', 'guillermo sapiro')
a949b8700ca6ba96ee40f75dfee1410c5bbdb3dbInstance-weighted Transfer Learning of Active Appearance Models
Computer Vision Group, Friedrich Schiller University of Jena, Germany
Ernst-Abbe-Platz 2-4, 07743 Jena, Germany
('1708249', 'Daniel Haase', 'daniel haase')
('1679449', 'Erik Rodner', 'erik rodner')
('1728382', 'Joachim Denzler', 'joachim denzler')
{daniel.haase,erik.rodner,joachim.denzler}@uni-jena.de
a92b5234b8b73e06709dd48ec5f0ec357c1aabed
a9be20954e9177d8b2bc39747acdea4f5496f394Event-specific Image Importance
University of California, San Diego
2Adobe Research
('35259685', 'Yufei Wang', 'yufei wang')
{yuw176, gary}@ucsd.edu
{zlin, xshen, rmech, gmiller}@adobe.com
d5afd7b76f1391321a1340a19ba63eec9e0f9833Journal of Information Hiding and Multimedia Signal Processing
Ubiquitous International
© 2010 ISSN 2073-4212
Volume 1, Number 3, July 2010
Statistical Analysis of Human Facial Expressions
Department of Informatics
Aristotle University of Thessaloniki
Box 451, 54124 Thessaloniki, Greece
Department of Informatics
Aristotle University of Thessaloniki
Box 451, 54124 Thessaloniki, Greece
Informatics and Telematics Institute
CERTH, Greece
Received March 2010; revised June 2010
('2764130', 'Stelios Krinidis', 'stelios krinidis')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
stelios.krinidis@mycosmos.gr
pitas@aiia.csd.auth.gr
d5375f51eeb0c6eff71d6c6ad73e11e9353c1f12Manifold Ranking-Based Locality Preserving Projections
School of Computer Science and Engineering, South China University of Technology
Guangzhou 510006, Guangdong, China
('2132230', 'Jia Wei', 'jia wei')
('3231018', 'Zewei Chen', 'zewei chen')
('1837988', 'Pingyang Niu', 'pingyang niu')
('2524825', 'Yishun Chen', 'yishun chen')
('7307608', 'Wenhui Chen', 'wenhui chen')
csjwei@scut.edu.cn
d5d7e89e6210fcbaa52dc277c1e307632cd91dabDOTA: A Large-scale Dataset for Object Detection in Aerial Images∗
State Key Lab. LIESMARS, Wuhan University, China
2EIS, Huazhong Univ. Sci. and Tech., China
Computer Science Depart., Cornell University, USA
Computer Science Depart., Rochester University, USA
5German Aerospace Center (DLR), Germany
DAIS, University of Venice, Italy
January 30, 2018
('39943835', 'Gui-Song Xia', 'gui-song xia')
('1686737', 'Xiang Bai', 'xiang bai')
('1749386', 'Jian Ding', 'jian ding')
('48148046', 'Zhen Zhu', 'zhen zhu')
('33642939', 'Jiebo Luo', 'jiebo luo')
('1777167', 'Mihai Datcu', 'mihai datcu')
('8111020', 'Marcello Pelillo', 'marcello pelillo')
('1733213', 'Liangpei Zhang', 'liangpei zhang')
{guisong.xia, jding, zlp62}@whu.edu.cn
{xbai, zzhu}@hust.edu.cn
sjb344@cornell.edu
jiebo.luo@gmail.com
mihai.datcu@dlr.de
pelillo@dsi.unive.it
d50c6d22449cc9170ab868b42f8c72f8d31f9b6cProceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)
1668
d522c162bd03e935b1417f2e564d1357e98826d2He et al. EURASIP Journal on Advances in Signal Processing 2013, 2013:19
http://asp.eurasipjournals.com/content/2013/1/19
RESEARCH
Open Access
Weakly supervised object extraction with
iterative contour prior for remote sensing
images
('2456383', 'Chu He', 'chu he')
('40382947', 'Yu Zhang', 'yu zhang')
('1813780', 'Bo Shi', 'bo shi')
('1727252', 'Xin Su', 'xin su')
('32514309', 'Xin Xu', 'xin xu')
('2048631', 'Mingsheng Liao', 'mingsheng liao')
d59f18fcb07648381aa5232842eabba1db52383eInternational Conference on Systemics, Cybernetics and Informatics, February 12–15, 2004
ROBUST FACIAL EXPRESSION RECOGNITION USING SPATIALLY
LOCALIZED GEOMETRIC MODEL
Department of Electrical Engineering, IIT Kanpur, Kanpur 208016, India
Dept. of Computer Sc. and Engg., IIT Kanpur, Kanpur 208016, India
Dept. of Computer Sc. and Engg., IIT Kanpur, Kanpur 208016, India
While approaches based on 3D deformable facial model have
achieved expression recognition rates of as high as 98% [2], they
are computationally inefficient and require considerable apriori
training based on 3D information, which is often unavailable.
Recognition from 2D images remains a difficult yet important
problem for areas such as
image database querying and
classification. The accuracy rates achieved for 2D images are
around 90% [3,4,5,11]. In a recent review of expression
recognition, Fasel [1] considers the problem along several
dimensions: whether features such as lips or eyebrows are first
identified in the face (local [4] vs holistic [11]), or whether the
image model used is 2D or 3D. Methods proposed for expression
recognition from 2D images include the Gabor-Wavelet [5] or
Holistic Optical flow [11] approach.
This paper describes a more robust system for facial expression
recognition from image sequences using 2D appearance-based
local approach for the extraction of intransient facial features, i.e.
features such as eyebrows, lips, or mouth, which are always
present in the image, but may be deformed [1] (in contrast,
transient features are wrinkles or bulges that disappear at other
times). The main advantages of such an approach are low
computational requirements, the ability to work with both color and
grayscale images, and robustness in handling partial occlusions
[3].
Edge projection analysis which is used here for feature extraction
(eyebrows and lips) is well known [6]. Unlike [6] which describes
a template based matching as an essential starting point, we use
contours analysis. Our system computes a feature vector based on
geometrical model of the face and then classifies it into four
expression classes using a feed-forward basis function net. The
system detects open and closed state of the mouth as well. The
algorithm presented here works on both color and grayscale image
sequences. An important aspect of our work is the use of color
information for robust and more accurate segmentation of lip
region in case of color images. The novel lip-enhancement
transform is based on Gaussian modeling of skin and lip color.
To place the work in a larger context of face analysis and
recognition, the overall task requires that the part of the image
involving the face be detected and segmented. We assume that a
near-frontal view of the face is available. Tests on a grayscale
and two color face image databases ([8] and [9,10]) demonstrate a
superior recognition rate for four facial expressions (smile,
surprise, disgust and sad against neutral).
('1681995', 'Ashutosh Saxena', 'ashutosh saxena')
('40101676', 'Ankit Anand', 'ankit anand')
('1803835', 'Amitabha Mukerjee', 'amitabha mukerjee')
ashutosh.saxena@ieee.org
ankanand@cse.iitk.ac.in
amit@cse.iitk.ac.in
d5fa9d98c8da54a57abf353767a927d662b7f026 VOL. 1, NO. 2, Oct 2010 E-ISSN 2218-6301
Journal of Emerging Trends in Computing and Information Sciences
©2009-2010 CIS Journal. All rights reserved.
http://www.cisjournal.org
Age Estimation based on Neural Networks using Face Features

Corresponding Author: Faculty of Information Technology
Islamic University of Gaza - Palestine
Email: nhewahi@iugaza.edu.ps
('1714298', 'Nabil Hewahi', 'nabil hewahi')
d588dd4f305cdea37add2e9bb3d769df98efe880
Audio-Visual Authentication System over the
Internet Protocol
('1968167', 'Yee Wan Wong', 'yee wan wong')
d5444f9475253bbcfef85c351ea9dab56793b9eaIEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
BoxCars: Improving Fine-Grained Recognition
of Vehicles using 3D Bounding Boxes
in Traffic Surveillance
('34891870', 'Jakub Sochor', 'jakub sochor')
('1785162', 'Adam Herout', 'adam herout')
d5ab6aa15dad26a6ace5ab83ce62b7467a18a88eWorld Journal of Computer Application and Technology 2(7): 133-138, 2014
DOI: 10.13189/wjcat.2014.020701
http://www.hrpub.org
Optimized Structure for Facial Action Unit Relationship
Using Bayesian Network
Intelligent Biometric Group, School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, Pulau
Pinang, Malaysia
Copyright © 2014 Horizon Research Publishing All rights reserved.
('9115930', 'Yee Koon Loh', 'yee koon loh')
('3120408', 'Shahrel A. Suandi', 'shahrel a. suandi')
*Corresponding Author: lyk10_eee045@student.usm.my
d5b0e73b584be507198b6665bcddeba92b62e1e5CHEN ET AL.: MULTI-REGION ENSEMBLE CNNS FOR AGE ESTIMATION
Multi-Region Ensemble Convolutional Neural
Networks for High-Accuracy Age Estimation
1 Faculty of Information Technology
Macau University of Science and
Technology, Macau SAR
2 National Laboratory of Pattern
Recognition, Institute of Automation
Chinese Academy of Sciences
University of Chinese Academy of
Sciences
4 Computing, School of Science and
Engineering, University of Dundee
('38141486', 'Yiliang Chen', 'yiliang chen')
('9645431', 'Zichang Tan', 'zichang tan')
('1916793', 'Alex Po Leung', 'alex po leung')
('1756538', 'Jun Wan', 'jun wan')
('40539612', 'Jianguo Zhang', 'jianguo zhang')
elichan5168@gmail.com
tanzichang2016@ia.ac.cn
pleung@must.edu.mo
jun.wan@ia.ac.cn
jnzhang@dundee.ac.uk
d56fe69cbfd08525f20679ffc50707b738b88031Training of multiple classifier systems utilizing
partially labelled sequences

89069 Ulm - Germany
('3037635', 'Martin Schels', 'martin schels')
('2307794', 'Patrick Schillinger', 'patrick schillinger')
('1685857', 'Friedhelm Schwenker', 'friedhelm schwenker')
d5de42d37ee84c86b8f9a054f90ddb4566990ec0Asynchronous Temporal Fields for Action Recognition
Carnegie Mellon University 2University of Washington 3Allen Institute for Artificial Intelligence
github.com/gsig/temporal-fields/
('34280810', 'Gunnar A. Sigurdsson', 'gunnar a. sigurdsson')
('2270286', 'Ali Farhadi', 'ali farhadi')
('1737809', 'Abhinav Gupta', 'abhinav gupta')
d50751da2997e7ebc89244c88a4d0d18405e8507
d511e903a882658c9f6f930d6dd183007f508eda
d50a40f2d24363809a9ac57cf7fbb630644af0e5END-TO-END TRAINED CNN ENCODER-DECODER NETWORKS FOR IMAGE
STEGANOGRAPHY
National University of Computer and Emerging Sciences (NUCES-FAST), Islamabad, Pakistan
Reveal.ai (Recognition, Vision & Learning) Lab
('9205693', 'Atique ur Rehman', 'atique ur rehman')
('2695106', 'Sibt ul Hussain', 'sibt ul hussain')
d5b5c63c5611d7b911bc1f7e161a0863a34d44eaExtracting Scene-dependent Discriminant
Features for Enhancing Face Recognition
under Severe Conditions
Information and Media Processing Research Laboratories, NEC Corporation
1753, Shimonumabe, Nakahara-Ku, Kawasaki 211-8666 Japan
('1709089', 'Rui Ishiyama', 'rui ishiyama')
('35577655', 'Nobuyuki Yasukawa', 'nobuyuki yasukawa')
d59404354f84ad98fa809fd1295608bf3d658bdcInternational Journal of Computer Vision manuscript No.
(will be inserted by the editor)
Face Synthesis from Visual Attributes via Sketch using
Conditional VAEs and GANs
Received: date / Accepted: date
('29673017', 'Xing Di', 'xing di')
d5e1173dcb2a51b483f86694889b015d55094634
d28d32af7ef9889ef9cb877345a90ea85e70f7f12017 IEEE 12th International Conference on Automatic Face & Gesture Recognition
Local-Global Landmark Confidences for Face Recognition
Institute for Robotics and Intelligent Systems, University of Southern California, CA, USA
Language Technologies Institute, Carnegie Mellon University, PA, USA
('2792633', 'KangGeon Kim', 'kanggeon kim')
('1752756', 'Feng-Ju Chang', 'feng-ju chang')
('1689391', 'Jongmoo Choi', 'jongmoo choi')
('1767184', 'Louis-Philippe Morency', 'louis-philippe morency')
('1694832', 'Ramakant Nevatia', 'ramakant nevatia')
d28d697b578867500632b35b1b19d3d76698f4a9Appears in the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR’99, Fort Collins, Colorado, USA, June 23-25, 1999.
Face Recognition Using Shape and Texture
Department of Computer Science
George Mason University
Fairfax, VA 22030-4444
('39664966', 'Chengjun Liu', 'chengjun liu')
('1781577', 'Harry Wechsler', 'harry wechsler')
{cliu, wechsler}@cs.gmu.edu
d231a81b38fde73bdbf13cfec57d6652f8546c3cSUPERRESOLUTION TECHNIQUES
FOR FACE RECOGNITION FROM VIDEO
by
B.S., E.E., Boğaziçi University
Submitted to the Graduate School of Engineering
and Natural Sciences in partial fulfillment of
the requirement for the degree of
Master of Science
Graduate Program in Electronics Engineering and Computer Science
Sabancı University
Spring 2005
('2258053', 'Osman Gökhan Sezer', 'osman gökhan sezer')
d22785eae6b7503cb16402514fd5bd9571511654Evaluating Facial Expressions with Different
Occlusion around Image Sequence

Department of Computer Science
Sanghvi Institute of Management and Science
Indore (MP), India
I. INTRODUCTION
('2890210', 'Ramchand Hablani', 'ramchand hablani')
d2eb1079552fb736e3ba5e494543e67620832c52ANNUNZIATA, SAGONAS, CALÌ: DENSELY FUSED SPATIAL TRANSFORMER NETWORKS1
DeSTNet: Densely Fused Spatial
Transformer Networks1
Onfido Research
3 Finsbury Avenue
London, UK
('31336510', 'Roberto Annunziata', 'roberto annunziata')
('3320415', 'Christos Sagonas', 'christos sagonas')
('1997807', 'Jacques Calì', 'jacques calì')
roberto.annunziata@onfido.com
christos.sagonas@onfido.com
jacques.cali@onfido.com
d24dafe10ec43ac8fb98715b0e0bd8e479985260J Nonverbal Behav (2018) 42:81–99
https://doi.org/10.1007/s10919-017-0266-z
O R I G I N A L P A P E R
Effects of Social Anxiety on Emotional Mimicry
and Contagion: Feeling Negative, but Smiling Politely
• Gerben A. van Kleef2
• Agneta H. Fischer2
Published online: 25 September 2017
Ó The Author(s) 2017. This article is an open access publication
('4041392', 'Corine Dijk', 'corine dijk')
('35427440', 'Charlotte van Eeuwijk', 'charlotte van eeuwijk')
('1878851', 'Nexhmedin Morina', 'nexhmedin morina')
d29eec5e047560627c16803029d2eb8a4e61da75Feature Transfer Learning for Deep Face
Recognition with Long-Tail Data
Michigan State University, NEC Laboratories America
('39708770', 'Xi Yin', 'xi yin')
('15644381', 'Xiang Yu', 'xiang yu')
('1729571', 'Kihyuk Sohn', 'kihyuk sohn')
('40022363', 'Xiaoming Liu', 'xiaoming liu')
('2099305', 'Manmohan Chandraker', 'manmohan chandraker')
{yinxi1,liuxm}@cse.msu.edu,{xiangyu,ksohn,manu}@nec-labs.com
d280bcbb387b1d548173917ae82cb6944e3ceca6FACIAL GRID TRANSFORMATION: A NOVEL FACE REGISTRATION APPROACH FOR
IMPROVING FACIAL ACTION UNIT RECOGNITION
University of South Carolina, Columbia, USA
('3225915', 'Shizhong Han', 'shizhong han')
('3091647', 'Zibo Meng', 'zibo meng')
('40205868', 'Ping Liu', 'ping liu')
('1686235', 'Yan Tong', 'yan tong')
d278e020be85a1ccd90aa366b70c43884dd3f798Learning From Less Data: Diversified Subset Selection and
Active Learning in Image Classification Tasks
IIT Bombay
Mumbai, Maharashtra, India
AITOE Labs
Mumbai, Maharashtra, India
AITOE Labs
Mumbai, Maharashtra, India
Rishabh Iyer
AITOE Labs
Seattle, Washington, USA
AITOE Labs
Seattle, Washington, USA
Narsimha Raju
IIT Bombay
Mumbai, Maharashtra, India
IIT Bombay
Mumbai, Maharashtra, India
IIT Bombay
Mumbai, Maharashtra, India
May 30, 2018
('3333118', 'Vishal Kaushal', 'vishal kaushal')
('40224337', 'Khoshrav Doctor', 'khoshrav doctor')
('33911191', 'Suyash Shetty', 'suyash shetty')
('10710354', 'Anurag Sahoo', 'anurag sahoo')
('49613683', 'Pankaj Singh', 'pankaj singh')
('1697088', 'Ganesh Ramakrishnan', 'ganesh ramakrishnan')
vkaushal@cse.iitb.ac.in
khoshrav@gmail.com
suyashshetty29@gmail.com
rishabh@aitoelabs.com
anurag@aitoelabs.com
uavnraju@cse.iitb.ac.in
pr.pankajsingh@gmail.com
ganesh@cse.iitb.ac.in
d26b443f87df76034ff0fa9c5de9779152753f0cA GPU-Oriented Algorithm Design for
Secant-Based Dimensionality Reduction
Department of Mathematics
Colorado State University
Fort Collins, CO 80523-1874
('51042250', 'Henry Kvinge', 'henry kvinge')
('51121534', 'Elin Farnell', 'elin farnell')
('41211081', 'Michael Kirby', 'michael kirby')
('30383278', 'Chris Peterson', 'chris peterson')
d2cd9a7f19600370bce3ea29aba97d949fe0ceb9Separability Oriented Preprocessing for
Illumination-Insensitive Face Recognition
1 Key Lab of Intelligent Information Processing
of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing 100190, China
2 Department of Computer Science and Engineering,
Michigan State University, East Lansing, MI 48824, U.S.A
3 Omron Social Solutions Co., LTD., Kyoto, Japan
Institute of Digital Media, Peking University, Beijing 100871, China
('34393045', 'Hu Han', 'hu han')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
('1710195', 'Shihong Lao', 'shihong lao')
('1698902', 'Wen Gao', 'wen gao')
{hhan,sgshan,xlchen}@jdl.ac.cn, lao@ari.ncl.omron.co.jp, wgao@pku.edu.cn
d22b378fb4ef241d8d210202893518d08e0bb213Random Faces Guided Sparse Many-to-One Encoder
for Pose-Invariant Face Recognition
Polytechnic Institute of NYU, NY, USA
College of Computer and Information Science, Northeastern University, MA, USA
Northeastern University, MA, USA
('3272356', 'Yizhe Zhang', 'yizhe zhang')
zhangyizhe1987@gmail.com, mingshao@ccs.neu.edu, wong@poly.edu, yunfu@ece.neu.edu
aac39ca161dfc52aade063901f02f56d01a1693cThe Analysis of Parameters t and k of LPP on
Several Famous Face Databases
College of Computer Science and Technology
Jilin University, Changchun 130012, China
('7489436', 'Sujing Wang', 'sujing wang')
('1758249', 'Na Zhang', 'na zhang')
('3028807', 'Mingfang Sun', 'mingfang sun')
('8239114', 'Chunguang Zhou', 'chunguang zhou')
{wangsj08, nazhang08}@mails.jlu.edu.cn; cgzhou@jlu.edu.cn
aadf4b077880ae5eee5dd298ab9e79a1b0114555Using Hankel Matrices for
Dynamics-based Facial Emotion Recognition and Pain Detection
DICGIM - University of Palermo
V.le delle Scienze, Ed. 6, 90128 Palermo (Italy)
('1711610', 'Liliana Lo Presti', 'liliana lo presti')
('9127836', 'Marco La Cascia', 'marco la cascia')
liliana.lopresti@unipa.it
aa127e6b2dc0aaccfb85e93e8b557f83ebee816bAdvancing Human Pose and
Gesture Recognition
DPhil Thesis
Supervisor: Professor Andrew Zisserman
Tomas Pfister
Visual Geometry Group
Department of Engineering Science
University of Oxford
Wolfson College
April 2015
aafb271684a52a0b23debb3a5793eb618940c5dd
aae742779e8b754da7973949992d258d6ca26216Robust Facial Expression Classification Using Shape
and Appearance Features
Department of Electrical Engineering,
Indian Institute of Technology Kharagpur, India
('2680543', 'Aurobinda Routray', 'aurobinda routray')
aa8ef6ba6587c8a771ec4f91a0dd9099e96f6d52Improved Face Tracking Thanks to Local Features
Correspondence
Department of Information Engineering
University of Brescia
('3134795', 'Alberto Piacenza', 'alberto piacenza')
('1806359', 'Fabrizio Guerrini', 'fabrizio guerrini')
('1741369', 'Riccardo Leonardi', 'riccardo leonardi')
aab3561acbd19f7397cbae39dd34b3be33220309Quantization Mimic: Towards Very Tiny CNN
for Object Detection
Tsinghua University, Beijing, China
The Chinese University of Hong Kong, Hong Kong, China
3SenseTime, Beijing, China
The University of Sydney, SenseTime Computer Vision Research Group, Sydney
New South Wales, Australia
('49019561', 'Yi Wei', 'yi wei')
('7418754', 'Xinyu Pan', 'xinyu pan')
('46636770', 'Hongwei Qin', 'hongwei qin')
('1721677', 'Junjie Yan', 'junjie yan')
wei-y15@mails.tsinghua.edu.cn,THUSEpxy@gmail.com
qinhongwei@sensetime.com,wanli.ouyang@sydney.edu.au
yanjunjie@sensetime.com
aa912375eaf50439bec23de615aa8a31a3395ad3International Journal on Cryptography and Information Security (IJCIS), Vol. 2, No. 2, June 2012
Implementation of a New Methodology to Reduce
the Effects of Changes of Illumination in Face
Recognition-based Authentication
Howard University, Washington DC
Howard University, Washington DC
('3437323', 'Andres Alarcon-Ramirez', 'andres alarcon-ramirez')
('2522254', 'Mohamed F. Chouikha', 'mohamed f. chouikha')
alarconramirezandr@bison.howard.edu
mchouikha@howard.edu
aa52910c8f95e91e9fc96a1aefd406ffa66d797dFACE RECOGNITION SYSTEM BASED
ON 2DFLD AND PCA
E&TC Department
Sinhgad Academy of Engineering
Pune, India
Mr. Hulle Rohit Rajiv
ME E&TC [Digital System]
Sinhgad Academy of Engineering
Pune, India
('2985198', 'Sachin D. Ruikar', 'sachin d. ruikar')ruikarsachin@gmail.com
rohithulle@gmail.com
aaeb8b634bb96a372b972f63ec1dc4db62e7b62aISSN (e): 2250 – 3005 || Vol, 04 || Issue, 12 || December – 2014 ||
International Journal of Computational Engineering Research (IJCER)
Facial Expression Recognition System: A Digital Printing
Application
Jadavpur University, India
Jadavpur University, India
('2226316', 'Somnath Banerjee', 'somnath banerjee')
aafb8dc8fda3b13a64ec3f1ca7911df01707c453Excitation Backprop for RNNs
1Boston University, 2Pattern Analysis and Computer Vision (PAVIS),
Istituto Italiano di Tecnologia, 3Adobe Research, 4Computer Science Department, Università di Verona
Figure 1: Our proposed framework spatiotemporally highlights/grounds the evidence that an RNN model uses in producing a class label
or caption for a given input video. In this example, using our proposed back-propagation method, the evidence for the activity class
CliffDiving is highlighted in a video that contains both CliffDiving and HorseRiding. Our model employs a single backward pass to produce
saliency maps that highlight the evidence a given RNN used in generating its outputs.
('3298267', 'Sarah Adel Bargal', 'sarah adel bargal')
('40063519', 'Andrea Zunino', 'andrea zunino')
('40622560', 'Donghyun Kim', 'donghyun kim')
('1701293', 'Jianming Zhang', 'jianming zhang')
('1727204', 'Vittorio Murino', 'vittorio murino')
('1749590', 'Stan Sclaroff', 'stan sclaroff')
{sbargal,donhk,sclaroff}@bu.edu, {andrea.zunino,vittorio.murino}@iit.it, jianmzha@adobe.com
aa0c30bd923774add6e2f27ac74acd197b9110f2DYNAMIC PROBABILISTIC LINEAR DISCRIMINANT ANALYSIS FOR VIDEO
CLASSIFICATION
Deparment of Computing, Imperial College London, UK
Deparment of Computing, Goldsmiths, University of London, UK
Middlesex University London, 4International Hellenic University
Center for Machine Vision and Signal Analysis, University of Oulu, Finland
('35340264', 'Alessandro Fabris', 'alessandro fabris')
('1752913', 'Mihalis A. Nicolaou', 'mihalis a. nicolaou')
('1754270', 'Irene Kotsia', 'irene kotsia')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
aadfcaf601630bdc2af11c00eb34220da59b7559Multi-view Hybrid Embedding:
A Divide-and-Conquer Approach
('30443690', 'Jiamiao Xu', 'jiamiao xu')
('2462771', 'Shujian Yu', 'shujian yu')
('1744228', 'Xinge You', 'xinge you')
('3381421', 'Mengjun Leng', 'mengjun leng')
('15132338', 'Xiao-Yuan Jing', 'xiao-yuan jing')
('1697202', 'C. L. Philip Chen', 'c. l. philip chen')
aaa4c625f5f9b65c7f3df5c7bfe8a6595d0195a5Biometrics in Ambient Intelligence ('1725688', 'Massimo Tistarelli', 'massimo tistarelli')
aac934f2eed758d4a27562dae4e9c5415ff4cdb7TS-LSTM and Temporal-Inception:
Exploiting Spatiotemporal Dynamics for Activity Recognition
Georgia Institute of Technology
2Georgia Tech Research Institution
('7437104', 'Chih-Yao Ma', 'chih-yao ma')
('1960668', 'Min-Hung Chen', 'min-hung chen')
('1746245', 'Zsolt Kira', 'zsolt kira')
{cyma, cmhungsteve, zkira, alregib}@gatech.edu
aa331fe378056b6d6031bb8fe6676e035ed60d6d
aae0e417bbfba701a1183d3d92cc7ad550ee59c3844
A Statistical Method for 2-D Facial Landmarking
('1764521', 'Albert Ali Salah', 'albert ali salah')
('1695527', 'Theo Gevers', 'theo gevers')
aa577652ce4dad3ca3dde44f881972ae6e1acce7Deep Attribute Networks
Department of EE, KAIST
Daejeon, South Korea
Department of EE, KAIST
Daejeon, South Korea
Department of EE, KAIST
Daejeon, South Korea
Department of EE, KAIST
Daejeon, South Korea
('8270717', 'Junyoung Chung', 'junyoung chung')
('2350325', 'Donghoon Lee', 'donghoon lee')
('2397884', 'Youngjoo Seo', 'youngjoo seo')
('5578091', 'Chang D. Yoo', 'chang d. yoo')
jych@kaist.ac.kr
iamdh@kaist.ac.kr
minerrba@kaist.ac.kr
cdyoo@ee.kaist.ac.kr
aa3c9de34ef140ec812be85bb8844922c35eba47Reducing Gender Bias Amplification using Corpus-level Constraints
Men Also Like Shopping:
University of Virginia
University of Washington
('3456473', 'Tianlu Wang', 'tianlu wang')
('2064210', 'Mark Yatskar', 'mark yatskar')
('33524946', 'Jieyu Zhao', 'jieyu zhao')
('2782886', 'Kai-Wei Chang', 'kai-wei chang')
('2004053', 'Vicente Ordonez', 'vicente ordonez')
{jz4fu, tw8cb, vicente, kc2wc}@virginia.edu
my89@cs.washington.edu
aa94f214bb3e14842e4056fdef834a51aecef39cReconhecimento de padrões faciais: Um estudo
Universidade Federal
Rural do Semi-Árido
Departamento de Ciências Naturais
Mossoró, RN - 59625-900
Abstract—Facial recognition has been used in several areas for user identification and authentication. One of its main markets is security, but there is also a wide variety of applications related to personal use, convenience, productivity gains, etc. The human face comprises a set of complex, changing patterns. Recognizing these patterns requires advanced pattern recognition techniques capable not only of recognizing, but also of adapting to the constant changes in people's faces. This document presents a facial recognition method proposed from a comparative analysis of works found in the literature.
I. INTRODUCTION
Biometrics is the science that establishes the identity of an individual based on his or her physical, chemical, or behavioral attributes [1]. It has countless applications in many areas, most prominently in security, for example in identity management systems, whose purpose is to authenticate the identity of an individual in the context of an application.
Facial recognition is a biometric technique that consists of identifying patterns in facial characteristics such as the shape of the mouth and face and the distance between the eyes, among others. A human can recognize a familiar person despite many obstacles such as distance, shadows, or only a partial view of the face. A machine, however, must perform numerous processes to detect and recognize a set of specific patterns in order to label a face as known or unknown. To this end, there are methods capable of detecting, extracting, and classifying facial characteristics, providing automatic recognition of people.
II. FACIAL RECOGNITION
Biometric technology offers advantages over other traditional identification methods such as passwords, documents, and tokens. Among them is the fact that biometric traits cannot be lost or forgotten and are difficult to copy, share, or distribute. These methods require the authenticated person to be present at the time and place of authentication, preventing malicious people from gaining unauthorized access.
Authentication is the act of establishing or confirming someone, or something, as authentic, that is, that the claims made by or about that thing are true [2]. Biometric authentication is the use of biometrics for the recognition, identification, or verification of one or more biometric traits of an individual with the goal of authenticating his or her identity. Biometric traits are the attributes analyzed by biometric recognition techniques.
The facial recognition task is composed of three distinct processes: biometric enrollment, verification, and identification. The processes differ in the way they determine the identity of an individual. Figure 1 describes the processes of biometric enrollment, verification, and identification.
Figura 1: Biometric enrollment (a), biometric identification (b), and biometric verification (c)
Figure 1a describes the process of enrolling data
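The distinction drawn above between enrollment, verification (a one-to-one match against a claimed identity), and identification (a one-to-many search of the enrolled gallery) can be sketched in a few lines of Python. The gallery entries, feature vectors, and distance threshold below are invented purely for illustration; a real system would use learned face embeddings.

```python
import math

# Hypothetical gallery of enrolled templates (tiny feature vectors);
# names, vectors, and THRESHOLD are assumptions for the sketch.
gallery = {
    "alice": (0.1, 0.9, 0.3),
    "bob": (0.8, 0.2, 0.5),
}
THRESHOLD = 0.5  # maximum Euclidean distance counted as a match

def enroll(name, template):
    """Enrollment: store a biometric template under an identity."""
    gallery[name] = tuple(template)

def verify(name, probe):
    """Verification (1:1): does the probe match the claimed identity?"""
    return math.dist(gallery[name], probe) < THRESHOLD

def identify(probe):
    """Identification (1:N): search the whole gallery for the best match."""
    name, dist = min(
        ((n, math.dist(t, probe)) for n, t in gallery.items()),
        key=lambda pair: pair[1],
    )
    return name if dist < THRESHOLD else None
```

The design choice mirrored here is that verification compares against a single stored template, whereas identification must rank the entire gallery and still reject probes that match no one.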
('2545499', 'Marcos Evandro Cintra', 'marcos evandro cintra')Email: alexdemise@gmail.com, mecintra@gmail.com
aac101dd321e6d2199d8c0b48c543b541c181b66USING CONTEXT TO ENHANCE THE
UNDERSTANDING OF FACE IMAGES
A Dissertation Presented
by
VIDIT JAIN
Submitted to the Graduate School of the
University of Massachusetts Amherst in partial ful llment
of the requirements for the degree of
DOCTOR OF PHILOSOPHY
September 2010
Department of Computer Science
af8fe1b602452cf7fc9ecea0fd4508ed4149834e
af6e351d58dba0962d6eb1baf4c9a776eb73533fHow to Train Your Deep Neural Network with
Dictionary Learning
*IIIT Delhi
Okhla Phase 3
Delhi, 110020, India
+IIIT Delhi
Okhla Phase 3
#IIIT Delhi
Okhla Phase 3
Delhi, 110020, India
Delhi, 110020, India
('30255052', 'Vanika Singhal', 'vanika singhal')
('38608015', 'Shikha Singh', 'shikha singh')
('2641605', 'Angshul Majumdar', 'angshul majumdar')
vanikas@iiitd.ac.in
shikhas@iiitd.ac.in
angshul@iiitd.ac.in
aff92784567095ee526a705e21be4f42226bbaabFace Recognition in Uncontrolled
Environments
A dissertation submitted in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
at
University College London
Department of Computer Science
University College London
May 26, 2015
('38098063', 'Yun Fu', 'yun fu')
aff8705fb2f2ae460cb3980b47f2e85c2e6dd41aAttributes in Multiple Facial Images
West Virginia University, Morgantown
WV 26506, USA
('1767347', 'Xudong Liu', 'xudong liu')
('1822413', 'Guodong Guo', 'guodong guo')
xdliu@mix.wvu.edu, guodong.guo@mail.wvu.edu
af13c355a2a14bb74847aedeafe990db3fc9cbd4Happy and Agreeable? Multi-Label Classification of
Impressions in Social Video
Idiap Research Institute
Switzerland
Instituto Potosino de
Investigación Científica y
Tecnológica
Mexico
Idiap Research Institute
École Polytechnique Fédérale
de Lausanne
Switzerland
('2389354', 'Gilberto Chávez-Martínez', 'gilberto chávez-martínez')
('1934619', 'Salvador Ruiz-Correa', 'salvador ruiz-correa')
('1698682', 'Daniel Gatica-Perez', 'daniel gatica-perez')
gchavez@idiap.ch
src@cmls.pw
gatica@idiap.ch
af6cae71f24ea8f457e581bfe1240d5fa63faaf7
af62621816fbbe7582a7d237ebae1a4d68fcf97dInternational Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622
International Conference on Humming Bird ( 01st March 2014)
RESEARCH ARTICLE
OPEN ACCESS
Active Shape Model Based Recognition Of Facial Expression
AncyRija V., Gayathri S.
AncyRija V. is currently pursuing M.E. (Software Engineering) at Vins Christian College of Engineering.
Gayathri S., M.E., Vins Christian College of Engineering
e-mail: ancyrija@gmail.com
afdf9a3464c3b015f040982750f6b41c048706f5A Recurrent Encoder-Decoder Network for Sequential Face Alignment
Rutgers University
Rogerio Feris
IBM T. J. Watson
Snapchat Research
Dimitris Metaxas
Rutgers University
('4340744', 'Xi Peng', 'xi peng')
('48631738', 'Xiaoyu Wang', 'xiaoyu wang')
xipeng.cs@rutgers.edu
rsferis@us.ibm.com
fanghuaxue@gmail.com
dnm@cs.rutgers.edu
af54dd5da722e104740f9b6f261df9d4688a9712
afa57e50570a6599508ee2d50a7b8ca6be04834aMotion in action : optical flow estimation and action
localization in videos
To cite this version:
Computer Vision and Pattern Recognition [cs.CV]. Université Grenoble Alpes, 2016. English. 2016GREAM013.
HAL Id: tel-01407258
https://tel.archives-ouvertes.fr/tel-01407258
Submitted on 1 Dec 2016
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
('2492127', 'Philippe Weinzaepfel', 'philippe weinzaepfel')
('2492127', 'Philippe Weinzaepfel', 'philippe weinzaepfel')
afe9cfba90d4b1dbd7db1cf60faf91f24d12b286Principal Directions of Synthetic Exact Filters
for Robust Real-Time Eye Localization
Vitomir Štruc 1,2, Jerneja Žganec Gros 1, and Nikola Pavešić 2
1 Alpineon Ltd, Ulica Iga Grudna 15, SI-1000 Ljubljana, Slovenia,
2 Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta,
SI-1000 Ljubljana, Slovenia,
{vitomir.struc, jerneja.gros}@alpineon.com,
{vitomir.struc, nikola.pavesic}@fe.uni-lj.si
afa84ff62c9f5b5c280de2996b69ad9fa48b7bc3Two-stream Flow-guided Convolutional Attention Networks for Action
Recognition
National University of Singapore
Loong-Fah Cheong
('25205026', 'An Tran', 'an tran')an.tran@u.nus.edu
eleclf@nus.edu.sg
af278274e4bda66f38fd296cfa5c07804fbc26eeRESEARCH ARTICLE
A Novel Maximum Entropy Markov Model for
Human Facial Expression Recognition
College of Information and Communication Engineering, Sungkyunkwan University, Suwon-si, Gyeonggi
do, Rep. of Korea, Kyung Hee University, Suwon, Rep. of Korea
Innopolis University, Kazan, Russia
a11111
☯ These authors contributed equally to this work.
('1711083', 'Muhammad Hameed Siddiqi', 'muhammad hameed siddiqi')
('2401685', 'Md. Golam Rabiul Alam', 'md. golam rabiul alam')
('1683244', 'Choong Seon Hong', 'choong seon hong')
('1734679', 'Hyunseung Choo', 'hyunseung choo')
* choo@skku.edu
af654a7ec15168b16382bd604889ea07a967dac6FACE RECOGNITION COMMITTEE MACHINE
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Shatin, Hong Kong
('2899702', 'Ho-Man Tang', 'ho-man tang')
('1681775', 'Michael R. Lyu', 'michael r. lyu')
('1706259', 'Irwin King', 'irwin king')
{hmtang, lyu, king}@cse.cuhk.edu.hk
afc7092987f0d05f5685e9332d83c4b27612f964Person-Independent Facial Expression Detection using Constrained
Local Models
('1713496', 'Patrick Lucey', 'patrick lucey')
('1820249', 'Simon Lucey', 'simon lucey')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
('1729760', 'Sridha Sridharan', 'sridha sridharan')
b730908bc1f80b711c031f3ea459e4de09a3d3242024
Active Orientation Models for Face
Alignment In-the-Wild
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
('2575567', 'Joan Alabort-i-Medina', 'joan alabort-i-medina')
('1694605', 'Maja Pantic', 'maja pantic')
b7426836ca364603ccab0e533891d8ac54cf2429Hindawi
Journal of Healthcare Engineering
Volume 2017, Article ID 3090343, 31 pages
https://doi.org/10.1155/2017/3090343
Review Article
A Review on Human Activity Recognition Using
Vision-Based Method
College of Information Science and Engineering, Ocean University of China, Qingdao, China
Tsinghua University, Beijing, China
Received 22 February 2017; Accepted 11 June 2017; Published 20 July 2017
Academic Editor: Dong S. Park
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Human activity recognition (HAR) aims to recognize activities from a series of observations on the actions of subjects and the
environmental conditions. The vision-based HAR research is the basis of many applications including video surveillance, health
care, and human-computer interaction (HCI). This review highlights the advances of state-of-the-art activity recognition
approaches, especially for the activity representation and classification methods. For the representation methods, we sort out a
chronological research trajectory from global representations to local representations, and recent depth-based representations.
For the classification methods, we conform to the categorization of template-based methods, discriminative models, and
generative models and review several prevalent methods. Next, representative and available datasets are introduced. Aiming to
provide an overview of those methods and a convenient way of comparing them, we classify the existing literature with a detailed
taxonomy including representation and classification methods, as well as the datasets they used. Finally, we investigate the
directions for future research.
1. Introduction
Human activity recognition (HAR) is a widely studied computer vision problem. Applications of HAR include video surveillance, health care, and human-computer interaction. As imaging techniques advance and camera devices upgrade, novel approaches for HAR constantly emerge. This review aims to provide a comprehensive introduction to video-based human activity recognition, giving an overview of various approaches as well as their evolution by covering both representative classical literature and state-of-the-art approaches.
Human activities have an inherent hierarchical structure that can be considered a three-level categorization. At the bottom level are atomic action primitives, which constitute more complex human activities. Above the action primitive level comes the action/activity level. Finally, complex interactions form the top level, which refers to human activities that involve more than two persons and objects. In this paper, we follow this three-level categorization, namely action primitives, actions/activities, and interactions. This categorization varies a little from previous surveys [1–4] and maintains a consistent theme. Action primitives are atomic actions at the limb level, such as "stretching the left arm" and "raising the right leg." Atomic actions are performed by a specific part of the human body, such as the hands, arms, or upper body [4]. Actions and activities are used interchangeably in this review, referring to whole-body movements composed of several action primitives in temporal sequential order and performed by a single person with no additional person or objects. Specifically, we use the terminology human activities for all movements of the three layers and activities/actions for the middle level of human activities. Human activities like walking, running, and waving hands are categorized at the actions/activities level. Finally, similar to Aggarwal et al.'s review [2], interactions are human activities that involve two or more persons and objects. The additional person or object is an important characteristic of
('7671146', 'Shugang Zhang', 'shugang zhang')
('39868595', 'Zhiqiang Wei', 'zhiqiang wei')
('2896895', 'Jie Nie', 'jie nie')
('40284611', 'Lei Huang', 'lei huang')
('40658604', 'Shuang Wang', 'shuang wang')
('40166799', 'Zhen Li', 'zhen li')
('7671146', 'Shugang Zhang', 'shugang zhang')
Correspondence should be addressed to Zhen Li; lizhen0130@gmail.com
b73795963dc623a634d218d29e4a5b74dfbc79f1ZHAO, YANG: IDENTITY PRESERVING FACE COMPLETION FOR LARGE OCULAR RO
Identity Preserving Face Completion for
Large Ocular Region Occlusion
1 Computer Science Department
University of Kentucky
Lexington, KY, USA
Institute for Creative Technologies
University of Southern California
Playa Vista, California, USA
3 School of Computer Science and
Technology
Harbin Institute of Technology
Harbin, China
Hangzhou Institute of Service
Engineering
Hangzhou Normal University
Hangzhou, China
('2613340', 'Yajie Zhao', 'yajie zhao')
('47483055', 'Weikai Chen', 'weikai chen')
('1780032', 'Jun Xing', 'jun xing')
('21515518', 'Xiaoming Li', 'xiaoming li')
('3408065', 'Zach Bessinger', 'zach bessinger')
('1752129', 'Fuchang Liu', 'fuchang liu')
('1724520', 'Wangmeng Zuo', 'wangmeng zuo')
('38958903', 'Ruigang Yang', 'ruigang yang')
yajie.zhao@uky.edu
wechen@ict.usc.edu
junxnui@gmail.com
hit.xmshr@gmail.com
zach.bessinger@gmail.com
20140022@hznu.edu.cn
cswmzuo@gmail.com
ryang@cs.uky.edu
b7cf7bb574b2369f4d7ebc3866b461634147041aNeural Comput & Applic (2012) 21:1575–1583
DOI 10.1007/s00521-011-0728-x
O R I G I N A L A R T I C L E
From NLDA to LDA/GSVD: a modified NLDA algorithm
Received: 2 August 2010 / Accepted: 3 August 2011 / Published online: 19 August 2011
© Springer-Verlag London Limited 2011
('1692984', 'Jun Yin', 'jun yin')
b750b3d8c34d4e57ecdafcd5ae8a15d7fa50bc24Unified Solution to Nonnegative Data Factorization Problems
Huazhong University of Science and Technology, Wuhan, China
National University of Singapore, Singapore
('1817910', 'Xiaobai Liu', 'xiaobai liu')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('2156156', 'Hai Jin', 'hai jin')
b7894c1f805ffd90ab4ab06002c70de68d6982abBiomedical Research 2017; Special Issue: S610-S618
ISSN 0970-938X
www.biomedres.info
A comprehensive age estimation on face images using hybrid filter based
feature extraction.
Karthikeyan D1*, Balakrishnan G2
Srinivasan Engineering College, Perambalur, India
Indra Ganesan College of Engineering, Trichy, India
b7eead8586ffe069edd190956bd338d82c69f880A VIDEO DATABASE FOR FACIAL
BEHAVIOR UNDERSTANDING
D. Freire-Obregón and M. Castrillón-Santana.
SIANI, Universidad de Las Palmas de Gran Canaria, Spain
dfreire@iusiani.ulpgc.es, mcastrillon@iusiani.ulpgc.es
b75cee96293c11fe77ab733fc1147950abbe16f9
b7774c096dc18bb0be2acef07ff5887a22c2a848Distance metric learning for image and webpage
comparison
To cite this version:
Université Pierre et Marie Curie - Paris VI, 2015. English.
HAL Id: tel-01135698
https://tel.archives-ouvertes.fr/tel-01135698v2
Submitted on 18 Mar 2015
('32868306', 'Marc Teva Law', 'marc teva law')
('32868306', 'Marc Teva Law', 'marc teva law')
b7f05d0771da64192f73bdb2535925b0e238d233 MVA2005 IAPR Conference on Machine Vision Applications, May 16-18, 2005 Tsukuba Science City, Japan
4-3
Robust Active Shape Model using AdaBoosted Histogram Classifiers
Wataru Ito
Imaging Software Technology Center
Imaging Software Technology Center
FUJI PHOTO FILM CO., LTD.
FUJI PHOTO FILM CO., LTD.
('1724928', 'Yuanzhong Li', 'yuanzhong li')li_yuanzhong@fujifilm.co.jp
wataru_ito@fujifilm.co.jp
b755505bdd5af078e06427d34b6ac2530ba69b12To appear in the International Joint Conf. Biometrics, Washington D.C., October, 2011
NFRAD: Near-Infrared Face Recognition at a Distance
aDept. of Brain and Cognitive Eng. Korea Univ., Seoul, Korea
bDept. of Comp. Sci. & Eng. Michigan State Univ., E. Lansing, MI, USA 48824
('2429013', 'Hyunju Maeng', 'hyunju maeng')
('2131755', 'Hyun-Cheol Choi', 'hyun-cheol choi')
('2222919', 'Unsang Park', 'unsang park')
('1703007', 'Seong-Whan Lee', 'seong-whan lee')
('6680444', 'Anil K. Jain', 'anil k. jain')
{hjmaeng, hcchoi}@korea.ac.kr, parkunsa@cse.msu.edu, swlee@image.korea.ac.kr , jain@cse.msu.edu
b7820f3d0f43c2ce613ebb6c3d16eb893c84cf89Visual Data Synthesis via GAN for Zero-Shot Video Classification
Institute of Computer Science and Technology, Peking University
Beijing 100871, China
('2439211', 'Chenrui Zhang', 'chenrui zhang')
('1704081', 'Yuxin Peng', 'yuxin peng')
pengyuxin@pku.edu.cn
b7b461f82c911f2596b310e2b18dd0da1d5d44912961
2014 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP)
K-MAPPINGS AND REGRESSION TREES
SAMSI and Duke University
1. INTRODUCTION
2.1. Partitioning Y
('3149531', 'Arthur Szlam', 'arthur szlam')
b73fdae232270404f96754329a1a18768974d3f6
b76af8fcf9a3ebc421b075b689defb6dc4282670Face Mask Extraction in Video Sequence ('2563750', 'Yujiang Wang', 'yujiang wang')
b7c5f885114186284c51e863b58292583047a8b4GAdaBoost: Accelerating Adaboost Feature Selection with Genetic
Algorithms
The American University In Cairo, Road 90, New Cairo, Cairo, Egypt
Keywords:
Object Detection, Genetic Algorithms, Haar Features, Adaboost, Face Detection.
('3468033', 'Mai F. Tolba', 'mai f. tolba')
('27045559', 'Mohamed Moustafa', 'mohamed moustafa')
maitolba@aucegypt.edu, m.moustafa@aucegypt.edu
b73d9e1af36aabb81353f29c40ecdcbdf731dbedSensors 2015, 15, 20945-20966; doi:10.3390/s150920945
OPEN ACCESS
sensors
ISSN 1424-8220
www.mdpi.com/journal/sensors
Article
Head Pose Estimation on Top of Haar-Like Face Detection:
A Study Using the Kinect Sensor
Institute for Information Technology and Communications (IIKT), Otto-von-Guericke-University
College of Computer Science and Information Sciences
College of Science, Menoufia University, Menoufia 32721, Egypt
Tel.: +49-391-67-11033; Fax: +49-391-67-11231.
Academic Editor: Vittorio M. N. Passaro
Received: 3 July 2015 / Accepted: 6 August 2015 / Published: 26 August 2015
('2712124', 'Anwar Saeed', 'anwar saeed')
('1741165', 'Ayoub Al-Hamadi', 'ayoub al-hamadi')
('1889194', 'Ahmed Ghoneim', 'ahmed ghoneim')
Magdeburg, Magdeburg D-39016, Germany; E-Mail: Ayoub.Al-Hamadi@ovgu.de
King Saud University, Riyadh 11451, Saudi Arabia; E-Mail: ghoneim@KSU.EDU.SA
* Author to whom correspondence should be addressed; E-Mail: anwar.saeed@ovgu.de;
b747fcad32484dfbe29530a15776d0df5688a7db
b7f7a4df251ff26aca83d66d6b479f1dc6cd1085Bouges et al. EURASIP Journal on Image and Video Processing 2013, 2013:55
http://jivp.eurasipjournals.com/content/2013/1/55
RESEARCH
Open Access
Handling missing weak classifiers in boosted
cascade: application to multiview and
occluded face detection
('3212236', 'Pierre Bouges', 'pierre bouges')
('1865978', 'Thierry Chateau', 'thierry chateau')
('32323470', 'Christophe Blanc', 'christophe blanc')
('1685767', 'Gaëlle Loosli', 'gaëlle loosli')
db848c3c32464d12da33b2f4c3a29fe293fc35d1Pose Guided Human Video Generation
1 CUHK-SenseTime Joint Lab, CUHK, Hong Kong S.A.R.
2 SenseTime Research, Beijing, China
Carnegie Mellon University
('49984891', 'Ceyuan Yang', 'ceyuan yang')
('1915826', 'Zhe Wang', 'zhe wang')
('22689408', 'Xinge Zhu', 'xinge zhu')
('2000034', 'Chen Huang', 'chen huang')
('1788070', 'Jianping Shi', 'jianping shi')
('1807606', 'Dahua Lin', 'dahua lin')
yangceyuan@sensetime.com
db1f48a7e11174d4a724a4edb3a0f1571d649670Joint Constrained Clustering and Feature
Learning based on Deep Neural Networks
by
B.Sc., University of Science and Technology of China
Thesis Submitted in Partial Fulfillment of the
Requirements for the Degree of
Master of Science
in the
School of Computing Science
Faculty of Applied Sciences
SIMON FRASER UNIVERSITY
Summer 2017
However, in accordance with the Copyright Act of Canada, this work may be
reproduced without authorization under the conditions for “Fair Dealing.”
Therefore, limited reproduction of this work for the purposes of private study,
research, education, satire, parody, criticism, review and news reporting is likely
All rights reserved.
to be in accordance with the law, particularly if cited appropriately.
('1707706', 'Xiaoyu Liu', 'xiaoyu liu')
('1707706', 'Xiaoyu Liu', 'xiaoyu liu')
db227f72bb13a5acca549fab0dc76bce1fb3b948International Refereed Journal of Engineering and Science (IRJES)
ISSN (Online) 2319-183X, (Print) 2319-1821
Volume 4, Issue 6 (June 2015), PP. 169-174
Characteristic Based Image Search using Re-Ranking method
1Chitti Babu, 2Yasmeen Jaweed, 3G.Vijay Kumar
dbb16032dd8f19bdfd045a1fc0fc51f29c70f70aPARKHI et al.: DEEP FACE RECOGNITION
Deep Face Recognition
Visual Geometry Group
Department of Engineering Science
University of Oxford
('3188342', 'Omkar M. Parkhi', 'omkar m. parkhi')
('1687524', 'Andrea Vedaldi', 'andrea vedaldi')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
omkar@robots.ox.ac.uk
vedaldi@robots.ox.ac.uk
az@robots.ox.ac.uk
dbaf89ca98dda2c99157c46abd136ace5bdc33b3Nonlinear Cross-View Sample Enrichment for
Action Recognition
Institut Mines-T´el´ecom; T´el´ecom ParisTech; CNRS LTCI
('1695223', 'Ling Wang', 'ling wang')
('1692389', 'Hichem Sahbi', 'hichem sahbi')
dbab6ac1a9516c360cdbfd5f3239a351a64adde7
dbe255d3d2a5d960daaaba71cb0da292e0af36a7Evolutionary Cost-sensitive Extreme Learning
Machine
1
('36904370', 'Lei Zhang', 'lei zhang')
dbb0a527612c828d43bcb9a9c41f1bf7110b1dc8Chapter 7
Machine Learning Techniques
for Face Analysis
('9301018', 'Roberto Valenti', 'roberto valenti')
('1703601', 'Nicu Sebe', 'nicu sebe')
('1695527', 'Theo Gevers', 'theo gevers')
('1774778', 'Ira Cohen', 'ira cohen')
db5a00984fa54b9d2a1caad0067a9ff0d0489517Multi-Task Adversarial Network for Disentangled Feature Learning
Ian Wassell1
University of Cambridge
2Adobe Research
('49421489', 'Yang Liu', 'yang liu')
('48707577', 'Zhaowen Wang', 'zhaowen wang')
1{yl504,ijw24}@cam.ac.uk
2{zhawang,hljin}@adobe.com
dbd958ffedc3eae8032be67599ec281310c05630Automated Restyling of Human Portrait Based on Facial Expression Recognition
and 3D Reconstruction
Stanford University
350 Serra Mall, Stanford, CA 94305, USA
('46740443', 'Cheng-Han Wu', 'cheng-han wu')1chw0208@stanford.edu
2hsinc@stanford.edu
dbed26cc6d818b3679e46677abc9fa8e04e8c6a6A Hierarchical Generative Model for Eye Image Synthesis and Eye Gaze
Estimation
ECSE, Rensselaer Polytechnic Institute, Troy, NY, USA
('1771700', 'Kang Wang', 'kang wang')
('49832825', 'Rui Zhao', 'rui zhao')
('1726583', 'Qiang Ji', 'qiang ji')
{wangk10, zhaor, jiq}@rpi.edu
db3545a983ffd24c97c18bf7f068783102548ad7Enriching the Student Model in an
Intelligent Tutoring System
Submitted in partial fulfillment of the requirements for the degree
of Doctor of Philosophy
of the
Indian Institute of Technology, Bombay, India
and
Monash University, Australia
by
Supervisors:
The course of study for this award was developed jointly by
the Indian Institute of Technology, Bombay and Monash University, Australia
and given academic recognition by each of them.
The programme was administered by The IITB-Monash Research Academy.
2014
('2844237', 'Ramkumar Rajendran', 'ramkumar rajendran')
('1946438', 'Sridhar Iyer', 'sridhar iyer')
('1791910', 'Sahana Murthy', 'sahana murthy')
('38751653', 'Campbell Wilson', 'campbell wilson')
('1727078', 'Judithe Sheard', 'judithe sheard')
dba493caf6647214c8c58967a8251641c2bda4c2Automatic 3D Facial Expression Editing in Videos
University of California, Santa Barbara
2IMPA – Instituto de Matematica Pura e Aplicada
('13303219', 'Ya Chang', 'ya chang')
('2428542', 'Marcelo Vieira', 'marcelo vieira')
('1752714', 'Matthew Turk', 'matthew turk')
('1705620', 'Luiz Velho', 'luiz velho')
dbb7f37fb9b41d1aa862aaf2d2e721a470fd2c57Face Image Analysis With
Convolutional Neural Networks
Dissertation
submitted for the doctoral degree
to the Faculty of Applied Sciences
of the Albert-Ludwigs-Universität Freiburg im Breisgau
by
Stefan Duffner
2007
db36e682501582d1c7b903422993cf8d70bb0b42Deep Trans-layer Unsupervised Networks for
Representation Learning
aKey Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing 100190, China
bSchool of Computer and Control Engineering, University of Chinese Academy of Sciences
Beijing 100049, China
('1778018', 'Wentao Zhu', 'wentao zhu')
('35048816', 'Jun Miao', 'jun miao')
('2343895', 'Laiyun Qing', 'laiyun qing')
('1710220', 'Xilin Chen', 'xilin chen')
dbe0e533d715f8543bcf197f3b8e5cffa969dfc0International Journal of Advanced Research in Electrical,
Electronics and Instrumentation Engineering
ISSN (Print) : 2320 – 3765
ISSN (Online): 2278 – 8875
(An ISO 3297: 2007 Certified Organization)
Vol. 3, Issue 5, May 2014
A Comprehensive Comparative Performance
Analysis of Eigenfaces, Laplacianfaces and
Orthogonal Laplacianfaces for Face Recognition
UG student, Amity school of Engineering and Technology, Amity University, Haryana, India
Lecturer, Amity school of Engineering and Technology, Amity University, Haryana, India
dbd5e9691cab2c515b50dda3d0832bea6eef79f2Image-based Face Recognition: Issues and Methods
Wen Yi Zhao
Rama Chellappa
Sarnoff Corporation, Washington Road, Princeton, NJ
Center for Automation Research, University of Maryland, College Park, MD
Email: wzhao@sarnoff.com
Email: rama@cfar.umd.edu
db67edbaeb78e1dd734784cfaaa720ba86ceb6d2SPECFACE - A Dataset of Human Faces Wearing Spectacles
Indian Institute of Technology Kharagpur
India
('30654921', 'Anirban Dasgupta', 'anirban dasgupta')
('30572870', 'Shubhobrata Bhattacharya', 'shubhobrata bhattacharya')
('2680543', 'Aurobinda Routray', 'aurobinda routray')
db82f9101f64d396a86fc2bd05b352e433d88d02A Spatio-Temporal Probabilistic Framework for
Dividing and Predicting Facial Action Units
Electrical and Computer Engineering, The University of Memphis
('2497319', 'Md. Iftekhar Tanveer', 'md. iftekhar tanveer')
('1828610', 'Mohammed Yeasin', 'mohammed yeasin')
db428d03e3dfd98624c23e0462817ad17ef14493Oxford TRECVID 2006 – Notebook paper
Department of Engineering Science
University of Oxford
United Kingdom
('2276542', 'James Philbin', 'james philbin')
('8873555', 'Anna Bosch', 'anna bosch')
('1720149', 'Jan-Mark Geusebroek', 'jan-mark geusebroek')
('1782755', 'Josef Sivic', 'josef sivic')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
a83fc450c124b7e640adc762e95e3bb6b423b310Deep Face Feature for Face Alignment ('15679675', 'Boyi Jiang', 'boyi jiang')
('2938279', 'Juyong Zhang', 'juyong zhang')
('2964129', 'Bailin Deng', 'bailin deng')
('8280113', 'Yudong Guo', 'yudong guo')
('1724542', 'Ligang Liu', 'ligang liu')
a85e9e11db5665c89b057a124547377d3e1c27efDynamics of Driver’s Gaze: Explorations in
Behavior Modeling & Maneuver Prediction
('1841835', 'Sujitha Martin', 'sujitha martin')
('22254044', 'Sourabh Vora', 'sourabh vora')
('2812409', 'Kevan Yuen', 'kevan yuen')
a8117a4733cce9148c35fb6888962f665ae65b1e
A Good Practice Towards Top Performance of Face
Recognition: Transferred Deep Feature Fusion
('33419682', 'Lin Xiong', 'lin xiong')
('1785111', 'Jayashree Karlekar', 'jayashree karlekar')
('2052311', 'Jian Zhao', 'jian zhao')
('33221685', 'Jiashi Feng', 'jiashi feng')
('2668358', 'Sugiri Pranata', 'sugiri pranata')
('3493398', 'Shengmei Shen', 'shengmei shen')
a87ab836771164adb95d6744027e62e05f47fd96Understanding human-human interactions: a survey
Utrecht University, Buys Ballotgebouw, Princetonplein 5, Utrecht, 3584CC, Netherlands
Utrecht University, Buys Ballotgebouw, Princetonplein 5, Utrecht, 3584CC, Netherlands
('26936326', 'Alexandros Stergiou', 'alexandros stergiou')
('1754666', 'Ronald Poppe', 'ronald poppe')
a896ddeb0d253739c9aaef7fc1f170a2ba8407d3SSH: Single Stage Headless Face Detector
University of Maryland
('40465379', 'Mahyar Najibi', 'mahyar najibi')
('3383048', 'Pouya Samangouei', 'pouya samangouei')
('1693428', 'Larry S. Davis', 'larry s. davis')
{pouya,rama,lsd}@umiacs.umd.edu
najibi@cs.umd.edu
a820941eaf03077d68536732a4d5f28d94b5864aLeveraging Datasets with Varying Annotations for Face Alignment
via Deep Regression Network
1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing 100190, China
University of Chinese Academy of Sciences, Beijing 100049, China
3CAS Center for Excellence in Brain Science and Intelligence Technology
('1698586', 'Jie Zhang', 'jie zhang')
('1693589', 'Meina Kan', 'meina kan')
('1710220', 'Xilin Chen', 'xilin chen')
{jie.zhang,meina.kan,shiguang.shan,xilin.chen}@vipl.ict.ac.cn
a8035ca71af8cc68b3e0ac9190a89fed50c92332
IIIT-CFW: A Benchmark Database of
Cartoon Faces in the Wild
1 IIIT Chittoor, Sri City, India
2 CVIT, KCIS, IIIT Hyderabad, India
('2154430', 'Ashutosh Mishra', 'ashutosh mishra')
('31821293', 'Shyam Nandan Rai', 'shyam nandan rai')
('39719398', 'Anand Mishra', 'anand mishra')
('1694502', 'C. V. Jawahar', 'c. v. jawahar')
a88640045d13fc0207ac816b0bb532e42bcccf36ARXIV VERSION
Simultaneously Learning Neighborship and
Projection Matrix for Supervised
Dimensionality Reduction
('34116743', 'Yanwei Pang', 'yanwei pang')
('2521321', 'Bo Zhou', 'bo zhou')
('1688370', 'Feiping Nie', 'feiping nie')
a803453edd2b4a85b29da74dcc551b3c53ff17f9Pose Invariant Face Recognition Under Arbitrary
Illumination Based on 3D Face Reconstruction
School of Computer Science and Technology, Harbin Institute of Technology
150001 Harbin, China
2 ICT-ISVISION Joint R&D Lab for Face Recognition, ICT, CAS, 100080 Beijing, China
('1695600', 'Xiujuan Chai', 'xiujuan chai')
('2343895', 'Laiyun Qing', 'laiyun qing')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
('1698902', 'Wen Gao', 'wen gao')
{xjchai,xlchen,wgao}@jdl.ac.cn
{lyqing,sgshan}@jdl.ac.cn
a8a30a8c50d9c4bb8e6d2dd84bc5b8b7f2c84dd8This is a repository copy of Modelling of Orthogonal Craniofacial Profiles.
White Rose Research Online URL for this paper:
http://eprints.whiterose.ac.uk/131767/
Version: Published Version
Article:
Dai, Hang, Pears, Nicholas Edwin orcid.org/0000-0001-9513-5634 and Duncan, Christian
(2017) Modelling of Orthogonal Craniofacial Profiles. Journal of Imaging. ISSN 2313-433X
https://doi.org/10.3390/jimaging3040055
Reuse
This article is distributed under the terms of the Creative Commons Attribution (CC BY) licence. This licence
allows you to distribute, remix, tweak, and build upon the work, even commercially, as long as you credit the
authors for the original work. More information and the full terms of the licence here:
https://creativecommons.org/licenses/
Takedown
If you consider content in White Rose Research Online to be in breach of UK law, please notify us by
https://eprints.whiterose.ac.uk/
emailing eprints@whiterose.ac.uk including the URL of the record and the reason for the withdrawal request.
eprints@whiterose.ac.uk
a8638a07465fe388ae5da0e8a68e62a4ee322d68How to predict the global instantaneous feeling induced
by a facial picture?
To cite this version:
Arnaud Lienhard, Patricia Ladret, Alice Caplier. How to predict the global instantaneous feeling induced by a facial picture?. Signal Processing: Image Communication, Elsevier, 2015, pp. 1-30.
HAL Id: hal-01198718
https://hal.archives-ouvertes.fr/hal-01198718
Submitted on 14 Sep 2015
HAL is a multi-disciplinary open access
archive for the deposit and dissemination of sci-
entific research documents, whether they are pub-
lished or not. The documents may come from
teaching and research institutions in France or
abroad, or from public or private research centers
('25030249', 'Arnaud Lienhard', 'arnaud lienhard')
('2216412', 'Patricia Ladret', 'patricia ladret')
('1788869', 'Alice Caplier', 'alice caplier')
a8e75978a5335fd3deb04572bb6ca43dbfad4738Sparse Graphical Representation based Discriminant
Analysis for Heterogeneous Face Recognition
('2299758', 'Chunlei Peng', 'chunlei peng')
('10699750', 'Xinbo Gao', 'xinbo gao')
('2870173', 'Nannan Wang', 'nannan wang')
('38158055', 'Jie Li', 'jie li')
a8d52265649c16f95af71d6f548c15afc85ac905Situation Recognition with Graph Neural Networks
The Chinese University of Hong Kong, 2University of Toronto, 3Youtu Lab, Tencent
Uber Advanced Technologies Group, 5Vector Institute
('8139953', 'Ruiyu Li', 'ruiyu li')
('2103464', 'Makarand Tapaswi', 'makarand tapaswi')
('2246396', 'Renjie Liao', 'renjie liao')
('1729056', 'Jiaya Jia', 'jiaya jia')
('2422559', 'Raquel Urtasun', 'raquel urtasun')
('37895334', 'Sanja Fidler', 'sanja fidler')
ryli@cse.cuhk.edu.hk, {makarand,rjliao,urtasun,fidler}@cs.toronto.edu, leojia9@gmail.com
a8583e80a455507a0f146143abeb35e769d25e4eA DISTANCE-ACCURACY HYBRID WEIGHTED VOTING SCHEME
FOR PARTIAL FACE RECOGNITION
1Dept. of Information Engineering and Computer Science,
Feng Chia University, Taichung, Taiwan
2Department of Photonics,
National Chiao Tung University, Taiwan
('40609876', 'Yung-Hui Li', 'yung-hui li')
('3072232', 'Bo-Ren Zheng', 'bo-ren zheng')
('2532474', 'Wei-Cheng Huang', 'wei-cheng huang')
yunghui@gmail.com, zawdcx@gmail.com, s75757775@gmail.com, chtien@mail.nctu.edu.tw
a87e37d43d4c47bef8992ace408de0f872739efcReview
A Comprehensive Review on Handcrafted and
Learning-Based Action Representation Approaches
for Human Activity Recognition
School of Computing and Communications Infolab21, Lancaster University, Lancaster LA1 4WA, UK
COMSATS Institute of Information Technology, Lahore 54000, Pakistan
Academic Editor: José Santamaria
Received: 5 September 2016; Accepted: 13 January 2017; Published: 23 January 2017
('2145942', 'Allah Bux Sargano', 'allah bux sargano')
('5736243', 'Plamen Angelov', 'plamen angelov')
p.angelov@lancaster.ac.uk
drzhabib@ciitlahore.edu.pk
* Correspondence: a.bux@lancaster.ac.uk; Tel.: +44-152-451-0525
a8c8a96b78e7b8e0d4a4a422fcb083e53ad06531(IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 8, No. 4, 2017
3D Human Action Recognition using Hu Moment
Invariants and Euclidean Distance Classifier
System Engineering Department, University of Arkansas at Little Rock, Arkansas, USA
System Engineering Department, University of Arkansas at Little Rock, Arkansas, USA
Computer Science Department, University of Arkansas at Little Rock, Arkansas, USA
('19305764', 'Fadwa Al-Azzo', 'fadwa al-azzo')
('22768683', 'Arwa Mohammed Taqi', 'arwa mohammed taqi')
('1795699', 'Mariofanna Milanova', 'mariofanna milanova')
a8748a79e8d37e395354ba7a8b3038468cb37e1fSeeing the Forest from the Trees: A Holistic Approach to Near-infrared
Heterogeneous Face Recognition
U.S. Army Research Laboratory
University of Maryland, College Park
West Virginia University
('39412489', 'Christopher Reale', 'christopher reale')
('8147588', 'Nasser M. Nasrabadi', 'nasser m. nasrabadi')
('1688527', 'Heesung Kwon', 'heesung kwon')
('9215658', 'Rama Chellappa', 'rama chellappa')
reale@umiacs.umd.edu
heesung.kwon.civ@mail.mil
nasser.nasrabadi@mail.wvu.edu
rama@umiacs.umd.edu
a8a61badec9b8bc01f002a06e1426a623456d121JOINT SPATIO-TEMPORAL ACTION LOCALIZATION
IN UNTRIMMED VIDEOS WITH PER-FRAME SEGMENTATION
Xi'an Jiaotong University
2HERE Technologies
3Alibaba Group
4Microsoft Research
('46809347', 'Xuhuan Duan', 'xuhuan duan')
('40367806', 'Le Wang', 'le wang')
('51262903', 'Changbo Zhai', 'changbo zhai')
('46324995', 'Qilin Zhang', 'qilin zhang')
('1786361', 'Zhenxing Niu', 'zhenxing niu')
('1715389', 'Nanning Zheng', 'nanning zheng')
('1745420', 'Gang Hua', 'gang hua')
a8154d043f187c6640cb6aedeaa8385a323e46cfMURRUGARRA, KOVASHKA: IMAGE RETRIEVAL WITH MIXED INITIATIVE
Image Retrieval with Mixed Initiative and
Multimodal Feedback
Department of Computer Science
University of Pittsburgh
Pittsburgh, PA, USA
('1916866', 'Nils Murrugarra-Llerena', 'nils murrugarra-llerena')
('1770205', 'Adriana Kovashka', 'adriana kovashka')
nineil@cs.pitt.edu
kovashka@cs.pitt.edu
a812368fe1d4a186322bf72a6d07e1cf60067234Imperial College London
Department of Computing
Gaussian Processes
for Modeling of Facial Expressions
September, 2016
Supervised by Prof. Maja Pantic
Submitted in part fulfilment of the requirements for the degree of PhD in Computing and
the Diploma of Imperial College London. This thesis is entirely my own work, and, except
where otherwise indicated, describes my own research.
('2308430', 'Stefanos Eleftheriadis', 'stefanos eleftheriadis')
de7f5e4ccc2f38e0c8f3f72a930ae1c43e0fdcf0Merge or Not? Learning to Group Faces via Imitation Learning
SenseTime
SenseTime
SenseTime
Chen Chang Loy
The Chinese University of Hong Kong
('49990550', 'Yue He', 'yue he')
('9963152', 'Kaidi Cao', 'kaidi cao')
('46651787', 'Cheng Li', 'cheng li')
heyue@sensetime.com
caokaidi@sensetime.com
chengli@sensetime.com
ccloy@ie.cuhk.edu.hk
de8381903c579a4fed609dff3e52a1dc51154951Graz University of Technology
Institute for Computer Graphics and Vision
Dissertation
Shape and Appearance Based Analysis
of Facial Images for Assessing ICAO
Compliance
Graz, Austria, December 2010
Thesis supervisors
Prof. Dr. Horst Bischof
Prof. Dr. Fernando De la Torre
('3464430', 'Markus Storer', 'markus storer')
ded968b97bd59465d5ccda4f1e441f24bac7ede5
Large scale 3D Morphable Models
Zafeiriou
('47456731', 'James Booth', 'james booth')
de0eb358b890d92e8f67592c6e23f0e3b2ba3f66ACCEPTED BY IEEE TRANS. PATTERN ANAL. AND MACH. INTELL.
Inference-Based Similarity Search in
Randomized Montgomery Domains for
Privacy-Preserving Biometric Identification
('46393453', 'Yi Wang', 'yi wang')
('2087574', 'Jianwu Wan', 'jianwu wan')
('39954962', 'Jun Guo', 'jun guo')
('32840387', 'Yiu-ming Cheung', 'yiu-ming cheung')
def569db592ed1715ae509644444c3feda06a536Discovery and usage of joint attention in images
Weizmann Institute of Science, Rehovot, Israel
The Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA USA
Massachusetts Institute of Technology, Cambridge, MA USA
Weizmann Institute of Science, Rehovot, Israel
Daniel Harari (hararid@weizmann.ac.il)
Joshua B. Tenenbaum (jbt@mit.edu)
Shimon Ullman (shimon.ullman@weizmann.ac.il)
dee406a7aaa0f4c9d64b7550e633d81bc66ff451Content-Adaptive Sketch Portrait Generation by
Decompositional Representation Learning
('8335563', 'Dongyu Zhang', 'dongyu zhang')
('1737218', 'Liang Lin', 'liang lin')
('1765674', 'Tianshui Chen', 'tianshui chen')
('1738906', 'Xian Wu', 'xian wu')
('1989769', 'Wenwei Tan', 'wenwei tan')
('1732655', 'Ebroul Izquierdo', 'ebroul izquierdo')
de15af84b1257211a11889b6c2adf0a2bcf59b42Anomaly Detection in Non-Stationary and
Distributed Environments
Colin O’Reilly
Submitted for the Degree of
Doctor of Philosophy
from the
University of Surrey
Institute for Communication Systems
Faculty of Engineering and Physical Sciences
University of Surrey
Guildford, Surrey GU2 7XH, U.K.
November 2014
© Colin O’Reilly 2014
de3285da34df0262a4548574c2383c51387a24bfTwo-Stream Convolutional Networks for Dynamic Texture Synthesis
Department of Electrical Engineering and Computer Science
York University, Toronto
('19251410', 'Matthew Tesfaldet', 'matthew tesfaldet'){mtesfald,mab}@eecs.yorku.ca
dedabf9afe2ae4a1ace1279150e5f1d495e565da
Robust Face Recognition With Structurally
Incoherent Low-Rank Matrix Decomposition
('2017922', 'Chia-Po Wei', 'chia-po wei')
('2624492', 'Chih-Fan Chen', 'chih-fan chen')
('2733735', 'Yu-Chiang Frank Wang', 'yu-chiang frank wang')
dec0c26855da90876c405e9fd42830c3051c2f5fSupplementary Material: Learning Compositional Visual Concepts with Mutual
Consistency
School of Electrical and Computer Engineering, Cornell University, Ithaca NY
3Siemens Corporate Technology, Princeton NJ
Contents
1. Objective functions
1.1. Adversarial loss
1.2. Extended cycle-consistency loss
1.3. Commutative loss
2. Additional implementation details
3. Additional results
4. Discussion
5. Generalizing ConceptGAN
5.1. Assumption: Concepts have distinct states
5.2. Assumption: Concepts are mutually compatible
5.3. Generalization

1. Objective functions
In this section, we provide complete mathematical expressions for each of the three terms in our loss function, following the notation defined in Section 3 of the main paper and the assumption that no training data is available in subdomain Σ11.

1.1. Adversarial loss
For generator G1 and discriminator D10, for example, the adversarial loss is expressed as:

L_adv(G1, D10, Σ00, Σ10) = E_{σ10∼P10}[log D10(σ10)] + E_{σ00∼P00}[log(1 − D10(G1(σ00)))]    (1)

where the generator G1 and discriminator D10 are learned to optimize a minimax objective such that

G1* = arg min_{G1} max_{D10} L_adv(G1, D10, Σ00, Σ10)    (2)

For generator G2 and discriminator D01, the adversarial loss is expressed as:

L_adv(G2, D01, Σ00, Σ01) = E_{σ01∼P01}[log D01(σ01)] + E_{σ00∼P00}[log(1 − D01(G2(σ00)))]    (3)

For generator F1 and discriminator D00, the adversarial loss is expressed as:

L_adv(F1, D00, Σ10, Σ00) = E_{σ00∼P00}[log D00(σ00)] + E_{σ10∼P10}[log(1 − D00(F1(σ10)))]    (4)

For generator F2 and discriminator D00, the adversarial loss is expressed as:

L_adv(F2, D00, Σ01, Σ00) = E_{σ00∼P00}[log D00(σ00)] + E_{σ01∼P01}[log(1 − D00(F2(σ01)))]    (5)

The overall adversarial loss L_ADV is the sum of these four terms:

L_ADV = L_adv(G1, D10, Σ00, Σ10) + L_adv(G2, D01, Σ00, Σ01) + L_adv(F1, D00, Σ10, Σ00) + L_adv(F2, D00, Σ01, Σ00)    (6)

1.2. Extended cycle-consistency loss
Following our discussion in Section 3.2 of the main paper, for any data sample σ00 in subdomain Σ00, a distance-4 cycle consistency constraint is defined in the clockwise direction (F2 ∘ F1 ∘ G2 ∘ G1)(σ00) ≈ σ00 and in the counterclockwise direction (F1 ∘ F2 ∘ G1 ∘ G2)(σ00) ≈ σ00. Such constraints are implemented by the penalty function:

L_cyc4(G, F, Σ00) = E_{σ00∼P00}[‖(F2 ∘ F1 ∘ G2 ∘ G1)(σ00) − σ00‖₁] + E_{σ00∼P00}[‖(F1 ∘ F2 ∘ G1 ∘ G2)(σ00) − σ00‖₁]    (7)
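As a toy illustration of the distance-4 cycle-consistency penalty in Eq. (7), the sketch below uses hypothetical one-dimensional maps in place of the paper's learned image-to-image generators; the names G1, F1, G2, F2 and the affine choices are assumptions for demonstration only:

```python
import numpy as np

# Hypothetical 1-D stand-ins for the four generators (NOT the learned networks):
# G1/F1 add/remove concept 1, G2/F2 add/remove concept 2.
G1 = lambda x: x + 1.0   # Sigma_00 -> Sigma_10
F1 = lambda x: x - 1.0   # Sigma_10 -> Sigma_00
G2 = lambda x: 2.0 * x   # Sigma_00 -> Sigma_01
F2 = lambda x: x / 2.0   # Sigma_01 -> Sigma_00

def cyc4_penalty(samples):
    """Monte-Carlo estimate of L_cyc4 in Eq. (7): the L1 error of the
    clockwise chain (F2∘F1∘G2∘G1) plus that of the counterclockwise chain
    (F1∘F2∘G1∘G2), each of which should map a Sigma_00 sample onto itself."""
    cw = np.mean([abs(F2(F1(G2(G1(x)))) - x) for x in samples])
    ccw = np.mean([abs(F1(F2(G1(G2(x)))) - x) for x in samples])
    return cw + ccw
```

Because these toy maps do not commute, each chain lands 0.5 away from its input and the penalty is 1.0 regardless of the samples; driving this penalty to zero is exactly the constraint Eq. (7) imposes on the learned generators.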
('3303727', 'Yunye Gong', 'yunye gong')
('1976152', 'Srikrishna Karanam', 'srikrishna karanam')
('3311781', 'Ziyan Wu', 'ziyan wu')
('2692770', 'Kuan-Chuan Peng', 'kuan-chuan peng')
('39497207', 'Jan Ernst', 'jan ernst')
('1767099', 'Peter C. Doerschuk', 'peter c. doerschuk')
{yg326,pd83}@cornell.edu,{first.last}@siemens.com
de398bd8b7b57a3362c0c677ba8bf9f1d8ade583Hierarchical Bayesian Theme Models for
Multi-pose Facial Expression Recognition
('3069077', 'Qirong Mao', 'qirong mao')
('1851510', 'Qiyu Rao', 'qiyu rao')
('1770550', 'Yongbin Yu', 'yongbin yu')
('1710341', 'Ming Dong', 'ming dong')
ded41c9b027c8a7f4800e61b7cfb793edaeb2817
defa8774d3c6ad46d4db4959d8510b44751361d8FEBEI - Face Expression Based Emoticon Identification
CS - B657 Computer Vision
Robert J Henderson - rojahend
('1854614', 'Nethra Chandrasekaran', 'nethra chandrasekaran')
('1830695', 'Prashanth Kumar Murali', 'prashanth kumar murali')
b0c512fcfb7bd6c500429cbda963e28850f2e948
b08203fca1af7b95fda8aa3d29dcacd182375385OBJECT AND TEXT-GUIDED SEMANTICS FOR CNN-BASED ACTIVITY RECOGNITION
U.S. Army Research Laboratory, Adelphi, MD, USA
§Booz Allen Hamilton Inc., McLean, VA, USA
('3090299', 'Sungmin Eum', 'sungmin eum')
('39412489', 'Christopher Reale', 'christopher reale')
('1688527', 'Heesung Kwon', 'heesung kwon')
('3202888', 'Claire Bonial', 'claire bonial')
b03b4d8b4190361ed2de66fcbb6fda0c9a0a7d89Deep Alternative Neural Network: Exploring
Contexts as Early as Possible for Action Recognition
School of Electronics Engineering and Computer Science, Peking University
School of Electronics and Computer Engineering, Peking University
('3258842', 'Jinzhuo Wang', 'jinzhuo wang')
('1788029', 'Wenmin Wang', 'wenmin wang')
('8082703', 'Xiongtao Chen', 'xiongtao chen')
('1702330', 'Ronggang Wang', 'ronggang wang')
('1698902', 'Wen Gao', 'wen gao')
jzwang@pku.edu.cn, wangwm@ece.pku.edu.cn
cxt@pku.edu.cn, rgwang@ece.pku.edu.cn, wgao@pku.edu.cn
b09b693708f412823053508578df289b8403100aWANG et al.: TWO-STREAM SR-CNNS FOR ACTION RECOGNITION IN VIDEOS
Two-Stream SR-CNNs for Action
Recognition in Videos
1 Advanced Interactive Technologies Lab
ETH Zurich
Zurich, Switzerland
2 Computer Vision Lab
ETH Zurich
Zurich, Switzerland
('46394691', 'Yifan Wang', 'yifan wang')
('40403685', 'Jie Song', 'jie song')
('33345248', 'Limin Wang', 'limin wang')
('1681236', 'Luc Van Gool', 'luc van gool')
('2531379', 'Otmar Hilliges', 'otmar hilliges')
yifan.wang@student.ethz.ch
jsong@inf.ethz.ch
07wanglimin@gmail.com
vangool@vision.ee.ethz.ch
otmar.hilliges@inf.ethz.ch
b013cce42dd769db754a57351d49b7410b8e82adAutomatic Point-based Facial Trait Judgments Evaluation
1Computer Vision Center, Edifici O, Campus UAB, Spain
2Universitat Oberta de Catalunya, Rambla del Poblenou 156, 08018, Barcelona, Spain
Princeton University, Princeton, New Jersey, USA
4Department de Matematica Aplicada i Analisi, Universitat de Barcelona, Spain
('1863902', 'David Masip', 'david masip')
('2913698', 'Alexander Todorov', 'alexander todorov')
mrojas@cvc.uab.es, dmasipr@uoc.edu, atodorov@princeton.edu, jordi.vitria@ub.edu
b07582d1a59a9c6f029d0d8328414c7bef64dca0Employing Fusion of Learned and Handcrafted
Features for Unconstrained Ear Recognition
Maur´ıcio Pamplona Segundo∗†
October 24, 2017
('26977067', 'Earnest E. Hansley', 'earnest e. hansley')
('1715991', 'Sudeep Sarkar', 'sudeep sarkar')
b017963d83b3edf71e1673d7ffdec13a6d350a87View Independent Face Detection Based on
Combination of Local and Global Kernels
The University of Electro-Communications
1-5-1 Chofugaoka, Chofu-shi, Tokyo 182-8585, JAPAN
('2510362', 'Kazuhiro HOTTA', 'kazuhiro hotta')
hotta@ice.uec.ac.jp
b03d6e268cde7380e090ddaea889c75f64560891
b084683e5bab9b2bc327788e7b9a8e049d5fff8fUsing LIP to Gloss Over Faces in Single-Stage Face Detection
Networks
The University of Queensland, School of ITEE, QLD 4072, Australia
('1973322', 'Siqi Yang', 'siqi yang')
('2331880', 'Arnold Wiliem', 'arnold wiliem')
('3104113', 'Shaokang Chen', 'shaokang chen')
('2270092', 'Brian C. Lovell', 'brian c. lovell')
{siqi.yang, a.wiliem, s.chen2}@uq.edu.au, lovell@itee.uq.edu.au
b0c1615ebcad516b5a26d45be58068673e2ff217How Image Degradations Affect Deep CNN-based Face
Recognition?
Şamil Karahan1 Merve Kılınç Yıldırım1 Kadir Kırtaç1 Ferhat Şükrü Rende1
Gültekin Bütün1 Hazım Kemal Ekenel2
b03446a2de01126e6a06eb5d526df277fa36099fA Torch Library for Action Recognition and Detection Using CNNs and LSTMs
Stanford University
('4910251', 'Helen Jiang', 'helen jiang'){gthung, helennn}@stanford.edu
b0de0892d2092c8c70aa22500fed31aa7eb4dd3f
A robust and efficient video representation for action recognition
('1804138', 'Heng Wang', 'heng wang')
b018fa5cb9793e260b8844ae155bd06380988584Project STAR IST-2000-28764
Deliverable D6.3 Enhanced face and arm/hand
detector
Date: August 29th, 2003
Katholieke Universiteit Leuven, ESAT/VISICS
Kasteelpark Arenberg 10, 3001 Heverlee, Belgium
Tel. +32-16-32.10.61 and Fax. +32-16-32.17.23
http://www.esat.kuleuven.ac.be/~knummiar/star/star.html
To: STAR project partners
Siemens CT PP6,
Otto-Hahn-Ring 6, 81730 Munich, Germany
Tel. +49-89-636.49.851, Fax. +49-89-636.481.00
Introduction
KU Leuven is responsible for work package number 6, Automated view selection and
camera hand-over. The main goal is to build an intelligent virtual editor that produces as
output a single video stream from multiple input streams. The selection should be made
in such a way that the resulting stream is pleasant to watch and informative about what is
going on in the scene. Face detection and object tracking are needed to select the best camera
view from the multi-camera system.
KUL has delivered the STAR deliverables D6.1 Initial face detection software and D6.2 Initial
arm/hand tracking software from work package 6, July 2002 (month 12). The integration of
detection and tracking was needed to successfully provide this deliverable, D6.3
Enhanced face and arm/hand detector.
We first explain the enhanced face detection, followed by the enhanced tracking software and
finally the integration. Hand tracking results with simple histogram-based detection
are also presented. The results are shown using the common STAR data sequences from
different Siemens factories in Germany.
('2381884', 'Katja Nummiaro', 'katja nummiaro')
('2733505', 'Rik Fransens', 'rik fransens')
('1681236', 'Luc Van Gool', 'luc van gool')
{knummiar, fransen, vangool}@esat.kuleuven.ac.be
artur.raczynski@mchp.siemens.de
b073313325b6482e22032e259d7311fb9615356cRobust and Accurate Cancer Classification with Gene Expression Profiling
Dept. of Computer Science, University of California, Riverside, CA 92521
Human Interaction Research Lab, Motorola, Inc, Tempe, AZ 85282
Dept. of Computer Science, University of California, Riverside, CA 92521
('31947043', 'Haifeng Li', 'haifeng li')
('1749400', 'Keshu Zhang', 'keshu zhang')
('6820989', 'Tao Jiang', 'tao jiang')
hli@cs.ucr.edu
keshu.zhang@motorola.com
jiang@cs.ucr.edu
a6f81619158d9caeaa0863738ab400b9ba2d77c2Face Recognition using Convolutional Neural Network
and Simple Logistic Classifier
Intelligent Systems Laboratory (ISLAB),
Faculty of Electrical & Computer Engineering
K.N. Toosi University of Technology, Tehran, Iran
('2040276', 'Hurieh Khalajzadeh', 'hurieh khalajzadeh')
('10694774', 'Mohammad Mansouri', 'mohammad mansouri')
('1709359', 'Mohammad Teshnehlab', 'mohammad teshnehlab')
hurieh.khalajzadeh@gmail.com,
mohammad.mansouri@ee.kntu.ac.ir,
teshnehlab@eetd.kntu.ac.ir
a66d89357ada66d98d242c124e1e8d96ac9b37a0Failure Detection for Facial Landmark Detectors
Computer Vision Lab, D-ITET, ETH Zurich, Switzerland
('33028242', 'Andreas Steger', 'andreas steger')
('1732855', 'Radu Timofte', 'radu timofte')
stegeran@ethz.ch, {radu.timofte, vangool}@vision.ee.ethz.ch
a6d7cf29f333ea3d2aeac67cde39a73898e270b7Gender Classification from Facial Images Using Texture Descriptors
King Saud University, KSA
King Saud University, KSA
King Saud University, KSA
University of Nevada at Reno, USA
('1758125', 'Ihsan Ullah', 'ihsan ullah')
('1966959', 'Hatim Aboalsamh', 'hatim aboalsamh')
('2363759', 'Muhammad Hussain', 'muhammad hussain')
('1758305', 'Ghulam Muhammad', 'ghulam muhammad')
('1808451', 'George Bebis', 'george bebis')
{ihsanullah, hatim, mhussain, ghulam}@ksu.edu.sa, bebis@cse.unr.edu
a611c978e05d7feab01fb8a37737996ad6e88bd9Benchmarking 3D pose estimation for
face recognition
Computational Biomedicine Lab, University of Houston, TX, USA
('39634395', 'Pengfei Dou', 'pengfei dou')
('2461369', 'Yuhang Wu', 'yuhang wu')
('2700399', 'Shishir K. Shah', 'shishir k. shah')
('1706204', 'Ioannis A. Kakadiaris', 'ioannis a. kakadiaris')
{pengfei,yuhang}@cbl.uh.edu, {sshah,IKakadia}@central.uh.edu
a608c5f8fd42af6e9bd332ab516c8c2af7063c61
Age Estimation via Grouping and Decision Fusion
('3006921', 'Kuan-Hsien Liu', 'kuan-hsien liu')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('9363144', 'C.-C. Jay Kuo', 'c.-c. jay kuo')
a6e8a8bb99e30a9e80dbf80c46495cf798066105Ranking Generative Adversarial Networks:
Subjective Control over Semantic Image Attributes
University of Bath
('41020280', 'Yassir Saquil', 'yassir saquil')
('1808255', 'Kwang In Kim', 'kwang in kim')
a6eb6ad9142130406fb4ffd4d60e8348c2442c29Video Description: A Survey of Methods,
Datasets and Evaluation Metrics
('50978260', 'Nayyer Aafaq', 'nayyer aafaq')
('1746166', 'Syed Zulqarnain Gilani', 'syed zulqarnain gilani')
('46641573', 'Wei Liu', 'wei liu')
('46332747', 'Ajmal Mian', 'ajmal mian')
a6ffe238eaf8632b4a8a6f718c8917e7f3261546Australasian Medical Journal [AMJ 2011, 4, 10, 555-562]
Dynamic Facial Prosthetics for Sufferers of Facial Paralysis
Nottingham Trent University, Nottingham, UK
Nottingham University Hospital, Nottingham, UK
RESEARCH
Please cite this paper as: Coulter F, Breedon P, Vloeberghs
M. Dynamic facial prosthetics for sufferers of facial
paralysis.
AMJ 2011, 4, 10, 555-562
http://dx.doi.org/10.4066/AMJ.2011.921
Corresponding Author:
Nottingham Trent University
United Kingdom
('6930559', 'Fergal Coulter', 'fergal coulter')
('3214667', 'Philip Breedon', 'philip breedon')
('40436855', 'Michael Vloeberghs', 'michael vloeberghs')
('3214667', 'Philip Breedon', 'philip breedon')
philip.breedon@ntu.ac.uk
a6583c8daa7927eedb3e892a60fc88bdfe89a486
a660390654498dff2470667b64ea656668c98eccFACIAL EXPRESSION RECOGNITION BASED ON GRAPH-PRESERVING SPARSE
NON-NEGATIVE MATRIX FACTORIZATION
Institute of Information Science
Beijing Jiaotong University
Beijing 100044, P.R. China
Bastiaan Kleijn
ACCESS Linnaeus Center
KTH Royal Institute of Technology, Stockholm
School of Electrical Engineering
('3247912', 'Ruicong Zhi', 'ruicong zhi')
('1749334', 'Markus Flierl', 'markus flierl')
('1738408', 'Qiuqi Ruan', 'qiuqi ruan')
{05120370, qqruan}@bjtu.edu.cn
{ruicong, mflierl, bastiaan}@kth.se
a60907b7ee346b567972074e3e03c82f64d7ea30Head Motion Signatures from Egocentric Videos
The Hebrew University of Jerusalem, Israel
2 IIIT Delhi, India
('2926663', 'Yair Poleg', 'yair poleg')
('1897733', 'Chetan Arora', 'chetan arora')
('1796055', 'Shmuel Peleg', 'shmuel peleg')
a6e43b73f9f87588783988333997a81b4487e2d5Facial Age Estimation by Total Ordering
Preserving Projection
National Key Laboratory for Novel Software Technology
Nanjing University, Nanjing 210023, China
('39527177', 'Xiao-Dong Wang', 'xiao-dong wang')
('1692625', 'Zhi-Hua Zhou', 'zhi-hua zhou')
{wangxd,zhouzh}@lamda.nju.edu.cn
a6496553fb9ab9ca5d69eb45af1bdf0b60ed86dcSemi-supervised Neighborhood Preserving
Discriminant Embedding:
A Semi-supervised Subspace Learning
Algorithm
1 Department of Computer Science and Software Engineering,
University of Western Australia
('2067346', 'Maryam Mehdizadeh', 'maryam mehdizadeh')
('1766400', 'Cara MacNish', 'cara macnish')
('39128433', 'R. Nazim Khan', 'r. nazim khan')
('1698675', 'Mohammed Bennamoun', 'mohammed bennamoun')
a6b5ffb5b406abfda2509cae66cdcf56b4bb3837One Shot Similarity Metric Learning
for Action Recognition
The Weizmann Institute of
The Open University
Science, Rehovot, Israel.
Raanana, Israel.
The Blavatnik School of Computer Science, Tel-Aviv University, Tel-Aviv, Israel
('3294355', 'Orit Kliper-Gross', 'orit kliper-gross')
('1756099', 'Tal Hassner', 'tal hassner')
('1776343', 'Lior Wolf', 'lior wolf')
orit.kliper@weizmann.ac.il
hassner@openu.ac.il
wolf@cs.tau.ac.il
a6590c49e44aa4975b2b0152ee21ac8af3097d80https://doi.org/10.1007/s11263-018-1074-6
3D Interpreter Networks for Viewer-Centered Wireframe Modeling
('3045089', 'Jiajun Wu', 'jiajun wu')
('1763295', 'Joshua B. Tenenbaum', 'joshua b. tenenbaum')
a694180a683f7f4361042c61648aa97d222602dbFace Recognition using Scattering Wavelet under Illicit Drug Abuse Variations
IIIT-Delhi India
('2503967', 'Prateekshit Pandey', 'prateekshit pandey')
('39129417', 'Richa Singh', 'richa singh')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
{prateekshit12078, rsingh, mayank}@iiitd.ac.in
a6db73f10084ce6a4186363ea9d7475a9a658a11
a6e25cab2251a8ded43c44b28a87f4c62e3a548aLet’s Dance: Learning From Online Dance Videos
Georgia Institute of Technology
Irfan Essa
('40333356', 'Daniel Castro', 'daniel castro')
('2935619', 'Steven Hickson', 'steven hickson')
('3430745', 'Patsorn Sangkloy', 'patsorn sangkloy')
('40506496', 'Bhavishya Mittal', 'bhavishya mittal')
('35459529', 'Sean Dai', 'sean dai')
('1945508', 'James Hays', 'james hays')
shickson@gatech.edu
patsorn sangkloy@gatech.edu
dcastro9@gatech.edu
bmittal6@gatech.edu
sdai@gatech.edu
hays@gatech.edu
irfan@gatech.edu
a6634ff2f9c480e94ed8c01d64c9eb70e0d98487
a6270914cf5f60627a1332bcc3f5951c9eea3be0Joint Attention in Driver-Pedestrian Interaction: from
Theory to Practice
Department of Electrical Engineering and Computer Science
York University, Toronto, ON, Canada
March 28, 2018
('26902477', 'Amir Rasouli', 'amir rasouli')
('1727853', 'John K. Tsotsos', 'john k. tsotsos')
{aras,tsotsos}@eecs.yorku.ca
a6ce2f0795839d9c2543d64a08e043695887e0ebDriver Gaze Region Estimation
Without Using Eye Movement
Massachusetts Institute of Technology (MIT)
('49925254', 'Philipp Langhans', 'philipp langhans')
('7137846', 'Joonbum Lee', 'joonbum lee')
('1901227', 'Bryan Reimer', 'bryan reimer')
a6b1d79bc334c74cde199e26a7ef4c189e9acd46bioRxiv preprint first posted online Aug. 17, 2017; doi: http://dx.doi.org/10.1101/177196.
The copyright holder for this preprint (which was not peer-reviewed) is the author/funder. It is made available under a CC-BY-NC 4.0 International license.
Deep Recurrent Neural Network Reveals a Hierarchy of
Process Memory during Dynamic Natural Vision
1Weldon School of Biomedical Engineering
2School of Electrical and Computer Engineering
Purdue Institute for Integrative Neuroscience
Purdue University, West Lafayette, Indiana, 47906, USA
*Correspondence
Assistant Professor of Biomedical Engineering
Assistant Professor of Electrical and Computer Engineering
College of Engineering, Purdue University
206 S. Martin Jischke Dr.
West Lafayette, IN 47907, USA
Phone: +1 765 496 1872
Fax: +1 765 496 1459
('4416237', 'Junxing Shi', 'junxing shi')
('4431043', 'Haiguang Wen', 'haiguang wen')
('3334748', 'Yizhen Zhang', 'yizhen zhang')
('3418794', 'Kuan Han', 'kuan han')
('1799110', 'Zhongming Liu', 'zhongming liu')
('1799110', 'Zhongming Liu', 'zhongming liu')
Email: zmliu@purdue.edu
a6ebe013b639f0f79def4c219f585b8a012be04fFacial Expression Recognition Based on Hybrid
Approach
Graduate School of Science and Engineering, Saitama University
255 Shimo-Okubo, Sakura-ku, Saitama-shi, Saitama 338-8570, Japan
E-mail
('13403748', 'Md. Abdul Mannan', 'md. abdul mannan')
('34949901', 'Antony Lam', 'antony lam')
('2367471', 'Yoshinori Kobayashi', 'yoshinori kobayashi')
('1737913', 'Yoshinori Kuno', 'yoshinori kuno')
a6e21438695dbc3a184d33b6cf5064ddf655a9baPKU-MMD: A Large Scale Benchmark for Continuous Multi-Modal Human
Action Understanding
Institute of Computer Science and Technology, Peking University
('2994549', 'Jiaying Liu', 'jiaying liu')
('1708754', 'Chunhui Liu', 'chunhui liu')
{liuchunhui, huyy, lyttonhao, ssj940929, liujiaying}@pku.edu.cn
b9081856963ceb78dcb44ac410c6fca0533676a3UntrimmedNets for Weakly Supervised Action Recognition and Detection
1Computer Vision Laboratory, ETH Zurich, Switzerland
The Chinese University of Hong Kong, Hong Kong
('33345248', 'Limin Wang', 'limin wang')
('3331521', 'Yuanjun Xiong', 'yuanjun xiong')
('1807606', 'Dahua Lin', 'dahua lin')
('1681236', 'Luc Van Gool', 'luc van gool')
b97f694c2a111b5b1724eefd63c8d64c8e19f6c9Group Affect Prediction Using Multimodal Distributions
Aspiring Minds
University of Massachusetts, Amherst
Johns Hopkins University
('40997180', 'Saqib Nizam Shamsi', 'saqib nizam shamsi')
('47679973', 'Bhanu Pratap Singh', 'bhanu pratap singh')
('7341605', 'Manya Wadhwa', 'manya wadhwa')
shamsi.saqib@gmail.com
bhanupratap.mnit@gmail.com
mwadhwa1@jhu.edu
b9d0774b0321a5cfc75471b62c8c5ef6c15527f5Fishy Faces: Crafting Adversarial Images to Poison Face Authentication
imec-DistriNet, KU Leuven
imec-DistriNet, KU Leuven
imec-DistriNet, KU Leuven
imec-DistriNet, KU Leuven
imec-DistriNet, KU Leuven
('4412412', 'Giuseppe Garofalo', 'giuseppe garofalo')
('23974422', 'Vera Rimmer', 'vera rimmer')
('19243432', 'Tim Van hamme', 'tim van hamme')
('1722184', 'Davy Preuveneers', 'davy preuveneers')
('1752104', 'Wouter Joosen', 'wouter joosen')
b9cad920a00fc0e997fc24396872e03f13c0bb9cFACE LIVENESS DETECTION UNDER BAD ILLUMINATION CONDITIONS
University of Campinas (Unicamp)
Campinas, SP, Brazil
('2826093', 'Bruno Peixoto', 'bruno peixoto')
('34629204', 'Carolina Michelassi', 'carolina michelassi')
('2145405', 'Anderson Rocha', 'anderson rocha')
b908edadad58c604a1e4b431f69ac8ded350589aDeep Face Feature for Face Alignment ('15679675', 'Boyi Jiang', 'boyi jiang')
('2938279', 'Juyong Zhang', 'juyong zhang')
('2964129', 'Bailin Deng', 'bailin deng')
('8280113', 'Yudong Guo', 'yudong guo')
('47968194', 'Ligang Liu', 'ligang liu')
b93bf0a7e449cfd0db91a83284d9eba25a6094d8Supplementary Material for: Active Pictorial Structures
Epameinondas Antonakos
Joan Alabort-i-Medina
Stefanos Zafeiriou
Imperial College London
180 Queens Gate, SW7 2AZ, London, U.K.
In the following sections, we provide additional material for the paper “Active Pictorial Structures”. Section 1 explains in
more detail the differences between the proposed Active Pictorial Structures (APS) and Pictorial Structures (PS). Section 2
presents the proofs about the structure of the precision matrices of the Gaussian Markov Random Field (GMRF) (Eqs. 10
and 12 of the main paper). Section 3 gives an analysis about the forward Gauss-Newton optimization of APS and shows that
the inverse technique with fixed Jacobian and Hessian, which is used in the main paper, is much faster. Finally, Sec. 4 shows
additional experimental results and conducts new experiments on different objects (human eyes and cars). An open-source
implementation of APS is available within the Menpo Project [1] in http://www.menpo.org/.
1. Differences between Active Pictorial Structures and Pictorial Structures
As explained in the main paper, the proposed model is partially motivated by PS [4, 8]. In the original formulation of PS,
the cost function to be optimized has the form
$$
\arg\min_{s} \sum_{i=1}^{n} m_i(\ell_i) + \sum_{i,j:(v_i,v_j)\in E} d_{ij}(\ell_i,\ell_j)
= \arg\min_{s} \sum_{i=1}^{n} [A(\ell_i)-\mu_i^a]^T (\Sigma_i^a)^{-1} [A(\ell_i)-\mu_i^a]
+ \sum_{i,j:(v_i,v_j)\in E} [\ell_i-\ell_j-\mu_{ij}^d]^T (\Sigma_{ij}^d)^{-1} [\ell_i-\ell_j-\mu_{ij}^d]
\quad (1)
$$
where $s = [\ell_1^T, \ldots, \ell_n^T]^T$ is the vector of landmark coordinates ($\ell_i = [x_i, y_i]^T$, $\forall i = 1, \ldots, n$), $A(\ell_i)$ is a feature vector extracted from the image location $\ell_i$, and we have assumed a tree $G = (V, E)$. $\{\mu_i^a, \Sigma_i^a\}$ and $\{\mu_{ij}^d, \Sigma_{ij}^d\}$ denote the means and covariances of the appearance and deformation, respectively. In Eq. 1, $m_i(\ell_i)$ is a function measuring the degree of mismatch when part $v_i$ is placed at location $\ell_i$ in the image. Moreover, $d_{ij}(\ell_i, \ell_j)$ denotes a function measuring the degree of deformation of the model when part $v_i$ is placed at location $\ell_i$ and part $v_j$ is placed at location $\ell_j$. The authors show an inference algorithm based on the distance transform [3] that can find a global minimum of Eq. 1 without any initialization. However, this algorithm imposes two important restrictions: (1) the appearance of each part is independent of the rest of them, and (2) $G$ must always be acyclic (a tree). Additionally, the computation of $m_i(\ell_i)$ for all parts ($i = 1, \ldots, n$) and all possible image locations (response maps) has a high computational cost, which makes the algorithm very slow. Finally, in [8], the authors only use a diagonal covariance for the relative locations (deformation) of each edge of the graph, which restricts the flexibility of the model.
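To make the two terms of Eq. 1 concrete, here is a minimal sketch of the PS cost in Python. The function and variable names are ours, purely for illustration; precision matrices (inverse covariances) are passed in directly rather than inverted on the fly:

```python
def quad(r, prec):
    """Quadratic form r^T * prec * r for a small residual vector."""
    n = len(r)
    return sum(r[a] * prec[a][b] * r[b] for a in range(n) for b in range(n))

def ps_cost(landmarks, feats, app_stats, defo_stats, edges):
    """Pictorial Structures cost (Eq. 1): one appearance term per part
    plus one deformation term per edge of the tree G = (V, E)."""
    cost = 0.0
    for i, (mu_a, prec_a) in enumerate(app_stats):
        # [A(l_i) - mu_i^a]^T (Sigma_i^a)^{-1} [A(l_i) - mu_i^a]
        r = [feats[i][k] - mu_a[k] for k in range(len(mu_a))]
        cost += quad(r, prec_a)
    for (i, j), (mu_d, prec_d) in zip(edges, defo_stats):
        # [l_i - l_j - mu_ij^d]^T (Sigma_ij^d)^{-1} [l_i - l_j - mu_ij^d]
        d = [landmarks[i][k] - landmarks[j][k] - mu_d[k] for k in range(2)]
        cost += quad(d, prec_d)
    return cost
```

Minimizing this cost over all landmark placements is what the distance-transform inference of [3] performs exactly on a tree; evaluating the appearance terms over every image location (the response maps) is what dominates the runtime.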
In the proposed APS, we aim to minimize the cost function (Eq. 19 of the main paper)
$$
\arg\min_{p} \|A(S(\bar{s}, p)) - \bar{a}\|^2_{Q^a} + \|S(\bar{s}, p) - \bar{s}\|^2_{Q^d}
= \arg\min_{p} [A(S(\bar{s}, p)) - \bar{a}]^T Q^a [A(S(\bar{s}, p)) - \bar{a}] + [S(\bar{s}, p) - \bar{s}]^T Q^d [S(\bar{s}, p) - \bar{s}]
\quad (2)
$$
There are two main differences between APS and PS: (1) we employ a statistical shape model and optimize with respect
to its parameters and (2) we use the efficient Gauss-Newton optimization technique. However, these differences introduce
some important advantages, as also mentioned in the main paper. The proposed formulation allows us to define a graph (not only a tree) between the object's parts. This means that we can assume dependencies between any pair of landmarks for both
{e.antonakos, ja310, s.zafeiriou}@imperial.ac.uk
b9c9c7ef82f31614c4b9226e92ab45de4394c5f6
Face Recognition under Varying Illumination
Nanyang Technological University
Singapore
1. Introduction
Face recognition by a robot or machine has been one of the most challenging research topics in recent years. It has become an active research area which crosscuts several disciplines such
as image processing, pattern recognition, computer vision, neural networks and robotics.
For many applications, the performances of face recognition systems in controlled
environments have achieved a satisfactory level. However, there are still some challenging
issues to address in face recognition under uncontrolled conditions. The variation in
illumination is one of the main challenging problems that a practical face recognition system
needs to deal with. It has been proven that in face recognition, differences caused by
illumination variations are more significant than differences between individuals (Adini et
al., 1997). Various methods have been proposed to solve the problem. These methods can be
classified into three categories, named face and illumination modeling, illumination
invariant feature extraction and preprocessing and normalization. In this chapter, an
extensive and state-of-the-art study of existing approaches to handle illumination variations
is presented. Several latest and representative approaches of each category are presented in
detail, as well as the comparisons between them. Moreover, to deal with complex
environment where illumination variations are coupled with other problems such as pose
and expression variations, a good feature representation of human face should not only be
illumination invariant, but also robust enough against pose and expression variations. Local
binary pattern (LBP) is such a local texture descriptor. In this chapter, a detailed study of the
LBP and its several important extensions is carried out, as well as its various combinations
with other techniques to handle illumination invariant face recognition under a complex
environment. By generalizing different strategies in handling illumination variations and
evaluating their performances, several promising directions for future research have been
suggested.
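To illustrate why LBP is a natural fit for illumination-robust recognition, a minimal sketch of the basic 3x3 operator follows (our own toy implementation, not code from any method surveyed in this chapter). Since the descriptor only thresholds each neighbor against the center pixel, any monotonic gray-level change leaves the code unchanged:

```python
def lbp_code(patch):
    """Basic 3x3 LBP code: threshold the 8 neighbors at the center
    value and read the results as an 8-bit number."""
    c = patch[1][1]
    # neighbors enumerated clockwise starting from the top-left pixel
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << k for k, v in enumerate(nbrs) if v >= c)
```

For instance, adding a constant brightness offset to every pixel of a patch produces exactly the same code, which is the invariance property exploited throughout the LBP discussion in this chapter.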
This chapter is organized as follows. Several famous methods of face and illumination
modeling are introduced in Section 2. In Section 3, latest and representative approaches of
illumination invariant feature extraction are presented in detail. More attention is paid to
quotient-image-based methods. In Section 4, the normalization methods on discarding low
frequency coefficients in various transformed domains are introduced with details. In
Section 5, a detailed introduction of the LBP and its several important extensions is
presented, as well as its various combinations with other face recognition techniques. In
Section 6, comparisons between different methods and discussion of their advantages and
disadvantages are presented. Finally, several promising directions as the conclusions are
drawn in Section 7.
www.intechopen.com
('9244425', 'Lian Zhichao', 'lian zhichao')
('9224769', 'Er Meng Joo', 'er meng joo')
b9f2a755940353549e55690437eb7e13ea226bbfUnsupervised Feature Learning from Videos for Discovering and Recognizing Actions ('3296857', 'Carolina Redondo-Cabrera', 'carolina redondo-cabrera')
('2941882', 'Roberto J. López-Sastre', 'roberto j. lópez-sastre')
carolina.redondoc@edu.uah.es
robertoj.lopez@uah.es
b9cedd1960d5c025be55ade0a0aa81b75a6efa61INEXACT KRYLOV SUBSPACE ALGORITHMS FOR LARGE
MATRIX EXPONENTIAL EIGENPROBLEM FROM
DIMENSIONALITY REDUCTION
('1685951', 'Gang Wu', 'gang wu')
('7139289', 'Ting-ting Feng', 'ting-ting feng')
('9472022', 'Li-jia Zhang', 'li-jia zhang')
('5828998', 'Meng Yang', 'meng yang')
b971266b29fcecf1d5efe1c4dcdc2355cb188ab0MAI et al.: ON THE RECONSTRUCTION OF FACE IMAGES FROM DEEP FACE TEMPLATES
On the Reconstruction of Face Images from
Deep Face Templates
('3391550', 'Guangcan Mai', 'guangcan mai')
('1684684', 'Kai Cao', 'kai cao')
('1768574', 'Pong C. Yuen', 'pong c. yuen')
('6680444', 'Anil K. Jain', 'anil k. jain')
a1af7ec84472afba0451b431dfdb59be323e35b7LikeNet: A Siamese Motion Estimation
Network Trained in an Unsupervised Way
Multimedia and Vision Research Group
Queen Mary University of London
London, UK
('49505678', 'Aria Ahmadi', 'aria ahmadi')
('2000297', 'Ioannis Marras', 'ioannis marras')
('1744405', 'Ioannis Patras', 'ioannis patras')
('49505678', 'Aria Ahmadi', 'aria ahmadi')
('2000297', 'Ioannis Marras', 'ioannis marras')
('1744405', 'Ioannis Patras', 'ioannis patras')
a.ahmadi@qmul.ac.uk
i.marras@qmul.ac.uk
i.patras@qmul.ac.uk
a1dd806b8f4f418d01960e22fb950fe7a56c18f1Interactively Building a Discriminative Vocabulary of Nameable Attributes
Toyota Technological Institute, Chicago (TTIC)
University of Texas at Austin
('1713589', 'Devi Parikh', 'devi parikh')
('1794409', 'Kristen Grauman', 'kristen grauman')
dparikh@ttic.edu
grauman@cs.utexas.edu
a158c1e2993ac90a90326881dd5cb0996c20d4f3OPEN ACCESS
ISSN 2073-8994
Article
1 DMA, Università degli Studi di Palermo, via Archirafi 34, 90123 Palermo, Italy
2 CITC, Università degli Studi di Palermo, via Archirafi 34, 90123 Palermo, Italy
3 Istituto Nazionale di Ricerche Demopolis, via Col. Romey 7, 91100 Trapani, Italy
† Deceased on 15 March 2009.
Received: 4 March 2010; in revised form: 23 March 2010 / Accepted: 29 March 2010 /
Published: 1 April 2010
('1716744', 'Bertrand Zavidovique', 'bertrand zavidovique')4 IEF, Université Paris IX–Orsay, Paris, France; E-Mail: bertrand.zavidovique@u-psud.fr (B.Z.)
* Author to whom correspondence should be addressed; E-Mail: metabacchi@demopolis.it.
a15d9d2ed035f21e13b688a78412cb7b5a04c469Object Detection Using
Strongly-Supervised Deformable Part Models
1Computer Vision and Active Perception Laboratory (CVAP), KTH, Sweden
2INRIA, WILLOW, Laboratoire d’Informatique de l’Ecole Normale Superieure
('2622491', 'Hossein Azizpour', 'hossein azizpour')
('1785596', 'Ivan Laptev', 'ivan laptev')
azizpour@kth.se,ivan.laptev@inria.fr
a1b1442198f29072e907ed8cb02a064493737158
Crowdsourcing Facial Responses
to Online Videos
('1801452', 'Daniel McDuff', 'daniel mcduff')
('1754451', 'Rana El Kaliouby', 'rana el kaliouby')
('1719389', 'Rosalind W. Picard', 'rosalind w. picard')
a14db48785d41cd57d4eac75949a6b79fc684e70Fast High Dimensional Vector Multiplication Face Recognition
Tel Aviv University
Tel Aviv University
Tel Aviv University
IBM Research
('2109324', 'Oren Barkan', 'oren barkan')
('40389676', 'Jonathan Weill', 'jonathan weill')
('1776343', 'Lior Wolf', 'lior wolf')
('2580470', 'Hagai Aronowitz', 'hagai aronowitz')
orenbarkan@post.tau.ac.il
yonathanw@post.tau.ac.il
wolf@cs.tau.ac.il
hagaia@il.ibm.com
a15c728d008801f5ffc7898568097bbeac8270a4Concise Preservation by Combining Managed Forgetting
and Contextualized Remembering
Grant Agreement No. 600826
Deliverable D4.4
Work-package
Deliverable
Deliverable Leader
Quality Assessor
Dissemination level
Delivery date in Annex I
Actual delivery date
Revisions
Status
Keywords
WP4: Information Consolidation and Concentration
D4.4: Information analysis, consolidation and concentration techniques, and evaluation - Final release.
Vasileios Mezaris (CERTH)
Walter Allasia (EURIX)
PU
31-01-2016 (M36)
31-01-2016
Final
multidocument summarization, semantic enrichment, feature extraction, concept detection, event detection, image/video quality, image/video aesthetic quality, face detection/clustering, image clustering, image/video summarization, image/video near duplicate detection, data deduplication, condensation, consolidation
a1b7bb2a4970b7c479aff3324cc7773c1daf3fc1Longitudinal Study of Child Face Recognition
Michigan State University
East Lansing, MI, USA
Malaviya National Institute of Technology
Jaipur, India
Michigan State University
East Lansing, MI, USA
('32623642', 'Debayan Deb', 'debayan deb')
('2117075', 'Neeta Nain', 'neeta nain')
('6680444', 'Anil K. Jain', 'anil k. jain')
debdebay@msu.edu
nnain.cse@mnit.ac.in
jain@cse.msu.edu
a14ed872503a2f03d2b59e049fd6b4d61ab4d6caAttentional Pooling for Action Recognition
The Robotics Institute, Carnegie Mellon University
http://rohitgirdhar.github.io/AttentionalPoolingAction
('3102850', 'Rohit Girdhar', 'rohit girdhar')
('1770537', 'Deva Ramanan', 'deva ramanan')
a1132e2638a8abd08bdf7fc4884804dd6654fa63
Real-Time Video Face Recognition
for Embedded Devices
Tessera, Galway,
Ireland
1. Introduction
This chapter will address the challenges of real-time video face recognition systems
implemented in embedded devices. Topics to be covered include: the importance and
challenges of video face recognition in real life scenarios, describing a general architecture of
a generic video face recognition system and a working solution suitable for recognizing
faces in real-time using low complexity devices. Each component of the system will be
described together with the system’s performance on a database of video samples that
resembles real life conditions.
2. Video face recognition
Face recognition remains a very active topic in computer vision and receives attention from
a large community of researchers in that discipline. Many reasons feed this interest; the
main being the wide range of commercial, law enforcement and security applications that
require authentication. The progress made in recent years on the methods and algorithms
for data processing as well as the availability of new technologies makes it easier to study
these algorithms and turn them into commercially viable products. Biometric-based security
systems are becoming more popular due to their non-invasive nature and their increasing
reliability. Surveillance applications based on face recognition are gaining increasing
attention after the United States’ 9/11 events and with the ongoing security threats. The
Face Recognition Vendor Test (FRVT) (Phillips et al., 2003) includes video face recognition
testing starting with the 2002 series of tests.
Recently, face recognition technology was deployed in consumer applications such as
organizing a collection of images using the faces present in the images (Picassa; Corcoran &
Costache, 2005), prioritizing family members for best capturing conditions when taking
pictures, or directly annotating the images as they are captured (Costache et al., 2006).
Video face recognition, compared with more traditional still face recognition, has the main advantage of using multiple instances of the same individual in sequential frames for recognition. In the still-image case, the system has only one input image with which to decide whether the person is in the database. If that image is not suitable for recognition (due to face orientation, expression, quality or facial occlusions), the recognition result will most likely be incorrect. In a video there are multiple frames which can
www.intechopen.com
('1706790', 'Petronel Bigioi', 'petronel bigioi')
('1734172', 'Peter Corcoran', 'peter corcoran')
a125bc55bdf4bec7484111eea9ae537be314ec62Real-time Facial Expression Recognition in Image
Sequences Using an AdaBoost-based Multi-classifier
National Taiwan University of Science and Technology, Taipei 10607, Taiwan
National Taiwan University of Science and Technology, Taipei 10607, Taiwan
National Taiwan University of Science and Technology, Taipei 10607, Taiwan
To surmount the shortcomings stated above, we attempt to develop an automatic facial expression recognition system that detects human faces and extracts facial features from an image sequence. This system is employed for recognizing six kinds of facial expressions of a computer user: joy, anger, surprise, fear, sadness, and neutral. In the expression classification procedure, we mainly compare the performance of different classifiers using multi-layer perceptrons (MLPs), SVMs, and AdaBoost algorithms (ABAs). Through evaluating the experimental results, the performance of ABAs proves superior to that of the other two. Accordingly, we develop an AdaBoost-based multi-classifier used in our facial expression recognition system.
II. FACE AND FACIAL FEATURE DETECTION
In our system design philosophy, the skin color cue is an
obvious characteristic to detect human faces. To begin with,
we will execute skin color detection, then the morphological
dilation operation, and facial feature detection. Subsequently,
a filtering operation based on geometrical properties is
applied to eliminate the skin color regions that do not pertain
to human faces.
A. Color Space Transformation
Face detection is dependent on skin color detection techniques, which work in one of the frequently used color spaces.
In the past, three color spaces YCbCr, HSI, and RGB have
been extensively applied for skin color detection. Accordingly,
we extract the common attribute from skin color regions to
perform face detection.
The color model of an image captured from the experimental camera is composed of RGB values, but it is easily influenced by lighting. Herein, we adopt the HSI
color space to replace the traditional RGB color space for skin
color detection. We distinguish skin color regions from non-
skin color ones by means of lower and upper bound
thresholds. Via many experiments of detecting human faces,
we choose the H value between 3 and 38 as the range of skin
colors.
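That thresholding rule can be sketched in a few lines of Python. We use the HSV hue from the standard colorsys module as a convenient approximation of the HSI hue (the hue channel is what the rule depends on); the 3-38 degree bounds are the ones quoted above, and the function name is ours:

```python
import colorsys

def is_skin(r, g, b, h_lo=3.0, h_hi=38.0):
    """Classify an RGB pixel as skin if its hue, on a 0-360 degree
    scale, falls inside the empirically chosen [h_lo, h_hi] band."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h_lo <= h * 360.0 <= h_hi
```

For example, a warm skin-like tone such as RGB(200, 150, 120) has a hue of 22.5 degrees and passes the test, while a saturated blue does not.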
B. Connected Component Labeling
After the processing of skin color detection, we employ the linear-time connected-component labeling technique.
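The labeling step can be sketched as follows; this BFS version is an illustrative stand-in for the linear-time two-pass algorithm referenced above, not the authors' implementation:

```python
from collections import deque

def label_components(mask):
    """4-connected component labeling of a binary mask by BFS.
    Returns the label image and the number of components found."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                count += 1                     # start a new component
                labels[y][x] = count
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return labels, count
```

Each resulting component can then be passed to the geometry-based filtering stage to decide whether it may correspond to a face.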
('2574621', 'Chin-Shyurng Fahn', 'chin-shyurng fahn')
('2604646', 'Ming-Hui Wu', 'ming-hui wu')
('2309647', 'Chang-Yi Kao', 'chang-yi kao')
E-mail: csfahn@mail.ntust.edu.tw Tel: +886-02-2730-1215
E-mail: M9415054@mail.ntust.edu.tw Tel: +886-02-2733-3141 ext.7425
E-mail: D9515011@mail.ntust.edu.tw Tel: +886-02-2733-3141 ext.7425
a14ae81609d09fed217aa12a4df9466553db4859REVISED VERSION, JUNE 2011
Face Identification Using Large Feature Sets
('1679142', 'William Robson Schwartz', 'william robson schwartz')
('2723427', 'Huimin Guo', 'huimin guo')
('3826759', 'Jonghyun Choi', 'jonghyun choi')
('1693428', 'Larry S. Davis', 'larry s. davis')
a1f1120653bb1bd8bd4bc9616f85fdc97f8ce892Latent Embeddings for Zero-shot Classification
1MPI for Informatics
2IIT Kanpur
Saarland University
('3370667', 'Yongqin Xian', 'yongqin xian')
('2893664', 'Zeynep Akata', 'zeynep akata')
('2515597', 'Gaurav Sharma', 'gaurav sharma')
('33460941', 'Matthias Hein', 'matthias hein')
('1697100', 'Bernt Schiele', 'bernt schiele')
a1ee0176a9c71863d812fe012b5c6b9c15f9aa8aAffective recommender systems: the role of emotions in
recommender systems
Jurij Tašič
University of Ljubljana, Faculty of Electrical Engineering, Tržaška 25, Ljubljana, Slovenia
University of Ljubljana, Faculty of Electrical Engineering, Tržaška 25, Ljubljana, Slovenia
University of Ljubljana, Faculty of Electrical Engineering, Tržaška 25, Ljubljana, Slovenia
('1717186', 'Andrej Košir', 'andrej košir')marko.tkalcic@fe.uni-lj.si
andrej.kosir@fe.uni-lj.si
jurij.tasic@fe.uni-lj.si
a1dd9038b1e1e59c9d564e252d3e14705872fdecAttributes as Operators:
Factorizing Unseen Attribute-Object Compositions
The University of Texas at Austin
2 Facebook AI Research
('38661780', 'Tushar Nagarajan', 'tushar nagarajan')
('1794409', 'Kristen Grauman', 'kristen grauman')
tushar@cs.utexas.edu, grauman@fb.com∗
a1e97c4043d5cc9896dc60ae7ca135782d89e5fcIEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Re-identification of Humans in Crowds using
Personal, Social and Environmental Constraints
('2963501', 'Shayan Modiri Assari', 'shayan modiri assari')
('1803711', 'Haroon Idrees', 'haroon idrees')
('1745480', 'Mubarak Shah', 'mubarak shah')
a16fb74ea66025d1f346045fda00bd287c20af0eA Coupled Evolutionary Network for Age Estimation
National Laboratory of Pattern Recognition, CASIA, Beijing, China 100190
Center for Research on Intelligent Perception and Computing, CASIA, Beijing, China 100190
University of Chinese Academy of Sciences, Beijing, China
('2112221', 'Peipei Li', 'peipei li')
('49995036', 'Yibo Hu', 'yibo hu')
('1705643', 'Ran He', 'ran he')
('1757186', 'Zhenan Sun', 'zhenan sun')
Email: {peipei.li, yibo.hu}@cripac.ia.ac.cn, {rhe, znsun}@nlpr.ia.ac.cn
ef940b76e40e18f329c43a3f545dc41080f68748

Research Article Volume 7 Issue No.3
ISSN XXXX XXXX © 2017 IJESC

A Face Recognition and Spoofing Detection Adapted to Visually-
Impaired People
K.K. Wagh Institute of Engineering and Education Research, Nashik, India
Department of Computer Engineering
Abstract:
According to estimates by the World Health Organization, about 285 million people suffer from some kind of visual disability, of which 39 million are blind, amounting to 0.7% of the world population. Many visually impaired people cannot recognize the person standing in front of them, and some have trouble remembering a person's name. With this system they can easily recognize the person. Computer vision techniques and image analysis can help the visually impaired at home through a face identification and spoofing detection system. This system also provides a feature to add newly known people and keeps records of all people visiting their home.
Keywords: face recognition, spoofing detection, visually impaired, system architecture.
I. INTRODUCTION
Facial analysis can be used to extract very useful and relevant information in order to help people with visual impairment in several of their daily tasks, providing them with a greater degree of autonomy and security. Facial recognition has received many improvements in recent years and today is approaching perfection. These advances have not left out people with disabilities. For example, an intelligent walking stick for the blind that uses facial recognition has recently been developed [5]. The cane comes equipped with a facial recognition system, GPS and Bluetooth. Upon sighting the face of any acquaintance or friend whose picture is stored on the stick's SD card, the cane vibrates and gives the necessary instructions through a Bluetooth headset to reach that person. The system works with anyone who is at 10 meters or less, and thanks to the GPS, the user receives directions to any destination, as with any GPS navigator.
However, in addition to the task of recognition, today's biometric systems have to deal with other problems, such as spoofing. In network security terms, this refers to techniques through which an attacker, usually with malicious intent, impersonates another entity in a communication through the falsification of data. The motivation of this project is to propose, build and validate an architecture based on face recognition and anti-spoofing that can be integrated both in a video entry phone and as a mobile app. In this way, we want to give the blind and visually impaired an instrument whose ultimate goal is to improve their quality of life and to increase both their safety and their sense of safety at home and when interacting with other people. The proposed architecture has been validated with real users in a real environment, simulating the same conditions as both the images captured by a video entry phone and the images taken by a visually impaired person with their mobile device. The contributions are discussed below. First, an algorithm is proposed for robust normalization of the user's face with respect to rotations and misalignments in the face detection stage. It is shown that a robust normalization algorithm can significantly increase the success rate of a face detection algorithm.
The organization of this document is as follows. Section 2 gives the literature survey, Section 3 gives details of the system architecture, Section 4 gives implementation details, Section 5 presents the research findings and their analysis, and Section 6 concludes the paper.
II. LITERATURE SURVEY
A. Facial Recognition Oriented to Visual Impairment
The problem of face recognition adapted to visually impaired people has been investigated in different ways. Below, the most important works are summarized, indicating for each the features that have motivated the development of the architecture proposed here. In [6], a facial recognition system on mobile devices for the visually impaired is presented, but it mainly focuses on aspects such as how much of the subject is covered by the visual field captured by the mobile camera. In [7], a facial recognition system based on Local Binary Patterns (LBP) [8] was developed. The authors compared this descriptor with other alternatives (Local Ternary Patterns [9] and Histograms of Gradients [10]) and concluded that the performance of LBP is slightly superior, its computational cost is lower and its representation of the information is more compact. As mentioned above, in [5] a facial recognition system integrated into a cane has been developed. In none of these methods is spoofing detection carried out, leaving the systems highly vulnerable to such attacks. We believe this is a very important point, especially for people with visual disabilities. Moreover, none of the alternatives mentioned above is oriented toward video entry phones.
B. Detection of Spoofing
As none of the above works has studied spoofing detection to help people with visual impairment, we will discuss the most significant results as far as detecting spoofing is concerned. There are many different methods for detecting spoofing. However, one of the key factors for an application that must run in real time on an embedded device is that the method be computationally lightweight. Most of the proposed algorithms are very complex and are therefore unfit for real-time use.
International Journal of Engineering Science and Computing, March 2017 6051 http://ijesc.org/
efd308393b573e5410455960fe551160e1525f49Tracking Persons-of-Interest via
Unsupervised Representation Adaptation
('2481388', 'Shun Zhang', 'shun zhang')
('3068086', 'Jia-Bin Huang', 'jia-bin huang')
('33047058', 'Jongwoo Lim', 'jongwoo lim')
('1698965', 'Yihong Gong', 'yihong gong')
('32014778', 'Jinjun Wang', 'jinjun wang')
('1752333', 'Narendra Ahuja', 'narendra ahuja')
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
ef230e3df720abf2983ba6b347c9d46283e4b690Page 1 of 20
QUIS-CAMPI: An Annotated Multi-biometrics Data Feed From
Surveillance Scenarios
IT - Instituto de Telecomunicações, University of Beira Interior
University of Beira Interior
IT - Instituto de Telecomunicações, University of Beira Interior
('1712429', 'Hugo Proença', 'hugo proença')*jcneves@ubi.pt
ef4ecb76413a05c96eac4c743d2c2a3886f2ae07Modeling the Importance of Faces in Natural Images
Jin B.a, Yildirim G.a, Lau C.a, Shaji A.a, Ortiz Segovia M.b and S¨usstrunk S.a
aEPFL, Lausanne, Switzerland;
bOc´e, Paris, France
efd28eabebb9815e34031316624e7f095c7dfcfeA. Uhl and P. Wild. Combining Face with Face-Part Detectors under Gaussian Assumption. In A. Campilho and M. Kamel, editors, Proceedings of the 9th International Conference on Image Analysis and Recognition (ICIAR'12), volume 7325 of LNCS, pages 80-89, Aveiro, Portugal, June 25-27, 2012. © Springer. doi: 10.1007/978-3-642-31298-4_10. The original publication is available at www.springerlink.com.
Combining Face with Face-part Detectors
under Gaussian Assumption⋆
Multimedia Signal Processing and Security Lab
University of Salzburg, Austria
('1689850', 'Andreas Uhl', 'andreas uhl')
('2242291', 'Peter Wild', 'peter wild')
{uhl,pwild}@cosy.sbg.ac.at
eff87ecafed67cc6fc4f661cb077fed5440994bbEvaluation of Expression Recognition
Techniques
Beckman Institute, University of Illinois at Urbana-Champaign, USA
Faculty of Science, University of Amsterdam, The Netherlands
Leiden Institute of Advanced Computer Science, Leiden University, The Netherlands
('1774778', 'Ira Cohen', 'ira cohen')
('1703601', 'Nicu Sebe', 'nicu sebe')
('1840164', 'Yafei Sun', 'yafei sun')
('1731570', 'Michael S. Lew', 'michael s. lew')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
ef458499c3856a6e9cd4738b3e97bef010786adbLearning Type-Aware Embeddings for Fashion
Compatibility
Department of Computer Science,
University of Illinois at Urbana-Champaign
('47087718', 'Mariya I. Vasileva', 'mariya i. vasileva')
('2856622', 'Bryan A. Plummer', 'bryan a. plummer')
('40895028', 'Krishna Dusad', 'krishna dusad')
('9560882', 'Shreya Rajpal', 'shreya rajpal')
('40439276', 'Ranjitha Kumar', 'ranjitha kumar')
{mvasile2,bplumme2,dusad2,srajpal2,ranjitha,daf}@illinois.edu
ef032afa4bdb18b328ffcc60e2dc5229cc1939bcFang and Yuan EURASIP Journal on Image and Video
Processing (2018) 2018:44
https://doi.org/10.1186/s13640-018-0282-x
EURASIP Journal on Image
and Video Processing
RESEARCH
Open Access
Attribute-enhanced metric learning for
face retrieval
('8589942', 'Yuchun Fang', 'yuchun fang')
('30438417', 'Qiulong Yuan', 'qiulong yuan')
ef2a5a26448636570986d5cda8376da83d96ef87Recurrent Neural Networks and Transfer Learning for Action Recognition
Stanford University
Stanford University
('11647121', 'Andrew Giel', 'andrew giel')
('32426361', 'Ryan Diaz', 'ryan diaz')
agiel@stanford.edu
ryandiaz@stanford.edu
ef5531711a69ed687637c48930261769465457f0Studio2Shop: from studio photo shoots to fashion articles
Zalando Research, Muehlenstr. 25, 10243 Berlin, Germany
Keywords:
computer vision, deep learning, fashion, item recognition, street-to-shop
('46928510', 'Julia Lasserre', 'julia lasserre')
('1724791', 'Katharina Rasch', 'katharina rasch')
('2742129', 'Roland Vollgraf', 'roland vollgraf')
julia.lasserre@zalando.de
ef559d5f02e43534168fbec86707915a70cd73a0DING, HUO, HU, LU: DEEPINSIGHT
DeepInsight: Multi-Task Multi-Scale Deep
Learning for Mental Disorder Diagnosis
1 School of Information
Renmin University of China
Beijing, 100872, China
2 Beijing Key Laboratory
of Big Data Management
and Analysis Methods
Beijing, 100872, China
('5535865', 'Mingyu Ding', 'mingyu ding')
('4140493', 'Yuqi Huo', 'yuqi huo')
('1745787', 'Jun Hu', 'jun hu')
('1776220', 'Zhiwu Lu', 'zhiwu lu')
d130143597@163.com
bnhony@163.com
junhu@ruc.edu.cn
luzhiwu@ruc.edu.cn
efa08283656714911acff2d5022f26904e451113Active Object Localization in Visual Situations ('3438473', 'Max H. Quinn', 'max h. quinn')
('13739397', 'Anthony D. Rhodes', 'anthony d. rhodes')
('4421478', 'Melanie Mitchell', 'melanie mitchell')
ef8de1bd92e9ee9d0d2dee73095d4d348dc54a98Fine-grained Activity Recognition
with Holistic and Pose based Features
Max Planck Institute for Informatics, Germany
Stanford University, USA
('2299109', 'Leonid Pishchulin', 'leonid pishchulin')
('1906895', 'Mykhaylo Andriluka', 'mykhaylo andriluka')
('1697100', 'Bernt Schiele', 'bernt schiele')
ef999ab2f7b37f46445a3457bf6c0f5fd7b5689dCalhoun: The NPS Institutional Archive
DSpace Repository
Theses and Dissertations
1. Thesis and Dissertation Collection, all items
2017-12
Improving face verification in photo albums by
combining facial recognition and metadata
with cross-matching
Monterey, California: Naval Postgraduate School
http://hdl.handle.net/10945/56868
Downloaded from NPS Archive: Calhoun
c32fb755856c21a238857b77d7548f18e05f482dMultimodal Emotion Recognition for Human-
Computer Interaction: A Survey
School of Computer and Communication Engineering, University of Science and Technology Beijing, 100083 Beijing, China
('10692633', 'Michele Mukeshimana', 'michele mukeshimana')
('1714904', 'Xiaojuan Ban', 'xiaojuan ban')
('17056027', 'Nelson Karani', 'nelson karani')
('7247643', 'Ruoyi Liu', 'ruoyi liu')
c3beae515f38daf4bd8053a7d72f6d2ed3b05d88
c3dc4f414f5233df96a9661609557e341b71670dTao et al. EURASIP Journal on Advances in Signal Processing 2011, 2011:4
http://asp.eurasipjournals.com/content/2011/1/4
RESEARCH
Utterance independent bimodal emotion
recognition in spontaneous communication
Open Access
('37670752', 'Jianhua Tao', 'jianhua tao')
('48027528', 'Shifeng Pan', 'shifeng pan')
('2740129', 'Minghao Yang', 'minghao yang')
('3295988', 'Kaihui Mu', 'kaihui mu')
('2253805', 'Jianfeng Che', 'jianfeng che')
c3b3636080b9931ac802e2dd28b7b684d6cf4f8bInternational Journal of Security and Its Applications
Vol. 7, No. 2, March, 2013
Face Recognition via Local Directional Pattern
Division of IT Convergence, Daegu Gyeongbuk Institute of Science and Technology
50-1, Sang-ri, Hyeonpung-myeon, Dalseong-gun, Daegu, Korea.
('2437301', 'Dong-Ju Kim', 'dong-ju kim')
('38107412', 'Sang-Heon Lee', 'sang-heon lee')
('2735120', 'Myoung-Kyu Sohn', 'myoung-kyu sohn')
*radioguy@dgist.ac.kr
c398684270543e97e3194674d9cce20acaef3db3Chapter 2
Comparative Face Soft Biometrics for
Human Identification
('19249411', 'Nawaf Yousef Almudhahka', 'nawaf yousef almudhahka')
('1727698', 'Mark S. Nixon', 'mark s. nixon')
('31534955', 'Jonathon S. Hare', 'jonathon s. hare')
c3285a1d6ec6972156fea9e6dc9a8d88cd001617
c3418f866a86dfd947c2b548cbdeac8ca5783c15
c3bcc4ee9e81ce9c5c0845f34e9992872a8defc0MVA2005 IAPR Conference on Machine VIsion Applications, May 16-18, 2005 Tsukuba Science City, Japan
8-10
A New Scheme for Image Recognition Using Higher-Order Local
Autocorrelation and Factor Analysis
†The University of Tokyo
Tokyo, Japan
†††AIST
Tsukuba, Japan
('29737626', 'Naoyuki Nomoto', 'naoyuki nomoto')
('2163494', 'Yusuke Shinohara', 'yusuke shinohara')
('2981587', 'Takayoshi Shiraki', 'takayoshi shiraki')
('1800592', 'Takumi Kobayashi', 'takumi kobayashi')
('1809629', 'Nobuyuki Otsu', 'nobuyuki otsu')
{shiraki, takumi, otsu}@isi.imi.i.u-tokyo.ac.jp
c34532fe6bfbd1e6df477c9ffdbb043b77e7804dA 3D Morphable Eye Region Model
for Gaze Estimation
University of Cambridge, Cambridge, UK
Carnegie Mellon University, Pittsburgh, USA
Max Planck Institute for Informatics, Saarbrücken, Germany
('34399452', 'Erroll Wood', 'erroll wood')
('1767184', 'Louis-Philippe Morency', 'louis-philippe morency')
('39626495', 'Peter Robinson', 'peter robinson')
('3194727', 'Andreas Bulling', 'andreas bulling')
{eww23,pr10}@cl.cam.ac.uk
{tbaltrus,morency}@cs.cmu.edu
bulling@mpi-inf.mpg.de
c394a5dfe5bea5fbab4c2b6b90d2d03e01fb29c0Person Reidentification and Recognition in
Video
Computer Science and Engineering,
University of South Florida, Tampa, Florida, USA
http://figment.csee.usf.edu/
('3110392', 'Rangachar Kasturi', 'rangachar kasturi')R1K@cse.usf.edu,rajmadhan@mail.usf.edu
c32383330df27625592134edd72d69bb6b5cff5c422
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 42, NO. 2, APRIL 2012
Intrinsic Illumination Subspace for Lighting
Insensitive Face Recognition
('1686057', 'Chia-Ping Chen', 'chia-ping chen')
('1720473', 'Chu-Song Chen', 'chu-song chen')
c3a3f7758bccbead7c9713cb8517889ea6d04687
c32f04ccde4f11f8717189f056209eb091075254Analysis and Synthesis of Behavioural Specific
Facial Motion
A dissertation submitted to the University of Bristol in accordance with the requirements
for the degree of Doctor of Philosophy in the Faculty of Engineering, Department of
Computer Science.
February 2007
71657 words
('2903159', 'Lisa Nanette Gralewski', 'lisa nanette gralewski')
c30982d6d9bbe470a760c168002ed9d66e1718a2Multi-Camera Head Pose Estimation
Using an Ensemble of Exemplars
University City Blvd., Charlotte, NC
Department of Computer Science
University of North Carolina at Charlotte
('1715594', 'Scott Spurlock', 'scott spurlock')
('2549750', 'Peter Malmgren', 'peter malmgren')
('1873911', 'Hui Wu', 'hui wu')
('1690110', 'Richard Souvenir', 'richard souvenir')
{sspurloc, ptmalmyr, hwu13, souvenir}@uncc.edu
c39ffc56a41d436748b9b57bdabd8248b2d28a32Residual Attention Network for Image Classification
SenseTime Group Limited, 2Tsinghua University
The Chinese University of Hong Kong, 4Beijing University of Posts and Telecommunications
('1682816', 'Fei Wang', 'fei wang')
('9563639', 'Mengqing Jiang', 'mengqing jiang')
('40110742', 'Chen Qian', 'chen qian')
('1692609', 'Shuo Yang', 'shuo yang')
('49672774', 'Cheng Li', 'cheng li')
('1720776', 'Honggang Zhang', 'honggang zhang')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
1{wangfei, qianchen, chengli}@sensetime.com, 2jmq14@mails.tsinghua.edu.cn
3{ys014, xtang}@ie.cuhk.edu.hk, xgwang@ee.cuhk.edu.hk, 4zhhg@bupt.edu.cn
c32cd207855e301e6d1d9ddd3633c949630c793aOn the Effect of Illumination and Face Recognition
Jeffrey Ho
Department of CISE
University of Florida
Gainesville, FL 32611
Department of Computer Science
University of California at San Diego
La Jolla, CA 92093
('38998440', 'David Kriegman', 'david kriegman')Email: jho@cise.ufl.edu
Email: kriegman@cs.ucsd.edu
c317181fa1de2260e956f05cd655642607520a4fResearch Article
Research
Article for submission to journal
Subject Areas:
computer vision, pattern recognition,
feature descriptor
Keywords:
micro-facial expression, expression
recognition, action unit
Objective Classes for
Micro-Facial Expression
Recognition
Centre for Imaging Sciences, University of
Manchester, Manchester, United Kingdom
Sudan University of Science and Technology
Khartoum, Sudan
3School of Computing, Mathematics and Digital
Technology, Manchester Metropolitan University
Manchester, United Kingdom
Micro-expressions are brief, spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different from normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset are based on Action Units and self-reports, creating conflicts during machine learning training. We will show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using the LBP-TOP, HOOF and HOG 3D feature descriptors. The experiments are evaluated on two benchmark FACS-coded datasets: CASME II and SAMM. The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the state-of-the-art 5-class emotion-based classification on CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.
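The evaluation pipeline implied above (one descriptor vector per sample, a classifier, multi-class accuracy) can be sketched with synthetic data. The dimensions, the generated data, and the nearest-class-mean classifier below are illustrative assumptions, not the descriptors or classifiers used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for spatio-temporal descriptors (e.g. HOG 3D vectors):
# 5 classes, 40 samples each, 96-dimensional features.
n_classes, n_per_class, dim = 5, 40, 96
centers = rng.normal(0.0, 3.0, size=(n_classes, dim))
X = np.vstack([centers[c] + rng.normal(size=(n_per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Simple split: hold out every 4th sample for testing.
test = np.arange(len(y)) % 4 == 0
Xtr, ytr, Xte, yte = X[~test], y[~test], X[test], y[test]

# Nearest-class-mean classifier: assign each test vector to the closest
# training-class centroid in Euclidean distance.
means = np.stack([Xtr[ytr == c].mean(axis=0) for c in range(n_classes)])
dists = np.linalg.norm(Xte[:, None, :] - means[None, :, :], axis=2)
pred = dists.argmin(axis=1)

accuracy = (pred == yte).mean()
print(f"5-class accuracy: {accuracy:.2%}")
```

On well-separated synthetic classes this toy classifier scores near-perfectly; real micro-expression data is far harder, which is why descriptor choice (LBP-TOP vs. HOOF vs. HOG 3D) matters.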
1. Introduction
A micro-facial expression is revealed when someone attempts to conceal their true emotion [1,2]. When a person consciously realises that a facial expression is occurring, they may try to suppress it because showing the emotion may not be appropriate [3]. Once this suppression has occurred, the person may mask over the original facial expression, causing a micro-facial expression. In a high-stakes environment, these expressions become more likely, as there is greater risk in showing the emotion.
('3125772', 'Moi Hoon Yap', 'moi hoon yap')
('36059631', 'Adrian K. Davison', 'adrian k. davison')
('23986818', 'Walied Merghani', 'walied merghani')
('3125772', 'Moi Hoon Yap', 'moi hoon yap')
e-mail: M.Yap@mmu.ac.uk
c30e4e4994b76605dcb2071954eaaea471307d80
c37a971f7a57f7345fdc479fa329d9b425ee02beA Novice Guide towards Human Motion Analysis and Understanding ('40360970', 'Ahmed Nabil Mohamed', 'ahmed nabil mohamed')dr.ahmed.mohamed@ieee.org
c3638b026c7f80a2199b5ae89c8fcbedfc0bd8af
c32c8bfadda8f44d40c6cd9058a4016ab1c27499Unconstrained Face Recognition From a Single
Image
Siemens Corporate Research, 755 College Road East, Princeton, NJ
Center for Automation Research (CfAR), University of Maryland, College Park, MD
I. INTRODUCTION
In most situations, identifying people by their faces is an effortless task for humans. Is this true for computers? This very question defines the field of automatic face recognition [10], [38], [79], one of the most active research areas in computer vision, pattern recognition, and image understanding. Over the past decade, the problem of face recognition has attracted substantial attention from various disciplines and has witnessed a skyrocketing growth of its literature. Below, we mainly emphasize some key perspectives of the face recognition problem.
A. Biometric perspective
Face is a biometric. As a consequence, face recognition finds wide applications in authentication, security, and
so on. One recent application is the US-VISIT system by the Department of Homeland Security (DHS), collecting
foreign passengers’ fingerprints and face images.
Biometric signatures of a person characterize physiological or behavioral characteristics. Physiological biometrics are innate or naturally occurring, while behavioral biometrics arise from mannerisms or traits that are learned or acquired. Table I lists commonly used biometrics. Biometric technologies provide the foundation for an extensive array of highly secure identification and personal verification solutions. Compared to conventional identification and verification methods based on personal identification numbers (PINs) or passwords, biometric technologies offer many advantages. First, biometrics are individualized traits, whereas passwords may be used or stolen by someone other than the authorized user. Biometrics are also very convenient, since there is nothing to carry or remember. In addition, biometric technologies are becoming more accurate and less expensive.
Among all the biometrics listed in Table I, face is unique because it is the only biometric belonging to both the physiological and behavioral categories. While the physiological part of the face has been widely exploited for face recognition, the behavioral part has not yet been fully investigated. In addition, as reported in [23], [51], face enjoys many advantages over other biometrics because it is a natural, non-intrusive, and easy-to-use biometric. For example [23], among the six biometrics of face, finger, hand, voice, eye, and signature, the face biometric ranks the
June 10, 2008
DRAFT
('1682187', 'Shaohua Kevin Zhou', 'shaohua kevin zhou')
('9215658', 'Rama Chellappa', 'rama chellappa')
('34713849', 'Narayanan Ramanathan', 'narayanan ramanathan')
Email: {shaohua.zhou}@siemens.com, {rama, ramanath}@cfar.umd.edu
c3fb2399eb4bcec22723715556e31c44d086e054499
2014 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP)
978-1-4799-2893-4/14/$31.00 ©2014 IEEE
1. INTRODUCTION
c37de914c6e9b743d90e2566723d0062bedc9e6a©2016 Society for Imaging Science and Technology
DOI: 10.2352/ISSN.2470-1173.2016.11.IMAWM-455
Joint and Discriminative Dictionary Learning
Expression Recognition
for Facial
('38611433', 'Sriram Kumar', 'sriram kumar')
('3168309', 'Behnaz Ghoraani', 'behnaz ghoraani')
('32219349', 'Andreas Savakis', 'andreas savakis')
c418a3441f992fea523926f837f4bfb742548c16A Computer Approach for Face Aging Problems
Centre for Pattern Recognition and Machine Intelligence,
Concordia University, Canada
('1769788', 'Khoa Luu', 'khoa luu')kh_lu@cenparmi.concordia.ca
c4fb2de4a5dc28710d9880aece321acf68338fdeInteractive Generative Adversarial Networks for Facial Expression Generation
in Dyadic Interactions
University of Central Florida
Educational Testing Service
Saad Khan
Educational Testing Service
('2974242', 'Behnaz Nojavanasghari', 'behnaz nojavanasghari')
('2224875', 'Yuchi Huang', 'yuchi huang')
behnaz@eecs.ucf.edu
yhuang001@ets.org
skhan002@ets.org
c44c84540db1c38ace232ef34b03bda1c81ba039Cross-Age Reference Coding for Age-Invariant
Face Recognition and Retrieval
Institute of Information Science, Academia Sinica, Taipei, Taiwan
National Taiwan University, Taipei, Taiwan
('33970300', 'Bor-Chun Chen', 'bor-chun chen')
('1720473', 'Chu-Song Chen', 'chu-song chen')
('1716836', 'Winston H. Hsu', 'winston h. hsu')
c4f1fcd0a5cdaad8b920ee8188a8557b6086c1a4Int J Comput Vis (2014) 108:3–29
DOI 10.1007/s11263-014-0698-4
The Ignorant Led by the Blind: A Hybrid Human–Machine Vision
System for Fine-Grained Categorization
Received: 7 March 2013 / Accepted: 8 January 2014 / Published online: 20 February 2014
© Springer Science+Business Media New York 2014
('3251767', 'Steve Branson', 'steve branson')
('1690922', 'Pietro Perona', 'pietro perona')
c46a4db7247d26aceafed3e4f38ce52d54361817A CNN Cascade for Landmark Guided Semantic
Part Segmentation
School of Computer Science, The University of Nottingham, Nottingham, UK
('34596685', 'Aaron S. Jackson', 'aaron s. jackson')
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
{aaron.jackson, michel.valstar, yorgos.tzimiropoulos}@nottingham.ac.uk
c43862db5eb7e43e3ef45b5eac4ab30e318f2002Provable Self-Representation Based Outlier Detection in a Union of Subspaces
Johns Hopkins University, Baltimore, MD, 21218, USA
('1878841', 'Chong You', 'chong you')
('1780452', 'Daniel P. Robinson', 'daniel p. robinson')
c4dcf41506c23aa45c33a0a5e51b5b9f8990e8ad Understanding Activity: Learning the Language of Action
Univ. of Rochester and Maryland
1.1 Overview
Understanding observed activity is an important
problem, both from the standpoint of practical applications,
and as a central issue in attempting to describe the
phenomenon of intelligence. On the practical side, there are a
large number of applications that would benefit from
improved machine ability to analyze activity. The most
prominent are various surveillance scenarios. The current
emphasis on homeland security has brought this issue to the
forefront, and resulted in considerable work on mostly low-level detection schemes. There are also applications in
medical diagnosis and household assistants that, in the long
run, may be even more important. In addition, there are
numerous scientific projects, ranging from monitoring of
weather conditions to observation of animal behavior that
would be facilitated by automatic understanding of activity.
From a scientific standpoint, understanding activity is central to understanding intelligence.
Analyzing what is happening in the environment, and acting
on the results of that analysis is, to a large extent, what
natural intelligent systems do, whether they are human or
animal. Artificial intelligences, if we want them to work with
people in the natural world, will need commensurate abilities.
The importance of the problem has not gone unrecognized.
There is a substantial body of work on various components of
the problem, most especially on change detection, motion
analysis, and tracking. More recently, in the context of
surveillance applications, there have been some preliminary
efforts to come up with a general ontology of human activity.
These efforts have largely been top-down in the classic AI
tradition, and, as with earlier analogous effort in areas such
as object recognition and scene understanding, have seen
limited practical application because of the difficulty in
robustly extracting the putative primitives on which the top-down formalism is based. We propose a novel alternative
approach, where understanding activity is centered on
('34344092', 'Randal Nelson', 'randal nelson')
('1697493', 'Yiannis Aloimonos', 'yiannis aloimonos')
c42a8969cd76e9f54d43f7f4dd8f9b08da566c5fTowards Unconstrained Face Recognition
Using 3D Face Model
Intelligent Autonomous Systems (IAS), Technical University of Munich, Garching
Computer Vision Research Group, COMSATS Institute of Information
Technology, Lahore
1Germany
2Pakistan
1. Introduction
Over the last couple of decades, many commercial systems have become available to identify human faces. However, face recognition remains an outstanding challenge in the presence of different kinds of real-world variations, especially facial poses, non-uniform lighting, and facial expressions.
Meanwhile, face recognition technology has extended its role from biometrics and security applications to human-robot interaction (HRI). Person identification is one of the key tasks when interacting with intelligent machines/robots, exploiting non-intrusive system security and authentication of the human interacting with the system. This capability further helps machines to learn person-dependent traits and interaction behavior, and to utilize this knowledge for task manipulation. In such scenarios, the acquired face images contain large variations, which demands an unconstrained face recognition system.
Fig. 1. Biometric market analysis of the past few years, showing the contribution of revenue generated by various biometrics. Although AFIS is gaining popularity in the current biometric industry, faces are still considered one of the most widely used biometrics.
www.intechopen.com
('1725709', 'Zahid Riaz', 'zahid riaz')
('4241648', 'M. Saquib Sarfraz', 'm. saquib sarfraz')
('1746229', 'Michael Beetz', 'michael beetz')
c41de506423e301ef2a10ea6f984e9e19ba091b4Modeling Attributes from Category-Attribute Proportions
Columbia University
2IBM Research
('1815972', 'Felix X. Yu', 'felix x. yu')
('29889388', 'Tao Chen', 'tao chen')
{yuxinnan, taochen, sfchang}@ee.columbia.edu
{liangliang.cao, mimerler, nccodell, jsmith}@us.ibm.com
c4934d9f9c41dbc46f4173aad2775432fe02e0e6Workshop track - ICLR 2017
GENERALIZATION TO NEW COMPOSITIONS OF KNOWN
ENTITIES IN IMAGE UNDERSTANDING
Bar Ilan University, Israel
Jonathan Berant &
Amir Globerson
Tel Aviv University
Israel
Vahid Kazemi &
Gal Chechik
Google Research,
Mountain View CA, USA
('34815079', 'Yuval Atzmon', 'yuval atzmon')yuval.atzmon@biu.ac.il
c40c23e4afc81c8b119ea361e5582aa3adecb157Coupled Marginal Fisher Analysis for
Low-resolution Face Recognition
Carnegie Mellon University, Electrical and Computer Engineering
5000 Forbes Avenue, Pittsburgh, Pennsylvania, USA 15213
('2883809', 'Stephen Siena', 'stephen siena')
('2232940', 'Vishnu Naresh Boddeti', 'vishnu naresh boddeti')
ssiena@andrew.cmu.edu
naresh@cmu.edu
kumar@ece.cmu.edu
c49aed65fcf9ded15c44f9cbb4b161f851c6fa88Multiscale Facial Expression Recognition using Convolutional Neural Networks
IDIAP, Martigny, Switzerland
('8745904', 'Beat Fasel', 'beat fasel')Beat.Fasel@idiap.ch
c466ad258d6262c8ce7796681f564fec9c2b143d14-21
MVA2013 IAPR International Conference on Machine Vision Applications, May 20-23, 2013, Kyoto, JAPAN
Pose-Invariant Face Recognition
Using A Single 3D Reference Model
National Taiwan University of Science and Technology
No. 43, Sec.4, Keelung Rd., Taipei, 106, Taiwan
('38801529', 'Gee-Sern Hsu', 'gee-sern hsu')
('3329222', 'Hsiao-Chia Peng', 'hsiao-chia peng')
*jison@mail.ntust.edu.tw
ea46951b070f37ad95ea4ed08c7c2a71be2daedcUsing phase instead of optical flow
for action recognition
Computer Vision Lab, Delft University of Technology, Netherlands
Intelligent Sensory Interactive Systems, University of Amsterdam, Netherlands
('9179750', 'Omar Hommos', 'omar hommos')
('37041694', 'Silvia L. Pintea', 'silvia l. pintea')
('1738975', 'Jan C. van Gemert', 'jan c. van gemert')
eac6aee477446a67d491ef7c95abb21867cf71fcJOURNAL
A survey of sparse representation: algorithms and
applications
('38448016', 'Zheng Zhang', 'zheng zhang')
('38649019', 'Yong Xu', 'yong xu')
('37081450', 'Jian Yang', 'jian yang')
('1720243', 'Xuelong Li', 'xuelong li')
('1698371', 'David Zhang', 'david zhang')
ea079334121a0ba89452036e5d7f8e18f6851519UNSUPERVISED INCREMENTAL LEARNING OF DEEP DESCRIPTORS
FROM VIDEO STREAMS
MICC University of Florence
('2619131', 'Federico Pernici', 'federico pernici')
('8196487', 'Alberto Del Bimbo', 'alberto del bimbo')
federico.pernici@unifi.it, alberto.delbimbo@unifi.it
eac1b644492c10546a50f3e125a1f790ec46365fChained Multi-stream Networks Exploiting Pose, Motion, and Appearance for
Action Classification and Detection
University of Freiburg
Freiburg im Breisgau, Germany
('2890820', 'Mohammadreza Zolfaghari', 'mohammadreza zolfaghari')
('2371771', 'Gabriel L. Oliveira', 'gabriel l. oliveira')
('31656404', 'Nima Sedaghat', 'nima sedaghat')
('1710872', 'Thomas Brox', 'thomas brox')
{zolfagha,oliveira,nima,brox}@cs.uni-freiburg.de
ea80a050d20c0e24e0625a92e5c03e5c8db3e786Face Verification and Face Image Synthesis
under Illumination Changes
using Neural Networks
by
Under the supervision of
Prof. Daphna Weinshall
School of Computer Science and Engineering
The Hebrew University of Jerusalem
Israel
Submitted in partial fulfillment of the
requirements of the degree of
Master of Science
December, 2017
eacba5e8fbafb1302866c0860fc260a2bdfff232VOS-GAN: Adversarial Learning of Visual-Temporal
Dynamics for Unsupervised Dense Prediction in Videos
∗ Pattern Recognition and Computer Vision (PeRCeiVe) Lab
University of Catania, Italy
www.perceivelab.com
§ Center for Research in Computer Vision
University of Central Florida, USA
http://crcv.ucf.edu
('31411067', 'C. Spampinato', 'c. spampinato')
('35323264', 'S. Palazzo', 's. palazzo')
('2004177', 'F. Murabito', 'f. murabito')
('1690194', 'D. Giordano', 'd. giordano')
('1797029', 'M. Shah', 'm. shah')
ea482bf1e2b5b44c520fc77eab288caf8b3f367aProceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)
2592
ea6f5c8e12513dbaca6bbdff495ef2975b8001bdApplying a Set of Gabor Filter to 2D-Retinal Fundus Image
to Detect the Optic Nerve Head (ONH)
1Higher National School of engineering of Tunis, ENSIT, Laboratory LATICE (Information Technology and Communication and
Electrical Engineering LR11ESO4), University of Tunis EL Manar. Address: ENSIT 5, Avenue Taha Hussein, B. P.: 56, Bab
Menara, 1008 Tunis; 2University of Tunis El-Manar, Tunis with expertise in Mechanic, Optics, Biophysics, Conference Master
ISTMT, Laboratory of Research in Biophysics and Medical Technologies LRBTM Higher Institute of Medical Technologies of Tunis
ISTMT, University of Tunis El Manar Address: 9, Rue Docteur Zouher Safi 1006; 3Faculty of Medicine of Tunis; Address
Rue Djebel Lakhdhar. La Rabta. 1007, Tunis - Tunisia
Corresponding author:
High Institute of Medical Technologies
of Tunis, ISTMT, and High National
School Engineering of Tunis,
Information Technology and
Communication Technology and
Electrical Engineering, University of
Tunis El-Manar, ENSIT 5, Avenue Taha
Hussein, B. P.: 56, Bab Menara, 1008
Tunis, Tunisia,
Tel: 9419010363;
('9304667', 'Hédi Trabelsi', 'hédi trabelsi')
('2281259', 'Ines Malek', 'ines malek')
('31649078', 'Imed Jabri', 'imed jabri')
E-mail: rabelg@live.fr
eafda8a94e410f1ad53b3e193ec124e80d57d095Jeffrey F. Cohn
13
Observer-Based Measurement of Facial Expression
With the Facial Action Coding System
Facial expression has been a focus of emotion research for over a hundred years (Darwin, 1872/1998). It is central to several leading theories of emotion (Ekman, 1992; Izard, 1977; Tomkins, 1962) and has been the focus of at times heated debate about issues in emotion science (Ekman, 1973, 1993; Fridlund, 1992; Russell, 1994). Facial expression figures prominently in research on almost every aspect of emotion, including psychophysiology (Levenson, Ekman, & Friesen, 1990), neural bases (Calder et al., 1996; Davidson, Ekman, Saron, Senulis, & Friesen, 1990), development (Malatesta, Culver, Tesman, & Shephard, 1989; Matias & Cohn, 1993), perception (Ambadar, Schooler, & Cohn, 2005), social processes (Hatfield, Cacioppo, & Rapson, 1992; Hess & Kirouac, 2000), and emotion disorder (Kaiser, 2002; Sloan, Straussa, Quirka, & Sajatovic, 1997), to name a few.
Because of its importance to the study of emotion, a number of observer-based systems of facial expression measurement have been developed (Ekman & Friesen, 1978, 1982; Ekman, Friesen, & Tomkins, 1971; Izard, 1979, 1983; Izard & Dougherty, 1981; Kring & Sloan, 1991; Tronick, Als, & Brazelton, 1980). Of these various systems for describing facial expression, the Facial Action Coding System (FACS; Ekman & Friesen, 1978; Ekman, Friesen, & Hager, 2002) is the most comprehensive, psychometrically rigorous, and widely used (Cohn & Ekman, 2005; Ekman & Rosenberg, 2005). Using FACS and viewing video-recorded facial behavior at frame rate and slow motion, coders can manually code nearly all possible facial expressions, which are decomposed into action units (AUs). Action units, with some qualifications, are the smallest visually discriminable facial movements. By comparison, other systems are less thorough (Malatesta et al., 1989), fail to differentiate between some anatomically distinct movements (Oster, Hegley, & Nagel, 1992), consider movements that are not anatomically distinct as separable (Oster et al., 1992), and often assume a one-to-one mapping between facial expression and emotion (for a review of these systems, see Cohn & Ekman, in press).
Unlike systems that use emotion labels to describe expression, FACS explicitly distinguishes between facial actions and inferences about what they mean. FACS itself is descriptive and includes no emotion-specified descriptors. Hypotheses and inferences about the emotional meaning of facial actions are extrinsic to FACS. If one wishes to make emotion-based inferences from FACS codes, a variety of related resources exist. These include the FACS Investigators' Guide (Ekman et al., 2002), the FACS interpretive database (Ekman, Rosenberg, & Hager, 1998), and a large body of empirical research (Ekman & Rosenberg, 2005). These resources suggest combination rules for defining emotion-specified expressions from FACS action units, but this inferential step remains extrinsic to FACS. Because of its descriptive power, FACS is regarded by many as the standard measure for facial behavior and is used widely in diverse fields. Beyond emotion science, these include facial neuromuscular disorders (Van Swearingen & Cohn, 2005), neuroscience (Bruce & Young, 1998; Rinn, 1984, 1991), computer vision (Bartlett,
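The combination-rule step described above — coders record only action units, and emotion-specified expressions are defined afterwards by rules over AU sets — can be sketched as a small lookup. The three prototype rules here are illustrative stand-ins loosely echoing commonly cited AU prototypes, not the actual FACS interpretive database:

```python
# Illustrative AU-combination rules; the real FACS Investigators' Guide and
# interpretive database are far richer than this sketch.
RULES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
}

def emotion_hypotheses(coded_aus):
    """Return emotion labels whose prototype AUs are all present in the coding.

    The inference is extrinsic to FACS itself: the coder records AUs only,
    and rules like these are applied afterwards.
    """
    coded = set(coded_aus)
    return sorted(label for label, aus in RULES.items() if aus <= coded)

print(emotion_hypotheses([6, 12, 25]))  # AU6+AU12 present -> ['happiness']
print(emotion_hypotheses([4]))          # no full prototype matched -> []
```

Keeping the rules separate from the coding step mirrors the design of FACS: the descriptive record stays reusable under any later interpretive scheme.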
('2059653', 'Zara Ambadar', 'zara ambadar')
('21451088', 'Paul Ekman', 'paul ekman')
ea85378a6549bb9eb9bcc13e31aa6a61b655a9afDiplomarbeit
Template Protection for PCA-LDA-based 3D
Face Recognition System
von
Technische Universität Darmstadt
Fachbereich Informatik
Fachgebiet Graphisch-Interaktive Systeme
Fraunhoferstraße 5
64283 Darmstadt
('1788102', 'Daniel Hartung', 'daniel hartung')
('35069235', 'Xuebing Zhou', 'xuebing zhou')
('1734569', 'Dieter W. Fellner', 'dieter w. fellner')
ea2ee5c53747878f30f6d9c576fd09d388ab0e2bViola-Jones based Detectors: How much affects
the Training Set?
SIANI
Edif. Central del Parque Cient´ıfico Tecnol´ogico
Universidad de Las Palmas de Gran Canaria
35017 - Spain
('4643134', 'Javier Lorenzo-Navarro', 'javier lorenzo-navarro')
ea890846912f16a0f3a860fce289596a7dac575fORIGINAL RESEARCH ARTICLE
published: 09 October 2014
doi: 10.3389/fpsyg.2014.01154
Benefits of social vs. non-social feedback on learning and
generosity. Results from theTipping Game
Tilburg Center for Logic, General Ethics, and Philosophy of Science, Tilburg University, Tilburg, Netherlands
Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
Edited by:
Giulia Andrighetto, Institute of
Cognitive Science and Technologies –
National Research Council, Italy
Reviewed by:
David R. Simmons, University of
Glasgow, UK
Aron Szekely, University of Oxford, UK
*Correspondence:
Logic, General Ethics, and Philosophy
of Science, Tilburg University
P. O. Box 90153, 5000 LE
Tilburg, Netherlands
Stankevicius have contributed equally
to this work.
Although much work has recently been directed at understanding social decision-making,
relatively little is known about how different types of feedback impact adaptive changes
in social behavior. To address this issue quantitatively, we designed a novel associative
learning task called the “Tipping Game,” in which participants had to learn a social norm
of tipping in restaurants. Participants were found to make more generous decisions
from feedback in the form of facial expressions,
in comparison to feedback in the
form of symbols such as ticks and crosses. Furthermore, more participants displayed
learning in the condition where they received social feedback than participants in the non-
social condition. Modeling results showed that the pattern of performance displayed by
participants receiving social feedback could be explained by a lower sensitivity to economic
costs.
Keywords: social/non-social feedback, facial expressions, social norms, tipping behavior, associative learning
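The cost-sensitivity account in the abstract can be illustrated with a minimal reinforcement-learning sketch. This is a generic Rescorla-Wagner-style learner, not the authors' actual model; the tip options, cost values, and feedback values are invented for illustration:

```python
import random

def simulate_tipping_agent(cost_sensitivity, n_trials=500, alpha=0.1, seed=0):
    """Rescorla-Wagner-style learner choosing between a low and a generous tip.

    The generous tip earns positive social feedback but costs more; a lower
    cost_sensitivity makes the generous option subjectively more valuable.
    Returns the fraction of trials on which the generous tip was chosen.
    """
    rng = random.Random(seed)
    q = {"low": 0.0, "generous": 0.0}            # learned action values
    cost = {"low": 0.05, "generous": 0.20}       # economic cost of each tip
    feedback = {"low": 0.0, "generous": 1.0}     # approval signal (e.g. a smile)
    generous_choices = 0
    for _ in range(n_trials):
        # epsilon-greedy choice; subjective reward discounts the economic cost
        if rng.random() < 0.1:
            action = rng.choice(["low", "generous"])
        else:
            action = max(q, key=q.get)
        reward = feedback[action] - cost_sensitivity * cost[action]
        q[action] += alpha * (reward - q[action])
        generous_choices += action == "generous"
    return generous_choices / n_trials
```

An agent with low cost sensitivity converges on generous tipping while a highly cost-sensitive agent does not, mirroring the qualitative pattern the abstract reports for the social-feedback group.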
INTRODUCTION
Several behavioral, neurobiological and theoretical studies have
shown that social norm compliance, and more generally adap-
tive changes in social behavior, often require the effective use and
weighing of different types of information, including expected
economic costs and benefits, the potential impact of our behavior
on the welfare of others and our own reputation, as well as feed-
back information (Bicchieri, 2006; Adolphs, 2009; Frith and Frith,
2012). Relatively little attention has been paid to how different
types of feedback (or reward) may impact the way social norms
are learned. The present study addresses this issue with behavioral
and modeling results from a novel associative learning task called
the “Tipping Game.” We take the example of tipping and ask: how
do social feedback in the form of facial expressions, as opposed
to non-social feedback in the form of such conventional signs as
ticks and crosses, affect the way participants learn a social norm
of tipping?
Recent findings indicate that people’s decision-making is often
biased by social stimuli. For example, images of a pair of eyes can
significantly increase pro-social behavior in laboratory conditions
as well as in real-world contexts (Haley and Fessler, 2005; Bateson
et al., 2006; Rigdon et al., 2009; Ernest-Jones et al., 2011). Fur-
thermore, decision-making can be systematically biased by facial
emotional expressions used as predictors of monetary reward
(Averbeck and Duchaine, 2009; Evans et al., 2011; Shore and
Heerey, 2011). Facial expressions of happiness elicit approach-
ing behavior, whereas angry faces elicit avoidance (Seidel et al.,
2010; for a review see Blair, 2003). Because they can function as
signals to others, eliciting specific behavioral responses, emotional
facial expressions play a major role in socialization practices that
help individuals to adapt to the norms and values of their culture
(Keltner and Haidt, 1999; Frith, 2009).
Despite this body of findings, the literature does not pro-
vide an unambiguous answer to the question of how learning
performance is affected by social stimuli in comparison to differ-
ent types of non-social stimuli used as feedback about previous
decisions in a learning task (Ruff and Fehr, 2014). Consistent
with the view that social reinforcement is a powerful facili-
tator of human learning (Zajonc, 1965; Bandura, 1977), one
recent study using a feedback-guided item-category association
task found that learning performance in control groups was
improved when social (smiling or angry faces) instead of non-
social (green or red lights) reinforcement was used (Hurlemann
et al., 2010).
However, the paradigm used in this study did not distin-
guish between two conditions in which social-facilitative effects
on learning performance have been observed: first, a condition
characterized by the mere presence of others (Allport, 1920); and
second, a condition where others provide reinforcing feedback
(Zajonc, 1965). In the task used by Hurlemann et al. (2010), faces
were present onscreen throughout each trial, changing from a
neutral to a happy expression for correct responses or angry for
incorrect responses. So, this study could not identify the specific
effect of social feedback on learning.
Consistent with the assumption often made in economics and
psychology that optimal decisions and learning are based on an
assessment of the evidence that is unbiased by the social or non-
social nature of the evidence itself (Becker, 1976; Oaksford and
Chater, 2007), Lin et al. (2012a) found that, instead of boosting
learning performance, social reward (smiling or angry faces) made
www.frontiersin.org
October 2014 | Volume 5 | Article 1154 | 1
('37157064', 'Matteo Colombo', 'matteo colombo')
('25749361', 'Aistis Stankevicius', 'aistis stankevicius')
('2771872', 'Peggy Seriès', 'peggy seriès')
('37157064', 'Matteo Colombo', 'matteo colombo')
('37157064', 'Matteo Colombo', 'matteo colombo')
e-mail: m.colombo@uvt.nl
eaaed082762337e7c3f8a1b1dfea9c0d3ca281bfVICTORIA UNIVERSITY OF WELLINGTON
Te Whare Wananga o te Upoko o te Ika a Maui
School of Mathematics, Statistics and Computer Science
Computer Science
Algebraic Simplification of Genetic
Programs during Evolution
Technical Report CS-TR-06/7
February 2006
School of Mathematics, Statistics and Computer Science
Victoria University
PO Box 600, Wellington
New Zealand
Tel: +64 4 463 5341
Fax: +64 4 463 5045
http://www.mcs.vuw.ac.nz/research
('1679067', 'Mengjie Zhang', 'mengjie zhang')Email: Tech.Reports@mcs.vuw.ac.nz
ea218cebea2228b360680cb85ca133e8c2972e56Recover Canonical-View Faces in the Wild with Deep
Neural Networks
Department of Information Engineering, The Chinese University of Hong Kong
The Chinese University of Hong Kong
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
zz012@ie.cuhk.edu.hk
('2042558', 'Zhenyao Zhu', 'zhenyao zhu')
('1693209', 'Ping Luo', 'ping luo')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
xgwang@ee . cuhk . edu . hk
pluo.lhi@gmail.com
xtang@ i e . cuhk. edu . hk
ea96bc017fb56593a59149e10d5f14011a3744a0
e1630014a5ae3d2fb7ff6618f1470a567f4d90f5Look, Listen and Learn - A Multimodal LSTM for Speaker Identification
SenseTime Group Limited1
The University of Hong Kong
Project page: http://www.deeplearning.cc/mmlstm
('46972608', 'Yongtao Hu', 'yongtao hu'){rensijie, yuwing, xuli, sunwenxiu, yanqiong}@sensetime.com
{herohuyongtao, wangchuan2400}@gmail.com
e19fb22b35c352f57f520f593d748096b41a4a7bModeling Context for Image
Understanding:
When, For What, and How?
Department of Electrical and Computer Engineering,
Carnegie Mellon University
A thesis submitted for the degree of
Doctor of Philosophy
April 3, 2009
('1713589', 'Devi Parikh', 'devi parikh')
e10a257f1daf279e55f17f273a1b557141953ce2
e171fba00d88710e78e181c3e807c2fdffc6798a
e1c59e00458b4dee3f0e683ed265735f33187f77Spectral Rotation versus K-Means in Spectral Clustering
Computer Science and Engineering Department
University of Texas at Arlington
Arlington,TX,76019
('39122448', 'Jin Huang', 'jin huang')
('1688370', 'Feiping Nie', 'feiping nie')
('1748032', 'Heng Huang', 'heng huang')
huangjinsuzhou@gmail.com, feipingnie@gmail.com, heng@uta.edu
e1f790bbedcba3134277f545e56946bc6ffce48d
International Journal of Innovative Research in Science,
Engineering and Technology
(An ISO 3297: 2007 Certified Organization)
Vol. 3, Issue 5, May 2014
ISSN: 2319-8753
Image Retrieval Using Attribute Enhanced Sparse Code Words
SRV Engineering College, sembodai, india
P.G. Student, SRV Engineering College, sembodai, India
('5768860', 'M.Balaganesh', 'm.balaganesh')
('14176059', 'N.Arthi', 'n.arthi')
e1ab3b9dee2da20078464f4ad8deb523b5b1792ePre-Training CNNs Using Convolutional
Autoencoders
TU Berlin
TU Berlin
Sabbir Ahmmed
TU Berlin
TU Berlin
('16258861', 'Maximilian Kohlbrenner', 'maximilian kohlbrenner')
('40805229', 'Russell Hofmann', 'russell hofmann')
('3196053', 'Youssef Kashef', 'youssef kashef')
m.kohlbrenner@campus.tu-berlin.de
r.hofmann@campus.tu-berlin.de
ahmmed@campus.tu-berlin.de
kashefy@ni.tu-berlin.de
e16efd2ae73a325b7571a456618bfa682b51aef8
e19ebad4739d59f999d192bac7d596b20b887f78Learning Gating ConvNet for Two-Stream based Methods in Action
Recognition
('1696573', 'Jiagang Zhu', 'jiagang zhu')
('1726367', 'Wei Zou', 'wei zou')
('48147901', 'Zheng Zhu', 'zheng zhu')
e13360cda1ebd6fa5c3f3386c0862f292e4dbee4
e1f6e2651b7294951b5eab5d2322336af1f676dcAppl. Math. Inf. Sci. 9, No. 2L, 461-469 (2015)
Applied Mathematics & Information Sciences
An International Journal
http://dx.doi.org/10.12785/amis/092L21
Emotional Avatars: Appearance Augmentation and
Animation based on Facial Expression Analysis
Sejong University, 98 Gunja, Gwangjin, Seoul 143-747, Korea
Received: 22 May 2014, Revised: 23 Jul. 2014, Accepted: 24 Jul. 2014
Published online: 1 Apr. 2015
('2137943', 'Taehoon Cho', 'taehoon cho')
('4027010', 'Jin-Ho Choi', 'jin-ho choi')
('2849238', 'Hyeon-Joong Kim', 'hyeon-joong kim')
('7236280', 'Soo-Mi Choi', 'soo-mi choi')
e1d726d812554f2b2b92cac3a4d2bec678969368J Electr Eng Technol.2015; 10(?): 30-40
http://dx.doi.org/10.5370/JEET.2015.10.2.030
ISSN(Print)
1975-0102
ISSN(Online) 2093-7423
Human Action Recognition Based on Local Action Attributes
and Mohan S Kankanhalli**
('3132751', 'Weizhi Nie', 'weizhi nie')
('3026404', 'Yongkang Wong', 'yongkang wong')
e1256ff535bf4c024dd62faeb2418d48674ddfa2Towards Open-Set Identity Preserving Face Synthesis
University of Science and Technology of China
2Microsoft Research
('3093568', 'Jianmin Bao', 'jianmin bao')
('39447786', 'Dong Chen', 'dong chen')
('1716835', 'Fang Wen', 'fang wen')
('7179232', 'Houqiang Li', 'houqiang li')
('1745420', 'Gang Hua', 'gang hua')
{doch, fangwen, ganghua}@microsoft.com
lihq@ustc.edu.cn
jmbao@mail.ustc.edu.cn
e1e6e6792e92f7110e26e27e80e0c30ec36ac9c2TSINGHUA SCIENCE AND TECHNOLOGY
ISSN 1007-0214
0?/?? pp???–???
DOI: 10.26599/TST.2018.9010000
Volume 1, Number 1, September 2018
Ranking with Adaptive Neighbors
('39021559', 'Muge Li', 'muge li')
('2897748', 'Liangyue Li', 'liangyue li')
('1688370', 'Feiping Nie', 'feiping nie')
cd9666858f6c211e13aa80589d75373fd06f6246A Novel Time Series Kernel for
Sequences Generated by LTI Systems
V.le delle Scienze Ed. 6, DIID, Università degli Studi di Palermo, Italy
('1711610', 'Liliana Lo Presti', 'liliana lo presti')
('9127836', 'Marco La Cascia', 'marco la cascia')
cdc7bd87a2c9983dab728dbc8aac74d8c9ed7e66What Makes a Video a Video: Analyzing Temporal Information in Video
Understanding Models and Datasets
Stanford University, 2Facebook, 3Dartmouth College
('38485317', 'De-An Huang', 'de-an huang')
('34066479', 'Vignesh Ramanathan', 'vignesh ramanathan')
('49274550', 'Dhruv Mahajan', 'dhruv mahajan')
('1732879', 'Lorenzo Torresani', 'lorenzo torresani')
('2210374', 'Manohar Paluri', 'manohar paluri')
('9200530', 'Juan Carlos Niebles', 'juan carlos niebles')
cd4941cbef1e27d7afdc41b48c1aff5338aacf06MovieGraphs: Towards Understanding Human-Centric Situations from Videos
University of Toronto
Vector Institute
Lluís Castrejón3
Montreal Institute for Learning Algorithms
http://moviegraphs.cs.toronto.edu
Figure 1: An example from the MovieGraphs dataset. Each of the 7637 video clips is annotated with: 1) a graph that captures the characters
in the scene and their attributes, interactions (with topics and reasons), relationships, and time stamps; 2) a situation label that captures the
overarching theme of the interactions; 3) a scene label showing where the action takes place; and 4) a natural language description of the
clip. The graphs at the bottom show situations that occur before and after the one depicted in the main panel.
('2039154', 'Paul Vicol', 'paul vicol')
('2103464', 'Makarand Tapaswi', 'makarand tapaswi')
('37895334', 'Sanja Fidler', 'sanja fidler')
{pvicol, makarand, fidler}@cs.toronto.edu, lluis.enric.castrejon.subira@umontreal.ca
cd4c047f4d4df7937aff8fc76f4bae7718004f40
cdef0eaff4a3c168290d238999fc066ebc3a93e8CONTRASTIVE-CENTER LOSS FOR DEEP NEURAL NETWORKS
1School of Information and Communication Engineering
2Beijing Key Laboratory of Network System and Network Culture
Beijing University of Posts and Telecommunications, Beijing, China
('49712251', 'Ce Qi', 'ce qi')
('1684263', 'Fei Su', 'fei su')
cd444ee7f165032b97ee76b21b9ff58c10750570UNIVERSITY OF CALIFORNIA
IRVINE
Relational Models for Human-Object Interactions and Object Affordances
DISSERTATION
submitted in partial satisfaction of the requirements
for the degree of
DOCTOR OF PHILOSOPHY
in Computer Science
by
Dissertation Committee:
Professor Deva Ramanan, Chair
Professor Charless Fowlkes
Professor Padhraic Smyth
Professor Serge Belongie
2013
('40277674', 'Chaitanya Desai', 'chaitanya desai')
cd23dc3227ee2a3ab0f4de1817d03ca771267aebWU, KAMATA, BRECKON: FACE RECOGNITION VIA DSGNN
Face Recognition via Deep Sparse Graph
Neural Networks
Renjie WU1
Toby Breckon2
1 Graduate School of Information,
Production and Systems
Waseda University
Kitakyushu-shi, Japan
2 Engineering and Computing Sciences
Durham University, Durham, UK
('35222422', 'Sei-ichiro Kamata', 'sei-ichiro kamata')wurj-sjtu-waseda@ruri.waseda.jp
kam@waseda.jp
toby.breckon@durham.ac.uk
cd596a2682d74bdfa7b7160dd070b598975e89d9Mood Detection: Implementing a facial
expression recognition system
1. Introduction
Facial expressions play a significant role in human dialogue. As a result, there has been
considerable work done on the recognition of emotional expressions and the application of this
research will be beneficial in improving human-machine dialogue. One can imagine the
improvements to computer interfaces, automated clinical (psychological) research or even
interactions between humans and autonomous robots.
Unfortunately, a lot of the literature does not focus on trying to achieve high recognition rates
across multiple databases. In this project we develop our own mood detection system that
addresses this challenge. The system involves pre-processing image data by normalizing and
applying a simple mask, extracting certain (facial) features using PCA and Gabor filters and then
using SVMs for classification and recognition of expressions. Eigenfaces for each class are used
to determine class-specific masks which are then applied to the image data and used to train
multiple, one against the rest, SVMs. We find that simply using normalized pixel intensities
works well with such an approach.
Figure 1 – Overview of our system design
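The one-against-the-rest classification scheme described above can be sketched as follows. Simple perceptron scorers stand in here for the SVMs actually used, and the toy 2-D points replace real pixel-intensity features; only the scheme itself (one binary scorer per expression class, arg-max at prediction time) is taken from the text:

```python
class OneVsRest:
    """One-against-the-rest scheme: one binary linear scorer per class.

    Perceptron scorers stand in for the SVMs used in the text; the decision
    rule (pick the class whose scorer is most confident) is the same.
    """

    def __init__(self, n_features, classes, lr=0.1):
        self.w = {c: [0.0] * (n_features + 1) for c in classes}  # last entry is the bias
        self.lr = lr

    def _score(self, w, x):
        return w[-1] + sum(wi * xi for wi, xi in zip(w, x))

    def fit(self, X, y, epochs=200):
        for _ in range(epochs):
            for x, label in zip(X, y):
                for c, w in self.w.items():
                    target = 1.0 if label == c else -1.0
                    if target * self._score(w, x) <= 0:  # misclassified: perceptron update
                        for i, xi in enumerate(x):
                            w[i] += self.lr * target * xi
                        w[-1] += self.lr * target
        return self

    def predict(self, x):
        return max(self.w, key=lambda c: self._score(self.w[c], x))
```

On linearly separable data each class's scorer learns to fire only for its own class, so the arg-max decision recovers the labels.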
2. Image pre-processing
We performed pre-processing on the images used to train and test our algorithms as follows:
1. The location of the eyes is first selected manually
2. Images are scaled and cropped to a fixed size (170 x 130) keeping the eyes in all images
aligned
3. The image is histogram equalized using the mean histogram of all the training images to
make it invariant to lighting, skin color etc.
4. A fixed oval mask is applied to the image to extract face region. This serves to eliminate
the background, hair, ears and other extraneous features in the image which provide no
information about facial expression.
This approach works reasonably well in capturing expression-relevant facial information across
all databases. Examples of pre-processed images from the various datasets are shown in Figure 2a below.
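Steps 3 and 4 of the pre-processing pipeline can be sketched in NumPy. The manual eye selection and cropping of steps 1-2 are omitted, and the ellipse radii are assumptions rather than the authors' exact mask:

```python
import numpy as np

H, W = 170, 130  # fixed face-image size used in the text

def oval_mask(h=H, w=W):
    """Boolean ellipse mask keeping the central face region (hypothetical radii)."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    return ((ys - cy) / (h / 2)) ** 2 + ((xs - cx) / (w / 2)) ** 2 <= 1.0

def match_histogram(img, ref_cdf):
    """Remap intensities so the image's CDF follows a reference CDF,
    e.g. the CDF of the mean histogram of all training images (step 3)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    lut = np.searchsorted(ref_cdf, cdf).clip(0, 255)  # per-intensity lookup table
    return lut[img.astype(np.uint8)].astype(np.uint8)

def preprocess(img, ref_cdf):
    """Equalize against the reference histogram, then zero out non-face pixels (step 4)."""
    return np.where(oval_mask(*img.shape), match_histogram(img, ref_cdf), 0)
```

Everything outside the oval is set to zero, so background, hair, and ears contribute nothing to the downstream feature extraction.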
('1906123', 'Neeraj Agrawal', 'neeraj agrawal')
('2929557', 'Rob Cosgriff', 'rob cosgriff')
('2594170', 'Ritvik Mudur', 'ritvik mudur')
cdb1d32bc5c1a9bb0d9a5b9c9222401eab3e9ca0Functional Faces: Groupwise Dense Correspondence using Functional Maps
The University of York, UK
2IMB/LaBRI, Universit´e de Bordeaux, France
('1720735', 'Chao Zhang', 'chao zhang')
('34895713', 'Arnaud Dessein', 'arnaud dessein')
('1737428', 'Nick Pears', 'nick pears')
('1694260', 'Hang Dai', 'hang dai')
{cz679, william.smith, nick.pears, hd816}@york.ac.uk
arnaud.dessein@u-bordeaux.fr
cda4fb9df653b5721ad4fe8b4a88468a410e55ecGabor wavelet transform and its application ('38784892', 'Wei-lun Chao', 'wei-lun chao')
cd3005753012409361aba17f3f766e33e3a7320dMultilinear Biased Discriminant Analysis: A Novel Method for Facial
Action Unit Representation
('1736464', 'Mahmoud Khademi', 'mahmoud khademi')
('2179339', 'Mehran Safayani', 'mehran safayani')
†: Sharif University of Tech., DSP Lab, {khademi@ce, safayani@ce, manzuri@}.sharif.edu
cd687ddbd89a832f51d5510c478942800a3e6854A Game to Crowdsource Data for Affective Computing
Games Studio, Faculty of Engineering and IT, University of Technology, Sydney
('1733360', 'Chek Tien Tan', 'chek tien tan')
('2117735', 'Hemanta Sapkota', 'hemanta sapkota')
('2823535', 'Daniel Rosser', 'daniel rosser')
('3141633', 'Yusuf Pisan', 'yusuf pisan')
chek@gamesstudio.org
hemanta.sapkota@student.uts.edu.au
daniel.j.rosser@gmail.com
yusuf.pisan@gamesstudio.org
cd436f05fb4aeeda5d1085f2fe0384526571a46eInformation Bottleneck Domain Adaptation with
Privileged Information for Visual Recognition
Lane Department of Computer Science and Electrical Engineering
West Virginia University
('2897426', 'Saeid Motiian', 'saeid motiian')
('1736352', 'Gianfranco Doretto', 'gianfranco doretto')
{samotiian,gidoretto}@mix.wvu.edu
cd2c54705c455a4379f45eefdf32d8d10087e521A Hybrid Model for Identity Obfuscation by
Face Replacement
Max Planck Institute for Informatics, Saarland Informatics Campus
('32222907', 'Qianru Sun', 'qianru sun')
('1739548', 'Mario Fritz', 'mario fritz')
{qsun, atewari, wxu, mfritz, theobalt, schiele}@mpi-inf.mpg.de
cd7a7be3804fd217e9f10682e0c0bfd9583a08dbWomen also Snowboard:
Overcoming Bias in Captioning Models
('40895688', 'Kaylee Burns', 'kaylee burns')
cd023d2d067365c83d8e27431e83e7e66082f718Real-Time Rotation-Invariant Face Detection with
Progressive Calibration Networks
1 Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing 100190, China
University of Chinese Academy of Sciences, Beijing 100049, China
3 CAS Center for Excellence in Brain Science and Intelligence Technology
('41017549', 'Xuepeng Shi', 'xuepeng shi')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1693589', 'Meina Kan', 'meina kan')
('3126238', 'Shuzhe Wu', 'shuzhe wu')
('1710220', 'Xilin Chen', 'xilin chen')
{xuepeng.shi, shiguang.shan, meina.kan, shuzhe.wu, xilin.chen}@vipl.ict.ac.cn
cca9ae621e8228cfa787ec7954bb375536160e0dLearning to Collaborate for User-Controlled Privacy
Martin Bertran 1†
Natalia Martinez 1†*
Afroditi Papadaki 2
Miguel Rodrigues 2
Duke University, Durham, NC, USA
University College London, London, UK
†These authors contributed equally to this work.
Privacy is a human right. Tim Cook, Apple CEO.
('2077648', 'Qiang Qiu', 'qiang qiu')
('1699339', 'Guillermo Sapiro', 'guillermo sapiro')
martin.bertran@duke.edu
natalia.martinez@duke.edu
a.papadaki.17@ucl.ac.uk
qiuqiang@gmail.com
m.rodrigues@ucl.ac.uk
guillermo.sapiro@duke.edu
cc589c499dcf323fe4a143bbef0074c3e31f9b60A 3D Facial Expression Database For Facial Behavior Research
State University of New York at Binghamton
('8072251', 'Lijun Yin', 'lijun yin')
ccfcbf0eda6df876f0170bdb4d7b4ab4e7676f18JOURNAL OF LATEX CLASS FILES, VOL. 6, NO. 1, JUNE 2011
A Dynamic Appearance Descriptor Approach to
Facial Actions Temporal Modelling
('39532631', 'Bihan Jiang', 'bihan jiang')
('1694605', 'Maja Pantic', 'maja pantic')
cc2eaa182f33defbb33d69e9547630aab7ed9c9cSurpassing Humans and Computers with JELLYBEAN:
Crowd-Vision-Hybrid Counting Algorithms
Stanford University
University of Illinois
The Ohio State University
Aditya Parameswaran
University of Illinois
('32953042', 'Akash Das Sarma', 'akash das sarma')
('2636295', 'Ayush Jain', 'ayush jain')
('39393264', 'Arnab Nandi', 'arnab nandi')
akashds@stanford.edu
ajain42@illinois.edu
arnab@cse.osu.edu
adityagp@illinois.edu
ccbfc004e29b3aceea091056b0ec536e8ea7c47e
ccdea57234d38c7831f1e9231efcb6352c801c55Illumination Processing in Face Recognition
187
11
X
Illumination Processing in Face Recognition
Yongping Li, Chao Wang and Xinyu Ao
Shanghai Institute of Applied Physics, Chinese Academy of Sciences
China
1. Introduction
Driven by the demands of public security, face recognition has emerged as a viable solution, achieving accuracies comparable to fingerprint systems under controlled lighting. In recent years, with cameras widely installed in open areas, automatic face recognition in watch-list applications has been facing a serious problem: in open environments, lighting changes are unpredictable, and recognition performance degrades severely.
Illumination processing is therefore a necessary step if face recognition is to be useful in uncontrolled environments. NIST has launched an evaluation called FRGC to boost research on improving performance under changing illumination. This chapter focuses on the research effort made in this direction and on the influence of illumination on face recognition.
First, we discuss the image formation mechanism under various illumination conditions and the corresponding mathematical modelling; the Lambertian lighting model, the bilinear illumination model, and some recent models are reviewed. Second, we examine how illumination influences recognition under different facial states, such as varying head pose and facial expression. Third, we briefly assess the methods researchers currently employ to counter illumination changes and maintain good recognition performance. Fourth, we discuss illumination processing in video and how it improves video-based face recognition, taking the work of Wang and Li (Wang & Li, 2009) as an example of related advances. Finally, we discuss the current state of the art in illumination processing and its future trends.
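The Lambertian lighting model reviewed in this chapter predicts the intensity of a surface point from its albedo and the angle between the surface normal and the light direction. A minimal sketch, assuming a single distant point light and no ambient term:

```python
import math

def lambertian_intensity(albedo, normal, light):
    """Lambertian shading: I = albedo * max(0, n . l), with unit-normalized vectors."""
    def unit(v):
        norm = math.sqrt(sum(c * c for c in v))
        return [c / norm for c in v]
    n, l = unit(normal), unit(light)
    return albedo * max(0.0, sum(a * b for a, b in zip(n, l)))
```

A light aligned with the surface normal returns the full albedo, while a light behind the surface contributes nothing; this clamping is what makes attached shadows, and hence illumination variation, so damaging to recognition.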
2. The formation of camera imaging and its difference from the human visual
system
With the camera invented in 1814 by Joseph N, the recording of human faces entered a new era: we no longer need to hire a painter to draw our portraits, as the nobility did in the Middle Ages, and the machine records our image as it is, provided the camera is in good condition. Today, imaging systems are mostly digital; the central part is a CCD (charge-coupled device) or CMOS (complementary metal-oxide semiconductor) sensor, which operates much like the human eye. Both CCD and CMOS image sensors operate
www.intechopen.com
cc38942825d3a2c9ee8583c153d2c56c607e61a7Database Cross Matching: A Novel Source of
Fictitious Forensic Cases
Signals and Systems Group, EEMCS,
University of Twente, Netherlands
('34214663', 'Abhishek Dutta', 'abhishek dutta')
('39128850', 'Raymond Veldhuis', 'raymond veldhuis')
('1745742', 'Luuk Spreeuwers', 'luuk spreeuwers')
{a.dutta,r.n.j.veldhuis,l.j.spreeuwers}@utwente.nl
cc3c273bb213240515147e8be68c50f7ea22777cGaining Insight Into Films
Via Topic Modeling & Visualization
KEYWORDS Collaboration, computer vision, cultural
analytics, economy of abundance, interactive data
visualization
We moved beyond misuse when the software actually
became useful for film analysis with the addition of audio
analysis, subtitle analysis, facial recognition, and topic
modeling. Using multiple types of visualizations and
a back-and-forth workflow between people and AI
we arrived at an approach for cultural analytics that
can be used to review and develop film criticism. Finally,
we present ways to apply these techniques to Database
Cinema and other aspects of film and video creation.
PROJECT DATE 2014
URL http://misharabinovich.com/soyummy.html
('40462877', 'MISHA RABINOVICH', 'misha rabinovich')
('1679896', 'Yogesh Girdhar', 'yogesh girdhar')
cc8e378fd05152a81c2810f682a78c5057c8a735International Journal of Computer Sciences and Engineering Open Access
Research Paper Volume-5, Issue-12 E-ISSN: 2347-2693
Expression Invariant Face Recognition System based on Topographic
Independent Component Analysis and Inner Product Classifier
Department of Electrical Engineering, IIT Delhi, New Delhi, India
Available online at: www.ijcseonline.org
Received: 07/Nov/2017, Revised: 22/Nov/2017, Accepted: 14/Dec/2017, Published: 31/Dec/2017
('40258123', 'Aruna Bhat', 'aruna bhat')*Corresponding Author: abigit06@yahoo.com
ccf43c62e4bf76b6a48ff588ef7ed51e87ddf50bAmerican Journal of Food Science and Health
Vol. 2, No. 2, 2016, pp. 7-17
http://www.aiscience.org/journal/ajfsh
ISSN: 2381-7216 (Print); ISSN: 2381-7224 (Online)
Nutraceuticals and Cosmeceuticals for Human
Beings–An Overview
Narayana Pharmacy College, Nellore, India
('40179150', 'R. Ramasubramania Raja', 'r. ramasubramania raja')
cc31db984282bb70946f6881bab741aa841d3a7cALBANIE, VEDALDI: LEARNING GRIMACES BY WATCHING TV
Learning Grimaces by Watching TV
http://www.robots.ox.ac.uk/~albanie
http://www.robots.ox.ac.uk/~vedaldi
Engineering Science Department
Univeristy of Oxford
Oxford, UK
('7641268', 'Samuel Albanie', 'samuel albanie')
('1687524', 'Andrea Vedaldi', 'andrea vedaldi')
cc8bf03b3f5800ac23e1a833447c421440d92197
cc91001f9d299ad70deb6453d55b2c0b967f8c0dOPEN ACCESS
ISSN 2073-8994
Article
Performance Enhancement of Face Recognition in Smart TV
Using Symmetrical Fuzzy-Based Quality Assessment
Division of Electronics and Electrical Engineering, Dongguk University, 26 Pil-dong 3-ga, Jung-gu
Tel.: +82-10-3111-7022; Fax: +82-2-2277-8735.
Academic Editor: Christopher Tyler
Received: 31 March 2015 / Accepted: 21 August 2015 / Published: 25 August 2015
('3021526', 'Yeong Gon Kim', 'yeong gon kim')
('2026806', 'Won Oh Lee', 'won oh lee')
('1922686', 'Hyung Gil Hong', 'hyung gil hong')
('4634733', 'Kang Ryoung Park', 'kang ryoung park')
Seoul 100-715, Korea; E-Mails: csokyg@dongguk.edu (Y.G.K.); 215p8@hanmail.net (W.O.L.);
yawara18@hotmail.com (K.W.K.); hell@dongguk.edu (H.G.H.)
* Author to whom correspondence should be addressed; E-Mail: parkgr@dgu.edu;
cc96eab1e55e771e417b758119ce5d7ef1722b43An Empirical Study of Recent
Face Alignment Methods
('2966679', 'Heng Yang', 'heng yang')
('34760532', 'Xuhui Jia', 'xuhui jia')
('1717179', 'Chen Change Loy', 'chen change loy')
('39626495', 'Peter Robinson', 'peter robinson')
cc7e66f2ba9ac0c639c80c65534ce6031997acd7Facial Descriptors for Identity-Preserving
Multiple People Tracking
CVLab, School of Computer and Communication Sciences
Swiss Federal Institute of Technology, Lausanne (EPFL
EPFL-REPORT-187534
July 2013
Michalis Zervos1 (michail.zervos@epfl.ch)
Horesh Ben Shitrit1 (horesh.benshitrit@epfl.ch)
François Fleuret (francois.fleuret@idiap.ch)
Pascal Fua (pascal.fua@epfl.ch)
cc9057d2762e077c53e381f90884595677eceafaOn the Exploration of Joint Attribute Learning
for Person Re-identification
Michigan State University
('38993748', 'Joseph Roth', 'joseph roth')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
{rothjos1,liuxm}@cse.msu.edu
ccf16bcf458e4d7a37643b8364594656287f5bfcA CNN Cascade for Landmark Guided Semantic
Part Segmentation
School of Computer Science, The University of Nottingham, Nottingham, UK
('34596685', 'Aaron S. Jackson', 'aaron s. jackson')
('46637307', 'Michel Valstar', 'michel valstar')
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
{aaron.jackson, michel.valstar, yorgos.tzimiropoulos}@nottingham.ac.uk
e64b683e32525643a9ddb6b6af8b0472ef5b6a37Face Recognition and Retrieval in Video ('10795229', 'Caifeng Shan', 'caifeng shan')
e69ac130e3c7267cce5e1e3d9508ff76eb0e0eefResearch Article
Addressing the illumination challenge in two-
dimensional face recognition: a survey
ISSN 1751-9632
Received on 31st March 2014
Revised on 7th January 2015
Accepted on 9th April 2015
doi: 10.1049/iet-cvi.2014.0086
www.ietdl.org
Computational Biomedicine Laboratory, University of Houston, Houston, Texas 77204, USA
2Department of Computer Science, Cybersecurity Laboratory, Instituto Tecnológico y de Estudios Superiores de Monterrey, Monterrey,
NL 64840, Mexico
('2899018', 'Miguel A. Ochoa-Villegas', 'miguel a. ochoa-villegas')
('1905427', 'Olivia Barron-Cano', 'olivia barron-cano')
('1706204', 'Ioannis A. Kakadiaris', 'ioannis a. kakadiaris')
✉ E-mail: ioannisk@uh.edu
e6b45d5a86092bbfdcd6c3c54cda3d6c3ac6b227Pairwise Relational Networks for Face
Recognition
1 Department of Creative IT Engineering, POSTECH, Korea
2 Department of Computer Science and Engineering, POSTECH, Korea
('2794366', 'Bong-Nam Kang', 'bong-nam kang')
('50682377', 'Yonghyun Kim', 'yonghyun kim')
('1695669', 'Daijin Kim', 'daijin kim')
{bnkang,gkyh0805,dkim}@postech.ac.kr
e6865b000cf4d4e84c3fe895b7ddfc65a9c4aaecChapter 15. The critical role of the
cold-start problem and incentive systems
in emotional Web 2.0 services
('2443050', 'Tobias Siebenlist', 'tobias siebenlist')
('2153585', 'Kathrin Knautz', 'kathrin knautz')
e6d689054e87ad3b8fbbb70714d48712ad84dc1cRobust Facial Feature Tracking
School of Computing, Staffordshire University
Stafford ST18 0DG
('2155770', 'Fabrice Bourel', 'fabrice bourel')
('1919849', 'Claude C. Chibelushi', 'claude c. chibelushi')
('32890308', 'Adrian A. Low', 'adrian a. low')
F.Bourel@staffs.ac.uk
C.C.Chibelushi@staffs.ac.uk
A.A.Low@staffs.ac.uk
e6dc1200a31defda100b2e5ddb27fb7ecbbd4acd
Flexible Manifold Embedding: A Framework
for Semi-Supervised and Unsupervised
Dimension Reduction
('1688370', 'Feiping Nie', 'feiping nie')
('1714390', 'Dong Xu', 'dong xu')
('1700883', 'Changshui Zhang', 'changshui zhang')
e6f20e7431172c68f7fce0d4595100445a06c117Searching Action Proposals via Spatial
Actionness Estimation and Temporal Path
Inference and Tracking
Peking University Shenzhen Graduate School, Shenzhen, P.R. China
DISI, University of Trento, Trento, Italy
('40147776', 'Dan Xu', 'dan xu')
('3238696', 'Zhihao Li', 'zhihao li')
('1684933', 'Ge Li', 'ge li')
e6e5a6090016810fb902b51d5baa2469ae28b8a1Title: Energy-Efficient Deep In-memory Architecture for NAND Flash Memories
Archived version: Accepted manuscript (same content as the published paper, without the publisher's final typesetting)
Published version DOI: 10.1109/ISCAS.2018.8351458
Published paper URL:
Authors (contact):
e6540d70e5ffeed9f447602ea3455c7f0b38113e
e6ee36444038de5885473693fb206f49c1369138
e6178de1ef15a6a973aad2791ce5fbabc2cb8ae5Improving Facial Landmark Detection via a
Super-Resolution Inception Network
Institute for Human-Machine Communication
Technical University of Munich, Germany
('38746426', 'Martin Knoche', 'martin knoche')
('3044182', 'Daniel Merget', 'daniel merget')
('1705843', 'Gerhard Rigoll', 'gerhard rigoll')
f913bb65b62b0a6391ffa8f59b1d5527b7eba948
f9784db8ff805439f0a6b6e15aeaf892dba47ca0Comparing the performance of Emotion-Recognition Implementations
in OpenCV, Cognitive Services, and Google Vision APIs
Department of Informatics and Artificial Intelligence
Tomas Bata University in Zlín
Nad Stráněmi 4511, 76005, Zlín
CZECH REPUBLIC
beltran_prieto@fai.utb.cz
f935225e7811858fe9ef6b5fd3fdd59aec9abd1awww.elsevier.com/locate/ynimg
Spatiotemporal dynamics and connectivity pattern differences
between centrally and peripherally presented faces
Laboratory for Human Brain Dynamics, RIKEN Brain Science Institute (BSI), 2-1 Hirosawa, Wakoshi, Saitama, 351-0198, Japan
Received 4 May 2005; revised 26 January 2006; accepted 6 February 2006
Available online 24 March 2006
Most neuroimaging studies on face processing used centrally presented
images with a relatively large visual field. Images presented in this way
activate widespread striate and extrastriate areas and make it difficult
to study spatiotemporal dynamics and connectivity pattern differences
from various parts of the visual field. Here we studied magneto-
encephalographic responses in humans to centrally and peripherally
presented faces for testing the hypothesis that processing of visual
stimuli with facial expressions of emotions depends on where the
stimuli are presented in the visual field. Using our tomographic and
statistical parametric mapping analyses, we identified occipitotemporal
areas activated by face stimuli more than by control conditions. V1/V2
activity was significantly stronger for lower than central and upper
visual field presentation. Fusiform activity, however, was significantly
stronger for central than for peripheral presentation. Both the V1/V2
and fusiform areas activated earlier for peripheral than for central
presentation. Fast responses in the fusiform were found at 70 – 80 ms
after image onset, as well as a response at 130 – 160 ms. For peripheral
presentation, contralateral V1/V2 and fusiform activated earlier (10 ms
and 23 ms, respectively) and significantly stronger than their ipsilateral
counterparts. Mutual information analysis further showed linked
activity from bilateral V1/V2 to fusiform for central presentation and
from contralateral V1/V2 to fusiform for lower visual field presenta-
tion. In the upper visual field, the linkage was from fusiform to V1/V2.
Our results showed that face stimuli are processed predominantly in
the hemisphere contralateral to the stimulation and demonstrated for
the first time early fusiform activation leading V1/V2 activation for
upper visual field stimulation.
© 2006 Elsevier Inc. All rights reserved.
Keywords: Magnetoencephalography (MEG); Striate cortex; Extrastriate
cortex; Fusiform gyrus; Face perception; Connectivity
Introduction
It is well established that visual stimuli presented in one part of
the visual field are projected to the contralateral part of the visual
cortex such that images presented in the right visual field are
* Corresponding author. Fax: +81 48 467 9731.
Available online on ScienceDirect (www.sciencedirect.com).
1053-8119/$ - see front matter © 2006 Elsevier Inc. All rights reserved.
projected to the left visual cortex. It is, however, unclear whether
stimuli presented in different parts of the visual field are processed
differently in extrastriate areas that specialize for processing
complex properties of stimuli and whether different connectivity
patterns are produced between striate and extrastriate cortices when
such complex stimuli are presented to different quadrants. To
address these questions, one needs to incorporate three ingredients
in the experimental design and analysis. First, one must use stimuli
that are known to excite at least one specific extrastriate area well.
Second, one must present stimuli at positions in the visual field
known to project to specific parts of the visual cortex so that the
early entry into the visual system via V1 can be reliably extracted
for connectivity analysis. Third, one must use a technique that can
provide refined spatial and temporal information about brain
activity. The information can then be used in follow-up analysis of
spatiotemporal dynamics and connectivity patterns in the brain.
The choice of faces is obvious because many studies have
shown that faces are effective stimuli for exciting extrastriate areas.
The posterior fusiform gyrus was first associated with cortical face
processing from lesion studies on patients with specific recognition
deficits of familiar faces (Meadows, 1974; Damasio et al., 1990;
Sergent and Poncet, 1990). Neuroimaging studies have shown that
extrastriate areas are involved in face processing in normal subjects
using techniques such as positron emission tomography (PET)
(Sergent et al., 1992; Haxby et al., 1994), functional magnetic
resonance imaging (fMRI) (Puce et al., 1995; McCarthy et al.,
1997; Kanwisher et al., 1997; Halgren et al., 1999), electroen-
cephalography (EEG) (Allison et al., 1994; Bentin et al., 1996;
George et al., 1996) and magnetoencephalography (MEG) (Link-
enkaer-Hansen et al., 1998; Halgren et al., 2000). In the present
study, we chose the same face stimuli from our earlier MEG study
on complex object and face affect recognition that were shown to
activate extrastriate areas well (Liu et al., 1999; Ioannides et al.,
2000).
Most of the earlier studies mentioned above, including ours
have presented facial images centrally with a relatively large visual
field covering both the fovea and parafovea. Central presentation
of images activates widespread striate and extrastriate areas. Low
order visual areas (V1/V2) corresponding to left–right–upper–
lower visual field stimulation are therefore activated by the same
('2259342', 'Lichan Liu', 'lichan liu')
E-mail address: ioannides@postman.riken.jp (A.A. Ioannides).
f963967e52a5fd97fa3ebd679fd098c3cb70340eAnalysis, Interpretation, and Recognition of Facial
Action Units and Expressions Using Neuro-Fuzzy
Modeling
and Ali A. Kiaei1
DSP Lab, Sharif University of Technology, Tehran, Iran
Institute for Studies in Fundamental Sciences (IPM), Tehran, Iran
('1736464', 'Mahmoud Khademi', 'mahmoud khademi')
('1702826', 'Mohammad Hadi Kiapour', 'mohammad hadi kiapour')
{khademi@ce.,kiapour@ee.,manzuri@,kiaei@ce.}sharif.edu
f9e0209dc9e72d64b290d0622c1c1662aa2cc771CONTRIBUTIONS TO BIOMETRIC RECOGNITION:
MATCHING IDENTICAL TWINS AND LATENT FINGERPRINTS
By
A DISSERTATION
Submitted
to Michigan State University
in partial fulfillment of the requirements
for the degree of
Computer Science– Doctor of Philosophy
2013
('31508481', 'Alessandra Aparecida Paulino', 'alessandra aparecida paulino')
f92ade569cbe54344ffd3bb25efd366dcd8ad659EFFECT OF SUPER RESOLUTION ON HIGH DIMENSIONAL FEATURES FOR
UNSUPERVISED FACE RECOGNITION IN THE WILD
University of Bridgeport, Bridgeport, CT 06604, USA
('40373065', 'Ahmed ElSayed', 'ahmed elsayed')
('37374395', 'Ausif Mahmood', 'ausif mahmood')
Emails: aelsayed@my.bridgeport.edu, {mahmood,sobh}@bridgeport.edu
f96bdd1e2a940030fb0a89abbe6c69b8d7f6f0c1
f93606d362fcbe62550d0bf1b3edeb7be684b000The Computer Journal Advance Access published February 1, 2012
The Author 2012. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved
doi:10.1093/comjnl/bxs001
Nearest Neighbor Classifier Based
on Nearest Feature Decisions
Machine Intelligence Group, School of Computer Science, Indian Institute of Information Technology and Management, Kerala, India
Queensland Micro- and Nanotechnology Centre and Griffith School of Engineering, Griffith University, Nathan, Australia
High feature dimensionality of realistic datasets adversely affects the recognition accuracy of nearest
neighbor (NN) classifiers. To address this issue, we introduce a nearest feature classifier that shifts
the NN concept from the global-decision level to the level of individual features. Performance
comparisons with 12 instance-based classifiers on 13 benchmark University of California Irvine
classification datasets show average improvements of 6 and 3.5% in recognition accuracy and
area under curve performance measures, respectively. The statistical significance of the observed
performance improvements is verified by the Friedman test and by the post hoc Bonferroni–Dunn
test. In addition, the application of the classifier is demonstrated on face recognition databases, a
character recognition database and medical diagnosis problems for binary and multi-class diagnosis
on databases including morphological and gene expression features.
Keywords: nearest neighbors; classification; local features; local ranking
Received 2 September 2011; revised 3 December 2011
Handling editor: Ethem Alpaydin
1. INTRODUCTION
Automatic classification of patterns has been continuously and
rigorously investigated for the last 30 years. Simple classifiers,
based on the nearest neighbor (NN) principle, have been used
to solve a wide range of classification problems [1–5]. The NN
classification works on the idea of calculating global distances
between patterns, followed by ranking to determine the NNs
that best represent the class of a test pattern. Usually, distance
metric measures are used to compute the distances between
feature vectors. The accuracy of the calculated distances is
affected by the quality of the features, which can be degraded by
natural variability and measurement noise. Furthermore, some
distance calculations are affected by falsely assumed correlation
between different features. For example, Mahalanobis distance
will include the comparison between poorly or uncorrelated
features. This problem is more pronounced when the number
of features in a pattern is very large, because the irrelevant
distance calculations can accumulate to a large value (for
example, there will be many false correlations in gene
expression data, whose dimensionality can exceed 10^4
features). In addition to this problem, a considerable increase
in dimensionality complicates the classifier implementations,
resulting in the ‘curse of dimensionality’, where a possible
convergence to a classification solution becomes very slow
and inaccurate [6, 7]. The conventional solution to address
these problems is to rely on feature extraction and feature
selection methods [8–10]. However, unpredictability of natural
variability in patterns makes processing a specific feature
inapplicable to diverse pattern-recognition problems. Another
approach to improve the classifier performance is by using
machine learning techniques to learn the distance metrics
[11–13]. These methods attempt to reduce the inaccuracies
that occur with distance calculations. However, this solution
tends to include optimization problems that suffer from
high computational complexity and require reduced feature
dimensionality, resulting in low accuracies when the feature
vectors are highly dimensional and the number of intra-class
gallery objects is low. Learning distance metrics can completely
fail in high- and ultra-high-dimensional databases when the
relevance and redundancy of features often become impossible
to trace even with feature weighting or selection schemes.
Owing to these reasons, performance improvement of the NN
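For concreteness, the conventional global-distance NN scheme that this introduction takes as its baseline can be sketched as follows; the gallery data and labels are invented purely for illustration.

```python
import numpy as np

def nn_classify(test_vec, gallery, labels):
    """Classic NN rule: rank gallery patterns by a single global
    distance over the whole feature vector and return the label
    of the closest pattern."""
    dists = np.linalg.norm(gallery - test_vec, axis=1)  # one Euclidean distance per gallery pattern
    return labels[int(np.argmin(dists))]

# Toy 2-D gallery with two classes.
gallery = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
labels = ["A", "A", "B"]
print(nn_classify(np.array([4.5, 4.8]), gallery, labels))  # prints "B"
```

The nearest-feature classifier introduced in the paper replaces the single global distance above with NN decisions taken at the level of individual features, which is what shields it from the accumulated irrelevant-distance problem described here.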
The Computer Journal, 2012
('1744784', 'Alex Pappachen James', 'alex pappachen james')
('1697594', 'Sima Dimitrijev', 'sima dimitrijev')
For Permissions, please email: journals.permissions@oup.com
Corresponding author: apj@ieee.org
f94f366ce14555cf0d5d34248f9467c18241c3eeDeep Convolutional Neural Network in
Deformable Part Models for Face Detection
University of Science, Vietnam National University, HCMC
School of Information Science, Japan Advanced Institute of Science and Technology
('2187730', 'Dinh-Luan Nguyen', 'dinh-luan nguyen')
('34453615', 'Vinh-Tiep Nguyen', 'vinh-tiep nguyen')
('1780348', 'Minh-Triet Tran', 'minh-triet tran')
('2854896', 'Atsuo Yoshitaka', 'atsuo yoshitaka')
1212223@student.hcmus.edu.vn
{nvtiep,tmtriet}@fit.hcmus.edu.vn
ayoshi@jaist.ac.jp
f997a71f1e54d044184240b38d9dc680b3bbbbc0Deep Cross Modal Learning for Caricature Verification and
Identification(CaVINet)
https://lsaiml.github.io/CaVINet/
Indian Institute of Technology Ropar
Indian Institute of Technology Ropar
Indian Institute of Technology Ropar
Narayanan C Krishnan
Indian Institute of Technology Ropar
('6220011', 'Jatin Garg', 'jatin garg')
('51152207', 'Himanshu Tolani', 'himanshu tolani')
('41021778', 'Skand Vishwanath Peri', 'skand vishwanath peri')
2014csb1017@iitrpr.ac.in
2014csb1015@iitrpr.ac.in
pvskand@gmail.com
ckn@iitrpr.ac.in
f909d04c809013b930bafca12c0f9a8192df9d92Single Image Subspace for Face Recognition
Nanjing University of Aeronautics and Astronautics, China
1 Department of Computer Science and Engineering,
2 National Key Laboratory for Novel Software Technology,
Nanjing University, China
('39497343', 'Jun Liu', 'jun liu')
('1680768', 'Songcan Chen', 'songcan chen')
('1692625', 'Zhi-Hua Zhou', 'zhi-hua zhou')
('2248421', 'Xiaoyang Tan', 'xiaoyang tan')
{j.liu, s.chen, x.tan}@nuaa.edu.cn
zhouzh@nju.edu.cn
f9d1f12070e5267afc60828002137af949ff1544Maximum Entropy Binary Encoding for Face Template Protection
Rohit Kumar Pandey
University at Buffalo, SUNY
('34872128', 'Yingbo Zhou', 'yingbo zhou')
('3352136', 'Bhargava Urala Kota', 'bhargava urala kota')
('1723877', 'Venu Govindaraju', 'venu govindaraju')
{rpandey, yingbozh, buralako, govind}@buffalo.edu
f9ccfe000092121a2016639732cdb368378256d5Cognitive behaviour analysis based on facial
information using depth sensors
Kingston University London, University of Westminster London
Imperial College London
('1686887', 'Juan Manuel Fernandez Montenegro', 'juan manuel fernandez montenegro')
('2866802', 'Barbara Villarini', 'barbara villarini')
('2140622', 'Athanasios Gkelias', 'athanasios gkelias')
('1689047', 'Vasileios Argyriou', 'vasileios argyriou')
Juan.Fernandez@kingston.ac.uk,B.Villarini@westminster.ac.uk,A.Gkelias@
imperial.ac.uk,Vasileios.Argyriou@kingston.ac.uk
f08e425c2fce277aedb51d93757839900d591008Neural Motifs: Scene Graph Parsing with Global Context
Paul G. Allen School of Computer Science and Engineering, University of Washington
Allen Institute for Artificial Intelligence
School of Computer Science, Carnegie Mellon University
https://rowanzellers.com/neuralmotifs
('2545335', 'Rowan Zellers', 'rowan zellers')
('38094552', 'Sam Thomson', 'sam thomson')
{rowanz, my89, yejin}@cs.washington.edu, sthomson@cs.cmu.edu
f02f0f6fcd56a9b1407045de6634df15c60a85cdLearning Low-shot facial representations via 2D warping
RWTH Aachen University
('35362682', 'Shen Yan', 'shen yan')shen.yan@rwth-aachen.de
f0cee87e9ecedeb927664b8da44b8649050e1c86
f0f4f16d5b5f9efe304369120651fa688a03d495Temporal Generative Adversarial Nets
Preferred Networks inc., Japan
('49160719', 'Masaki Saito', 'masaki saito')
('8252749', 'Eiichi Matsumoto', 'eiichi matsumoto')
{msaito, matsumoto}@preferred.jp
f0ca31fd5cad07e84b47d50dc07db9fc53482a46Advances in Pure Mathematics, 2012, 2, 226-242
http://dx.doi.org/10.4236/apm.2012.24033 Published Online July 2012 (http://www.SciRP.org/journal/apm)
Feature Patch Illumination Spaces and Karcher
Compression for Face Recognition via
Grassmannians
California State University, Long Beach, USA
Colorado State University, Fort Collins, USA
Received January 7, 2012; revised February 20, 2012; accepted February 27, 2012
('2640182', 'Jen-Mei Chang', 'jen-mei chang')
('30383278', 'Chris Peterson', 'chris peterson')
('41211081', 'Michael Kirby', 'michael kirby')
Email: jen-mei.chang@csulb.edu, {peterson, Kirby}@math.colostate.edu
f0ae807627f81acb63eb5837c75a1e895a92c376International Journal of Emerging Engineering Research and Technology
Volume 3, Issue 12, December 2015, PP 128-133
ISSN 2349-4395 (Print) & ISSN 2349-4409 (Online)
Facial Landmark Detection using Ensemble of Cascaded
Regressions
Faculty of Telecommunications, Technical University, Sofia, Bulgaria
Faculty of Telecommunications, Technical University, Sofia, Bulgaria
('6203133', 'Martin Penev', 'martin penev')
('1734848', 'Ognian Boumbarov', 'ognian boumbarov')
f074e86e003d5b7a3b6e1780d9c323598d93f3bcOPEN ACCESS
ISSN 2075-1680
Article
Characteristic Number: Theory and Its Application to
Shape Analysis
School of Software, Dalian University of Technology, Tuqiang St. 321, Dalian 116620, China
School of Mathematical Sciences, Dalian University of Technology, Linggong Rd. 2, Dalian
Tel.: +86-411-87571777; Fax: +86-411-87571567.
Received: 27 March 2014; in revised form: 28 April 2014 / Accepted: 28 April 2014 /
Published: 15 May 2014
('1710408', 'Xin Fan', 'xin fan')
('7864960', 'Zhongxuan Luo', 'zhongxuan luo')
('1732068', 'Jielin Zhang', 'jielin zhang')
('2758604', 'Xinchen Zhou', 'xinchen zhou')
('2235253', 'Qi Jia', 'qi jia')
('3136305', 'Daiyun Luo', 'daiyun luo')
E-Mails: xin.fan@ieee.org (X.F.); jiaqi7166@gmail.com (Q.J.)
China; E-Mails: jielinzh@dlut.edu.cn (J.Z.); dasazxc@gmail.com (X.Z.); 419524597@qq.com (D.L.)
* Author to whom correspondence should be addressed; E-Mail: zxluo@dlut.edu.cn;
f0a4a3fb6997334511d7b8fc090f9ce894679fafGenerative Face Completion
University of California, Merced
2Adobe Research
('1754382', 'Yijun Li', 'yijun li')
('2391885', 'Sifei Liu', 'sifei liu')
('1768964', 'Jimei Yang', 'jimei yang')
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
{yli62,sliu32,mhyang}@ucmerced.edu
jimyang@adobe.com
f0681fc08f4d7198dcde803d69ca62f09f3db6c5Spatiotemporal Features for Effective Facial
Expression Recognition
Hatice Çınar Akakın and Bülent Sankur
Bogazici University, Bebek
Istanbul
http://www.ee.boun.edu.tr
{hatice.cinar,bulent.sankur}@boun.edu.tr
f0f501e1e8726148d18e70c8e9f6feea9360d119OULU 2015
C 537
U N I V E R S I TAT I S O U L U E N S I S
CTECHNICA
UNIVERSITY OF OULU P.O. Box 8000 FI-90014 UNIVERSITY OF OULU FINLAND
A C T A U N I V E R S I T A T I S O U L U E N S I S
ACTA
Professor Esa Hohtola
University Lecturer Veli-Matti Ulvinen
University Lecturer Anu Soikkeli
Publications Editor Kirsti Nurkkala
ISBN 978-952-62-0872-5 (Paperback)
ISBN 978-952-62-0873-2 (PDF)
ISSN 0355-3213 (Print)
ISSN 1796-2226 (Online)
SOFTWARE-BASED
COUNTERMEASURES TO 2D
FACIAL SPOOFING ATTACKS
UNIVERSITY OF OULU GRADUATE SCHOOL
UNIVERSITY OF OULU
FACULTY OF INFORMATION TECHNOLOGY AND ELECTRICAL ENGINEERING,
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING;
INFOTECH OULU
('6433503', 'Santeri Palviainen', 'santeri palviainen')
('3797304', 'Sanna Taskila', 'sanna taskila')
('5451992', 'Olli Vuolteenaho', 'olli vuolteenaho')
('6238085', 'Sinikka Eskelinen', 'sinikka eskelinen')
('2165962', 'Jari Juga', 'jari juga')
('5451992', 'Olli Vuolteenaho', 'olli vuolteenaho')
('35709493', 'Jukka Komulainen', 'jukka komulainen')
f0398ee5291b153b716411c146a17d4af9cb0edcLEARNING OPTICAL FLOW VIA DILATED NETWORKS AND OCCLUSION REASONING
University of California, Merced
5200 N Lake Rd, Merced, CA, US
('1749901', 'Yi Zhu', 'yi zhu'){yzhu25, snewsam}@ucmerced.edu
f0f0e94d333b4923ae42ee195df17c0df62ea0b1Scaling Manifold Ranking Based Image Retrieval
†NTT Software Innovation Center, 3-9-11 Midori-cho Musashino-shi, Tokyo, Japan
‡NTT Service Evolution Laboratories, 1-1 Hikarinooka Yokosuka-shi, Kanagawa, Japan
California Institute of Technology, 1200 East California Boulevard Pasadena, California, USA
Osaka University, 1-5 Yamadaoka, Suita-shi, Osaka, Japan
('32130106', 'Yasuhiro Fujiwara', 'yasuhiro fujiwara')
('32285163', 'Go Irie', 'go irie')
('46593534', 'Shari Kuroyama', 'shari kuroyama')
('48075831', 'Makoto Onizuka', 'makoto onizuka')
{fujiwara.yasuhiro, irie.go}@lab.ntt.co.jp, kuroyama@caltech.edu, oni@acm.org
f06b015bb19bd3c39ac5b1e4320566f8d83a0c84
f0a3f12469fa55ad0d40c21212d18c02be0d1264Sparsity Sharing Embedding for Face
Verification
Department of Electrical Engineering, KAIST, Daejeon, Korea
('2350325', 'Donghoon Lee', 'donghoon lee')
('2857402', 'Hyunsin Park', 'hyunsin park')
('8270717', 'Junyoung Chung', 'junyoung chung')
('2126465', 'Youngook Song', 'youngook song')
f05ad40246656a977cf321c8299158435e3f3b61Face Recognition Using Face Patch Networks
The Chinese University of Hong Kong
('2312486', 'Chaochao Lu', 'chaochao lu')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
('1678783', 'Deli Zhao', 'deli zhao')
{cclu,dlzhao,xtang}@ie.cuhk.edu.hk
f02a6bccdaee14ab55ad94263539f4f33f1b15bbArticle
Segment-Tube: Spatio-Temporal Action Localization
in Untrimmed Videos with Per-Frame Segmentation
Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
Received: 23 April 2018; Accepted: 16 May 2018; Published: 22 May 2018
('40367806', 'Le Wang', 'le wang')
('46809347', 'Xuhuan Duan', 'xuhuan duan')
('46324995', 'Qilin Zhang', 'qilin zhang')
('1786361', 'Zhenxing Niu', 'zhenxing niu')
('1745420', 'Gang Hua', 'gang hua')
('1715389', 'Nanning Zheng', 'nanning zheng')
duanxuhuan0123@stu.xjtu.edu.cn (X.D.); nnzheng@xjtu.edu.cn (N.Z.)
2 HERE Technologies, Chicago, IL 60606, USA; qilin.zhang@here.com
3 Alibaba Group, Hangzhou 311121, China; zhenxing.nzx@alibaba-inc.com
4 Microsoft Research, Redmond, WA 98052, USA; ganghua@microsoft.com
* Correspondence: lewang@xjtu.edu.cn; Tel.: +86-29-8266-8672
f7dea4454c2de0b96ab5cf95008ce7144292e52a
f781e50caa43be13c5ceb13f4ccc2abc7d1507c5MVA2005 IAPR Conference on Machine VIsion Applications, May 16-18, 2005 Tsukuba Science City, Japan
12-1
Towards Flexible and Intelligent Vision Systems
– From Thresholding to CHLAC –
University of Tokyo
AISTy
y National Institute of Advanced Industrial Science and Technology
Umezono 1-1-1, Tsukuba-shi, Ibaraki-ken, 305-8568 Japan
('1809629', 'Nobuyuki Otsu', 'nobuyuki otsu')Email: otsu.n@aist.go.jp
f7b4bc4ef14349a6e66829a0101d5b21129dcf55LONG ET AL.: TOWARDS LIGHT-WEIGHT ANNOTATIONS: FIR FOR ZSL
Towards Light-weight Annotations: Fuzzy
Interpolative Reasoning for Zero-shot Image
Classification
1 Open Lab, School of Computing
Newcastle University, UK
2 Department of Computer Science and
Digital Technologies, Northumbria Uni-
versity, UK
Inception Institute of Artificial Intelligence, UAE
('50363618', 'Yang Long', 'yang long')
('48272923', 'Yao Tan', 'yao tan')
('34975328', 'Daniel Organisciak', 'daniel organisciak')
('1706028', 'Longzhi Yang', 'longzhi yang')
('40799321', 'Ling Shao', 'ling shao')
yang.long@ieee.org
yao.tan@northumbria.ac.uk
d.organisciak@gmail.com
longzhi.yang@northumbria.ac.uk
ling.shao@ieee.org
f7b422df567ce9813926461251517761e3e6cda0FACE AGING WITH CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS
⋆ Orange Labs, 4 rue Clos Courtel, 35512 Cesson-Sévigné, France
† Eurecom, 450 route des Chappes, 06410 Biot, France
('3116433', 'Grigory Antipov', 'grigory antipov')
('2341854', 'Moez Baccouche', 'moez baccouche')
('1709849', 'Jean-Luc Dugelay', 'jean-luc dugelay')
f7824758800a7b1a386db5bd35f84c81454d017aKEPLER: Keypoint and Pose Estimation of Unconstrained Faces by
Learning Efficient H-CNN Regressors
Department of Electrical and Computer Engineering, CFAR and UMIACS
University of Maryland-College Park, USA
('50333013', 'Amit Kumar', 'amit kumar')
('2943431', 'Azadeh Alavi', 'azadeh alavi')
('9215658', 'Rama Chellappa', 'rama chellappa')
{akumar14,azadeh,rama}@umiacs.umd.edu
f74917fc0e55f4f5682909dcf6929abd19d33e2eWorkshop track - ICLR 2018
GAN QUALITY INDEX (GQI) BY GAN-INDUCED
CLASSIFIER
The City College and the Graduate Center
The City University of New York
Department of Electrical & Computer Engineering
Northeastern University
Microsoft Research
('3105254', 'Yuancheng Ye', 'yuancheng ye')
('39092100', 'Yue Wu', 'yue wu')
('1689145', 'Lijuan Wang', 'lijuan wang')
('2249952', 'Yinpeng Chen', 'yinpeng chen')
('3419208', 'Zicheng Liu', 'zicheng liu')
yye@gradcenter.cuny.edu
ytian@ccny.cuny.edu
yuewu@ece.neu.edu
{lijuanw, yiche, zliu, zhang}@microsoft.com
f740bac1484f2f2c70777db6d2a11cf4280081d6Soft Locality Preserving Map (SLPM) for Facial Expression
Recognition
a Centre for Signal Processing, Department of Electronic and Information Engineering, The Hong
Kong Polytechnic University, Kowloon, Hong Kong
b Computer Science, School of Electrical and Data Engineering, University of Technology, Sydney
Australia
('13671251', 'Cigdem Turan', 'cigdem turan')
('1703078', 'Kin-Man Lam', 'kin-man lam')
('1706670', 'Xiangjian He', 'xiangjian he')
E-mail addresses: cigdem.turan@connect.polyu.hk (C. Turan), enkmlam@polyu.edu.hk (K.-M. Lam),
xiangjian.he@uts.edu.au (X. He)
f78fe101b21be36e98cd3da010051bb9b9829a1eHindawi
Computational Intelligence and Neuroscience
Volume 2018, Article ID 7208794, 10 pages
https://doi.org/10.1155/2018/7208794
Research Article
Unsupervised Domain Adaptation for Facial Expression
Recognition Using Generative Adversarial Networks
State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, 300072, China
Key Laboratory of MOEMS of the Ministry of Education, Tianjin University, 300072, China
Received 14 April 2018; Accepted 19 June 2018; Published 9 July 2018
Academic Editor: Ant´onio D. P. Correia
In the facial expression recognition task, a good-performing convolutional neural network (CNN) model trained on one dataset
(source dataset) usually performs poorly on another dataset (target dataset). This is because the feature distribution of the same
emotion varies in different datasets. To improve the cross-dataset accuracy of the CNN model, we introduce an unsupervised
domain adaptation method, which is especially suitable for unlabelled small target dataset. In order to solve the problem of lack of
samples from the target dataset, we train a generative adversarial network (GAN) on the target dataset and use the GAN generated
samples to fine-tune the model pretrained on the source dataset. In the process of fine-tuning, we give the unlabelled GAN generated
samples distributed pseudolabels dynamically according to the current prediction probabilities. Our method can be easily applied
to any existing convolutional neural networks (CNN). We demonstrate the effectiveness of our method on four facial expression
recognition datasets with two CNN structures and obtain inspiring results.
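As a rough illustration of the pseudolabelling step described in this abstract (a sketch, not the authors' code): each unlabelled GAN-generated sample receives a distributed pseudolabel equal to the model's current prediction probabilities, and fine-tuning then minimises the cross-entropy against these soft targets. The `logits` array below stands in for the CNN's raw outputs on a batch of generated samples.

```python
import numpy as np

def softmax(logits):
    """Convert raw CNN outputs to class probabilities (numerically stable)."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def soft_cross_entropy(logits, soft_targets):
    """Cross-entropy of current predictions against distributed pseudolabels."""
    probs = softmax(logits)
    return float(-(soft_targets * np.log(probs + 1e-12)).sum(axis=1).mean())

# Hypothetical raw outputs for two GAN-generated samples over three emotion classes.
logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 1.5, 0.3]])
pseudolabels = softmax(logits)       # distributed pseudolabels from current predictions
loss = soft_cross_entropy(logits, pseudolabels)
```

Because the pseudolabels are re-derived from the current prediction probabilities as fine-tuning proceeds, they track the model while it adapts to the target distribution.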
1. Introduction
Facial expression recognition (FER) has a wide spectrum of
application potentials in human-computer interaction, cog-
nitive psychology, computational neuroscience, and medical
healthcare. In recent years, convolutional neural networks
(CNN) have achieved many exciting results in artificial
intelligence and pattern recognition and have been successfully
used in facial expression recognition [1]. Jaiswal et al. [2]
present a novel approach to facial action unit detection
using a combination of Convolutional and Bidirectional
Long Short-Term Memory Neural Networks (CNN-BLSTM),
which jointly learns shape, appearance, and dynamics in a
deep learning manner. You et al. [3] introduce a new data
set, which contains more than 3 million weakly labelled
images of different emotions. Esser et al. [4] develop a model
for efficient neuromorphic computing using the Deep CNN
technique. H.-W. Ng et al. [5] develop a cascading fine-tuning
approach for emotion recognition. Neagoe et al. [6] propose
a model for subject independent emotion recognition from
facial expressions using combined CNN and DBN. However,
these CNN models are often trained and tested on the
same dataset, whereas cross-dataset performance has received
less attention. Although the basic emotions defined by Ekman
and Friesen [7], anger, disgust, fear, happy, sadness, and
surprise, are believed to be universal, the way of expressing
these emotions can be quite diverse across different cultures,
ages, and genders [8]. As a result, a well-trained CNN model,
having high recognition accuracy on the training dataset,
usually performs poorly on other datasets. In order to make
the facial expression recognition system more practical, it
is necessary to improve the generalization ability of the
recognition model.
In this paper, we aim at improving the cross-dataset
accuracy of a CNN model on facial expression recognition.
One way to solve this problem is to rebuild models from
scratch using large-scale newly collected samples. Large
amounts of training samples, such as the dataset ImageNet [9]
containing over 15 million images, can reduce the overfitting
problem and help to train a reliable model. However, for
facial expression recognition, it is expensive and sometimes
even impossible to get enough labelled training data.
Therefore, we propose an unsupervised domain adaptation
method, which is especially suitable for unlabelled small
('47119020', 'Xiaoqing Wang', 'xiaoqing wang')
('36142058', 'Xiangjun Wang', 'xiangjun wang')
('3332231', 'Yubo Ni', 'yubo ni')
('47119020', 'Xiaoqing Wang', 'xiaoqing wang')
Correspondence should be addressed to Xiangjun Wang; tjuxjw@126.com
f79c97e7c3f9a98cf6f4a5d2431f149ffacae48fProvided by the author(s) and NUI Galway in accordance with publisher policies. Please cite the published
version when available.
Title
On color texture normalization for active appearance models
Author(s)
Ionita, Mircea C.; Corcoran, Peter M.; Buzuloiu, Vasile
Publication
Date
2009-05-12
Publication
Information
Ionita, M. C., Corcoran, P., & Buzuloiu, V. (2009). On Color
Texture Normalization for Active Appearance Models. Image
Processing, IEEE Transactions on, 18(6), 1372-1378.
Publisher
IEEE
Link to
publisher's
version
http://dx.doi.org/10.1109/TIP.2009.2017163
Item record
http://hdl.handle.net/10379/1350
Some rights reserved. For more information, please see the item record link above.
Downloaded 2017-06-17T22:38:27Z
f7452a12f9bd927398e036ea6ede02da79097e6e
f7a271acccf9ec66c9b114d36eec284fbb89c7efOpen Access
Research
Does attractiveness influence condom
use intentions in heterosexual men?
An experimental study
To cite: Eleftheriou A,
Bullock S, Graham CA, et al.
Does attractiveness influence
condom use intentions in
heterosexual men?
An experimental study. BMJ
Open 2016;6:e010883.
doi:10.1136/bmjopen-2015-
010883
▸ Prepublication history for
this paper is available online.
To view these files please
visit the journal online
(http://dx.doi.org/10.1136/
bmjopen-2015-010883).
Received 17 December 2015
Revised 1 March 2016
Accepted 7 April 2016
1Department of Electronics
and Computer Science,
University of Southampton
Southampton, UK
Institute for Complex
Systems Simulation,
University of Southampton
Southampton, UK
3Department of Computer
Science, University of Bristol
Bristol, UK
4Centre for Sexual Health
Research, Department of
Psychology, University of
Southampton, Southampton,
UK
Correspondence to
('6093065', 'Anastasia Eleftheriou', 'anastasia eleftheriou')
('1733871', 'Seth Bullock', 'seth bullock')
('4712904', 'Cynthia A Graham', 'cynthia a graham')
('48479171', 'Nicole Stone', 'nicole stone')
('50227141', 'Roger Ingham', 'roger ingham')
('6093065', 'Anastasia Eleftheriou', 'anastasia eleftheriou')
ae2n12@soton.ac.uk
f7093b138fd31956e30d411a7043741dcb8ca4aaHierarchical Clustering in Face Similarity Score
Space
Jason Grant and Patrick Flynn
Department of Computer Science and Engineering
University of Notre Dame
Notre Dame, IN 46556
f7dcadc5288653ec6764600c7c1e2b49c305dfaaCopyright
by
Adriana Ivanova Kovashka
2014
f7de943aa75406fe5568fdbb08133ce0f9a765d4Project 1.5: Human Identification at a Distance - Hornak, Adjeroh, Cukic, Gautum, & Ross
Project 1.5
Biometric Identification and Surveillance1
Year 5 Deliverable
Technical Report: Research Challenges in Biometrics
and
Indexed bibliography of relevant biometric research literature
Donald Adjeroh, Bojan Cukic, Arun Ross
April, 2014
1 "This research was supported by the United States Department of Homeland Security through the National Center for Border Security
and Immigration (BORDERS) under grant number 2008-ST-061-BS0002. However, any opinions, findings, and conclusions or
recommendations in this document are those of the authors and do not necessarily reflect views of the United States Department of
Homeland Security."
('4800511', 'Don Adjeroh', 'don adjeroh')
('1702603', 'Bojan Cukic', 'bojan cukic')
('1698707', 'Arun Ross', 'arun ross')
donald.adjeroh@mail.wvu.edu; bojan.cukic@mail.wvu.edu; arun.ross@mail.wvu.edu
f75852386e563ca580a48b18420e446be45fcf8dILLUMINATION INVARIANT FACE RECOGNITION
ENEE 631: Digital Image and Video Processing
Instructor: Dr. K. J. Ray Liu
Term Project - Spring 2006
1. INTRODUCTION
The performance of face recognition algorithms is severely affected by two
important factors: changes in the pose and illumination conditions of the subjects. The
changes in illumination conditions can be so drastic that the variation due to
lighting is of a similar order to the variation due to the change in subjects
[1], and this can result in misclassification.

For example, in the acquisition of the face of a person from a real-time video, the
ambient conditions will cause different lighting variations on the tracked face. Some
examples of images with different illumination conditions are shown in Fig. 1. In this
project, we study some algorithms that are capable of performing Illumination Invariant
Face Recognition. The performances of these algorithms were compared on the CMU-
Illumination dataset [13], by using the entire face as the input to the algorithms. Then, a
model of dividing the face into four regions is proposed and the performance of the
algorithms on these new features is analyzed.

('33692583', 'Raghuraman Gopalan', 'raghuraman gopalan')raghuram@umd.edu
f7c50d2be9fba0e4527fd9fbe3095e9d9a94fdd3Large Margin Multi-Metric Learning for Face
and Kinship Verification in the Wild
School of EEE, Nanyang Technological University, Singapore
2Advanced Digital Sciences Center, Singapore
('34651153', 'Junlin Hu', 'junlin hu')
('1697700', 'Jiwen Lu', 'jiwen lu')
('34316743', 'Junsong Yuan', 'junsong yuan')
('1689805', 'Yap-Peng Tan', 'yap-peng tan')
f78863f4e7c4c57744715abe524ae4256be884a9
f77c9bf5beec7c975584e8087aae8d679664a1ebLocal Deep Neural Networks for Age and Gender Classification
March 27, 2017
('9949538', 'Zukang Liao', 'zukang liao')
('2403354', 'Stavros Petridis', 'stavros petridis')
('1694605', 'Maja Pantic', 'maja pantic')
f7ba77d23a0eea5a3034a1833b2d2552cb42fb7aThis is a pre-print of the original paper accepted at the International Joint Conference on Biometrics (IJCB) 2017.
LOTS about Attacking Deep Features
Vision and Security Technology (VAST) Lab
University of Colorado, Colorado Springs, USA
('2974221', 'Andras Rozsa', 'andras rozsa')
('1760117', 'Terrance E. Boult', 'terrance e. boult')
{arozsa,mgunther,tboult}@vast.uccs.edu
e8686663aec64f4414eba6a0f821ab9eb9f93e38IMPROVING SHAPE-BASED FACE RECOGNITION BY MEANS OF A SUPERVISED
DISCRIMINANT HAUSDORFF DISTANCE
J.L. Alba
, A. Pujol
††
, A. López
†††
and J.J. Villanueva
†††
University of Vigo, Spain
†††Centre de Visió per Computador, Universitat Autònoma de Barcelona, Spain
††Digital Pointer MVT
e82360682c4da11f136f3fccb73a31d7fd195694AALTO UNIVERSITY
SCHOOL OF SCIENCE AND TECHNOLOGY
Faculty of Information and Natural Science
Department of Information and Computer Science
Online Face Recognition with
Application to Proactive Augmented
Reality
Master’s Thesis submitted in partial fulfillment of the requirements for the
degree of Master of Science in Technology.
Espoo, May 25, 2010
Supervisor:
Instructor:
Professor Erkki Oja
('1700492', 'Jing Wu', 'jing wu')
('1758971', 'Markus Koskela', 'markus koskela')
e8410c4cd1689829c15bd1f34995eb3bd4321069
e8fdacbd708feb60fd6e7843b048bf3c4387c6dbDeep Learning
Hinnerup Net A/S
www.hinnerup.net
July 4, 2014
Introduction
Deep learning is a topic in the field of artificial intelligence (AI) and a relatively
new research area, although it builds on the popular artificial neural networks (loosely
inspired by brain function). Research on artificial neural networks began with Frank
Rosenblatt's development of the perceptron in the 1950s and 1960s. To further
mimic the architectural depth of the brain, researchers wanted to train deep multi-
layer neural networks – this, however, did not happen until Geoffrey Hinton in 2006
introduced Deep Belief Networks [1].
Recently, the topic of deep learning has gained public interest. Large web companies such
as Google and Facebook focus their research on AI and command an ever-increasing amount
of compute power, which has led to researchers finally being able to produce results
that are of interest to the general public. In July 2012 Google trained a deep learning
network on YouTube videos with the remarkable result that the network learned to
recognize humans as well as cats [6], and in January this year Google successfully used
deep learning on Street View images to automatically recognize house numbers with
an accuracy comparable to that of a human operator [5]. In March this year Facebook
announced their DeepFace algorithm that is able to match faces in photos with Facebook
users almost as accurately as a human can do [9].
Deep learning and other AI are here to stay and will become more and more present in
our daily lives, so we had better make ourselves acquainted with the technology. Let’s
dive into the deep water and try not to drown!
Data Representations
Before presenting data to an AI algorithm, we would normally prepare the data to make
it feasible to work with. For instance, if the data consists of images, we would take each
e8f0f9b74db6794830baa2cab48d99d8724e8cb6Active Image Labeling and Its Application to
Facial Action Labeling
Electrical, Computer, Rensselaer Polytechnic Institute
Visualization and Computer Vision Lab, GE Global Research Center
('40396543', 'Lei Zhang', 'lei zhang')
('1686235', 'Yan Tong', 'yan tong')
('1726583', 'Qiang Ji', 'qiang ji')
zhangl2@rpi.edu,tongyan@research.ge.com,qji@ecse.rpi.edu
e8b2a98f87b7b2593b4a046464c1ec63bfd13b51CMS-RCNN: Contextual Multi-Scale
Region-based CNN for Unconstrained Face
Detection
('3117715', 'Chenchen Zhu', 'chenchen zhu')
('3049981', 'Yutong Zheng', 'yutong zheng')
('1769788', 'Khoa Luu', 'khoa luu')
('1794486', 'Marios Savvides', 'marios savvides')
e87d6c284cdd6828dfe7c092087fbd9ff5091ee4Unsupervised Creation of Parameterized Avatars
1Facebook AI Research
School of Computer Science, Tel Aviv University
('1776343', 'Lior Wolf', 'lior wolf')
('2188620', 'Yaniv Taigman', 'yaniv taigman')
('33964593', 'Adam Polyak', 'adam polyak')
e8523c4ac9d7aa21f3eb4062e09f2a3bc1eedcf7Towards End-to-End Face Recognition through Alignment Learning
Tsinghua University
Beijing, China, 100084
('8802368', 'Yuanyi Zhong', 'yuanyi zhong')
('1752427', 'Jiansheng Chen', 'jiansheng chen')
('39071060', 'Bo Huang', 'bo huang')
zhongyy13@mails.tsinghua.edu.cn, jschenthu@mail.tsinghua.edu.cn, huangb14@mails.tsinghua.edu.cn
e85a255a970ee4c1eecc3e3d110e157f3e0a4629Fusing Hierarchical Convolutional Features for Human Body Segmentation and
Clothing Fashion Classification
School of Computer Science, Wuhan University, P.R. China
('47294008', 'Zheng Zhang', 'zheng zhang')
('3127916', 'Chengfang Song', 'chengfang song')
('4793870', 'Qin Zou', 'qin zou')
E-mails: {zhangzheng, songchf, qzou}@whu.edu.cn
e8c9dcbf56714db53063b9c367e3e44300141ff6Automated FACS face analysis benefits from the addition of velocity
Get The FACS Fast:
Timothy R. Brick
University of Virginia
Charlottesville, VA 22904
Michael D. Hunter
University of Virginia
Charlottesville, VA 22904
Jeffrey F. Cohn
University of Pittsburgh
Pittsburgh, PA 15260
tbrick@virginia.edu
mhunter@virginia.edu
jeffcohn@cs.cmu.edu
e8d1b134d48eb0928bc999923a4e092537e106f6WEIGHTED MULTI-REGION CONVOLUTIONAL NEURAL NETWORK FOR ACTION
RECOGNITION WITH LOW-LATENCY ONLINE PREDICTION
University of Science and Technology of China, Hefei, Anhui, China
†HERE Technologies, Chicago, Illinois, USA
('49417387', 'Yunfeng Wang', 'yunfeng wang')
('38272296', 'Wengang Zhou', 'wengang zhou')
('46324995', 'Qilin Zhang', 'qilin zhang')
('49897466', 'Xiaotian Zhu', 'xiaotian zhu')
('7179232', 'Houqiang Li', 'houqiang li')
e8c6c3fc9b52dffb15fe115702c6f159d955d30813
Linear Subspace Learning for
Facial Expression Analysis
Philips Research
The Netherlands
1. Introduction
Facial expression, resulting from movements of the facial muscles, is one of the most
powerful, natural, and immediate means for human beings to communicate their emotions
and intentions. Some examples of facial expressions are shown in Fig. 1. Darwin (1872) was
the first to describe in detail the specific facial expressions associated with emotions in
animals and humans; he argued that all mammals show emotions reliably in their faces.
Psychological studies (Mehrabian, 1968; Ambady & Rosenthal, 1992) indicate that facial
expressions, with other non-verbal cues, play a major and fundamental role in face-to-face
communication.
Fig. 1. Facial expressions of George W. Bush.
Machine analysis of facial expressions, enabling computers to analyze and interpret facial
expressions as humans do, has many important applications including intelligent human-
computer interaction, computer animation, surveillance and security, medical diagnosis,
law enforcement, and awareness system (Shan, 2007). Driven by its potential applications
and theoretical interests of cognitive and psychological scientists, automatic facial
expression analysis has attracted much attention in the last two decades (Pantic & Rothkrantz,
2000a; Fasel & Luettin, 2003; Tian et al, 2005; Pantic & Bartlett, 2007). It has been studied in
multiple disciplines such as psychology, cognitive science, computer vision, pattern
Source: Machine Learning, Book edited by: Abdelhamid Mellouk and Abdennacer Chebira,
ISBN 978-3-902613-56-1, pp. 450, February 2009, I-Tech, Vienna, Austria
www.intechopen.com
('10795229', 'Caifeng Shan', 'caifeng shan')
e8b3a257a0a44d2859862cdec91c8841dc69144dLiquid Pouring Monitoring via
Rich Sensory Inputs
National Tsing Hua University, Taiwan
Stanford University, USA
('27555915', 'Tz-Ying Wu', 'tz-ying wu')
('9618379', 'Juan-Ting Lin', 'juan-ting lin')
('27538483', 'Chan-Wei Hu', 'chan-wei hu')
('9200530', 'Juan Carlos Niebles', 'juan carlos niebles')
('46611107', 'Min Sun', 'min sun')
{gina9726, brade31919, johnsonwang0810, huchanwei1204}@gmail.com,
sunmin@ee.nthu.edu.tw
jniebles@cs.stanford.edu
fa90b825346a51562d42f6b59a343b98ea2e501aModeling Naive Psychology of Characters in Simple Commonsense Stories
Paul G. Allen School of Computer Science and Engineering, University of Washington
Allen Institute for Artificial Intelligence
Information Sciences Institute and Computer Science, University of Southern California
('2516777', 'Hannah Rashkin', 'hannah rashkin')
('2691021', 'Antoine Bosselut', 'antoine bosselut')
('2729164', 'Maarten Sap', 'maarten sap')
('1710034', 'Kevin Knight', 'kevin knight')
('1699545', 'Yejin Choi', 'yejin choi')
{hrashkin,msap,antoineb,yejin}@cs.washington.edu
knight@isi.edu
fab83bf8d7cab8fe069796b33d2a6bd70c8cefc6Draft: Evaluation Guidelines for Gender
Classification and Age Estimation
July 1, 2011
Introduction
Previous research on gender classification and age estimation did not use a
standardised evaluation procedure, which makes comparing the different approaches
difficult.
We therefore propose a benchmarking and evaluation protocol for gender
classification as well as age estimation, to set a common ground for future research
in these two areas.
The evaluations are designed such that there is one scenario under controlled
laboratory conditions and one under uncontrolled real-life conditions.
The datasets were selected with the criteria of being publicly available for
research purposes.
File lists for the folds corresponding to the individual benchmarking proto-
cols will be provided over our website at http://face.cs.kit.edu/befit. We
will provide two kinds of folds for each of the tasks and conditions: one set of
folds using the whole dataset and one set of folds using a reduced dataset, which
is approximately balanced in terms of age, gender and ethnicity.
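As an illustration of what such approximately balanced folds could look like, here is a minimal sketch; the `stratified_folds` helper and the toy file names are hypothetical, and the official file lists remain the ones distributed via the website above:

```python
from collections import defaultdict

def stratified_folds(samples, key, n_folds=5):
    """Assign samples to n_folds so each stratum (e.g. an age/gender/ethnicity
    combination) is spread approximately evenly across the folds."""
    strata = defaultdict(list)
    for s in samples:
        strata[key(s)].append(s)
    folds = [[] for _ in range(n_folds)]
    for group in strata.values():
        for i, s in enumerate(group):
            folds[i % n_folds].append(s)  # round-robin within each stratum
    return folds

# Toy example: 10 "male" and 10 "female" file names split into 5 folds of 4
# images each, every fold containing 2 of each gender.
samples = [("img%02d" % i, "m" if i < 10 else "f") for i in range(20)]
folds = stratified_folds(samples, key=lambda s: s[1])
print([len(f) for f in folds])  # [4, 4, 4, 4, 4]
```

Round-robin assignment within each stratum keeps every fold balanced up to rounding, which is all "approximately balanced" requires here.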
2 Gender Classification
In this task the goal is to determine the gender of the persons depicted in the
individual images.
2.1 Data
In previous works, one of the most commonly used databases is the FERET database [1,
2]. We decided not to use this database here because of its low number of
images.
('40303076', 'Tobias Gehrig', 'tobias gehrig')
('39504159', 'Matthias Steiner', 'matthias steiner')
{tobias.gehrig, ekenel}@kit.edu
faeefc5da67421ecd71d400f1505cfacb990119cOriginal research
published: 20 November 2017
doi: 10.3389/frobt.2017.00061
PastVision+: Thermovisual Inference
of Recent Medicine Intake by
Detecting Heated Objects and
Cooled Lips
Intelligent Systems Laboratory, Halmstad University, Halmstad, Sweden
This article addresses the problem of how a robot can infer what a person has done
recently, with a focus on checking oral medicine intake in dementia patients. We present
PastVision+, an approach showing how thermovisual cues in objects and humans can
be leveraged to infer recent unobserved human–object interactions. Our expectation
is that this approach can provide enhanced speed and robustness compared to exist-
ing methods, because our approach can draw inferences from single images without
needing to wait to observe ongoing actions and can deal with short-lasting occlusions;
when combined, we expect a potential improvement in accuracy due to the extra infor-
mation from knowing what a person has recently done. To evaluate our approach, we
obtained some data in which an experimenter touched medicine packages and a glass
of water to simulate intake of oral medicine, for a challenging scenario in which some
touches were conducted in front of a warm background. Results were promising, with
a detection accuracy of touched objects of 50% at the 15 s mark and 0% at the 60 s
mark, and a detection accuracy of cooled lips of about 100 and 60% at the 15 s mark
for cold and tepid water, respectively. Furthermore, we conducted a follow-up check for
another challenging scenario in which some participants pretended to take medicine or
otherwise touched a medicine package: accuracies of inferring object touches, mouth
touches, and actions were 72.2, 80.3, and 58.3% initially, and 50.0, 81.7, and 50.0%
at the 15 s mark, with a rate of 89.0% for person identification. The results suggested
some areas in which further improvements would be possible, toward facilitating robot
inference of human actions, in the context of medicine intake monitoring.
Keywords: thermovisual inference, touch detection, medicine intake, action recognition, monitoring, near past
inference
1. INTRODUCTION
This article addresses the problem of how a robot can detect what a person has touched recently,
with a focus on checking oral medicine intake in dementia patients.
Detecting recent touches would be useful because touch is a typical component of many human–
object interactions; moreover, knowing which objects have been touched allows inference into
what actions have been conducted, which is an important requirement for robots to collaborate
effectively with people (Vernon et al., 2016). For example, touches to a stove, door handle, or pill
bottle can occur as a result of cooking, leaving one’s house, or taking medicine, all of which could
potentially be dangerous for a person with dementia, if they forget to turn off the heat, lose their
way, or make a mistake. Here, we focus on the latter problem of medicine adherence—whose
Edited by:
Alberto Montebelli,
University of Skövde, Sweden
Reviewed by:
Sam Neymotin,
Brown University, United States
Per Backlund,
University of Skövde, Sweden
Fernando Bevilacqua,
University of Skövde, Sweden
(in collaboration with Per Backlund)
*Correspondence:
Specialty section:
This article was submitted to
Computational Intelligence,
a section of the journal
Frontiers in Robotics and AI
Received: 15 May 2017
Accepted: 02 November 2017
Published: 20 November 2017
Citation:
Cooney M and Bigun J (2017)
PastVision+: Thermovisual Inference
of Recent Medicine Intake by
Detecting Heated Objects
and Cooled Lips.
Front. Robot. AI 4:61.
doi: 10.3389/frobt.2017.00061
Frontiers in Robotics and AI | www.frontiersin.org
November 2017 | Volume 4 | Article 61
('7149684', 'Martin Cooney', 'martin cooney')
('5058247', 'Josef Bigun', 'josef bigun')
('7149684', 'Martin Cooney', 'martin cooney')
martin.daniel.cooney@gmail.com
fa4f59397f964a23e3c10335c67d9a24ef532d5cDAP3D-Net: Where, What and How Actions Occur in Videos?
Department of Computer Science and Digital Technologies
Northumbria University, Newcastle upon Tyne, NE1 8ST, UK
('40241836', 'Li Liu', 'li liu')
('47942896', 'Yi Zhou', 'yi zhou')
('40799321', 'Ling Shao', 'ling shao')
li2.liu@northumbria.ac.uk, m.y.yu@ieee.org, ling.shao@ieee.org
fa08a4da5f2fa39632d90ce3a2e1688d147ece61Supplementary material for
“Unsupervised Creation of Parameterized Avatars”
1 Summary of Notations
Tab. 1 itemizes the symbols used in the submission. Figs. 2, 3, and 4 of the main text illustrate many of these
symbols.
2 DANN results
Fig. 1 shows side by side samples of the original image and the emoji generated by the method of [1].
As can be seen, these results do not preserve the identity very well, despite considerable effort invested in
finding suitable architectures.
3 Multiple Images Per Person
Following [4], we evaluate the visual quality that is obtained per person and not just per image, by testing
TOS on the Facescrub dataset [3]. For each person p, we considered the set of their images Xp, and selected
the emoji that was most similar to their source image, i.e., the one for which:
argmin_{x ∈ Xp} ||f(x) − f(e(c(G(x))))||.   (1)
Fig. 2 depicts the results obtained by this selection method on sample images from the Facescrub dataset
(it is an extension of Fig. 7 of the main text). The figure also shows, for comparison, the DTN [4] result for
the same image.
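In code, this per-person selection is a plain argmin over embedding distances. The sketch below is illustrative only: `embed` and `emoji_pipeline` are hypothetical stand-ins for the face representation f and the composed mapping e(c(G(·))), which the text does not specify at code level:

```python
import numpy as np

def select_best_image(images, embed, emoji_pipeline):
    """Return the source image whose generated emoji best preserves identity,
    i.e. the x in X_p minimizing ||f(x) - f(e(c(G(x))))|| as in Eq. (1)."""
    def distance(x):
        return np.linalg.norm(embed(x) - embed(emoji_pipeline(x)))
    return min(images, key=distance)

# Toy check with vectors standing in for images: the identity embedding and a
# stand-in "generator" that halves its input.
imgs = [np.array([0.0, 4.0]), np.array([1.0, 1.0]), np.array([5.0, 0.0])]
best = select_best_image(imgs, embed=lambda v: v, emoji_pipeline=lambda v: v * 0.5)
print(best)  # [1. 1.] -- the vector least changed by the stand-in pipeline
```

The same pattern applies with any embedding network in place of `embed`, since only pairwise distances in the representation space are compared.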
4 Detailed Architecture of the Various Networks
In this section we describe the architectures of the networks used for the emoji and avatar experiments.
4.1 TOS
Network g maps DeepFace’s 256-dimensional representation [5] into 64 × 64 RGB emoji images. Follow-
ing [4], this is done through a network with 9 blocks, each consisting of a convolution, batch-normalization
and ReLU, except the last layer which employs Tanh activation. The odd blocks 1,3,5,7,9 perform upscaling
convolutions with 512-256-128-64-3 filters respectively of spatial size 4 × 4. The even ones perform 1 × 1
convolutions [2]. The odd blocks use a stride of 2 and padding of 1, excluding the first one which does not
use stride or padding.
Network e maps an emoji parameterization into the matching 64 × 64 RGB emoji. The parameterization is
given as a binary vector in R^813 for emojis; the avatar parameterization is in R^354. While there are dependencies
among the various dimensions (an emoji cannot have two hairstyles at once), the binary representation is
chosen for its simplicity and generality. e is trained in a fully supervised manner, using pairs of matching
parameterization vectors and images.
The architecture of e employs five upscaling convolutions with 512-256-128-64-3 filters respectively,
each of spatial size 4×4. All layers except the last one are batch normalized followed by a ReLU activation.
The last layer is followed by Tanh activation, generating an RGB image with values in range [−1, 1]. All
the layers use a stride of 2 and padding of 1, excluding the first one which does not use stride or padding.
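The stated spatial progression of these upscaling convolutions can be checked against the standard transposed-convolution size formula; this is a sanity-check calculation written for this summary, not code from the paper:

```python
def deconv_out(size, kernel=4, stride=2, padding=1):
    """Output spatial size of a transposed convolution (no output padding)."""
    return (size - 1) * stride - 2 * padding + kernel

# First block: no stride or padding, so a 1x1 seed grows to 4x4...
size = deconv_out(1, kernel=4, stride=1, padding=0)
sizes = [size]
# ...then four stride-2, padding-1 blocks double the resolution each time.
for _ in range(4):
    size = deconv_out(size)
    sizes.append(size)
print(sizes)  # [4, 8, 16, 32, 64] -> final 64x64 RGB emoji
```

With kernel 4, stride 2, and padding 1, each block maps an n×n map to 2n×2n, so five blocks take the 1×1 seed to the 64×64 output described above.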
fab2fc6882872746498b362825184c0fb7d810e4RESEARCH ARTICLE
Right wing authoritarianism is associated with
race bias in face detection
1 Univ. Grenoble Alpes, LPNC, Grenoble, France, 2 CNRS, LPNC UMR 5105, Grenoble, France, 3 IPSY,
Université Catholique de Louvain, Louvain-la-Neuve, Belgium, 4 The Queensland Brain Institute, The
University of Queensland, St Lucia QLD Australia, 5 Institut Universitaire de France, Paris, France
('3128194', 'Brice Beffara', 'brice beffara')
('2066203', 'Jessica McFadyen', 'jessica mcfadyen')
('2634712', 'Martial Mermillod', 'martial mermillod')
* amelie.bret@univ-grenoble-alpes.fr
faead8f2eb54c7bc33bc7d0569adc7a4c2ec4c3b
fa24bf887d3b3f6f58f8305dcd076f0ccc30272aJMLR: Workshop and Conference Proceedings 39:189–204, 2014
ACML 2014
Interval Insensitive Loss for Ordinal Classification
Vojtěch Franc
Václav Hlaváč
Center for Machine Perception, Department of Cybernetics, Faculty of Electrical Engineering, Czech
Technical University in Prague, Technická 2, 166 27 Prague 6, Czech Republic
Editor: Dinh Phung and Hang Li
('2742026', 'Kostiantyn Antoniuk', 'kostiantyn antoniuk')antonkos@cmp.felk.cvut.cz
xfrancv@cmp.felk.cvut.cz
hlavac@fel.cvut.cz
fac8cff9052fc5fab7d5ef114d1342daba5e4b82(CV last updated Oct. 5th, 2009.)
www.stat.cmu.edu/~abrock
1-412-478-3609
Citizenship: U.S., Australia (dual)
Education
1994-1998
: Ph.D., Department of Statistics and Department of of Electrical Engineering at
Melbourne University, Advisors: K. Borovkov, R. Evans
1993
: Honours Science Degree (in the Department of Statistics) completed at Melbourne
University (H
1988-92
: Bachelor of Science and Bachelor of Engineering with Honours completed at Mel-
bourne University
Employment
2007+
Carnegie Mellon University
2007-2009
: Senior Analyst, Horton Point LLC (Hedge Fund Management Company)
2006-2007
: Associate Professor, Department of Statistics, Carnegie Mellon Uniuversity
2005-2007
: Affiliated faculty member, Machine Learning Department (formerly known as the
Center for Automated Learning and Discovery), Carnegie Mellon University
2003-2007
Faculty member, Parallel Data Lab (PDL), Carnegie Mellon University
2002-2005
Carnegie Mellon University
1999-2002
Carnegie Mellon University
1998-1999
: Research Fellow, Department of Electrical and Electronic Engineering, The Univer-
sity of Melbourne
1993-1995
Sessional Tutor, The University of Melbourne
('1680307', 'Anthony Brockwell', 'anthony brockwell')anthony.brockwell@gmail.com
faa29975169ba3bbb954e518bc9814a5819876f6Evolution-Preserving Dense Trajectory Descriptors
Stony Brook University, Stony Brook, NY 11794, USA
('2295608', 'Yang Wang', 'yang wang')
('3482497', 'Vinh Tran', 'vinh tran')
('2356016', 'Minh Hoai', 'minh hoai')
{wang33, tquangvinh, minhhoai}@cs.stonybrook.edu
fafe69a00565895c7d57ad09ef44ce9ddd5a6caaApplied Mathematics, 2012, 3, 2071-2079
http://dx.doi.org/10.4236/am.2012.312A286 Published Online December 2012 (http://www.SciRP.org/journal/am)
Gaussian Mixture Models for Human Face Recognition
under Illumination Variations
Mihaylo College of Business and Economics
California State University, Fullerton, USA
Received August 18, 2012; revised September 18, 2012; accepted September 25, 2012
('2046854', 'Sinjini Mitra', 'sinjini mitra')Email: smitra@fullerton.edu
faf5583063682e70dedc4466ac0f74eeb63169e7Holistic Person Processing: Faces With Bodies Tell the Whole Story
Hillel Aviezer, Princeton University and New York University; Yaacov Trope, New York University; Alexander Todorov, Princeton University
Faces and bodies are typically encountered simultaneously, yet little research has explored the visual processing of the full person. Specifically, it is unknown whether the face and body are perceived as distinct components or as an integrated, gestalt-like unit. To examine this question, we investigated whether emotional face–body composites are processed in a holistic-like manner by using a variant of the composite face task, a measure of holistic processing. Participants judged facial expressions combined with emotionally congruent or incongruent bodies that have been shown to influence the recognition of emotion from the face. Critically, the faces were either aligned with the body in a natural position or misaligned in a manner that breaks the ecological person form. Converging data from 3 experiments confirm that breaking the person form reduces the facilitating influence of congruent body context as well as the impeding influence of incongruent body context on the recognition of emotion from the face. These results show that faces and bodies are processed as a single unit and support the notion of a composite person effect analogous to the classic effect described for faces.
Keywords: emotion perception, context effects, facial and body expressions, holistic perception, composite effect
A glance is usually sufficient for extracting a great deal of social information from other people (Adolphs, 2002). Perceptual cues to characteristics such as gender, sexual orientation, emotional expression, attractiveness, and personality traits can be found in both the face and the body (e.g., face cues, Adolphs, 2003; Calder & Young, 2005; Ekman, 1993; Elfenbein & Ambady, 2002; Haxby, Hoffman, & Gobbini, 2000; Rule, Ambady, & Hallett, 2009; Thornhill & Gangestad, 1999; Todorov & Duchaine, 2008; Todorov, Pakrashi, & Oosterhof, 2009; Willis & Todorov, 2006; Zebrowitz, Hall, Murphy, & Rhodes, 2002; Zebrowitz & Montepare, 2008; body cues, de Gelder et al., 2006; Johnson, Gill, Reichman, & Tassinary, 2007; Peelen & Downing, 2005; Stevenage, Nixon, & Vince, 1999; Wallbott, 1998). To date, most researchers have investigated the face and the body as discrete perceptual units, focusing on the processing of each source in isolation. Although this approach has proved extremely fruitful for characterizing the unique perceptual contributions of the face and body, surprisingly little is known about the processing of both sources combined. The aim of the current study was to shed light on the perceptual processing of the full person by examining whether the face and body in conjunction are processed as a holistic "person unit."
On the basis of previous accounts, one may predict that faces and bodies are processed as two visual components of social information (Wallbott, 1998). These views argue that faces and bodies may differ in value, intensity, and clarity, and consequently the information from each must be weighted and combined by the cognitive system in order to reach a conclusion about the target (Ekman, Friesen, & Ellsworth, 1982; Ellison & Massaro, 1997; Trope, 1986; Wallbott, 1998). According to this approach, the face and body may influence each other. However, the influence is not synergistic, and the perception of the face and body is equal to the weighted sum of their parts (Wallbott, 1998). By contrast, the hypothesis offered here is that the face and body are subcomponents of a larger perceptual person unit. From an ecological perspective this seems likely because under natural conditions, the visual system rarely encounters isolated faces and bodies (McArthur & Baron, 1983; Russell, 1997). According to this view, the face and body form a unitary percept that may encompass different properties than the two sources of information separately. In other words, the information read out from the full person may be more than the sum of the face and body alone.
Holistic Processing and the Composite Effect
Past research on social perception examining unitized gestalt processing has focused primarily on the face. Indeed, a hallmark of face perception is holistic processing by which individual facial components become integrated into a whole-face unit (Farah, Wilson, Drain, & Tanaka, 1995; Tanaka & Farah, 1993). Although isolated facial components do bear specific information (Smith, Cottrell, Gosselin, & Schyns, 2005; Whalen et al., 2004), their arrangement in the natural face configuration results in an inte-
This article was published Online First February 20, 2012. Hillel Aviezer, Department of Psychology, Princeton University, and Department of Psychology, New York University; Yaacov Trope, Department of Psychology, New York University; Alexander Todorov, Department of Psychology, Princeton University. Correspondence concerning this article should be addressed to Hillel Aviezer, Department of Psychology, Princeton University, Princeton, NJ 08540-1010. E-mail: haviezer@princeton.edu
Journal of Personality and Social Psychology © 2012 American Psychological Association, 2012, Vol. 103, No. 1, 20–37. 0022-3514/12/$12.00 DOI: 10.1037/a0027411
faca1c97ac2df9d972c0766a296efcf101aaf969Sympathy for the Details: Dense Trajectories and Hybrid
Classification Architectures for Action Recognition
Computer Vision Group, Xerox Research Center Europe, Meylan, France
2Centre de Visió per Computador, Universitat Autònoma de Barcelona, Bellaterra, Spain
3German Aerospace Center, Wessling, Germany
('1799820', 'Adrien Gaidon', 'adrien gaidon')
('2286630', 'Eleonora Vig', 'eleonora vig')
{cesar.desouza, adrien.gaidon}@xrce.xerox.com,
eleonora.vig@dlr.de, antonio@cvc.uab.es
fab60b3db164327be8588bce6ce5e45d5b882db6Maximum A Posteriori Estimation of Distances
Between Deep Features in Still-to-Video Face
Recognition
National Research University Higher School of Economics
Laboratory of Algorithms and Technologies for Network Analysis,
36 Rodionova St., Nizhny Novgorod, Russia
National Research University Higher School of Economics
20 Myasnitskaya St., Moscow, Russia
September 2, 2018
('35153729', 'Andrey V. Savchenko', 'andrey v. savchenko')
('2080292', 'Natalya S. Belova', 'natalya s. belova')
avsavchenko@hse.ru
nbelova@hse.ru
fad895771260048f58d12158a4d4d6d0623f4158Audio-Visual Emotion
Recognition For Natural
Human-Robot Interaction
Dissertation submitted in fulfillment of the requirements for the academic degree
Doktor der Ingenieurwissenschaften (Dr.-Ing.)
submitted by
at the Faculty of Technology, Bielefeld University
15 March 2010
('32382494', 'Ahmad Rabie', 'ahmad rabie')
fae83b145e5eeda8327de9f19df286edfaf5e60cReadings in Technology and Education: Proceedings of ICICTE 2010
367
TOWARDS AN INTERACTIVE E-LEARNING SYSTEM BASED ON
EMOTIONS AND AFFECTIVE COGNITION
Department of Informatics
Department of Audiovisual Arts
Department of Informatics
Konstantinos Ch. Drossos
Department of Audiovisual Arts
Ionian University
Greece
('25189167', 'Panagiotis Vlamos', 'panagiotis vlamos')
('2284118', 'Andreas Floros', 'andreas floros')
('1761403', 'Michail N. Giannakos', 'michail n. giannakos')
ffea8775fc9c32f573d1251e177cd283b4fe09c9Accepted to be Published in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME) 2018, San Diego, USA
TRANSFORMATION ON COMPUTER–GENERATED FACIAL IMAGE TO AVOID DETECTION
BY SPOOFING DETECTOR
Graduate University for Advanced Studies, Kanagawa, Japan
National Institute of Informatics, Tokyo, Japan
The University of Edinburgh, Edinburgh, UK
('47321045', 'Huy H. Nguyen', 'huy h. nguyen')
('9328269', 'Ngoc-Dung T. Tieu', 'ngoc-dung t. tieu')
('2912817', 'Hoang-Quoc Nguyen-Son', 'hoang-quoc nguyen-son')
('1716857', 'Junichi Yamagishi', 'junichi yamagishi')
('1678602', 'Isao Echizen', 'isao echizen')
{nhhuy, dungtieu, nshquoc, jyamagishi, iechizen}@nii.ac.jp
ff8315c1a0587563510195356c9153729b533c5b432
Zapping Index:Using Smile to Measure
Advertisement Zapping Likelihood
('1803478', 'Songfan Yang', 'songfan yang')
('1784929', 'Mehran Kafai', 'mehran kafai')
('39776603', 'Le An', 'le an')
('1707159', 'Bir Bhanu', 'bir bhanu')
ff44d8938c52cfdca48c80f8e1618bbcbf91cb2aTowards Video Captioning with Naming: a
Novel Dataset and a Multi-Modal Approach
Dipartimento di Ingegneria “Enzo Ferrari”
Universit`a degli Studi di Modena e Reggio Emilia
('2035969', 'Stefano Pini', 'stefano pini')
('3468983', 'Marcella Cornia', 'marcella cornia')
('1843795', 'Lorenzo Baraldi', 'lorenzo baraldi')
('1741922', 'Rita Cucchiara', 'rita cucchiara')
{name.surname}@unimore.it
fffefc1fb840da63e17428fd5de6e79feb726894Fine-Grained Age Estimation in the wild with
Attention LSTM Networks
('47969038', 'Ke Zhang', 'ke zhang')
('49229283', 'Na Liu', 'na liu')
('3451660', 'Xingfang Yuan', 'xingfang yuan')
('46910049', 'Xinyao Guo', 'xinyao guo')
('35038034', 'Ce Gao', 'ce gao')
('2626320', 'Zhenbing Zhao', 'zhenbing zhao')
ff398e7b6584d9a692e70c2170b4eecaddd78357
ffc5a9610df0341369aa75c0331ef021de0a02a9Transferred Dimensionality Reduction
State Key Laboratory on Intelligent Technology and Systems
Tsinghua National Laboratory for Information Science and Technology (TNList)
Tsinghua University, Beijing 100084, China
('39747687', 'Zheng Wang', 'zheng wang')
('1809614', 'Yangqiu Song', 'yangqiu song')
('1700883', 'Changshui Zhang', 'changshui zhang')
ffd81d784549ee51a9b0b7b8aaf20d5581031b74Performance Analysis of Retina and DoG
Filtering Applied to Face Images for Training
Correlation Filters
Everardo Santiago Ramírez1, José Ángel González Fraga1, Omar Álvarez
1 Facultad de Ciencias, Universidad Autónoma de Baja California,
Carretera Transpeninsular Tijuana-Ensenada, Núm. 3917, Colonia Playitas,
Ensenada, Baja California, C.P. 22860
{everardo.santiagoramirez,angel_fraga,
2 Facultad de Ingeniería, Arquitectura y Diseño, Universidad Autónoma de Baja
California, Carretera Transpeninsular Tijuana-Ensenada, Núm. 3917, Colonia
Playitas, Ensenada, Baja California, C.P. 22860
('2973536', 'Sergio Omar Infante Prieto', 'sergio omar infante prieto')aomar,everardo.gutierrez}@uabc.edu.mx
sinfante@uabc.edu.mx
ff01bc3f49130d436fca24b987b7e3beedfa404dArticle
Fuzzy System-Based Face Detection Robust to
In-Plane Rotation Based on Symmetrical
Characteristics of a Face
Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu
Academic Editor: Angel Garrido
Received: 15 June 2016; Accepted: 29 July 2016; Published: 3 August 2016
('1922686', 'Hyung Gil Hong', 'hyung gil hong')
('2026806', 'Won Oh Lee', 'won oh lee')
('3021526', 'Yeong Gon Kim', 'yeong gon kim')
('4634733', 'Kang Ryoung Park', 'kang ryoung park')
Seoul 100-715, Korea; hell@dongguk.edu (H.G.H.); 215p8@hanmail.net (W.O.L.); csokyg@dongguk.edu (Y.G.K.);
yawara18@hotmail.com (K.W.K.); nguyentiendat@dongguk.edu (D.T.N.)
* Correspondence: parkgr@dongguk.edu; Tel.: +82-10-3111-7022; Fax: +82-2-2277-8735
ff061f7e46a6213d15ac2eb2c49d9d3003612e49Morphable Human Face Modelling
by
Thesis
for fulfillment of the Requirements for the Degree of
Doctor of Philosophy (0190)
Clayton School of Information Technology
Monash University
February, 2008
('1695402', 'Nathan Faggian', 'nathan faggian')
('1695402', 'Nathan Faggian', 'nathan faggian')
('1728337', 'Andrew Paplinski', 'andrew paplinski')
('2696169', 'Jamie Sherrah', 'jamie sherrah')
ff1f45bdad41d8b35435098041e009627e60d208NAGRANI, ZISSERMAN: FROM BENEDICT CUMBERBATCH TO SHERLOCK HOLMES
From Benedict Cumberbatch to Sherlock
Holmes: Character Identification in TV
series without a Script
Visual Geometry Group,
Department of Engineering Science,
University of Oxford, UK
('19263506', 'Arsha Nagrani', 'arsha nagrani')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
arsha@robots.ox.ac.uk
az@robots.ox.ac.uk
ff60d4601adabe04214c67e12253ea3359f4e082
ffe4bb47ec15f768e1744bdf530d5796ba56cfc1AFIF4: Deep Gender Classification based on
AdaBoost-based Fusion of Isolated Facial Features and
Foggy Faces
aDepartment of Electrical Engineering and Computer Science, Lassonde School of
Engineering, York University, Canada
bFaculty of Computers and Information, Assiut University, Egypt
('40239027', 'Abdelrahman Abdelhamed', 'abdelrahman abdelhamed')
ffc9d6a5f353e5aec3116a10cf685294979c63d9Eigenphase-based face recognition: a comparison of phase-
information extraction methods
Faculty of Electrical Engineering and Computing,
University of Zagreb, Unska 3, 10 000 Zagreb
('35675021', 'Slobodan Ribarić', 'slobodan ribarić')
('3069572', 'Marijo Maračić', 'marijo maračić')
E-mail: slobodan.ribaric@fer.hr
ff8ef43168b9c8dd467208a0b1b02e223b731254BreakingNews: Article Annotation by
Image and Text Processing
('1780343', 'Arnau Ramisa', 'arnau ramisa')
('47242882', 'Fei Yan', 'fei yan')
('1994318', 'Francesc Moreno-Noguer', 'francesc moreno-noguer')
('1712041', 'Krystian Mikolajczyk', 'krystian mikolajczyk')
ff9195f99a1a28ced431362f5363c9a5da47a37bJournal of Vision (2016) 16(15):28, 1–8
Serial dependence in the perception of attractiveness
University of California
Berkeley, CA, USA
University of California
Berkeley, CA, USA
David Whitney
University of California
Berkeley, CA, USA
Helen Wills Neuroscience Institute, University of
California, Berkeley, CA, USA
Vision Science Group, University of California
Berkeley, CA, USA
The perception of attractiveness is essential for choices
of food, object, and mate preference. Like perception of
other visual features, perception of attractiveness is
stable despite constant changes of image properties due
to factors like occlusion, visual noise, and eye
movements. Recent results demonstrate that perception
of low-level stimulus features and even more complex
attributes like human identity are biased towards recent
percepts. This effect is often called serial dependence.
Some recent studies have suggested that serial
dependence also exists for perceived facial
attractiveness, though there is also concern that the
reported effects are due to response bias. Here we used
an attractiveness-rating task to test the existence of
serial dependence in perceived facial attractiveness. Our
results demonstrate that perceived face attractiveness
was pulled by the attractiveness level of facial images
encountered up to 6 s prior. This effect was not due to
response bias and did not rely on the previous motor
response. This perceptual pull increased as the difference
in attractiveness between previous and current stimuli
increased. Our results reconcile previously conflicting
findings and extend previous work, demonstrating that
sequential dependence in perception operates across
different levels of visual analysis, even at the highest
levels of perceptual interpretation.
Introduction
Humans make aesthetic judgments all the time about
the attractiveness or desirability of objects and scenes.
Aesthetic judgments are not merely about judging
works of art; they are constantly involved in our daily
activity, influencing or determining our choices of food,
object (Creusen & Schoormans, 2005), and mate
preference (Rhodes, Simmons, & Peters, 2005).
Aesthetic judgments are based on perceptual pro-
cessing (Arnheim, 1954; Livingstone & Hubel, 2002;
Solso, 1996). These judgments, like other perceptual
experiences, are thought to be relatively stable in spite
of fluctuations in the raw visual input we receive due to
factors like occlusion, visual noise, and eye movements.
One mechanism that allows the visual system to achieve
this stability is serial dependence. Recent results have
revealed that the perception of visual features such as
orientation (Fischer & Whitney, 2014), numerosity
(Cicchini, Anobile, & Burr, 2014), and facial identity
(Liberman, Fischer, & Whitney, 2014) are systemati-
cally assimilated toward visual input from the recent
past. This perceptual pull has been distinguished from
hysteresis in motor responses or decision processes, and
has been shown to be tuned by the magnitude of the
difference between previous and current visual inputs
(Fischer & Whitney, 2014; Liberman, Fischer, &
Whitney, 2014).
Is aesthetic perception similarly stable to feature
perception? Some previous studies have suggested that
the answer is yes. It has been shown that there is a
positive correlation between observers’ successive
attractiveness ratings of facial images (Kondo, Taka-
hashi, & Watanabe, 2012; Taubert, Van der Burg, &
Alais, 2016). This suggests that there is an assimilative
sequential dependence in attractiveness judgments.
Citation: Xia, Y., Leib, A. Y., & Whitney, D. (2016). Serial dependence in the perception of attractiveness. Journal of Vision,
16(15):28, 1–8, doi:10.1167/16.15.28.
Received July 13, 2016; published December 22, 2016
ISSN 1534-7362
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
('27678837', 'Ye Xia', 'ye xia')
('6931574', 'Allison Yamanashi Leib', 'allison yamanashi leib')
ffaad0204f4af763e3390a2f6053c0e9875376beArticle
Non-Convex Sparse and Low-Rank Based Robust
Subspace Segmentation for Data Mining
School of Information Science and Technology, Donghua University, Shanghai 200051, China
City University of Hong Kong, Kowloon 999077, Hong Kong, China
School of Mathematics and Computer Science, Northeastern State University, Tahlequah, OK 74464, USA
Received: 16 June 2017; Accepted: 10 July 2017; Published: 15 July 2017
('1743434', 'Wenlong Cheng', 'wenlong cheng')
('2482149', 'Mingbo Zhao', 'mingbo zhao')
('1691742', 'Naixue Xiong', 'naixue xiong')
('1977592', 'Kwok Tai Chui', 'kwok tai chui')
cheng.python@gmail.com
ktchui3-c@my.cityu.edu.hk
xiongnaixue@gmail.com
* Correspondence: mbzhao4@gmail.com; Tel.: +86-131-0684-8616
ffcbedb92e76fbab083bb2c57d846a2a96b5ae30
ff7bc7a6d493e01ec8fa2b889bcaf6349101676eFacial expression recognition with spatiotemporal local
descriptors
Machine Vision Group, Infotech Oulu and Department of Electrical and
Information Engineering, P. O. Box 4500 FI-90014 University of Oulu, Finland
('1757287', 'Guoying Zhao', 'guoying zhao')
('1714724', 'Matti Pietikäinen', 'matti pietikäinen')
{gyzhao, mkp}@ee.oulu.fi
fffa2943808509fdbd2fc817cc5366752e57664aCombined Ordered and Improved Trajectories for Large Scale Human Action
Recognition
1Vision & Sensing, HCC Lab,
ESTeM, University of Canberra
2IHCC, RSCS, CECS,
Australian National University
('1793720', 'O. V. Ramana Murthy', 'o. v. ramana murthy')
('1717204', 'Roland Goecke', 'roland goecke')
O.V.RamanaMurthy@ieee.org
roland.goecke@ieee.org
ff46c41e9ea139d499dd349e78d7cc8be19f936cInternational Journal of Modern Engineering Research (IJMER)
www.ijmer.com Vol.3, Issue.3, May-June. 2013 pp-1339-1342 ISSN: 2249-6645
A Novel Method for Movie Character Identification and its
Facial Expression Recognition
M.Tech, Sri Sunflower College of Engineering and Technology, Lankapalli
Sri Sunflower College of Engineering and Technology, Lankapalli
('6339174', 'N. Praveen', 'n. praveen')
ff5dd6f96e108d8233220cc262bc282229c1a582Applications (IJERA) ISSN: 2248-9622 www.ijera.com
Vol. 2, Issue 6, November- December 2012, pp.708-715
Robust Facial Marks Detection Method Using AAM And SURF
B.S. Abdur Rahman University, Chennai-48, India
B.S. Abdur Rahman University, Chennai-48, India
('9401261', 'Ziaul Haque Choudhury', 'ziaul haque choudhury')
('9401261', 'Ziaul Haque Choudhury', 'ziaul haque choudhury')
c5468665d98ce7349d38afb620adbf51757ab86fPose-Encoded Spherical Harmonics for Robust Face
Recognition Using a Single Image
Center for Automation Research, University of Maryland, College Park, MD 20742, USA
2 Vision Technologies Lab, Sarnoff Corporation, Princeton, NJ 08873, USA
('39265975', 'Zhanfeng Yue', 'zhanfeng yue')
('38480590', 'Wenyi Zhao', 'wenyi zhao')
('9215658', 'Rama Chellappa', 'rama chellappa')
c588c89a72f89eed29d42f34bfa5d4cffa530732Attributes2Classname: A discriminative model for attribute-based
unsupervised zero-shot learning
HAVELSAN Inc., 2Bilkent University, 3Hacettepe University
('9424554', 'Berkan Demirel', 'berkan demirel')
('1939006', 'Ramazan Gokberk Cinbis', 'ramazan gokberk cinbis')
('2011587', 'Nazli Ikizler-Cinbis', 'nazli ikizler-cinbis')
bdemirel@havelsan.com.tr, gcinbis@cs.bilkent.edu.tr, nazli@cs.hacettepe.edu.tr
c5d13e42071813a0a9dd809d54268712eba7883fFace Recognition Robust to Head Pose Changes Based on the RGB-D Sensor
West Virginia University, Morgantown, WV
('2997432', 'Cesare Ciaccio', 'cesare ciaccio')
('2671284', 'Lingyun Wen', 'lingyun wen')
('1822413', 'Guodong Guo', 'guodong guo')
cciaccio@mix.wvu.edu, lwen@mix.wvu.edu, guodong.guo@mail.wvu.edu
c50d73557be96907f88b59cfbd1ab1b2fd696d41Journal of Electronic Imaging 13(3), 474–485 (July 2004).
Semiconductor sidewall shape estimation
Oak Ridge National Laboratory
Oak Ridge, Tennessee 37831-6010
('3078522', 'Philip R. Bingham', 'philip r. bingham')
('3211433', 'Jeffery R. Price', 'jeffery r. price')
('2019731', 'Kenneth W. Tobin', 'kenneth w. tobin')
('1970334', 'Thomas P. Karnowski', 'thomas p. karnowski')
E-mail: binghampr@ornl.gov
c54f9f33382f9f656ec0e97d3004df614ec56434
c574c72b5ef1759b7fd41cf19a9dcd67e5473739Zlatintsi et al. EURASIP Journal on Image and Video Processing (2017) 2017:54
DOI 10.1186/s13640-017-0194-1
EURASIP Journal on Image
and Video Processing
RESEARCH
Open Access
COGNIMUSE: a multimodal video
database annotated with saliency, events,
semantics and emotion with application to
summarization
('2641229', 'Athanasia Zlatintsi', 'athanasia zlatintsi')
('27687205', 'Niki Efthymiou', 'niki efthymiou')
('2861393', 'Katerina Pastra', 'katerina pastra')
('1791187', 'Alexandros Potamianos', 'alexandros potamianos')
('1750686', 'Petros Maragos', 'petros maragos')
('2539459', 'Petros Koutras', 'petros koutras')
('1710606', 'Georgios Evangelopoulos', 'georgios evangelopoulos')
c5a561c662fc2b195ff80d2655cc5a13a44ffd2dUsing Language to Learn Structured Appearance
Models for Image Annotation
('37894231', 'Michael Jamieson', 'michael jamieson')
('1775745', 'Afsaneh Fazly', 'afsaneh fazly')
('1792908', 'Suzanne Stevenson', 'suzanne stevenson')
('1724954', 'Sven Wachsmuth', 'sven wachsmuth')
c5fe40875358a286594b77fa23285fcfb7bda68e
c5c379a807e02cab2e57de45699ababe8d13fb6d Facial Expression Recognition Using Sparse Representation
1School of Physics and Electronic Engineering
Taizhou University
Taizhou 318000
CHINA
2Department of Computer Science
Taizhou University
Taizhou 318000
CHINA
('1695589', 'SHIQING ZHANG', 'shiqing zhang')
('1730594', 'XIAOMING ZHAO', 'xiaoming zhao')
('38909691', 'BICHENG LEI', 'bicheng lei')
tzczsq@163.com, leibicheng@163.com
tzxyzxm@163.com
c5ea084531212284ce3f1ca86a6209f0001de9d1Audio-Visual Speech Processing for
Multimedia Localisation
by
Matthew Aaron Benatan
Submitted in accordance with the requirements
for the degree of Doctor of Philosophy
The University of Leeds
School of Computing
September 2016
c5935b92bd23fd25cae20222c7c2abc9f4caa770Spatiotemporal Multiplier Networks for Video Action Recognition
Graz University of Technology
Graz University of Technology
York University, Toronto
('2322150', 'Christoph Feichtenhofer', 'christoph feichtenhofer')
('1718587', 'Axel Pinz', 'axel pinz')
('1709096', 'Richard P. Wildes', 'richard p. wildes')
feichtenhofer@tugraz.at
axel.pinz@tugraz.at
wildes@cse.yorku.ca
c5421a18583f629b49ca20577022f201692c4f5dFacial Age Classification using Subpattern-based
Approaches
Eastern Mediterranean University, Gazimağusa, Northern Cyprus
Mersin 10, Turkey
('3437942', 'Fatemeh Mirzaei', 'fatemeh mirzaei')
('2907423', 'Önsen Toygar', 'önsen toygar')
{fatemeh.mirzaei, onsen.toygar}@emu.edu.tr
c5be0feacec2860982fbbb4404cf98c654142489Semi-Qualitative Probabilistic Networks in Computer
Vision Problems
Troy, NY 12180, USA.
Troy, NY 12180, USA.
Troy, NY 12180, USA.
Troy, NY 12180, USA.
Received: ***
Revised: ***
('1680860', 'Cassio P. de Campos', 'cassio p. de campos')
('1684635', 'Lei Zhang', 'lei zhang')
('1686235', 'Yan Tong', 'yan tong')
('1726583', 'Qiang Ji', 'qiang ji')
Email: decamc@rpi.edu
Email: zhangl2@rpi.edu
Email: tongy2@rpi.edu
Email: jiq@rpi.edu
c5844de3fdf5e0069d08e235514863c8ef900eb7Lam S K et al. / (IJCSE) International Journal on Computer Science and Engineering
Vol. 02, No. 08, 2010, 2659-2665
A Study on Similarity Computations in Template
Matching Technique for Identity Verification
Lam, S. K., Yeong, C. Y., Yew, C. T., Chai, W. S., Suandi, S. A.
Intelligent Biometric Group, School of Electrical and Electronic Engineering
Engineering Campus, Universiti Sains Malaysia
14300 Nibong Tebal, Pulau Pinang, MALAYSIA
Email: shahrel@eng.usm.my
c58b7466f2855ffdcff1bebfad6b6a027b8c5ee1Ultra-Resolving Face Images by Discriminative
Generative Networks*
Australian National University
('4092561', 'Xin Yu', 'xin yu')
{xin.yu, fatih.porikli}@anu.edu.au
c590c6c171392e9f66aab1bce337470c43b48f39Emotion Recognition by Machine Learning Algorithms using
Psychophysiological Signals
1, 2, 3 BT Convergence Technology Research Department, Electronics and Telecommunications
Research Institute, 138 Gajeongno, Yuseong-gu, Daejeon, 305-700, Republic of Korea
Chungnam National University
('2329242', 'Eun-Hye Jang', 'eun-hye jang')
('1696731', 'Byoung-Jun Park', 'byoung-jun park')
('2030031', 'Sang-Hyeob Kim', 'sang-hyeob kim')
('2615387', 'Jin-Hun Sohn', 'jin-hun sohn')
cleta4u@etri.re.kr, bj_park@etri.re.kr, shk1028@etri.re.kr
Gung-dong, Yuseong-gu, Daejeon, 305-765, Republic of Korea, jhsohn@cnu.ac.kr
c5f1ae9f46dc44624591db3d5e9f90a6a8391111Application of non-negative and local non negative matrix factorization to facial
expression recognition
Dept. of Informatics
Aristotle University of Thessaloniki
GR-541 24, Thessaloniki, Box 451, Greece
('2336758', 'Ioan Buciu', 'ioan buciu')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
{nelu,pitas}@zeus.csd.auth.gr
c53352a4239568cc915ad968aff51c49924a3072Transfer Representation-Learning for Anomaly Detection
Lewis D. Griffin†
University College London, UK
University College London, UK
*Rapiscan Systems Ltd, USA
('3451382', 'Thomas Tanay', 'thomas tanay')
('13736095', 'Edward J. Morton', 'edward j. morton')
JERONE.ANDREWS@CS.UCL.AC.UK
THOMAS.TANAY.13@UCL.AC.UK
EMORTON@RAPISCANSYSTEMS.COM
L.GRIFFIN@CS.UCL.AC.UK
c2c5206f6a539b02f5d5a19bdb3a90584f7e6ba4Affective Computing: A Review
National Laboratory of Pattern Recognition (NLPR), Institute of Automation
Chinese Academy of Sciences, P.O.X. 2728, Beijing 100080
('37670752', 'Jianhua Tao', 'jianhua tao')
('1688870', 'Tieniu Tan', 'tieniu tan')
{jhtao, tnt}@nlpr.ia.ac.cn
c2fa83e8a428c03c74148d91f60468089b80c328Optimal Mean Robust Principal Component Analysis
University of Texas at Arlington, Arlington, TX
('1688370', 'Feiping Nie', 'feiping nie')
('40034801', 'Jianjun Yuan', 'jianjun yuan')
('1748032', 'Heng Huang', 'heng huang')
FEIPINGNIE@GMAIL.COM
WRIYJJ@GMAIL.COM
HENG@UTA.EDU
c2c3ff1778ed9c33c6e613417832505d33513c55Multimodal Biometric Person Authentication
Using Fingerprint, Face Features
University of Lac Hong 10 Huynh Van Nghe
DongNai 71000, Viet Nam
Ho Chi Minh City University of Science
227 Nguyen Van Cu, HoChiMinh 70000, Viet Nam
('2009230', 'Tran Binh Long', 'tran binh long')
('2710459', 'Le Hoang Thai', 'le hoang thai')
('1971778', 'Tran Hanh', 'tran hanh')
tblong@lhu.edu.vn
lhthai@fit.hcmus.edu.vn
c27f64eaf48e88758f650e38fa4e043c16580d26Title of the proposed research project: Subspace analysis using Locality Preserving
Projection and its applications for image recognition
Research area: Data manifold learning for pattern recognition
Contact Details:
University: Dhirubhai Ambani Institute of Information and Communication Technology
(DA-IICT), Gandhinagar.
('2050838', 'Gitam C Shikkenawis', 'gitam c shikkenawis')
Email Address: 201221004@daiict.ac.in
c23153aade9be0c941390909c5d1aad8924821dbEfficient and Accurate Tracking
for Face Diarization via Periodical Detection
∗École Polytechnique Fédérale de Lausanne, Switzerland
Idiap Research Institute, Martigny, Switzerland
('39560344', 'Nam Le', 'nam le')
('30790014', 'Alexander Heili', 'alexander heili')
('1719610', 'Jean-Marc Odobez', 'jean-marc odobez')
Email: { nle, aheili, dwu, odobez }@idiap.ch
c207fd762728f3da4cddcfcf8bf19669809ab284Face Alignment Using Boosting and Evolutionary
Search
College of Software Engineering, Southeast University, Nanjing 210096, China
Lab of Science and Technology, Southeast University, Nanjing 210096, China
Human Media Interaction, University of Twente, P.O. Box
7500 AE Enschede, The Netherlands
('39063774', 'Hua Zhang', 'hua zhang')
('2779570', 'Duanduan Liu', 'duanduan liu')
('1688157', 'Mannes Poel', 'mannes poel')
('1745198', 'Anton Nijholt', 'anton nijholt')
reynzhang@sina.com
liuduanduan@seu.edu.cn
{anijholt,mpoel}@cs.utwente.nl
c220f457ad0b28886f8b3ef41f012dd0236cd91aJOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015
Crystal Loss and Quality Pooling for
Unconstrained Face Verification and Recognition
('40497884', 'Rajeev Ranjan', 'rajeev ranjan')
('2068427', 'Ankan Bansal', 'ankan bansal')
('2680836', 'Hongyu Xu', 'hongyu xu')
('2716670', 'Swami Sankaranarayanan', 'swami sankaranarayanan')
('36407236', 'Jun-Cheng Chen', 'jun-cheng chen')
('9215658', 'Rama Chellappa', 'rama chellappa')
c254b4c0f6d5a5a45680eb3742907ec93c3a222bA Fusion-based Gender Recognition Method
Using Facial Images
('24033665', 'Benyamin Ghojogh', 'benyamin ghojogh')
('1779028', 'Saeed Bagheri Shouraki', 'saeed bagheri shouraki')
('1782221', 'Hoda Mohammadzade', 'hoda mohammadzade')
('22395643', 'Ensieh Iranmehr', 'ensieh iranmehr')
c2e03efd8c5217188ab685e73cc2e52c54835d1aDeep Tree-structured Face: A Unified Representation for Multi-task Facial
Biometrics
Department of Electrical Engineering and Computer Science
University of Tennessee, Knoxville
('1691576', 'Rui Guo', 'rui guo')
('9120475', 'Liu Liu', 'liu liu')
('40560485', 'Wei Wang', 'wei wang')
('2885826', 'Ali Taalimi', 'ali taalimi')
('1690083', 'Chi Zhang', 'chi zhang')
('1698645', 'Hairong Qi', 'hairong qi')
{rguo1, lliu25, wwang34, ataalimi, czhang24, hqi} @utk.edu
c28461e266fe0f03c0f9a9525a266aa3050229f0Automatic Detection of Facial Feature Points via
HOGs and Geometric Prior Models
1 Computer Vision Center , Universitat Aut`onoma de Barcelona
2 Universitat Oberta de Catalunya
3 Dept. de Matem`atica Aplicada i An`alisi
Universitat de Barcelona
('1863902', 'David Masip', 'david masip')
mrojas@cvc.uab.es, dmasipr@uoc.edu, jordi.vitria@ub.edu
c29e33fbd078d9a8ab7adbc74b03d4f830714cd0
c2e6daebb95c9dfc741af67464c98f1039127627
MVA2013 IAPR International Conference on Machine Vision Applications, May 20-23, 2013, Kyoto, JAPAN
Efficient Measuring of Facial Action Unit Activation Intensities
using Active Appearance Models
Computer Vision Group, Friedrich Schiller University of Jena, Germany
University Hospital Jena, Germany
('1708249', 'Daniel Haase', 'daniel haase')
('8993584', 'Michael Kemmler', 'michael kemmler')
('1814631', 'Orlando Guntinas-Lichius', 'orlando guntinas-lichius')
('1728382', 'Joachim Denzler', 'joachim denzler')
f60a85bd35fa85739d712f4c93ea80d31aa7de07VisDA: The Visual Domain Adaptation Challenge
Boston University
EECS, University of California Berkeley
('2960713', 'Xingchao Peng', 'xingchao peng')
('39058756', 'Ben Usman', 'ben usman')
('34836903', 'Neela Kaushik', 'neela kaushik')
('50196944', 'Judy Hoffman', 'judy hoffman')
('2774612', 'Dequan Wang', 'dequan wang')
('2903226', 'Kate Saenko', 'kate saenko')
{xpeng,usmn,nkaushik,saenko}@bu.edu, {jhoffman,dqwang}@eecs.berkeley.edu
f6f06be05981689b94809130e251f9e4bf932660An Approach to Illumination and Expression Invariant
Multiple Classifier Face Recognition
International Journal of Computer Applications (0975 – 8887)
Volume 91 – No.15, April 2014
Dalton Meitei Thounaojam
National Institute of Technology
Silchar
Assam: 788010
India
National Institute of Technology
Silchar
Assam: 788010
India
Romesh Laishram
Manipur Institute of Technology
Imphal West: 795001
India
f68ed499e9d41f9c3d16d843db75dc12833d988d
f6742010372210d06e531e7df7df9c01a185e241Dimensional Affect and Expression in
Natural and Mediated Interaction
Ritsumeikan, University
Kyoto, Japan
October, 2007
('1709339', 'Michael J. Lyons', 'michael j. lyons')
lyons@im.ritsumei.ac.jp
f69de2b6770f0a8de6d3ec1a65cb7996b3c99317Research Journal of Applied Sciences, Engineering and Technology 8(22): 2265-2271, 2014
ISSN: 2040-7459; e-ISSN: 2040-7467
© Maxwell Scientific Organization, 2014
Submitted: September 13, 2014
Accepted: September 20, 2014
Published: December 15, 2014
Face Recognition System Based on Sparse Codeword Analysis
St. Joseph's College of Engineering, Old Mamallapuram Road, Kamaraj Nagar, Semmencherry, Chennai
Anna University, Chennai
Tamil Nadu 600119, India
('2508896', 'P. Geetha', 'p. geetha')
('40574934', 'Vasumathi Narayanan', 'vasumathi narayanan')
f6ca29516cce3fa346673a2aec550d8e671929a6International Journal of Engineering and Advanced Technology (IJEAT)
ISSN: 2249 – 8958, Volume-2, Issue-4, April 2013
Algorithm for Face Matching Using Normalized
Cross-Correlation
('2426695', 'C. Saravanan', 'c. saravanan')
('14289238', 'M. Surender', 'm. surender')
f67a73c9dd1e05bfc51219e70536dbb49158f7bcJournal of Computer Science 10 (11): 2292-2298, 2014
ISSN: 1549-3636
© 2014 Nithyashri and Kulanthaivel. This open access article is distributed under a Creative Commons
Attribution (CC-BY) 3.0 license
A GAUSSIAN MIXTURE MODEL FOR CLASSIFYING THE
HUMAN AGE USING DWT AND SAMMON MAP
Sathyabama University, Chennai, India
2Department of Electronics Engineering, NITTTR, Chennai, India
Received 2014-05-08; Revised 2014-05-23; Accepted 2014-11-28
('9513864', 'J. Nithyashri', 'j. nithyashri')
('5014650', 'G. Kulanthaivel', 'g. kulanthaivel')
f6c70635241968a6d5fd5e03cde6907022091d64
f6149fc5b39fa6b33220ccee32a8ee3f6bbcaf4aSyn2Real: A New Benchmark for
Synthetic-to-Real Visual Domain Adaptation
Boston University1, University of Tokyo
University of California Berkeley
('2960713', 'Xingchao Peng', 'xingchao peng')
('39058756', 'Ben Usman', 'ben usman')
('8915348', 'Kuniaki Saito', 'kuniaki saito')
('34836903', 'Neela Kaushik', 'neela kaushik')
('2903226', 'Kate Saenko', 'kate saenko')
f66f3d1e6e33cb9e9b3315d3374cd5f121144213The Journal of Neuroscience, October 30, 2013 • 33(44):17435–17443
Behavioral/Cognitive
Top-Down Control of Visual Responses to Fear by the
Amygdala
1Medical Research Council Cognition and Brain Sciences Unit, Cambridge CB2 7EF, United Kingdom, and 2Wellcome Centre for Imaging Neuroscience,
University College London, London WC1N 3BG, United Kingdom
The visual cortex is sensitive to emotional stimuli. This sensitivity is typically assumed to arise when amygdala modulates visual cortex
via backwards connections. Using human fMRI, we compared dynamic causal connectivity models of sensitivity with fearful faces. This
model comparison tested whether amygdala modulates distinct cortical areas, depending on dynamic or static face presentation. The
ventral temporal fusiform face area showed sensitivity to fearful expressions in static faces. However, for dynamic faces, we found fear
sensitivity in dorsal motion-sensitive areas within hMT+/V5 and superior temporal sulcus. The model with the greatest evidence
included connections modulated by dynamic and static fear from amygdala to dorsal and ventral temporal areas, respectively. According
to this functional architecture, amygdala could enhance encoding of fearful expression movements from video and the form of fearful
expressions from static images. The amygdala may therefore optimize visual encoding of socially charged and salient information.
Introduction
Emotional images enhance responses in visual areas, an effect
typically observed in the fusiform gyrus for static fearful faces and
ascribed to backwards connections from amygdala (Morris et al.,
1998; Vuilleumier and Pourtois, 2007). Although support for
amygdala influence comes from structural connectivity (Amaral
and Price, 1984; Catani et al., 2003), functional connectivity
(Morris et al., 1998; Foley et al., 2012), and path analysis (Lim et
al., 2009), directed connectivity measures and formal model
comparison are still needed to show that backwards connections
from amygdala are more likely than other architectures to gener-
ate cortical emotion sensitivity.
Moreover, it is surprising that the putative amygdala feedback
would enhance fusiform cortex responses. According to the pre-
vailing view, a face-selective area in fusiform cortex, the fusiform
face area (FFA), is associated with processing facial identity,
whereas dorsal temporal regions, particularly in the superior
temporal sulcus (STS), are associated with processing facial ex-
pression (Haxby et al., 2000). An alternative position is that fusi-
form and STS areas both contribute to facial expression
processing but contribute to encoding structural forms and dy-
namic features, respectively (Calder and Young, 2005; Calder,
2011). In this case, static fearful expressions may enhance FFA
Received July 11, 2013; revised Sept. 7, 2013; accepted Sept. 12, 2013.
Author contributions: N.F., R.N.H., K.J.F., and A.J.C. designed research; N.F. performed research; N.F. analyzed
data; N.F., R.N.H., K.J.F., and A.J.C. wrote the paper.
This work was supported by the United Kingdom Economic and Social Research Council Grant RES-062-23-2925
to N.F. and the Medical Research Council Grant MC_US_A060_5PQ50 to A.J.C. and Grant MC_US_A060_0046 to
R.N.H. We thank Christopher Fox for supplying the dynamic object stimuli and James Rowe and Francesca Carota for
contributing useful comments.
The authors declare no competing financial interests.
DOI:10.1523/JNEUROSCI.2992-13.2013
Copyright © 2013 the authors
encoding of structural cues associated with emotional expres-
sion. We therefore characterized the conditions under which
amygdala mediates fear sensitivity in fusiform cortex, compared
with dorsal temporal areas (Sabatinelli et al., 2011).
We asked whether dynamic and static fearful expressions en-
hance responses in dorsal temporal and ventral fusiform areas, re-
spectively. One dorsal temporal area, hMT+/V5, is sensitive to low-
level and facial motion and may be homologous to the middle
temporal (MT), medial superior temporal (MST), and fundus of the
superior temporal (FST) areas in the macaque (Kolster et al., 2010).
Another dorsal area, the posterior STS, is responsive generally to
biological motion (Giese and Poggio, 2003). Compared with dorsal
areas, the fusiform gyrus shows less sensitivity to facial motion
(Schultz and Pilz, 2009; Trautmann et al., 2009; Pitcher et al., 2011;
Foley et al., 2012; Schultz et al., 2012). Despite its association with
facial identity processing, many studies have shown that FFA con-
tributes to processing facial expressions (Ganel et al., 2005; Fox et al.,
2009b; Cohen Kadosh et al., 2010; Harris et al., 2012) and may have
a general role in processing facial form (O’Toole et al., 2002; Calder,
2011). Sensitivity to static fearful expressions in the FFA may reflect
this role in processing static form. If so, then dynamic fearful expres-
sions may evoke fear sensitivity in dorsal temporal areas instead,
reflecting the role of these areas to processing motion.
Our fMRI results confirmed our hypothesis that dorsal
motion-sensitive areas showed fear sensitivity for dynamic facial
expressions, whereas the FFA showed fear sensitivity for static
expressions. To explore connectivity mechanisms that mediate
fear sensitivity, we used dynamic causal modeling (DCM) to ex-
plore 508 plausible connectivity architectures. Our Bayesian
model comparison identified the most likely model, which
showed that dynamic and static fear modulated connections
from amygdala to dorsal or ventral areas, respectively. Amygdala
therefore may control how behaviorally relevant information is
visually coded in a context-sensitive fashion.
('3162581', 'Nicholas Furl', 'nicholas furl')
('3162581', 'Nicholas Furl', 'nicholas furl')
Unit, 15 Chaucer Road, Cambridge, CB2 7EF, United Kingdom. E-mail: nick.furl@mrc-cbu.cam.ac.uk.
f6ce34d6e4e445cc2c8a9b8ba624e971dd4144caCross-label Suppression: A Discriminative and Fast
Dictionary Learning with Group Regularization
April 24, 2017
('9293691', 'Xiudong Wang', 'xiudong wang')
('2080215', 'Yuantao Gu', 'yuantao gu')
f6abecc1f48f6ec6eede4143af33cc936f14d0d0
f61d5f2a082c65d5330f21b6f36312cc4fab8a3bMulti-Level Variational Autoencoder:
Learning Disentangled Representations from
Grouped Observations
OVAL Group
University of Oxford
Machine Intelligence and Perception Group
Microsoft Research
Cambridge, UK
('3365029', 'Diane Bouchacourt', 'diane bouchacourt')
('2870603', 'Ryota Tomioka', 'ryota tomioka')
('2388416', 'Sebastian Nowozin', 'sebastian nowozin')
diane@robots.ox.ac.uk
{ryoto,Sebastian.Nowozin}@microsoft.com
f6fa97fbfa07691bc9ff28caf93d0998a767a5c1k2-means for fast and accurate large scale clustering
Computer Vision Lab
D-ITET
ETH Zurich
Computer Vision Lab
D-ITET
ETH Zurich
ESAT, KU Leuven
D-ITET, ETH Zurich
('2794259', 'Eirikur Agustsson', 'eirikur agustsson')
('1732855', 'Radu Timofte', 'radu timofte')
('1681236', 'Luc Van Gool', 'luc van gool')
aeirikur@vision.ee.ethz.ch
timofter@vision.ee.ethz.ch
vangool@vision.ee.ethz.ch
f6cf2108ec9d0f59124454d88045173aa328bd2eRobust user identification based on facial action units
unaffected by users’ emotions
Aalen University, Germany
('3114281', 'Ricardo Buettner', 'ricardo buettner')
ricardo.buettner@hs-aalen.de
f68f20868a6c46c2150ca70f412dc4b53e6a03c2
Differential Evolution to Optimize
Hidden Markov Models Training:
Application to Facial Expression
Recognition
Arsène Simbabawe
MISC Laboratory, Constantine 2 University, Constantine, Algeria
The base system in this paper uses Hidden Markov
Models (HMMs) to model dynamic relationships among
facial features in facial behavior interpretation and un-
derstanding field. The input of HMMs is a new set
of derived features from geometrical distances obtained
from detected and automatically tracked facial points.
Numerical data representation which is in the form of
multi-time series is transformed to a symbolic repre-
sentation in order to reduce dimensionality, extract the
most pertinent information and give a meaningful representation
to humans. The main problem with HMMs is that training is
generally trapped in local minima, so we used the Differential
Evolution (DE) algorithm to introduce more diversity and thereby
limit stagnation as much as possible. For this reason, this
paper proposes to enhance HMM learning by using DE as an
optimization tool in place of the classical Baum-Welch
algorithm. The obtained results are compared against the
traditional learning approach and show significant improvements.
Keywords: facial expressions, occurrence order, Hidden
Markov Model, Baum-Welch, optimization, differential
evolution
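The DE-based training scheme summarized in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a discrete-observation HMM scored by the forward log-likelihood, a row-wise softmax reparameterization of my own choosing (so every DE candidate decodes to valid stochastic matrices), and a standard DE/rand/1/bin loop with illustrative hyperparameters (`pop_size`, `gens`, `F`, `CR`); all function names are hypothetical.

```python
import math
import random

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | pi, A, B) for a discrete HMM."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    loglik = 0.0
    for t in range(1, len(obs) + 1):
        s = sum(alpha)
        if s == 0.0:
            return float("-inf")
        loglik += math.log(s)
        alpha = [a / s for a in alpha]          # rescale to avoid underflow
        if t < len(obs):                        # propagate to the next symbol
            alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][obs[t]]
                     for i in range(n)]
    return loglik

def decode(vec, n_states, n_symbols):
    """Map an unconstrained real vector to stochastic (pi, A, B) via row-wise softmax."""
    def softmax(row):
        m = max(row)
        e = [math.exp(x - m) for x in row]
        s = sum(e)
        return [x / s for x in e]
    pi = softmax(vec[:n_states])
    A = [softmax(vec[n_states + i * n_states:n_states + (i + 1) * n_states])
         for i in range(n_states)]
    off = n_states + n_states * n_states
    B = [softmax(vec[off + i * n_symbols:off + (i + 1) * n_symbols])
         for i in range(n_states)]
    return pi, A, B

def de_train_hmm(sequences, n_states, n_symbols,
                 pop_size=20, gens=60, F=0.5, CR=0.9, seed=0):
    """DE/rand/1/bin search maximizing the total forward log-likelihood."""
    rng = random.Random(seed)
    dim = n_states + n_states * n_states + n_states * n_symbols
    def fitness(vec):
        pi, A, B = decode(vec, n_states, n_symbols)
        return sum(forward_loglik(obs, pi, A, B) for obs in sequences)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    fit = [fitness(v) for v in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: three distinct individuals other than the target
            a, b, c = rng.sample([k for k in range(pop_size) if k != i], 3)
            j_rand = rng.randrange(dim)
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == j_rand) else pop[i][j]
                     for j in range(dim)]
            f = fitness(trial)
            if f >= fit[i]:                     # greedy selection (maximization)
                pop[i], fit[i] = trial, f
    best = max(range(pop_size), key=lambda k: fit[k])
    return decode(pop[best], n_states, n_symbols), fit[best]
```

Because the population-based search evaluates many candidate parameter sets in parallel, it is less prone to the single-trajectory stagnation of Baum-Welch, at the cost of more likelihood evaluations per iteration.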
1. Introduction
Analyzing the dynamics of facial features and
(or) the changes in the appearance of facial fea-
tures (eyes, eyebrows and mouth) is a very im-
portant step in facial expression understanding
and interpretation. Many researchers attempt to
study the dynamic facial behavior. Timing, du-
ration, speed and occurrence order of face/body
actions are crucial parameters related to dynamic
behavior (Ekman & Rosenberg, 2005).
For instance, facial expression temporal dynam-
ics are essential for recognition of either full ex-
pressions (Kotsia & Pitas, 2007; Littlewort &
al, 2006), or components of expressions such
as facial Action Units (AUs) (Pantic & Patras,
2006; Valstar & Pantic, 2007). They are essen-
tial for categorization of complex psychologi-
cal states like various types of pain and mood
(Williams, 2002) and are highly important cues
for distinguishing posed from spontaneous fa-
cial expressions (Cohn & Schmidt, 2004; Val-
star & al, 2006). Timing, duration and speed
have been analyzed in several studies (Cohn &
Schmidt, 2004; Valstar & al, 2006; Valstar & al
2007). However, little attention has been given
to occurrence order (Valstar & al, 2006; Valstar
& al 2007).
Several efforts have been recently reported on the automatic analysis of facial expression data (Zeng et al., 2009; Sandbach et al., 2012; Gunes et al.); it appears that most recent methods employ probabilistic (Hidden Markov Models, Dynamic Bayesian Networks), statistical (Support Vector Machines) and ensemble learning (GentleBoost) techniques, which seem particularly suitable for automatic facial expression recognition from face image sequences. Because we want to exploit the temporal dynamics of expressions, HMMs (Koelstra et al., 2010; Cohen et al., 2003) and DBNs (Tong et al., 2007; Tong et al., 2010) can be used.
The work presented in this paper is part of a project which aims to construct “An Optimal
('2654160', 'Khadoudja Ghanem', 'khadoudja ghanem')
('1749675', 'Amer Draa', 'amer draa')
('2483552', 'Elvis Vyumvuhore', 'elvis vyumvuhore')
f6e00d6430cbbaa64789d826d093f7f3e323b082Visual Object Recognition
University of Texas at Austin
RWTH Aachen University
SYNTHESIS LECTURES ON COMPUTER VISION #1
('1794409', 'Kristen Grauman', 'kristen grauman')
('1789756', 'Bastian Leibe', 'bastian leibe')
e9a5a38e7da3f0aa5d21499149536199f2e0e1f7Article
A Bayesian Scene-Prior-Based Deep Network Model
for Face Verification
North China University of Technology
Curtin University, Perth, WA 6102, Australia
† These authors contributed equally to this work.
Received: 12 May 2018; Accepted: 8 June 2018; Published: 11 June 2018
('2104779', 'Huafeng Wang', 'huafeng wang')
('2239474', 'Haixia Pan', 'haixia pan')
('3229158', 'Wenfeng Song', 'wenfeng song')
('1713220', 'Wanquan Liu', 'wanquan liu')
('47311804', 'Ning Song', 'ning song')
('2361868', 'Yuehai Wang', 'yuehai wang')
Beijing 100144, China; wangyuehai@ncut.edu.cn
2 Department of Software, Beihang University, Beijing 100191, China; swfbuaa@163.com
* Correspondence: wanghuafeng@ncut.edu.cn (H.W.); W.Liu@curtin.edu.au (W.L.); zy1621125@buaa.edu.cn
(N.S.); haixiapan@buaa.edu.cn (H.P.); Tel.: +86-189-1192-4121 (H.W.)
e9ed17fd8bf1f3d343198e206a4a7e0561ad7e66International Journal of Enhanced Research in Science Technology & Engineering, ISSN: 2319-7463
Vol. 3 Issue 1, January-2014, pp: (362-365), Impact Factor: 1.252, Available online at: www.erpublications.com
Cognitive Learning for Social Robot through
Facial Expression from Video Input
1Department of Automation & Robotics, 2Department of Computer Science & Engg.
('26944751', 'Neeraj Rai', 'neeraj rai')
('2586264', 'Deepak Rai', 'deepak rai')
('26477055', 'Ajay Kumar Garg', 'ajay kumar garg')
e988be047b28ba3b2f1e4cdba3e8c94026139fcfMulti-Task Convolutional Neural Network for
Pose-Invariant Face Recognition
('2399004', 'Xi Yin', 'xi yin')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
e9d43231a403b4409633594fa6ccc518f035a135Deformable Part Models with CNN Features
Kokkinos
1École Centrale Paris, 2INRIA, 3TTI-Chicago
('2381485', 'Stavros Tsogkas', 'stavros tsogkas')
('2776496', 'George Papandreou', 'george papandreou')
e90e12e77cab78ba8f8f657db2bf4ae3dabd5166Nonconvex Sparse Spectral Clustering by Alternating Direction Method of
Multipliers and Its Convergence Analysis
National University of Singapore
Key Laboratory of Machine Perception (MOE), School of EECS, Peking University
Cooperative Medianet Innovation Center, Shanghai Jiao Tong University
AI Institute
('33224509', 'Canyi Lu', 'canyi lu')
('33221685', 'Jiashi Feng', 'jiashi feng')
('33383055', 'Zhouchen Lin', 'zhouchen lin')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
canyilu@gmail.com, elefjia@nus.edu.sg, zlin@pku.edu.cn, eleyans@nus.edu.sg
e9c008d31da38d9eef67a28d2c77cb7daec941fbNoisy Softmax: Improving the Generalization Ability of DCNN via Postponing
the Early Softmax Saturation
School of Information and Communication Engineering, Beijing University of Posts and Telecommunications
School of Computer Science, Beijing University of Posts and Telecommunications, Beijing China
('3450321', 'Binghui Chen', 'binghui chen')
('1774956', 'Weihong Deng', 'weihong deng')
('8491162', 'Junping Du', 'junping du')
chenbinghui@bupt.edu.cn, whdeng@bupt.edu.cn, junpingd@bupt.edu.cn
e9e40e588f8e6510fa5537e0c9e083ceed5d07adFast Face Detection Using Graphics Processor
National Institute of Technology Karnataka
Surathkal, India
('36598334', 'K.Vinay Kumar', 'k.vinay kumar')
e9bb045e702ee38e566ce46cc1312ed25cb59ea7Integrating Geometric and Textural Features for
Facial Emotion Classification using SVM
Frameworks
1 Department of Computer Science and Engineering,
Indian Institute of Technology, Roorkee
2 Department of Electronics and Electrical Communication Engineering,
Indian Institute of Technology, Kharagpur
('19200118', 'Samyak Datta', 'samyak datta')
('3165117', 'Debashis Sen', 'debashis sen')
('1726184', 'R. Balasubramanian', 'r. balasubramanian')
e9fcd15bcb0f65565138dda292e0c71ef25ea8bbRepositorio Institucional de la Universidad Autónoma de Madrid
https://repositorio.uam.es
Esta es la versión de autor de la comunicación de congreso publicada en:
This is an author produced version of a paper published in:
Highlights on Practical Applications of Agents and Multi-Agent Systems:
International Workshops of PAAMS. Communications in Computer and
Information Science, Volumen 365. Springer, 2013. 223-230
DOI: http://dx.doi.org/10.1007/978-3-642-38061-7_22
Copyright: © 2013 Springer-Verlag
El acceso a la versión del editor puede requerir la suscripción del recurso
Access to the published version may require subscription
e9f1cdd9ea95810efed306a338de9e0de25990a0FEPS: An Easy-to-Learn Sensory Substitution System to
Perceive Facial Expressions
Electrical and Computer Engineering
University of Memphis
Memphis, TN 38152, USA
('2497319', 'M. Iftekhar Tanveer', 'm. iftekhar tanveer')
('2464507', 'Sreya Ghosh', 'sreya ghosh')
('33019079', 'A.K.M. Mahbubur Rahman', 'a.k.m. mahbubur rahman')
('1828610', 'Mohammed Yeasin', 'mohammed yeasin')
{mtanveer,aanam,sghosh,arahman,myeasin}@memphis.edu
e9363f4368b04aeaa6d6617db0a574844fc59338BENCHIP: Benchmarking Intelligence
Processors
1ICT CAS,2Cambricon,3Alibaba Infrastructure Service, Alibaba Group
4IFLYTEK,5JD,6RDA Microelectronics,7AMD
('2631042', 'Jinhua Tao', 'jinhua tao')
('1678776', 'Zidong Du', 'zidong du')
('50770616', 'Qi Guo', 'qi guo')
('4304175', 'Huiying Lan', 'huiying lan')
('48571185', 'Lei Zhang', 'lei zhang')
('7523063', 'Shengyuan Zhou', 'shengyuan zhou')
('49046597', 'Cong Liu', 'cong liu')
('49343896', 'Shan Tang', 'shan tang')
('38253244', 'Allen Rush', 'allen rush')
('47482936', 'Willian Chen', 'willian chen')
('39419985', 'Shaoli Liu', 'shaoli liu')
('7377735', 'Yunji Chen', 'yunji chen')
('7934735', 'Tianshi Chen', 'tianshi chen')
f1250900074689061196d876f551ba590fc0a064Learning to Recognize Actions from Limited Training
Examples Using a Recurrent Spiking Neural Model
School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
2Intel Labs, Hillsboro, OR, USA 97124
('9352814', 'Priyadarshini Panda', 'priyadarshini panda')
('1753812', 'Narayan Srinivasa', 'narayan srinivasa')
*Correspondence: narayan.srinivasa@intel.com
f1b4583c576d6d8c661b4b2c82bdebf3ba3d7e53Faster Than Real-time Facial Alignment: A 3D Spatial Transformer Network
Approach in Unconstrained Poses
Carnegie Mellon University
Pittsburgh, PA
('47894545', 'Chenchen Zhu', 'chenchen zhu')
('1769788', 'Khoa Luu', 'khoa luu')
('1794486', 'Marios Savvides', 'marios savvides')
cbhagava@andrew.cmu.edu, zcckernel@cmu.edu, kluu@andrew.cmu.edu, msavvid@ri.cmu.edu
f16a605abb5857c39a10709bd9f9d14cdaa7918fFast greyscale road sign model matching
and recognition
Centre de Visió per Computador
Edifici O – Campus UAB, 08193 Bellaterra, Barcelona, Catalonia, Spain
('7855312', 'Sergio Escalera', 'sergio escalera')
('1724155', 'Petia Radeva', 'petia radeva')
{sescalera,petia}@cvc.uab.es
f1aa120fb720f6cfaab13aea4b8379275e6d40a2InverseFaceNet: Deep Single-Shot Inverse Face Rendering From A Single Image
Max-Planck-Institute for Informatics
University of Erlangen-Nuremberg 3 University of Bath
Figure 1. Our single-shot deep inverse face renderer InverseFaceNet obtains a high-quality geometry, reflectance and illumination estimate
from just a single input image. We jointly recover the face pose, shape, expression, reflectance and incident scene illumination. From left to
right: input photo, our estimated face model, its geometry, and the pointwise Euclidean error compared to Garrido et al. [14].
('3022958', 'Hyeongwoo Kim', 'hyeongwoo kim')
('34105638', 'Justus Thies', 'justus thies')
('1699058', 'Michael Zollhöfer', 'michael zollhöfer')
('1819028', 'Christian Richardt', 'christian richardt')
('1680185', 'Christian Theobalt', 'christian theobalt')
('9102722', 'Ayush Tewari', 'ayush tewari')
f1748303cc02424704b3a35595610890229567f9
f1ba2fe3491c715ded9677862fea966b32ca81f0ISSN: 2321-7782 (Online)
Volume 1, Issue 7, December 2013
International Journal of Advance Research in
Computer Science and Management Studies
Research Paper
Available online at: www.ijarcsms.com
Face Tracking and Recognition in Videos:
HMM Vs KNN
Assistant Professor
Department of Computer Engineering
MIT College of Engineering (Pune University)
Pune - India
f1d090fcea63d9f9e835c49352a3cd576ec899c1Iosifidis, A., Tefas, A., & Pitas, I. (2015). Single-Hidden Layer Feedforward Neural Network Training Using Class Geometric Information. In Computational Intelligence: International Joint Conference, IJCCI 2014, Rome, Italy, October 22-24, 2014, Revised Selected Papers (Vol. III, pp. 351-364). Studies in Computational Intelligence, Vol. 620. Springer. DOI:
10.1007/978-3-319-26393-9_21
Peer reviewed version
Link to published version (if available):
10.1007/978-3-319-26393-9_21
Link to publication record in Explore Bristol Research
PDF-document
University of Bristol - Explore Bristol Research
General rights
This document is made available in accordance with publisher policies. Please cite only the published
version using the reference above. Full terms of use are available:
http://www.bristol.ac.uk/pure/about/ebr-terms.html
('1685469', 'A. Rosa', 'a. rosa')
('9246794', 'J. M. Cadenas', 'j. m. cadenas')
('2092535', 'A. Dourado', 'a. dourado')
('39545211', 'K. Madani', 'k. madani')
f113aed343bcac1021dc3e57ba6cc0647a8f5ce1International Journal of Science and Research (IJSR)
ISSN (Online): 2319-7064
Index Copernicus Value (2013): 6.14 | Impact Factor (2014): 5.611
A Survey on Mining of Weakly Labeled Web Facial
Images and Annotation
Pune Institute of Computer Technology, Pune, India
Pune Institute of Computer Technology, Pune, India
f19777e37321f79e34462fc4c416bd56772031bfInternational Journal of Scientific & Engineering Research, Volume 3, Issue 6, June-2012 1
ISSN 2229-5518
Literature Review of Image Compression Algorithm
Dr. B. Chandrasekhar
Padmaja.V.K
Jawaharlal Technological University, Anantapur
email: padmaja_vk@yahoo.co.in email: drchandrasekhar@gmail.com
f19ab817dd1ef64ee94e94689b0daae0f686e849TECHNISCHE UNIVERSITÄT MÜNCHEN
Lehrstuhl für Mensch-Maschine-Kommunikation
Blickrichtungsunabhängige Erkennung von Personen in Bild- und Tiefendaten
(View-direction-independent recognition of persons in image and depth data)
Andre Störmer
Full reprint of the dissertation approved by the Faculty of Electrical Engineering and Information Technology of the Technische Universität München for the award of the academic degree of Doktor-Ingenieur (Dr.-Ing.).
Chair:
Univ.-Prof. Dr.-Ing. Thomas Eibert
Examiners of the dissertation:
1. Univ.-Prof. Dr.-Ing. habil. Gerhard Rigoll
2. Univ.-Prof. Dr.-Ing. Horst-Michael Groß, Technische Universität Ilmenau
The dissertation was submitted to the Technische Universität München on 16.06.2009 and accepted by the Faculty of Electrical Engineering and Information Technology on 30.10.2009.
e76798bddd0f12ae03de26b7c7743c008d505215
e7cac91da51b78eb4a28e194d3f599f95742e2a2RESEARCH ARTICLE
Positive Feeling, Negative Meaning:
Visualizing the Mental Representations of In-
Group and Out-Group Smiles
Saarland University, Saarbrücken, Germany, 2 Utrecht University, Utrecht, the Netherlands
Behavioural Science Institute, Radboud University, Nijmegen, the Netherlands
☯ These authors contributed equally to this work.
('34533048', 'Andrea Paulus', 'andrea paulus')
('40358273', 'Michaela Rohr', 'michaela rohr')
('2365875', 'Ron Dotsch', 'ron dotsch')
('3905267', 'Dirk Wentura', 'dirk wentura')
* a.paulus@mx.uni-saarland.de
e793f8644c94b81b7a0f89395937a7f8ad428a89LPM for Action Recognition in Temporally
Untrimmed Videos
School of Electrical Engineering and Computer Science
University of Ottawa, Ottawa, On, Canada
('36047295', 'Feng Shi', 'feng shi')
('1745632', 'Emil Petriu', 'emil petriu')
{fshi098, laganier, petriu}@site.uottawa.ca
e726174d516605f80ff359e71f68b6e8e6ec6d5dInstitute of Information Science
Beijing Jiaotong University
Beijing, 100044 P.R. China
A novel Patched Locality Preserving Projections method for 3D face recognition is presented in this paper. Each image is first divided into patches to capture spatial information, and a Gabor filter is then used to extract the intrinsic discriminative information embedded in each patch. Finally, Locality Preserving Projections, improved by Principal Component Analysis, is applied to the corresponding patches to obtain locality-preserving information. The feature vector is constructed by concatenating all these projections, and recognition is performed with a Nearest Neighbor classifier. The novelty of this paper is twofold: (1) the method is robust to changes in facial expression and pose, because Gabor filters contribute their useful properties, such as invariance to rotation, scale and translation, to the feature extraction; (2) the method preserves not only spatial information but also the locality information of the corresponding patches. Experiments demonstrate the efficiency and effectiveness of the new method: the results show that the new algorithm outperforms other popular approaches reported in the literature and achieves a much higher recognition rate.
Keywords: 3D face recognition, Gabor filters, locality preserving projections, principal components analysis, nearest neighbor
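The pipeline described above (patching, Gabor filtering, PCA, LPP, Nearest Neighbor) can be sketched end-to-end on toy data. Everything below is illustrative rather than the paper's setup: the synthetic 16x16 "range images", the single Gabor orientation, the 8x8 patch grid and all dimensionalities are assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import convolve2d

rng = np.random.default_rng(1)

def gabor_kernel(size=9, theta=0.0, sigma=2.0, lam=4.0):
    """A single real Gabor kernel (one orientation, one scale)."""
    half = size // 2
    y, x = np.mgrid[-half:half+1, -half:half+1]
    xr = x*np.cos(theta) + y*np.sin(theta)
    return np.exp(-(x**2 + y**2)/(2*sigma**2)) * np.cos(2*np.pi*xr/lam)

def lpp(X, k=4, d=5):
    """Locality Preserving Projections on the rows of X (run PCA first)."""
    n = len(X)
    dist2 = ((X[:, None] - X[None])**2).sum(-1)
    t = dist2[dist2 > 0].mean()                     # heat-kernel width
    W = np.zeros((n, n))
    for i in range(n):                              # symmetric k-NN graph
        for j in np.argsort(dist2[i])[1:k+1]:
            W[i, j] = W[j, i] = np.exp(-dist2[i, j] / t)
    Dg = np.diag(W.sum(1))
    L = Dg - W                                      # graph Laplacian
    # minimize a' X' L X a  subject to  a' X' Dg X a = 1
    evals, evecs = eigh(X.T @ L @ X, X.T @ Dg @ X + 1e-6*np.eye(X.shape[1]))
    return evecs[:, :d]                             # smallest eigenvalues first

# Toy stand-ins for 16x16 range images, two classes.
def make_image(cls):
    base = np.outer(np.sin(np.linspace(0, 3+cls, 16)), np.cos(np.linspace(0, 2+cls, 16)))
    return base + 0.1 * rng.normal(size=(16, 16))

images = [make_image(c) for c in (0, 1) for _ in range(10)]
labels = np.array([0]*10 + [1]*10)

# Patch each image, Gabor-filter each patch, concatenate responses.
g = gabor_kernel()
X = np.array([np.concatenate([convolve2d(p, g, mode='same').ravel()
                              for p in (im[:8, :8], im[:8, 8:], im[8:, :8], im[8:, 8:])])
              for im in images])

# PCA (via SVD) to stabilize LPP's generalized eigenproblem.
Xc = X - X.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Xp = Xc @ Vt[:15].T

Y = Xp @ lpp(Xp)                                    # locality-preserving features

# Leave-one-out 1-NN on the projected features.
correct = 0
for i in range(len(Y)):
    d2 = ((Y - Y[i])**2).sum(1); d2[i] = np.inf
    correct += labels[d2.argmin()] == labels[i]
print("1-NN accuracy:", correct / len(Y))
```

The PCA step matters: without it the matrix X' Dg X in the generalized eigenproblem is singular whenever there are more features than samples, which is the usual case for concatenated patch responses.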
JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 26, 2297-2307 (2010)
Short Paper
3D Face Recognition Using Patched Locality Preserving Projections*
1. INTRODUCTION
Face recognition is a very challenging subject. So far, studies in 2D face recognition have achieved significant progress, with methods such as Principal Component Analysis (PCA) [1], Linear Discriminant Analysis (LDA) [2] and Independent Component Analysis (ICA) [3]. But 2D recognition still suffers from limitations, mostly due to pose variation, illumination and facial expression. 3D face recognition stands out through its use of face depth information, which can overcome such limitations. Recently, with the development of 3D acquisition systems, 3D face recognition has attracted more and more interest and a great deal of research effort has been devoted to this topic [4-7].
Many methods have been proposed for 3D face recognition over the last two decades. Beumier et al. [8] proposed two methods of surface matching, in which central and lateral profiles were compared in curvature space to achieve recognition. However, the me-
Received October 19, 2009; revised January 8, 2010; accepted March 5, 2010.
Communicated by Tyng-Luh Liu.
* This work was also partially supported by the National Natural Science Foundation of China under Grant No.
60973060 and the Doctorial Foundation of Ministry of Education of China under Grant No. 200800040008.
('3282147', 'Xue-Qiao Wang', 'xue-qiao wang')
('2383779', 'Qiu-Qi Ruan', 'qiu-qi ruan')
e78394213ae07b682ce40dc600352f674aa4cb05Expression-invariant three-dimensional face recognition
Computer Science Department,
Technion Israel Institute of Technology
Haifa 32000, Israel
One of the hardest problems in face recognition is dealing with facial expressions. Finding an
expression-invariant representation of the face could be a remedy for this problem. We suggest
treating faces as deformable surfaces in the context of Riemannian geometry, and propose to ap-
proximate facial expressions as isometries of the facial surface. This way, we can define geometric
invariants of a given face under different expressions. One such invariant is constructed by iso-
metrically embedding the facial surface structure into a low-dimensional flat space. Based on this
approach, we built an accurate three-dimensional face recognition system that is able to distinguish
between identical twins under various facial expressions. In this chapter we show how under the
near-isometric model assumption, the difficult problem of face recognition in the presence of facial
expressions can be solved in a relatively simple way.
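The key idea above, that geodesic distances on the facial surface are nearly unchanged by expression-like isometric deformations, so an isometric embedding of those distances yields an expression-invariant signature, can be demonstrated on a toy surface. The sketch below bends a flat sheet onto a cylinder patch (an isometry), approximates surface geodesics by shortest paths on an 8-neighbor grid graph, and computes a "canonical form" by classical MDS; the grid size, bending radius and graph approximation are all assumptions, not the authors' method.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import shortest_path

n = 10
xs, ys = np.meshgrid(np.arange(n, dtype=float), np.arange(n, dtype=float), indexing='ij')

# The same sheet, flat and bent onto a cylinder of radius r
# (bending a flat sheet preserves all intrinsic distances).
flat = np.stack([xs, ys, np.zeros_like(xs)], -1).reshape(-1, 3)
r = 3.0
bent = np.stack([r*np.sin(xs/r), ys, r*(1 - np.cos(xs/r))], -1).reshape(-1, 3)

def geodesics(pts):
    """Approximate surface geodesics by shortest paths on an 8-neighbor grid graph."""
    W = lil_matrix((n*n, n*n))
    for i in range(n):
        for j in range(n):
            for di, dj in [(0, 1), (1, 0), (1, 1), (1, -1)]:
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n:
                    u, v = i*n + j, a*n + b
                    W[u, v] = W[v, u] = np.linalg.norm(pts[u] - pts[v])
    return shortest_path(W.tocsr(), method='D', directed=False)

G_flat, G_bent = geodesics(flat), geodesics(bent)
rel = np.abs(G_flat - G_bent).max() / G_flat.max()
print(f"max relative geodesic deviation under bending: {rel:.4f}")

def canonical_form(G, d=3):
    """Classical MDS: embed the geodesic metric into R^d (the 'canonical form')."""
    m = len(G)
    J = np.eye(m) - 1.0/m                  # double-centering matrix
    B = -0.5 * J @ (G**2) @ J
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:d]      # top-d eigenpairs
    return evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0))

C = canonical_form(G_bent)  # bending-invariant signature, up to a rigid motion
```

Since the geodesic matrix barely changes under the isometric bend, the resulting canonical forms of the two shapes agree up to a rigid motion, which is exactly why such embeddings can be compared with standard rigid-surface matching.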
0.1 Introduction
It is well-known that some characteristics or behavior patterns of the human body are strictly
individual and can be observed in two different people with a very low probability – a few such
examples include the DNA code, fingerprints, structure of retinal veins and iris, individual’s written
signature or face. The term biometrics refers to a variety of methods that attempt to uniquely
identify a person according to a set of such features.
While many of today’s biometric technologies are based on the discoveries of the last century (like
the DNA, for example), some of them have been exploited from the dawn of the human civilization
[17]. One of the oldest written testimonies of a biometric technology and the first identity theft
dates back to biblical times, when Jacob fraudulently used the identity of his twin brother Esau to
benefit from their father’s blessing. The Genesis book describes a combination of hand scan and
voice recognition that Isaac used to attempt to verify his son’s identity, without knowing that the
smooth-skinned Jacob had wrapped his hands in kidskin:
“And Jacob went near unto Isaac his father; and he felt him, and said, ’The voice is Jacob’s
voice, but the hands are the hands of Esau’. And he recognized him not, because his hands
were hairy, as his brother Esau’s hands.”
The false acceptance which resulted from this very inaccurate biometric test had historical conse-
quences of unmatched proportions.
Face recognition is probably the most natural biometric method. The remarkable ability of the
human vision to recognize faces is widely used for biometric authentication from prehistoric times.
These days, almost every identification document contains a photograph of its bearer, which allows
the respective officials to verify a person’s identity by comparing his actual face with the one on the
photo.
Unlike many other biometrics, face recognition does not require physical contact with the individ-
ual (like fingerprint recognition) or taking samples of the body (like DNA-based identification) or the
individual’s behavior (like signature recognition). For these reasons, face recognition is considered a
natural, less intimidating, and widely accepted biometric identification method [4, 47], and as such,
has the potential of becoming the leading biometric technology. The great technological challenge is
to perform face recognition automatically, by means of computer algorithms that work without any
('1731883', 'Alexander M. Bronstein', 'alexander m. bronstein')
('1732570', 'Michael M. Bronstein', 'michael m. bronstein')
('1692832', 'Ron Kimmel', 'ron kimmel')
Email: alexbron@ieee.org
bronstein@ieee.org
ron@cs.technion.ac.il
e7b2b0538731adaacb2255235e0a07d5ccf09189Learning Deep Representations with
Probabilistic Knowledge Transfer
Aristotle University of Thessaloniki, Thessaloniki 541 24, Greece
('3200630', 'Nikolaos Passalis', 'nikolaos passalis')
('1737071', 'Anastasios Tefas', 'anastasios tefas')
passalis@csd.auth.gr, tefas@aiia.csd.auth.gr
e726acda15d41b992b5a41feabd43617fab6dc23
e74816bc0803460e20edbd30a44ab857b06e288eSemi-Automated Annotation of Discrete States
in Large Video Datasets
Lex Fridman
Massachusetts Institute of Technology
Massachusetts Institute of Technology
('1901227', 'Bryan Reimer', 'bryan reimer')
fridman@mit.edu
reimer@mit.edu
e7b6887cd06d0c1aa4902335f7893d7640aef823Modelling of Facial Aging and Kinship: A Survey
('34291068', 'Markos Georgopoulos', 'markos georgopoulos')
('1780393', 'Yannis Panagakis', 'yannis panagakis')
('1694605', 'Maja Pantic', 'maja pantic')
e73b9b16adcf4339ff4d6723e61502489c50c2d9Informatics Engineering, an International Journal (IEIJ) ,Vol.2, No.1, March 2014
AN EFFICIENT FEATURE EXTRACTION METHOD WITH
LOCAL REGION ZERNIKE MOMENT FOR FACIAL
RECOGNITION OF IDENTICAL TWINS
1Department of Electrical, Computer and Biomedical Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
2Amirkabir University of Technology, Tehran, Iran
('1692435', 'Karim Faez', 'karim faez')
cbca355c5467f501d37b919d8b2a17dcb39d3ef9CANSIZOGLU, JONES: SUPER-RESOLUTION OF VERY LR FACES FROM VIDEOS
Super-resolution of Very Low-Resolution
Faces from Videos
Esra Ataer-Cansizoglu
Mitsubishi Electric Research Labs
(MERL)
Cambridge, MA, USA
('1961683', 'Michael Jones', 'michael jones')
cansizoglu@merl.com
mjones@merl.com
cbbd13c29d042743f0139f1e044b6bca731886d0Not-So-CLEVR: learning same–different relations strains
feedforward neural networks
†equal contributions
Department of Cognitive, Linguistic & Psychological Sciences
Carney Institute for Brain Science
Brown University, Providence, RI 02912, USA
('5546699', 'Junkyung Kim', 'junkyung kim')
cbcf5da9f09b12f53d656446fd43bc6df4b2fa48ISSN: 2277-3754
ISO 9001:2008 Certified
International Journal of Engineering and Innovative Technology (IJEIT)
Volume 2, Issue 6, December 2012
Face Recognition using Gray level Co-occurrence
Matrix and Snap Shot Method of the Eigen Face
Sri Chandrasekharendra Saraswathi Viswa Mahavidyalaya University, Kanchipuram, India
M. Madhu, R. Amutha
SSN College of Engineering, Chennai, India
cba45a87fc6cf12b3b0b6f57ba1a5282ef7fee7aEmotion AI, Real-Time Emotion Detection using CNN
M.S. Computer Science
Stanford University
B.S. Computer Science
Stanford University
tanner12@stanford.edu
bakis@stanford.edu
cb004e9706f12d1de83b88c209ac948b137caae0Face Aging Effect Simulation using Hidden Factor
Analysis Joint Sparse Representation
('1787137', 'Hongyu Yang', 'hongyu yang')
('31454775', 'Di Huang', 'di huang')
('40013375', 'Yunhong Wang', 'yunhong wang')
('46506697', 'Heng Wang', 'heng wang')
('2289713', 'Yuanyan Tang', 'yuanyan tang')
cb2917413c9b36c3bb9739bce6c03a1a6eb619b3MiCT: Mixed 3D/2D Convolutional Tube for Human Action Recognition
University of Science and Technology of China
2Microsoft Research Asia
('49455479', 'Yizhou Zhou', 'yizhou zhou')
('48305246', 'Xiaoyan Sun', 'xiaoyan sun')
('2057216', 'Zheng-Jun Zha', 'zheng-jun zha')
('8434337', 'Wenjun Zeng', 'wenjun zeng')
zyz0205@mail.ustc.edu.cn, zhazj@ustc.edu.cn
{xysun,wezeng}@microsoft.com
cb9092fe74ea6a5b2bb56e9226f1c88f96094388
cb13e29fb8af6cfca568c6dc523da04d1db1fff5Paper accepted to Frontiers in Psychology
Received: 02 Dec 2017
Accepted: 12 June 2018
DOI: 10.3389/fpsyg.2018.01128
A Survey of Automatic Facial
Micro-expression Analysis:
Databases, Methods and Challenges
Multimedia University, Faculty of Engineering, Cyberjaya, 63100 Selangor, Malaysia
Multimedia University, Faculty of Computing and Informatics, Cyberjaya
Selangor, Malaysia
University of Nottingham, School of Psychology, University Park, Nottingham NG
2RD, United Kingdom
Multimedia University, Research Institute for Digital Security, Cyberjaya
Selangor, Malaysia
Monash University Malaysia, School of Information Technology, Sunway
Selangor, Malaysia
Correspondence*:
('2154760', 'Yee-Hui Oh', 'yee-hui oh')
('2339975', 'John See', 'john see')
('35256518', 'Anh Cat Le Ngo', 'anh cat le ngo')
('6633183', 'Raphael C.-W. Phan', 'raphael c.-w. phan')
('34287833', 'Vishnu Monn Baskaran', 'vishnu monn baskaran')
('2339975', 'John See', 'john see')
johnsee@mmu.edu.my
cb08f679f2cb29c7aa972d66fe9e9996c8dfae00JOURNAL OF LATEX CLASS FILES, VOL. 13, NO. 9, SEPTEMBER 2014
Action Understanding
with Multiple Classes of Actors
('2026123', 'Chenliang Xu', 'chenliang xu')
('2228109', 'Caiming Xiong', 'caiming xiong')
('3587688', 'Jason J. Corso', 'jason j. corso')
cb84229e005645e8623a866d3d7956c197f85e11IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. X, NO. X, MONTH 201X
Disambiguating Visual Verbs
('2921001', 'Spandana Gella', 'spandana gella')
('2505673', 'Frank Keller', 'frank keller')
('1747893', 'Mirella Lapata', 'mirella lapata')
cb1b5e8b35609e470ce519303915236b907b13b6On the Vulnerability of ECG Verification to Online Presentation Attacks
University of Connecticut
Electrical & Computer Engineering
University of Florida
Electrical & Computer Engineering
('3445153', 'Nima Karimian', 'nima karimian')
('2171076', 'Damon L. Woodard', 'damon l. woodard')
('2925373', 'Domenic Forte', 'domenic forte')
nima@engr.uconn.edu
dwoodard, dforte@ece.ufl.edu
cbb27980eb04f68d9f10067d3d3c114efa9d0054An Attention Model for group-level emotion recognition
Indian Institute of Technology
Roorkee
Roorkee, India
Indian Institute of Technology
Roorkee
Roorkee, India
Indian Institute of Technology
Roorkee
Roorkee, India
École de Technologie Supérieure
Montreal, Canada
École de Technologie Supérieure
Montreal, Canada
('51127375', 'Aarush Gupta', 'aarush gupta')
('51134535', 'Dakshit Agrawal', 'dakshit agrawal')
('51118849', 'Hardik Chauhan', 'hardik chauhan')
('3055538', 'Jose Dolz', 'jose dolz')
('3048367', 'Marco Pedersoli', 'marco pedersoli')
agupta1@cs.iitr.ac.in
dagrawal@cs.iitr.ac.in
haroi.uee2014@iitr.ac.in
jose.dolz@livia.etsmtl.ca
Marco.Pedersoli@etsmtl.ca
cbe859d151466315a050a6925d54a8d3dbad591fGAZE SHIFTS AS DYNAMICAL RANDOM SAMPLING
Dipartimento di Scienze dell’Informazione
Università di Milano
Via Comelico 39/41
20135 Milano, Italy
('1715361', 'Giuseppe Boccignone', 'giuseppe boccignone')
('3241931', 'Mario Ferraro', 'mario ferraro')
boccignone@dsi.unimi.it
f86ddd6561f522d115614c93520faad122eb3b56PACS2016
Beyond AlphaGo
October 27-28, 2016
Visual Imagination from Texts
School of Computer Science and Engineering
Seoul National University
Seoul 151-744, Korea
('3434480', 'Hanock Kwak', 'hanock kwak')
('1692756', 'Byoung-Tak Zhang', 'byoung-tak zhang')
Email: (hnkwak, btzhang)@bi.snu.ac.kr
f8015e31d1421f6aee5e17fc3907070b8e0a5e59April 19, 2016
DRAFT
Towards Usable Multimedia Event Detection
from Web Videos
April, 2016
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Thesis Committee:
Alexander G. Hauptmann, Chair
Submitted in partial fulfillment of the requirements
for the degree of Doctor of Philosophy.
('34692532', 'Zhenzhong Lan', 'zhenzhong lan')
('1767184', 'Louis-Philippe Morency', 'louis-philippe morency')
('14517812', 'Leonid Sigal', 'leonid sigal')
('34692532', 'Zhenzhong Lan', 'zhenzhong lan')
f842b13bd494be1bbc1161dc6df244340b28a47fAn Improved Face Recognition Technique Based
on Modular Multi-directional Two-dimensional
Principle Component Analysis Approach
Hanshan Normal University, Chaozhou, 521041, China
Hanshan Normal University, Chaozhou, 521041, China
('48477766', 'Xiaoqing Dong', 'xiaoqing dong')
('2747115', 'Hongcai Chen', 'hongcai chen')
Email: dxqzq110@163.com
Email: czhschc@126.com
f83dd9ff002a40228bbe3427419b272ab9d5c9e4Facial Features Matching using a Virtual Structuring Element
Intelligent Systems Lab Amsterdam,
University of Amsterdam
Kruislaan 403, 1098 SJ Amsterdam, The Netherlands
('9301018', 'Roberto Valenti', 'roberto valenti')
('1703601', 'Nicu Sebe', 'nicu sebe')
('1695527', 'Theo Gevers', 'theo gevers')
f8c94afd478821681a1565d463fc305337b02779
www.semargroup.org,
www.ijsetr.com
ISSN 2319-8885
Vol.03,Issue.25
September-2014,
Pages:5079-5085
Design and Implementation of Robust Face Recognition System for
Uncontrolled Pose and Illumination Changes
1PG Scholar, Dept of ECE, LITAM, JNTUK, Andhrapradesh, India, Email: bhaskar.t60@gmail.com.
2Assistant Professor, Dept of ECE, LITAM, JNTUK, Andhrapradesh, India, Email: venky999v@gmail.com.
f8f2d2910ce8b81cb4bbf84239f9229888158b34Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)
A Generative Model for Recognizing
Mixed Group Activities in Still Images
School of Computer, Beijing Institute of Technology, Beijing, China
School of Computing and Communications, University of Technology Sydney, Sydney, Australia
('32056779', 'Zheng Zhou', 'zheng zhou')
('1780081', 'Kan Li', 'kan li')
('1706670', 'Xiangjian He', 'xiangjian he')
('3225703', 'Mengmeng Li', 'mengmeng li')
{zz24, likan}@bit.edu.cn, xiangjian.he@uts.edu.au, limengmeng93@163.com
f8ec92f6d009b588ddfbb47a518dd5e73855547dJ Inf Process Syst, Vol.10, No.3, pp.443~458, September 2014
ISSN 1976-913X (Print)
ISSN 2092-805X (Electronic)
Extreme Learning Machine Ensemble Using
Bagging for Facial Expression Recognition
('32322842', 'Deepak Ghimire', 'deepak ghimire')
('2034182', 'Joonwhoan Lee', 'joonwhoan lee')
f869601ae682e6116daebefb77d92e7c5dd2cb15
f8ddb2cac276812c25021b5b79bf720e97063b1eA Comprehensive Empirical Study on Linear Subspace Methods for Facial
Expression Analysis
Queen Mary, University of London
Mile End Road, London E1 4NS
('10795229', 'Caifeng Shan', 'caifeng shan')
('2073354', 'Shaogang Gong', 'shaogang gong')
('2803283', 'Peter W. McOwan', 'peter w. mcowan')
{cfshan, sgg, pmco}@dcs.qmul.ac.uk
f8ed5f2c71e1a647a82677df24e70cc46d2f12a8International Journal of Scientific & Engineering Research, Volume 2, Issue 12, December-2011 1
ISSN 2229-5518
Artificial Neural Network Design and Parameter
Optimization for Facial Expressions Recognition
f8f872044be2918de442ba26a30336d80d200c42IJSRD - International Journal for Scientific Research & Development| Vol. 3, Issue 03, 2015 | ISSN (online): 2321-0613
Facial Emotion Recognition Techniques: A Survey
1,2Department of Computer Science and Engineering
Dr C V Raman Institute of Science and Technology
f8a5bc2bd26790d474a1f6cc246b2ba0bcde9464ORIGINAL RESEARCH
published: 19 December 2017
doi: 10.3389/fpsyg.2017.02181
KDEF-PT: Valence, Emotional
Intensity, Familiarity and
Attractiveness Ratings of Angry,
Neutral, and Happy Faces
Instituto Universitário de Lisboa (ISCTE-IUL), CIS – IUL, Lisboa, Portugal
The Karolinska Directed Emotional Faces (KDEF)
is one of the most widely used
human facial expressions database. Almost a decade after the original validation study
(Goeleven et al., 2008), we present subjective rating norms for a sub-set of 210 pictures
which depict 70 models (half female) each displaying an angry, happy and neutral facial
expressions. Our main goals were to provide an additional and updated validation
to this database, using a sample from a different nationality (N = 155 Portuguese
students, M = 23.73 years old, SD = 7.24) and to extend the number of subjective
dimensions used to evaluate each image. Specifically, participants reported emotional
labeling (forced-choice task) and evaluated the emotional intensity and valence of the
expression, as well as the attractiveness and familiarity of the model (7-points rating
scales). Overall, results show that happy faces obtained the highest ratings across
evaluative dimensions and emotion labeling accuracy. Female (vs. male) models were
perceived as more attractive, familiar and positive. The sex of the model also moderated
the accuracy of emotional
labeling and ratings of different facial expressions. Each
picture of the set was categorized as low, moderate, or high for each dimension.
Normative data for each stimulus (hits proportion, means, standard deviations, and
confidence intervals per evaluative dimension) is available as supplementary material
(available at https://osf.io/fvc4m/).
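The kind of normative statistics reported here (hit proportions, means, SDs, and confidence intervals per stimulus and evaluative dimension) can be computed with a short script. The normal-approximation CI and the toy ratings below are illustrative assumptions, not the paper's data:

```python
import statistics as st

def normative_stats(ratings, z=1.96):
    """Summarize one stimulus on one 7-point rating dimension.
    Returns mean, SD, and a normal-approximation 95% CI (the exact
    CI method used in the paper is an assumption here)."""
    n = len(ratings)
    mean = st.mean(ratings)
    sd = st.stdev(ratings)
    half = z * sd / n ** 0.5
    return {"mean": mean, "sd": sd, "ci": (mean - half, mean + half)}

def hit_proportion(labels, intended):
    """Proportion of forced-choice labels matching the intended
    expression (emotion-labeling accuracy)."""
    return sum(1 for lab in labels if lab == intended) / len(labels)

# Toy example: valence ratings and labels for one "happy" picture.
stats = normative_stats([6, 7, 5, 6, 7, 6])
hits = hit_proportion(["happy", "happy", "angry", "happy"], "happy")
```

Each picture could then be binned as low, moderate, or high on a dimension by thresholding the mean.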
Keywords: facial expressions, normative data, subjective ratings, emotion labeling, sex differences
INTRODUCTION
The human face conveys important information for social interaction. For example, it is a major
source for forming first impressions and for making fast and automatic personality trait inferences
(for a review, see Zebrowitz, 2017). Indeed, facial expressions have been the most studied non-
verbal emotional cue (for a review, see Schirmer and Adolphs, 2017). In addition to their physical
component (i.e., morphological changes in the face such as frowning or opening the mouth),
emotional facial expressions also have an affective component that conveys information about the
internal feelings of the person expressing it (for a review, see Calvo and Nummenmaa, 2016).
Moreover, facial expressions communicate a social message that informs about the behavioral
intentions of the expresser, which in turn prompt responses in the perceiver, such as approach and
avoidance reactions (for a review, see Paulus and Wentura, 2016).
Edited by:
Sergio Machado,
Salgado de Oliveira University, Brazil
Reviewed by:
Pietro De Carli,
Dipartimento di Psicologia dello
Sviluppo e della Socializzazione,
Università degli Studi di Padova, Italy
Sylvie Berthoz,
Institut National de la Santé et de la
Recherche Médicale, France
*Correspondence:
Specialty section:
This article was submitted to
Quantitative Psychology
and Measurement,
a section of the journal
Frontiers in Psychology
Received: 18 July 2017
Accepted: 30 November 2017
Published: 19 December 2017
Citation:
Garrido MV and Prada M (2017)
KDEF-PT: Valence, Emotional
Intensity, Familiarity
and Attractiveness Ratings of Angry,
Neutral, and Happy Faces.
Front. Psychol. 8:2181.
doi: 10.3389/fpsyg.2017.02181
Frontiers in Psychology | www.frontiersin.org
December 2017 | Volume 8 | Article 2181
('28239829', 'Margarida V. Garrido', 'margarida v. garrido')
('38831356', 'Marília Prada', 'marília prada')
('28239829', 'Margarida V. Garrido', 'margarida v. garrido')
margarida.garrido@iscte-iul.pt
f87b22e7f0c66225824a99cada71f9b3e66b5742Robust Emotion Recognition from Low Quality and Low Bit Rate Video:
A Deep Learning Approach
Beckman Institute, University of Illinois at Urbana-Champaign
Texas A&M University
University of Missouri, Kansas City
§ Snap Inc, USA
University of Washington
('50563570', 'Bowen Cheng', 'bowen cheng')
('2969311', 'Zhangyang Wang', 'zhangyang wang')
('4622305', 'Zhaobin Zhang', 'zhaobin zhang')
('49970050', 'Zhu Li', 'zhu li')
('1771885', 'Ding Liu', 'ding liu')
('1706007', 'Jianchao Yang', 'jianchao yang')
('47156875', 'Shuai Huang', 'shuai huang')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
{bcheng9, dingliu2, t-huang1}@illinois.edu
atlaswang@tamu.edu
{zzktb@mail., lizhu@}umkc.edu
jianchao.yang@snap.com
shuaih@uw.edu
cef841f27535c0865278ee9a4bc8ee113b4fb9f3
ce6d60b69eb95477596535227958109e07c61e1eUnconstrained Face Verification Using Fisher Vectors
Computed From Frontalized Faces
Center for Automation Research
University of Maryland, College Park, MD
('36407236', 'Jun-Cheng Chen', 'jun-cheng chen')
('2716670', 'Swami Sankaranarayanan', 'swami sankaranarayanan')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
('9215658', 'Rama Chellappa', 'rama chellappa')
{pullpull, swamiviv, pvishalm, rama}@umiacs.umd.edu
ceb763d6657a07b47e48e8a2956bcfdf2cf10818International Journal of Computational Science and Information Technology (IJCSITY) Vol.2, No.1, February 2014
AN EFFICIENT FEATURE EXTRACTION METHOD
WITH PSEUDO-ZERNIKE MOMENT FOR FACIAL
RECOGNITION OF IDENTICAL TWINS
1Department of Electrical, Computer and Biomedical Engineering, Qazvin Branch,
Islamic Azad University, Qazvin, Iran
2Amirkabir University of Technology, Tehran, Iran
('13302047', 'Hoda Marouf', 'hoda marouf')
('1692435', 'Karim Faez', 'karim faez')
cefd9936e91885ba7af9364d50470f6cb54315a4The Journal of Neuroscience, December 8, 2010 • 30(49):16601–16608 • 16601
Behavioral/Systems/Cognitive
Expectation and Surprise Determine Neural Population
Responses in the Ventral Visual Stream
and 2Center for Cognitive Neuroscience, Duke University, Durham, North Carolina 27708
Psychology, University of Illinois, Beckman Institute, Urbana-Champaign, Illinois 61801, University of
Oxford, Oxford OX1 3UD, United Kingdom
Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by
bottom-up stimulus features. Conversely, “predictive coding” models propose that each stage of the visual hierarchy harbors two
computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and
provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and
forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-
selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction (“face expectation”) and
prediction error (“face surprise”), rather than a homogeneous feature detection response. We tested the rival hypotheses of the feature
detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently
varying both stimulus features (faces vs houses) and subjects’ perceptual expectations regarding those features (low vs medium vs high
face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was
indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling,
we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented
with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation
and surprise rather than by stimulus features per se.
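The additive account sketched in this abstract, where the FFA population response sums face-expectation and face-surprise (prediction error) signals, can be illustrated with a toy model. The unit weights and probabilities below are illustrative assumptions, not fitted parameters:

```python
def pc_response(is_face, p_face):
    """Toy predictive-coding account of FFA population activity:
    representational units scale with the expected probability of a
    face, and error units fire for an unexpected face. Unit weights
    are illustrative assumptions."""
    expectation = p_face                            # representational units
    surprise = (1.0 - p_face) if is_face else 0.0   # error units
    return expectation + surprise

# Face vs house responses converge under high face expectation...
gap_high = pc_response(True, 0.75) - pc_response(False, 0.75)
# ...and are maximally differentiated under low face expectation.
gap_low = pc_response(True, 0.25) - pc_response(False, 0.25)
```

A pure feature-detection model, by contrast, would predict the same face/house gap regardless of expectation.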
Introduction
“Predictive coding” models of visual cognition propose that per-
ceptual inference proceeds as an iterative matching process of
top-down predictions against bottom-up evidence along the vi-
sual cortical hierarchy (Mumford, 1992; Rao and Ballard, 1999;
Lee and Mumford, 2003; Friston, 2005; Spratling, 2008). Specif-
ically, each stage of the visual cortical hierarchy is thought to
harbor two computationally distinct classes of processing unit:
representational units that encode the conditional probability of
a stimulus (“expectation”) and provide predictions regarding ex-
pected inputs to the next lower level; and error units that encode
the mismatch between predictions and bottom-up evidence
(“surprise”), and forward this prediction error to the next higher
level, where representations are adjusted to eliminate prediction
error (Friston, 2005). These assumptions contrast sharply with
more traditional views that cast visual neurons primarily as fea-
ture detectors (Hubel and Wiesel, 1965; Riesenhuber and Poggio,
2000), but explicit empirical tests adjudicating between these ri-
val conceptions are lacking.
Received June 1, 2010; revised Sept. 21, 2010; accepted Sept. 28, 2010.
This work was supported by funds granted by the Cognitive Neurology and Alzheimer’s Disease Center
(Northwestern University) to T.E. We thank Vincent De Gardelle for helpful comments on an earlier version of
this manuscript.
DOI:10.1523/JNEUROSCI.2770-10.2010
Copyright © 2010 the authors
0270-6474/10/3016601-08$15.00/0
Here, we exploited the fact that the two models make diver-
gent predictions regarding determinants of neural population
responses in category-selective visual regions, like the fusiform
face area (FFA) (Kanwisher et al., 1997). Predictive coding sug-
gests that FFA population responses should reflect a summation
of activity related to representational units (“face expectation”)
and error units (“face surprise”), whereas feature detection mod-
els suppose the population response to be driven by physical
stimulus characteristics (“face features”) alone. We adjudicated
between these hypotheses by acquiring functional magnetic res-
onance imaging (fMRI) data from the FFA while independently
varying both stimulus features (faces vs houses) and subjects’
perceptual expectations regarding those features (low vs medium
vs high face expectation) (Fig. 1A,C). Of note, both the feature
detection and predictive coding views also allow for visual neural
responses to be scaled by attention. Therefore, the above manip-
ulations were orthogonal to the task demands (the detection of
occasional inverted “target” stimuli) (Fig. 1B) to control for po-
tential differences in attention across the conditions of interest.
According to predictive coding, FFA activity in this experi-
ment should vary as an additive function of face expectation
(high > low) (Fig. 2A, left) and face surprise (unexpected >
expected faces) (Fig. 2A, middle). This would result in an inter-
action between stimulus and expectation factors (Fig. 2A right
panel), whereby FFA responses to face and house stimuli should
be similar under high face expectation, because both of these
conditions would be associated with activity related to face ex-
('1900710', 'Tobias Egner', 'tobias egner')
('2372244', 'Christopher Summerfield', 'christopher summerfield')
('1900710', 'Tobias Egner', 'tobias egner')
Box 90999, Durham, NC 27708. E-mail: tobias.egner@duke.edu.
ce85d953086294d989c09ae5c41af795d098d5b2This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
Bilinear Analysis for Kernel Selection and
Nonlinear Feature Extraction
('1718245', 'Shu Yang', 'shu yang')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('1720735', 'Chao Zhang', 'chao zhang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
ce5eac297174c17311ee28bda534faaa1d559baeAutomatic analysis of malaria infected red
blood cell digitized microscope images
A dissertation submitted in partial fulfilment
of the requirements for the degree of
Doctor of Philosophy
of
University College London
Department of Computer Science
University College London
Supervisor: Prof. Bernard F. Buxton
February 2016
('2768033', 'Houari Abdallahi', 'houari abdallahi')
ce691a37060944c136d2795e10ed7ba751cd8394
ce3f3088d0c0bf236638014a299a28e492069753
ceaa5eb51f761b5f84bd88b58c8f484fcd2a22d6UC San Diego
UC San Diego Electronic Theses and Dissertations
Title
Inhibitions of ascorbate fatty acid derivatives on three rabbit muscle glycolytic enzymes
Permalink
https://escholarship.org/uc/item/8x33n1gj
Author
Pham, Duyen-Anh
Publication Date
2011-01-01
Peer reviewed|Thesis/dissertation
eScholarship.org
Powered by the California Digital Library
University of California
ce450e4849490924488664b44769b4ca57f1bc1aProcedural Generation of Videos to Train Deep Action Recognition Networks
1Computer Vision Group, NAVER LABS Europe, Meylan, France
2Centre de Visió per Computador, Universitat Autònoma de Barcelona, Bellaterra, Spain
Toyota Research Institute, Los Altos, CA, USA
('1799820', 'Adrien Gaidon', 'adrien gaidon')
('3407519', 'Yohann Cabon', 'yohann cabon')
{cesar.desouza, yohann.cabon}@europe.naverlabs.com, adrien.gaidon@tri.global, antonio@cvc.uab.es
ceeb67bf53ffab1395c36f1141b516f893bada27Face Alignment by Local Deep Descriptor Regression
University of Maryland
College Park, MD
University of Maryland
College Park, MD
University of Maryland
College Park, MD
Rutgers University
New Brunswick, NJ 08901
('40080979', 'Amit Kumar', 'amit kumar')
('26988560', 'Rajeev Ranjan', 'rajeev ranjan')
('9215658', 'Rama Chellappa', 'rama chellappa')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
akumar14@umd.edu
rranjan1@umd.edu
rama@umiacs.umd.edu
vishal.m.patel@rutgers.edu
ce032dae834f383125cdd852e7c1bc793d4c3ba3Motion Interchange Patterns for Action
Recognition in Unconstrained Videos
The Weizmann Institute of Science, Israel
Tel-Aviv University, Israel
The Open University, Israel
('3294355', 'Orit Kliper-Gross', 'orit kliper-gross')
('2916582', 'Yaron Gurovich', 'yaron gurovich')
('1756099', 'Tal Hassner', 'tal hassner')
('1776343', 'Lior Wolf', 'lior wolf')
ce9e1dfa7705623bb67df3a91052062a0a0ca456Deep Feature Interpolation for Image Content Changes
Kilian Weinberger1
Cornell University
George Washington University
*Authors contributed equally
('3222840', 'Paul Upchurch', 'paul upchurch')
('1791337', 'Kavita Bala', 'kavita bala')
ce9a61bcba6decba72f91497085807bface02dafEigen-Harmonics Faces: Face Recognition under Generic Lighting
1Graduate School, CAS, Beijing, China, 100080
2ICT-ISVISION Joint R&D Laboratory for Face Recognition, CAS, Beijing, China, 100080
Emails: {lyqing, sgshan, wgao}@jdl.ac.cn
('2343895', 'Laiyun Qing', 'laiyun qing')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1698902', 'Wen Gao', 'wen gao')
cef6cffd7ad15e7fa5632269ef154d32eaf057afEmotion Detection Through Facial Feature
Recognition
('4959365', 'James Pao', 'james pao')jpao@stanford.edu
cebfafea92ed51b74a8d27c730efdacd65572c40JANUARY 2006
31
Matching 2.5D Face Scans to 3D Models
('2637547', 'Xiaoguang Lu', 'xiaoguang lu')
('6680444', 'Anil K. Jain', 'anil k. jain')
('2205218', 'Dirk Colbry', 'dirk colbry')
ce56be1acffda599dec6cc2af2b35600488846c9Inferring Sentiment from Web Images with Joint Inference on Visual and Social
Cues: A Regulated Matrix Factorization Approach
Arizona State University, Tempe AZ
IBM Almaden Research Center, San Jose CA
('33513248', 'Yilin Wang', 'yilin wang'){ywang370,rao,baoxin.li}@asu.edu yuhenghu@us.ibm.com
ce54e891e956d5b502a834ad131616786897dc91International Journal of Science and Research (IJSR)
ISSN (Online): 2319-7064
Index Copernicus Value (2013): 6.14 | Impact Factor (2014): 5.611
Face Recognition Using LTP Algorithm
1ECE & KUK
2Assistant Professor (ECE)
Volume 4 Issue 12, December 2015
Licensed Under Creative Commons Attribution CC BY
www.ijsr.net
 Variation in luminance: The third main challenge in
the face recognition process is luminance. Variation in
luminance changes the representation relative to the
original image: a person with the same pose and
expression, seen from the same viewpoint, can appear
very different under different lighting.
('1781253', 'Richa Sharma', 'richa sharma')
('1887206', 'Rohit Arora', 'rohit arora')
ce6f459462ea9419ca5adcc549d1d10e616c0213A Survey on Face Identification Methodologies in
Videos
Student, M.Tech CSE ,Department of Computer Science
Engineering, G.H.Raisoni College of Engineering
Technology for Women, Nagpur, Maharashtra, India.
('2776196', 'Deepti Yadav', 'deepti yadav')
ce933821661a0139a329e6c8243e335bfa1022b1Temporal Modeling Approaches for Large-scale
Youtube-8M Video Understanding
Baidu IDL and Tsinghua University
('9921390', 'Fu Li', 'fu li')
('2551285', 'Chuang Gan', 'chuang gan')
('3025977', 'Xiao Liu', 'xiao liu')
('38812373', 'Yunlong Bian', 'yunlong bian')
('1716690', 'Xiang Long', 'xiang long')
('2653177', 'Yandong Li', 'yandong li')
('2027571', 'Zhichao Li', 'zhichao li')
('1743129', 'Jie Zhou', 'jie zhou')
('35247507', 'Shilei Wen', 'shilei wen')
e03bda45248b4169e2a20cb9124ae60440cad2deLearning a Dictionary of Shape-Components in Visual Cortex:
Comparison with Neurons, Humans and Machines
by
Ingénieur de l'École Nationale Supérieure
des Télécommunications de Bretagne, 2000
and
MS, Université de Rennes, 2000
Submitted to the Department of Brain and Cognitive Sciences
in partial fulfillment of the requirements for the degree of
Doctor of Philosophy
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
June 2006
© Massachusetts Institute of Technology 2006. All rights reserved.
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Department of Brain and Cognitive Sciences
April 24, 2006
Certified by . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Tomaso Poggio
Eugene McDermott Professor in the Brain Sciences and Human Behavior
Thesis Supervisor
Accepted by . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Matt Wilson
Professor of Neurobiology and
Chairman, Department Graduate Committee
('1981539', 'Thomas Serre', 'thomas serre')
e03e86ac61cfac9148b371d75ce81a55e8b332caUnsupervised Learning using Sequential
Verification for Action Recognition
⋆The Robotics Institute, Carnegie Mellon University
†Facebook AI Research
('1806773', 'Ishan Misra', 'ishan misra')
('1709305', 'Martial Hebert', 'martial hebert')
('1699161', 'C. Lawrence Zitnick', 'c. lawrence zitnick')
e0dedb6fc4d370f4399bf7d67e234dc44deb4333Supplementary Material: Multi-Task Video Captioning with Video and
Entailment Generation
UNC Chapel Hill
1 Experimental Setup
1.1 Datasets
1.1.1 Video Captioning Datasets
YouTube2Text or MSVD The Microsoft Re-
search Video Description Corpus (MSVD) or
YouTube2Text (Chen and Dolan, 2011) is used
for our primary video captioning experiments. It
has 1970 YouTube videos in the wild with many
diverse captions in multiple languages for each
video. Caption annotations to these videos are
collected using Amazon Mechanical Turk (AMT).
All our experiments use only English captions. On
average, each video has 40 captions, and the over-
all dataset has about 80,000 unique video-caption
pairs. The average clip duration is roughly 10 sec-
onds. We used the standard split as stated in Venu-
gopalan et al. (2015), i.e., 1200 videos for training,
100 videos for validation, and 670 for testing.
MSR-VTT MSR-VTT is a recent collection of
10,000 video clips of 41.2 hours duration (i.e.,
average duration of 15 seconds), which are an-
notated by AMT workers. It has 200,000 video
clip-sentence pairs covering diverse content from
a commercial video search engine. On average,
each clip is annotated with 20 natural language
captions. We used the standard split as provided
in (Xu et al., 2016), i.e., 6,513 video clips for
training, 497 for validation, and 2,990 for testing.
M-VAD M-VAD is a movie description dataset
with 49,000 video clips collected from 92 movies,
with the average clip duration being 6 seconds.
Alignment of descriptions to video clips is done
through an automatic procedure using Descrip-
tive Video Service (DVS) provided for the movies.
Each video clip description has only 1 or 2 sen-
tences, making most evaluation metrics (except
paraphrase-based METEOR) infeasible. Again,
we used the standard train/val/test split as pro-
vided in Torabi et al. (2015).
1.1.2 Video Prediction Dataset
For our unsupervised video representation learn-
ing task, we use the UCF-101 action videos
dataset (Soomro et al., 2012), which contains
13,320 video clips of 101 action categories and
with an average clip length of 7.21 seconds each.
This dataset suits our video captioning task well
because both contain short video clips of a sin-
gle action or few actions, and hence using future
frame prediction on UCF-101 helps learn more ro-
bust and context-aware video representations for
our short clip video captioning task. We use the
standard split of 9,500 videos for training (we
don’t need any validation set in our setup because
we directly tune on the validation set of the video
captioning task).
1.2 Pre-trained Visual Frame Features
For the three video captioning datasets
(Youtube2Text, MSR-VTT, M-VAD) and the
unsupervised video prediction dataset (UCF-101),
we fix our sampling rate to 3 fps to bring uni-
formity in the temporal representation of actions
across all videos. These sampled frames are then
converted into features using several state-of-the-
art pre-trained models on ImageNet (Deng et al.,
2009) – VGGNet
(Simonyan and Zisserman,
2015), GoogLeNet (Szegedy et al., 2015; Ioffe
and Szegedy, 2015), and Inception-v4 (Szegedy
et al., 2016). For VGGNet, we use its fc7 layer
features with dimension 4096. For GoogLeNet
and Inception-v4, we use the layer before the fully
connected layer with dimensions 1024 and 1536,
respectively. We follow standard preprocessing
and convert all the natural language descriptions
to lower case, tokenize the sentences, and
remove punctuation.
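The preprocessing described above, fixed-rate frame sampling plus caption lowercasing, tokenization, and punctuation removal, can be sketched as follows. The function names and the uniform index-sampling scheme are assumptions for illustration:

```python
import string

def preprocess_caption(caption):
    """Lowercase, strip punctuation, and whitespace-tokenize a
    caption, mirroring the standard preprocessing step."""
    table = str.maketrans("", "", string.punctuation)
    return caption.lower().translate(table).split()

def sample_frame_indices(n_frames, native_fps, target_fps=3):
    """Indices of frames kept when resampling a clip to a fixed
    rate (3 fps here) before CNN feature extraction."""
    step = native_fps / target_fps
    return [int(i * step) for i in range(int(n_frames / step))]

tokens = preprocess_caption("A man is riding a horse!")
idx = sample_frame_indices(n_frames=90, native_fps=30)  # 3-second clip
```

Each sampled frame would then be mapped to a 4096-d (VGGNet fc7), 1024-d (GoogLeNet), or 1536-d (Inception-v4) feature vector.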
('10721120', 'Ramakanth Pasunuru', 'ramakanth pasunuru')
('7736730', 'Mohit Bansal', 'mohit bansal')
{ram, mbansal}@cs.unc.edu
e096b11b3988441c0995c13742ad188a80f2b461Noname manuscript No.
(will be inserted by the editor)
DeepProposals: Hunting Objects and Actions by Cascading
Deep Convolutional Layers
Van Gool
Received: date / Accepted: date
('3060081', 'Amir Ghodrati', 'amir ghodrati')
e0638e0628021712ac76e3472663ccc17bd8838c VOL. 9, NO. 2, FEBRUARY 2014 ISSN 1819-6608
ARPN Journal of Engineering and Applied Sciences
©2006-2014 Asian Research Publishing Network (ARPN). All rights reserved.
www.arpnjournals.com
SIGN LANGUAGE RECOGNITION: STATE OF THE ART
Sharda University, Greater Noida, India
('27105713', 'Ashok K Sahoo', 'ashok k sahoo')
('40867787', 'Gouri Sankar Mishra', 'gouri sankar mishra')
('3017041', 'Kiran Kumar Ravulakollu', 'kiran kumar ravulakollu')
E-Mail: ashoksahoo2000@yahoo.com
e0c081a007435e0c64e208e9918ca727e2c1c44e
e0d878cc095eaae220ad1f681b33d7d61eb5e425Article
Temporal and Fine-Grained Pedestrian Action
Recognition on Driving Recorder Database
National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan
Keio University, Yokohama 223-8522, Japan
Received: 5 January 2018; Accepted: 8 February 2018; Published: 20 February 2018
('1730200', 'Hirokatsu Kataoka', 'hirokatsu kataoka')
('1732705', 'Yutaka Satoh', 'yutaka satoh')
('1716469', 'Yoshimitsu Aoki', 'yoshimitsu aoki')
('6881850', 'Shoko Oikawa', 'shoko oikawa')
('1720770', 'Yasuhiro Matsui', 'yasuhiro matsui')
yu.satou@aist.go.jp
aoki@elec.keio.ac.jp
Tokyo Metropolitan University, Tokyo 192-0364, Japan; shoko_o@hotmail.com
4 National Traffic Safety and Environment Laboratory, Tokyo 182-0012, Japan; ymatsui@ntsel.go.jp
* Correspondence: hirokatsu.kataoka@aist.go.jp; Tel.: +81-29-861-2267
e00d4e4ba25fff3583b180db078ef962bf7d6824Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 20 March 2017 doi:10.20944/preprints201703.0152.v1
Article
Face Verification with Multi-Task and Multi-Scale
Features Fusion
('39198322', 'Xiaojun Lu', 'xiaojun lu')
('39683642', 'Yue Yang', 'yue yang')
('8030754', 'Weilin Zhang', 'weilin zhang')
('36286794', 'Qi Wang', 'qi wang')
('37622915', 'Yang Wang', 'yang wang')
1 College of Sciences, Northeastern University, Shenyang 110819, China; luxiaojun@mail.neu.edu.cn (X.L.);
YangY1503@163.com (Y.Y.); wangy_neu@163.com (Y.W.)
2 New York University Shanghai, 1555 Century Ave, Pudong, Shanghai 200122, China; wz723@nyu.edu
* Correspondence: wangqimath@mail.neu.edu.cn; Tel.: +86-024-8368-7680
e01bb53b611c679141494f3ffe6f0b91953af658FSRNet: End-to-End Learning Face Super-Resolution with Facial Priors
Nanjing University of Science and Technology
2Youtu Lab, Tencent
Michigan State University
University of Adelaide
Figure 1: Visual results of different super-resolution methods on scale factor 8.
('50579509', 'Yu Chen', 'yu chen')
('49499405', 'Jian Yang', 'jian yang')
e0bfcf965b402f3f209f26ae20ee88bc4d0002abAI Thinking for Cloud Education Platform with Personalized Learning
University of Texas at San Antonio
University of Texas at San Antonio
University of Texas at San Antonio
University of Texas at San Antonio
University of Texas at San Antonio
('2055316', 'Paul Rad', 'paul rad')
('2918902', 'Mehdi Roopaei', 'mehdi roopaei')
('1716725', 'Nicole Beebe', 'nicole beebe')
('9324267', 'Mehdi Shadaram', 'mehdi shadaram')
('1839489', 'Yoris A. Au', 'yoris a. au')
Paul.rad@utsa.edu
Mehdi.roopaei@utsa.edu
Nicole.beebe@utsa.edu
Mehdi.shadaram@utsa.edu
Yoris.au@utsa.edu
e0939b4518a5ad649ba04194f74f3413c793f28eTechnical Report
UCAM-CL-TR-636
ISSN 1476-2986
Number 636
Computer Laboratory
Mind-reading machines:
automated inference
of complex mental states
July 2005
15 JJ Thomson Avenue
Cambridge CB3 0FD
United Kingdom
phone +44 1223 763500
http://www.cl.cam.ac.uk/
e0ed0e2d189ff73701ec72e167d44df4eb6e864dRecognition of static and dynamic facial expressions: a study review
Estudos de Psicologia, 18(1), janeiro-março/2013, 125-130
Federal University of Paraíba
('39169435', 'Nelson Torro Alves', 'nelson torro alves')
e00d391d7943561f5c7b772ab68e2bb6a85e64c4Robust continuous clustering
University of Maryland, College Park, MD 20740; and bIntel Labs, Santa Clara, CA
Edited by David L. Donoho, Stanford University, Stanford, CA, and approved August 7, 2017 (received for review January
Clustering is a fundamental procedure in the analysis of scientific
data. It is used ubiquitously across the sciences. Despite decades
of research, existing clustering algorithms have limited effective-
ness in high dimensions and often require tuning parameters for
different domains and datasets. We present a clustering algo-
rithm that achieves high accuracy across multiple domains and
scales efficiently to high dimensions and large datasets. The pre-
sented algorithm optimizes a smooth continuous objective, which
is based on robust statistics and allows heavily mixed clusters to
be untangled. The continuous nature of the objective also allows
clustering to be integrated as a module in end-to-end feature
learning pipelines. We demonstrate this by extending the algo-
rithm to perform joint clustering and dimensionality reduction
by efficiently optimizing a continuous global objective. The pre-
sented approach is evaluated on large datasets of faces, hand-
written digits, objects, newswire articles, sensor readings from
the Space Shuttle, and protein expression levels. Our method
achieves high accuracy across all datasets, outperforming the best
prior algorithm by a factor of 3 in average rank.
clustering | data analysis | unsupervised learning
Clustering is one of the fundamental experimental procedures
in data analysis. It is used in virtually all natural and social
sciences and has played a central role in biology, astronomy,
psychology, medicine, and chemistry. Data-clustering algorithms
have been developed for more than half a century (1). Significant
advances in the last two decades include spectral clustering (2–4),
generalizations of classic center-based methods (5, 6), mixture
models (7, 8), mean shift (9), affinity propagation (10), subspace
clustering (11–13), nonparametric methods (14, 15), and feature
selection (16–20).
Despite these developments, no single algorithm has emerged
to displace the k-means scheme and its variants (21). This
is despite the known drawbacks of such center-based meth-
ods, including sensitivity to initialization, limited effectiveness in
high-dimensional spaces, and the requirement that the number
of clusters be set in advance. The endurance of these methods
is in part due to their simplicity and in part due to difficulties
associated with some of the new techniques, such as additional
hyperparameters that need to be tuned, high computational cost,
and varying effectiveness across domains. Consequently, scien-
tists who analyze large high-dimensional datasets with unknown
distribution must maintain and apply multiple different cluster-
ing algorithms in the hope that one will succeed. Books have
been written to guide practitioners through the landscape of
data-clustering techniques (22).
We present a clustering algorithm that is fast, easy to use, and
effective in high dimensions. The algorithm optimizes a clear
continuous objective, using standard numerical methods that
scale to massive datasets. The number of clusters need not be
known in advance.
The operation of the algorithm can be understood by contrast-
ing it with other popular clustering techniques. In center-based
algorithms such as k-means (1, 24), a small set of putative cluster
centers is initialized from the data and then iteratively refined. In
affinity propagation (10), data points communicate over a graph
structure to elect a subset of the points as representatives. In the
presented algorithm, each data point has a dedicated representa-
tive, initially located at the data point. Over the course of the algo-
rithm, the representatives move and coalesce into easily separable
clusters. The progress of the algorithm is visualized in Fig. 1.
Our formulation is based on recent convex relaxations for clus-
tering (25, 26). However, our objective is deliberately not convex.
We use redescending robust estimators that allow even heavily
mixed clusters to be untangled by optimizing a single contin-
uous objective. Despite the nonconvexity of the objective, the
optimization can still be performed using standard linear least-
squares solvers, which are highly efficient and scalable. Since the
algorithm expresses clustering as optimization of a continuous
objective based on robust estimation, we call it robust continu-
ous clustering (RCC).
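A minimal 1-D sketch of this idea, per-point representatives pulled together under a redescending (Geman-McClure-style) weight, might look like the following. The graph, weights, and Jacobi-style update below are illustrative assumptions, not the paper's exact solver:

```python
def rcc_sketch(x, edges, lam=1.0, mu=1.0, iters=100):
    """Toy RCC-style clustering in 1-D: each point x[i] gets a
    representative u[i], initialized at x[i]. Each update pulls
    u[i] toward its data point and toward connected representatives,
    with a redescending Geman-McClure weight that downweights
    large gaps so mixed clusters can disentangle."""
    u = list(x)
    for _ in range(iters):
        num = list(x)               # data-fidelity term
        den = [1.0] * len(x)
        for i, j in edges:
            d2 = (u[i] - u[j]) ** 2
            w = lam * (mu / (mu + d2)) ** 2   # robust pairwise weight
            num[i] += w * u[j]; den[i] += w
            num[j] += w * u[i]; den[j] += w
        u = [n / d for n, d in zip(num, den)]
    return u

x = [0.0, 0.1, 5.0, 5.1]
u = rcc_sketch(x, edges=[(0, 1), (2, 3)])   # mutual-kNN edges (assumed)
```

Representatives within each pair coalesce while the two groups stay far apart, so clusters can be read off by thresholding representative distances, without fixing their number in advance.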
One of the characteristics of the presented formulation is that
clustering is reduced to optimization of a continuous objective.
This enables the integration of clustering in end-to-end fea-
ture learning pipelines. We demonstrate this by extending RCC
to perform joint clustering and dimensionality reduction. The
extended algorithm, called RCC-DR, learns an embedding of
the data into a low-dimensional space in which it is clustered.
Embedding and clustering are performed jointly, by an algorithm
that optimizes a clear global objective.
We evaluate RCC and RCC-DR on a large number of datasets
from a variety of domains. These include image datasets, docu-
ment datasets, a dataset of sensor readings from the Space Shut-
tle, and a dataset of protein expression levels in mice. Exper-
iments demonstrate that our method significantly outperforms
prior state-of-the-art techniques. RCC-DR is particularly robust
across datasets from different domains, outperforming the best
prior algorithm by a factor of 3 in average rank.
Formulation
We consider the problem of clustering a set of n data points.
The input is denoted by X = [x1, x2, . . . , xn ], where xi ∈ RD.
Our approach operates on a set of representatives U =
[u1, u2, . . . , un ], where ui ∈ RD. The representatives U are ini-
tialized at the corresponding data points X. The optimization
operates on the representation U, which coalesces to reveal the
cluster structure latent in the data. Thus, the number of clusters
Significance
Clustering is a fundamental experimental procedure in data
analysis. It is used in virtually all natural and social sciences
and has played a central role in biology, astronomy, psychol-
ogy, medicine, and chemistry. Despite the importance and
ubiquity of clustering, existing algorithms suffer from a vari-
ety of drawbacks and no universal solution has emerged. We
present a clustering algorithm that reliably achieves high accu-
racy across domains, handles high data dimensionality, and
scales to large datasets. The algorithm optimizes a smooth
global objective, using efficient numerical methods. Experi-
ments demonstrate that our method outperforms state-of-
the-art clustering algorithms by significant factors in multiple
domains.
Author contributions: S.A.S. and V.K. designed research, performed research, analyzed
data, and wrote the paper.
The authors declare no conflict of interest.
This article is a PNAS Direct Submission.
Freely available online through the PNAS open access option.
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.
1073/pnas.1700770114/-/DCSupplemental.
9814–9819 | PNAS | September 12, 2017 | vol. 114 | no. 37
www.pnas.org/cgi/doi/10.1073/pnas.1700770114
('49485254', 'Sohil Atul Shah', 'sohil atul shah')
('1770944', 'Vladlen Koltun', 'vladlen koltun')
1To whom correspondence should be addressed. Email: sohilas@umd.edu.
e0765de5cabe7e287582532456d7f4815acd74c1
e065a2cb4534492ccf46d0afc81b9ad8b420c5ecSFace: An Efficient Network for Face Detection
in Large Scale Variations
College of Software, Beihang University
Megvii Inc. (Face++)†
('38504661', 'Jianfeng Wang', 'jianfeng wang')
('48009795', 'Ye Yuan', 'ye yuan')
('2789329', 'Boxun Li', 'boxun li')
('2352391', 'Gang Yu', 'gang yu')
('2017810', 'Sun Jian', 'sun jian')
{wjfwzzc}@buaa.edu.cn, {yuanye,liboxun,yugang,sunjian}@megvii.com
e00241f00fb31c660df6c6f129ca38370e6eadb3What have we learned from deep representations for action recognition?
TU Graz
TU Graz
York University, Toronto
University of Oxford
('2322150', 'Christoph Feichtenhofer', 'christoph feichtenhofer')
('1718587', 'Axel Pinz', 'axel pinz')
('1709096', 'Richard P. Wildes', 'richard p. wildes')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
feichtenhofer@tugraz.at
axel.pinz@tugraz.at
wildes@cse.yorku.ca
az@robots.ox.ac.uk
e013c650c7c6b480a1b692bedb663947cd9d260f860
Robust Image Analysis With Sparse Representation
on Quantized Visual Features
('8180253', 'Bing-Kun Bao', 'bing-kun bao')
('36601906', 'Guangyu Zhu', 'guangyu zhu')
('38203359', 'Jialie Shen', 'jialie shen')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
e0244a8356b57a5721c101ead351924bcfb2eef4Journal of Experimental Psychology: General
2017, Vol. 146, No. 10, 1379 –1401
0096-3445/17/$12.00
© 2017 American Psychological Association
http://dx.doi.org/10.1037/xge0000292
Power as an Emotional Liability: Implications for Perceived Authenticity
and Trust After a Transgression
University of Southern California
Webster University
University of Haifa
Alexandra Mislin
American University
University of Washington, Seattle
Gerben A. van Kleef
University of Amsterdam
People may express a variety of emotions after committing a transgression. Through 6 empirical studies and
a meta-analysis, we investigate how the perceived authenticity of such emotional displays and resulting levels
of trust are shaped by the transgressor’s power. Past findings suggest that individuals with power tend to be
more authentic because they have more freedom to act on the basis of their own personal inclinations. Yet,
our findings reveal that (a) a transgressor’s display of emotion is perceived to be less authentic when that
party’s power is high rather than low; (b) this perception of emotional authenticity, in turn, directly influences
(and mediates) the level of trust in that party; and (c) perceivers ultimately exert less effort when asked to make
a case for leniency toward high-power rather than low-power transgressors. This tendency to discount the emotional
authenticity of the powerful was found to arise from power increasing the transgressor’s perceived level of
emotional control and strategic motivation, rather than a host of alternative mechanisms. These results were
also found across different types of emotions (sadness, anger, fear, happiness, and neutral), expressive
modalities, operationalizations of the transgression, and participant populations. Altogether, our findings
demonstrate that besides the wealth of benefits power can afford, it also comes with a notable downside. The
findings, furthermore, extend past research on perceived emotional authenticity, which has focused on how
and when specific emotions are expressed, by revealing how this perception can depend on considerations that
have nothing to do with the expression itself.
Keywords: trust, emotion, power, authenticity, perception
Supplemental materials: http://dx.doi.org/10.1037/xge0000292.supp
Research suggests that those who attain positions of power tend
to be more emotionally skilled (Côté, Lopes, Salovey, & Miners,
2010; George, 2000). Indeed, it is the very possession of such
skills that has been suggested to help these parties attain and
succeed in leadership positions (e.g., Lewis, 2000; Rubin, Munz,
Peter H. Kim, Marshall School of Business, University of Southern California; Ece Tuncel, Webster University; Arik Cheshin, University of Haifa; Alexandra Mislin, Department of Management, Kogod School of Business, American University; Ryan Fehr, Michael G. Foster School of Business, University of Washington, Seattle; Gerben A. van Kleef, University of Amsterdam
This research was supported in part by a faculty research grant from
Webster University
Correspondence concerning this article should be addressed to Peter H.
Kim, Marshall School of Business, Department of Management and Or-
ganization, University of Southern California, Hoffman Hall 515, Los
1379
& Bommer, 2005). Yet, this tendency for the powerful to be
emotionally skilled may not necessarily prove beneficial, to the
extent that those evaluating such powerful individuals subscribe to
this notion as well, and may even undermine the effectiveness of
high-power parties’ emotional expressions when they might need
them most. In particular, through six empirical studies and a
meta-analysis, we investigate the possibility that perceivers’ gen-
eral beliefs about the powerful as emotionally skilled would lead
perceivers to discount the authenticity of the emotions the power-
ful express, and that this would ultimately impair the effectiveness
of those emotional displays for addressing a transgression.
Theoretical Background
Power, which has been defined as an individual’s capacity to
modify others’ states by providing or withholding resources or
administering punishments (Keltner, Gruenfeld, & Anderson,
2003), has been widely recognized to offer numerous benefits to
those who possess it, including the ability to act based on one's
own inclinations, perceive greater choice, and obtain greater ben-
efits from both work and nonwork interactions (e.g., Galinsky,
('34770901', 'Peter H. Kim', 'peter h. kim')
('47847686', 'Ece Tuncel', 'ece tuncel')
('3198839', 'Arik Cheshin', 'arik cheshin')
('50222018', 'Ryan Fehr', 'ryan fehr')
('34770901', 'Peter H. Kim', 'peter h. kim')
('47847686', 'Ece Tuncel', 'ece tuncel')
('50222018', 'Ryan Fehr', 'ryan fehr')
('3198839', 'Arik Cheshin', 'arik cheshin')
Angeles, CA 90089-1421. E-mail: kimpeter@usc.edu
e0dc6f1b740479098c1d397a7bc0962991b5e294A Survey of Fast Face Detection Techniques
Yuemin Li 1, Jie Chen 2, Wen Gao 1,2,3, Baocai Yin 1
1 (Multimedia and Intelligent Software Technology Laboratory, Beijing University of Technology, Beijing 100022)
2 (School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001)
3 (Joint Laboratory of Advanced Human-Computer Communication Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100080)
Abstract: Face detection is a research problem of great significance, with applications in face recognition, next-generation human-computer interfaces, secure access, visual surveillance, and content-based retrieval, and it has received broad attention from researchers in recent years. For face detection to reach practical use, accuracy and speed are the two key problems that urgently need to be solved. After more than a decade of development since the 1990s, detection accuracy has improved greatly, yet speed has remained the stumbling block on the road to practical deployment, despite researchers' arduous efforts. Only with the publication of Viola's AdaBoost-based face detector in the early 2000s did detection speed improve substantially. That work also spurred the further flourishing of face detection research, and a series of excellent papers have since emerged. On this basis, after systematically organizing and analyzing the relevant literature on face detection, this paper roughly divides face detection algorithms, from the perspective of speed, into four stages: the initial period, the development period, the turning point, and the synthesis period; summarizes and discusses them on this basis; and finally suggests some possible directions for future face detection research.
Keywords: face detection, speed, face recognition, pattern recognition, Boosting
CLC number: TP391.4
Face Detection: a Survey
1(Multimedia and Intelligent Software Technology Laboratory
Beijing University of Technology, Beijing 100022, China
School of Computer Science and Technology, Harbin Institute of
Technology, Harbin, 150001, China)
Institute of Computing Technology, Chinese Academy of Sciences
Beijing, 100080, China)
('7771395', 'Yuemin Li', 'yuemin li')
('1714354', 'Baocai Yin', 'baocai yin')
ymli@jdl.ac.cn, chenjie@jdl.ac.cn,
wgao@jdl.ac.cn, ybc@bjut.edu.cn
468c8f09d2ad8b558b65d11ec5ad49208c4da2f2MSR-CNN: Applying Motion Salient Region Based
Descriptors for Action Recognition
School of Computing, Informatics,
Decision System Engineering
Arizona State University
Tempe, USA
Intel Corp.
Tempe, USA
School of Computing, Informatics,
Decision System Engineering
Arizona State University
Tempe, USA
('3334478', 'Zhigang Tu', 'zhigang tu')
('4244188', 'Jun Cao', 'jun cao')
('2180892', 'Yikang Li', 'yikang li')
('2913552', 'Baoxin Li', 'baoxin li')
Email: Zhigang.Tu@asu.edu
Email: jun.cao@intel.com
Email: YikangLi,Baoxin.Li@asu.edu
46a4551a6d53a3cd10474ef3945f546f45ef76ee2014 IEEE Intelligent Vehicles Symposium (IV)
June 8-11, 2014. Dearborn, Michigan, USA
978-1-4799-3637-3/14/$31.00 ©2014 IEEE
344
4686bdcee01520ed6a769943f112b2471e436208Utsumi et al. IPSJ Transactions on Computer Vision and
Applications (2017) 9:11
DOI 10.1186/s41074-017-0024-5
IPSJ Transactions on Computer
Vision and Applications
EXPRESS PAPER
Open Access
Fast search based on generalized
similarity measure
('40142989', 'Yuzuko Utsumi', 'yuzuko utsumi')
('4629425', 'Tomoya Mizuno', 'tomoya mizuno')
('35613969', 'Masakazu Iwamura', 'masakazu iwamura')
('3277321', 'Koichi Kise', 'koichi kise')
4688787d064e59023a304f7c9af950d192ddd33eInvestigating the Discriminative Power of Keystroke
Sound
and Dimitris Metaxas, Member, IEEE
('38993748', 'Joseph Roth', 'joseph roth')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
('1698707', 'Arun Ross', 'arun ross')
466184b10fb7ce9857e6b5bd6b4e5003e09a0b16Extended Grassmann Kernels for
Subspace-Based Learning
GRASP Laboratory
University of Pennsylvania
Philadelphia, PA 19104
GRASP Laboratory
University of Pennsylvania
Philadelphia, PA 19104
('2720935', 'Jihun Ham', 'jihun ham')
('1732066', 'Daniel D. Lee', 'daniel d. lee')
jhham@seas.upenn.edu
ddlee@seas.upenn.edu
46e86cdb674440f61b6658ef3e84fea95ea51fb4
46f2611dc4a9302e0ac00a79456fa162461a8c80for Action Classification
ESAT-PSI, KU Leuven, 2CV:HCI, KIT, Karlsruhe, 3University of Bonn, 4Sensifai
('3310120', 'Ali Diba', 'ali diba')
('3169187', 'Mohsen Fayyaz', 'mohsen fayyaz')
('50633941', 'Vivek Sharma', 'vivek sharma')
('2946643', 'Juergen Gall', 'juergen gall')
('1681236', 'Luc Van Gool', 'luc van gool')
1{firstname.lastname}@kuleuven.be, 2{firstname.lastname}@kit.edu,
3{lastname}@iai.uni-bonn.de, 4{firstname.lastname}@sensifai.com
46b7ee97d7dfbd61cc3745e8dfdd81a15ab5c1d43D FACIAL GEOMETRIC FEATURES FOR CONSTRAINED LOCAL MODEL
Imperial College London, United Kingdom
University of Twente, EEMCS, Netherlands
('1694605', 'Maja Pantic', 'maja pantic')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('3183108', 'Akshay Asthana', 'akshay asthana')
('1902288', 'Shiyang Cheng', 'shiyang cheng')
{shiyang.cheng11, s.zafeiriou, a.asthana, m.pantic}@imperial.ac.uk
46ae4d593d89b72e1a479a91806c39095cd96615A CONDITIONAL RANDOM FIELD APPROACH FOR FACE IDENTIFICATION IN
BROADCAST NEWS USING OVERLAID TEXT
(1,2)Gay Paul, 1Khoury Elie, 2Meignier Sylvain, 1Odobez Jean-Marc, 2Deleglise Paul
Idiap Research Institute, Martigny, Switzerland, 2LIUM, University of Maine, Le Mans, France
467b602a67cfd7c347fe7ce74c02b38c4bb1f332Large Margin Local Metric Learning
University College London, London, UK
2 Safran Morpho, Issy-les-Moulineaux, France
University of Exeter, Exeter, UK
('38954213', 'Yiming Ying', 'yiming ying')
('1704699', 'Massimiliano Pontil', 'massimiliano pontil')
m.pontil@cs.ucl.ac.uk
{julien.bohne,stephane.gentric}@morpho.com
y.ying@exeter.ac.uk
466f80b066215e85da63e6f30e276f1a9d7c843b2017 IEEE 12th International Conference on Automatic Face & Gesture Recognition
Joint Head Pose Estimation and Face Alignment Framework
Using Global and Local CNN Features
Computational Biomedicine Lab
University of Houston, Houston, TX, USA
('5084124', 'Xiang Xu', 'xiang xu')
('1706204', 'Ioannis A. Kakadiaris', 'ioannis a. kakadiaris')
{xxu18, ikakadia}@central.uh.edu
464de30d3310123644ab81a1f0adc51598586fd2
466a5add15bb5f91e0cfd29a55f5fb159a7980e5Video Repeat Recognition and Mining by Visual
Features
('4052001', 'Xianfeng Yang', 'xianfeng yang')
46f3b113838e4680caa5fc8bda6e9ae0d35a038cCancers 2010, 2, 262-273; doi:10.3390/cancers2020262
OPEN ACCESS
cancers
ISSN 2072-6694
www.mdpi.com/journal/cancers
Review
Automated Dermoscopy Image Analysis of Pigmented Skin
Lesions
Section of Pathology, Second University of Naples, Via L. Armanni
5, 80138 Naples, Italy
3 ACS, Advanced Computer Systems, Via della Bufalotta 378, 00139 Rome, Italy
Fax: +390815569693.
Received: 23 February 2010; in revised form: 15 March 2010 / Accepted: 25 March 2010 /
Published: 26 March 2010
('32152948', 'Alfonso Baldi', 'alfonso baldi')
('1705562', 'Marco Quartulli', 'marco quartulli')
('3899127', 'Raffaele Murace', 'raffaele murace')
('5703272', 'Emanuele Dragonetti', 'emanuele dragonetti')
('38220535', 'Mario Manganaro', 'mario manganaro')
('2237329', 'Oscar Guerra', 'oscar guerra')
('4108084', 'Stefano Bizzi', 'stefano bizzi')
2 Futura-onlus, Via Pordenone 2, 00182 Rome, Italy; E-Mail: raffaele@murace.it
* Author to whom correspondence should be addressed; E-Mail: alfonsobaldi@tiscali.it;
465d5bb11912005f0a4f0569c6524981df18a7deIMOTION – Searching for Video Sequences
using Multi-Shot Sketch Queries
Metin Sezgin3, Ozan Can Altıok3, and Yusuf Sahillioğlu3
1 Databases and Information Systems Research Group,
University of Basel, Switzerland
Research Center in Information Technologies, Université de Mons, Belgium
Intelligent User Interfaces Lab, Koç University, Turkey
('27401642', 'Luca Rossetto', 'luca rossetto')
('2155883', 'Ivan Giangreco', 'ivan giangreco')
('34588610', 'Silvan Heller', 'silvan heller')
('1806643', 'Heiko Schuldt', 'heiko schuldt')
('3272087', 'Omar Seddati', 'omar seddati')
{luca.rossetto|ivan.giangreco|c.tanase|silvan.heller|heiko.schuldt}@unibas.ch
{stephane.dupont|omar.seddati}@umons.ac.be
{mtsezgin|oaltiok15|ysahillioglu}@ku.edu.tr
46c87fded035c97f35bb991fdec45634d15f9df2Spatial-Aware Object Embeddings for Zero-Shot Localization
and Classification of Actions
University of Amsterdam
('2606260', 'Pascal Mettes', 'pascal mettes')
46e72046a9bb2d4982d60bcf5c63dbc622717f0fLearning Discriminative Features with Class Encoder
Center for Biometrics and Security Research & National Laboratory of Pattern Recognition
Institute of Automation, Chinese Academy of Sciences
University of Chinese Academy of Science
('1704812', 'Hailin Shi', 'hailin shi')
('8362374', 'Xiangyu Zhu', 'xiangyu zhu')
('1718623', 'Zhen Lei', 'zhen lei')
('40397682', 'Shengcai Liao', 'shengcai liao')
('34679741', 'Stan Z. Li', 'stan z. li')
{hailin.shi, xiangyu.zhu, zlei, scliao, szli}@nlpr.ia.ac.cn
46f32991ebb6235509a6d297928947a8c483f29eIn Proc. IEEE Computer Vision and Pattern Recognition (CVPR), Madison (WI), June 2003
Recognizing Expression Variant Faces
from a Single Sample Image per Class
Aleix M. Martínez
Department of Electrical Engineering
The Ohio State University, OH
aleix@ee.eng.ohio-state.edu
46538b0d841654a0934e4c75ccd659f6c5309b72Signal & Image Processing : An International Journal (SIPIJ) Vol.5, No.1, February 2014
A NOVEL APPROACH TO GENERATE FACE
BIOMETRIC TEMPLATE USING BINARY
DISCRIMINATING ANALYSIS
1P.G. Student, Department of Computer Engineering, MCERC, Nashik (M.S.), India.
2Associate Professor, Department of Computer Engineering,
MCERC, Nashik (M.S.), India
('40075681', 'Shraddha S. Shinde', 'shraddha s. shinde')
('2590072', 'Anagha P. Khedkar', 'anagha p. khedkar')
4641986af5fc8836b2c883ea1a65278d58fe4577Scene Graph Generation by Iterative Message Passing
Stanford University
Stanford University
('2068265', 'Danfei Xu', 'danfei xu'){danfei, yukez, chrischoy, feifeili}@cs.stanford.edu
464b3f0824fc1c3a9eaf721ce2db1b7dfe7cb05aDeep Adaptive Temporal Pooling for Activity Recognition
Singapore University of Technology and Design
Singapore University of Technology and Design
Singapore, Singapore
Singapore, Singapore
Institute for Infocomm Research
Singapore, Singapore
Keele University
Keele, Staffordshire, United Kingdom
('1729827', 'Ngai-Man Cheung', 'ngai-man cheung')
('2527741', 'Sibo Song', 'sibo song')
('1802086', 'Vijay Chandrasekhar', 'vijay chandrasekhar')
('1709001', 'Bappaditya Mandal', 'bappaditya mandal')
ngaiman_cheung@sutd.edu.sg
sibo_song@mymail.sutd.edu.sg
vijay@i2r.a-star.edu.sg
b.mandal@keele.ac.uk
469ee1b00f7bbfe17c698ccded6f48be398f2a44MIT International Journal of Computer Science and Information Technology, Vol. 4, No. 2, August 2014, pp. 82-88
ISSN 2230-7621©MIT Publications
82
SURVEY: Techniques for
Aging Problems in Face Recognition
Aashmi
Scholar, Computer Science Engg. Dept.
Moradabad Institute of Technology
Scholar, Computer Science Engg. Dept.
Moradabad Institute of Technology
Scholar, Computer Science Engg. Dept.
Moradabad Institute of Technology
Moradabad, U.P., INDIA
Moradabad, U.P., INDIA
Moradabad, U.P., INDIA
('40062749', 'Sakshi Sahni', 'sakshi sahni')
('9186211', 'Sakshi Saxena', 'sakshi saxena')
E-mail: aashmichaudhary@gmail.com
E-mail: sakshisahni92@gmail.com
E-mail: saxena.sakshi2511992@gmail.com
46196735a201185db3a6d8f6e473baf05ba7b68f
4682fee7dc045aea7177d7f3bfe344aabf153bd5Tabula Rasa: Model Transfer for
Object Category Detection
Department of Engineering Science
Oxford
(Presented by Elad Liebman)
('3152281', 'Yusuf Aytar', 'yusuf aytar')
4657d87aebd652a5920ed255dca993353575f441Image Normalization for
Illumination Compensation in Facial Images
by
Department of Electrical & Computer Engineering
& Center for Intelligent Machines
McGill University, Montreal, Canada
August 2004
('3631473', 'Martin D. Levine', 'martin d. levine')
('35712223', 'Jisnu Bhattacharyya', 'jisnu bhattacharyya')
4622b82a8aff4ac1e87b01d2708a333380b5913bMulti-label CNN Based Pedestrian Attribute Learning for Soft Biometrics
Center for Biometrics and Security Research,
Institute of Automation, Chinese Academy of Sciences
95 Zhongguancun Donglu, Beijing 100190, China
('1739258', 'Jianqing Zhu', 'jianqing zhu')
('40397682', 'Shengcai Liao', 'shengcai liao')
('1716143', 'Dong Yi', 'dong yi')
('1718623', 'Zhen Lei', 'zhen lei')
('34679741', 'Stan Z. Li', 'stan z. li')
jianqingzhu@foxmail.com, {scliao, dyi, zlei, szli}@nlpr.ia.ac.cn
46e866f58419ff4259c65e8256c1d4f14927b2c6On the Generalization Power of Face and Gait Gender
Recognition Methods
University of Warwick
Gibbet Hill Road, Coventry, CV4 7AL, UK
('1735787', 'Yu Guan', 'yu guan')
('1799504', 'Chang-Tsun Li', 'chang-tsun li')
{g.yu, x.wei, c-t.li}@warwick.ac.uk
46072f872eee3413f9d05482be6446f6b96b6c09Trace Quotient Problems Revisited
1 Department of Information Engineering,
The Chinese University of Hong Kong, Hong Kong
2 Microsoft Research Asia, Beijing, China
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
4698a599425c3a6bae1c698456029519f8f2befeTransferring Rich Deep Features
for Facial Beauty Prediction
College of Informatics
College of Informatics
Department of Computer Science and Engineering
Huazhong Agricultural University
Huazhong Agricultural University
Wuhan, China
Wuhan, China
University of North Texas
Denton, USA
('40557104', 'Lu Xu', 'lu xu')
('2697879', 'Jinhai Xiang', 'jinhai xiang')
('1982703', 'Xiaohui Yuan', 'xiaohui yuan')
Email: xulu coi@webmail.hzau.edu.cn
Email: jimmy xiang@mail.hzau.edu.cn
Email: Xiaohui.Yuan@unt.edu
2c424f21607ff6c92e640bfe3da9ff105c08fac4Learning Structured Output Representation
using Deep Conditional Generative Models
NEC Laboratories America, Inc
University of Michigan, Ann Arbor
('1729571', 'Kihyuk Sohn', 'kihyuk sohn')
('3084614', 'Xinchen Yan', 'xinchen yan')
('1697141', 'Honglak Lee', 'honglak lee')
ksohn@nec-labs.com, {xcyan,honglak}@umich.edu
2c258eec8e4da9e65018f116b237f7e2e0b2ad17Deep Quantization: Encoding Convolutional Activations
with Deep Generative Model ∗
University of Science and Technology of China, Hefei, China
Microsoft Research, Beijing, China
('3430743', 'Zhaofan Qiu', 'zhaofan qiu')
('2053452', 'Ting Yao', 'ting yao')
('1724211', 'Tao Mei', 'tao mei')
zhaofanqiu@gmail.com, {tiyao, tmei}@microsoft.com
2cbb4a2f8fd2ddac86f8804fd7ffacd830a66b58
2c8743089d9c7df04883405a31b5fbe494f175b4Washington State Convention Center
Seattle, Washington, May 26-30, 2015
978-1-4799-6922-7/15/$31.00 ©2015 IEEE
3039
2c61a9e26557dd0fe824909adeadf22a6a0d86b0
2c93c8da5dfe5c50119949881f90ac5a0a4f39feAdvanced local motion patterns for macro and micro facial
expression recognition
B. Allaerta,∗, IM. Bilascoa, C. Djerabaa
aUniv. Lille, CNRS, Centrale Lille, UMR 9189 - CRIStAL -
Centre de Recherche en Informatique Signal et Automatique de Lille, F-59000 Lille, France
2c34bf897bad780e124d5539099405c28f3279acRobust Face Recognition via Block Sparse Bayesian Learning
School of Financial Information Engineering, Southwestern University of Finance and Economics, Chengdu
China
Institute of Chinese Payment System, Southwestern University of Finance and Economics, Chengdu 610074, China
University of California at San Diego, La Jolla, CA
USA
Samsung RandD Institute America - Dallas, 1301 East Lookout Drive, Richardson, TX 75082, USA
('2775350', 'Taiyong Li', 'taiyong li')
('1791667', 'Zhilin Zhang', 'zhilin zhang')
2c203050a6cca0a0bff80e574bda16a8c46fe9c2Discriminative Deep Hashing for Scalable Face Image Retrieval
School of Computer Science and Engineering, Nanjing University of Science and Technology
('1699053', 'Jie Lin', 'jie lin')
('3233021', 'Zechao Li', 'zechao li')
('8053308', 'Jinhui Tang', 'jinhui tang')
jinhuitang@njust.edu.cn
2cc4ae2e864321cdab13c90144d4810464b2427523
Face Recognition Using Optimized 3D
Information from Stereo Images
1Advanced Technology R&D Center, Samsung Thales Co., Ltd., 2Graduate School of
Advanced Imaging Science, Multimedia, and Film Chung-Ang University, Seoul
Korea
1. Introduction
Human biometric characteristics are unique and cannot be easily duplicated [1]. Such
information includes the face, hands, torso, fingerprints, etc. Potential applications,
economic efficiency, and user convenience make face detection and recognition an
important technology compared with other biometric features [2], [3]. It can also
use a low-cost personal computer (PC) camera instead of expensive equipment, and requires
only a minimal user interface. Recently, extensive research using 3D face data has been carried out
in order to overcome the limits of 2D face detection and feature extraction [2]; approaches
include PCA [3], neural networks (NN) [4], support vector machines (SVM) [5], hidden
Markov models (HMM) [6], and linear discriminant analysis (LDA) [7]. Among them, PCA
and LDA with self-learning are the most widely used [3]. Frontal face
image databases provide fairly high recognition rates. However, if view data covering facial
rotation, illumination, and pose changes is not acquired, the recognition rate
drops markedly because the entire face must be modeled. This performance degradation
can be addressed by a new recognition method based on optimized 3D
information from the stereo face images.
This chapter presents a new face detection and recognition method using optimized 3D
information from stereo images. The proposed method can significantly improve the
recognition rate and, using the PCA algorithm, is robust to variations in the object's size,
distance, motion, and depth. By using the optimized 3D information, we estimate the position of the eyes
in the stereo face images; as a result, we can accurately detect the facial size, depth, and
rotation. For efficient detection of the face area, we adopt the YCbCr color
format. The biggest object is chosen as the face candidate among the candidate areas
which are extracted by morphological opening of the Cb and Cr components [8]. To
detect facial features such as the eyes, nose, and mouth, a pre-processing step is
performed that utilizes brightness information in the estimated face area. For fast
processing, we train on the partial face region segmented by estimating the eye positions,
instead of the entire face region. Figure 1 shows the block diagram of the proposed algorithm.
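The colour-based face-candidate extraction described here can be sketched in a few lines of NumPy. This is a minimal illustration: the chrominance thresholds are common skin-colour bounds from the literature, not the chapter's own values, and the final largest-region selection is omitted.

```python
import numpy as np

def rgb_to_ycbcr(img):
    # ITU-R BT.601 conversion; img is float RGB in [0, 255].
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_mask(img, cb_range=(77, 127), cr_range=(133, 173)):
    # Threshold the chrominance channels; the ranges are assumed
    # generic skin bounds, not taken from the chapter.
    _, cb, cr = rgb_to_ycbcr(img.astype(np.float64))
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

def binary_open(mask):
    # One 3x3 morphological opening (erosion then dilation) in pure
    # NumPy, suppressing small noise blobs before the largest
    # remaining region is taken as the face candidate.
    def neighbourhood(m):
        p = np.pad(m, 1)
        h, w = m.shape
        return np.stack([p[i:i + h, j:j + w]
                         for i in range(3) for j in range(3)])
    eroded = neighbourhood(mask).all(axis=0)
    return neighbourhood(eroded).any(axis=0)
```

A face detector would then label the connected components of the opened mask and keep the largest one as the face candidate.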
This chapter is organized as follows: Sections 2 and 3 describe the proposed stereo vision system
and pose estimation for face recognition, respectively. Section 4 presents experimental results, and
Section 5 concludes the chapter.
Source: Face Recognition, Book edited by: Kresimir Delac and Mislav Grgic, ISBN 978-3-902613-03-5, pp.558, I-Tech, Vienna, Austria, June 2007
('1727735', 'Changhan Park', 'changhan park')
('1684329', 'Joonki Paik', 'joonki paik')
2c3430e0cbe6c8d7be3316a88a5c13a50e90021dMulti-feature Spectral Clustering with Minimax Optimization
School of Electrical and Electronic Engineering
Nanyang Technological University, Singapore
('19172541', 'Hongxing Wang', 'hongxing wang')
('1764228', 'Chaoqun Weng', 'chaoqun weng')
('34316743', 'Junsong Yuan', 'junsong yuan')
{hwang8, weng0018}@e.ntu.edu.sg, jsyuan@ntu.edu.sg
2cac8ab4088e2bdd32dcb276b86459427355085cA Face-to-Face Neural Conversation Model
Hang Chu1
University of Toronto 2Vector Institute
('46598920', 'Daiqing Li', 'daiqing li'){chuhang1122, daiqing, fidler}@cs.toronto.edu
2cde051e04569496fb525d7f1b1e5ce6364c8b21Sparse 3D convolutional neural networks
University of Warwick
August 26, 2015
('39294240', 'Ben Graham', 'ben graham')b.graham@warwick.ac.uk
2c2786ea6386f2d611fc9dbf209362699b104f83('31914125', 'Mohammad Shahidul Islam', 'mohammad shahidul islam')
2c92839418a64728438c351a42f6dc5ad0c6e686Pose-Aware Face Recognition in the Wild
Prem Natarajan2
USC Institute for Robotics and Intelligent Systems (IRIS), Los Angeles, CA
Gérard Medioni1
USC Information Sciences Institute (ISI), Marina Del Rey, CA
('11269472', 'Iacopo Masi', 'iacopo masi')
('38696444', 'Stephen Rawls', 'stephen rawls')
{srawls,pnataraj}@isi.edu
{iacopo.masi,medioni}@usc.edu
2c848cc514293414d916c0e5931baf1e8583eabcAn automatic facial expression recognition system
evaluated by different classifiers
∗Programa de Pós-Graduação em Mecatrônica
Universidade Federal da Bahia,
†Department of Electrical Engineering - EESC/USP
('3797834', 'Caroline Silva', 'caroline silva')
('2105008', 'Raissa Tavares Vieira', 'raissa tavares vieira')
Email: lolyne.pacheco@gmail.com
Email: andrewssobral@gmail.com
Email: raissa@ieee.org,
2c883977e4292806739041cf8409b2f6df171aeeAalborg Universitet
Are Haar-like Rectangular Features for Biometric Recognition Reducible?
Nasrollahi, Kamal; Moeslund, Thomas B.
Published in:
Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications
DOI (link to publication from Publisher):
10.1007/978-3-642-41827-3_42
Publication date:
2013
Document Version
Early version, also known as pre-print
Link to publication from Aalborg University
Citation for published version (APA):
Nasrollahi, K., & Moeslund, T. B. (2013). Are Haar-like Rectangular Features for Biometric Recognition
Reducible? In J. Ruiz-Shulcloper, & G. Sanniti di Baja (Eds.), Progress in Pattern Recognition, Image Analysis,
Computer Vision, and Applications (Vol. 8259, pp. 334-341). Springer Berlin Heidelberg: Springer Publishing
Company. Lecture Notes in Computer Science, DOI: 10.1007/978-3-642-41827-3_42
General rights
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners
and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.
? Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
? You may not further distribute the material or use it for any profit-making activity or commercial gain
? You may freely distribute the URL identifying the publication in the public portal ?
Take down policy
If you believe that this document breaches copyright please contact us at vbn@aub.aau.dk providing details, and we will remove access to the work immediately and investigate your claim.
Downloaded from vbn.aau.dk on: October 28, 2017
2cdd9e445e7259117b995516025fcfc02fa7eebbTitle
Temporal Exemplar-based Bayesian Networks for facial
expression recognition
Author(s)
Shang, L; Chan, KP
Citation
Proceedings - 7Th International Conference On Machine
Learning And Applications, Icmla 2008, 2008, p. 16-22
Issued Date
2008
URL
http://hdl.handle.net/10722/61208
Rights
This work is licensed under a Creative Commons Attribution-
NonCommercial-NoDerivatives 4.0 International License.;
International Conference on Machine Learning and Applications
Proceedings. Copyright © IEEE.; ©2008 IEEE. Personal use of
this material is permitted. However, permission to
reprint/republish this material for advertising or promotional
purposes or for creating new collective works for resale or
redistribution to servers or lists, or to reuse any copyrighted
component of this work in other works must be obtained from
the IEEE.
2c1ffb0feea5f707c890347d2c2882be0494a67aLearning to learn high capacity generative models from few examples
The Variational Homoencoder:
Tommi Jaakkola1
Massachusetts Institute of Technology
2MIT-IBM Watson AI Lab
('51152627', 'Luke B. Hewitt', 'luke b. hewitt')
('51150953', 'Maxwell I. Nye', 'maxwell i. nye')
('3071104', 'Andreea Gane', 'andreea gane')
('1763295', 'Joshua B. Tenenbaum', 'joshua b. tenenbaum')
2cdc40f20b70ca44d9fd8e7716080ee05ca7924aReal-time Convolutional Neural Networks for
Emotion and Gender Classification
Hochschule Bonn-Rhein-Sieg
Sankt Augustin Germany
Paul G. Plöger
Hochschule Bonn-Rhein-Sieg
Sankt Augustin Germany
Matias Valdenegro
Heriot-Watt University
Edinburgh, UK
('27629437', 'Octavio Arriaga', 'octavio arriaga')Email: octavio.arriaga@smail.inf.h-brs.de
Email: paul.ploeger@h-brs.de
Email: m.valdenegro@hw.ac.uk
2cac70f9c8140a12b6a55cef834a3d7504200b62Reconstructing High Quality Face-Surfaces using Model Based Stereo
University of Basel, Switzerland
Contribution
We present a method to fit a detailed 3D morphable
model to multiple images. Our formulation allows
the fitting of the model without determining the
lighting conditions and albedo of the face, mak-
ing the system robust against difficult lighting sit-
uations and unmodelled albedo variations such as
skin colour, moles, freckles and cast shadows.
The cost function employs
Microsoft Research, Cambridge‡
Ambient Lighting
Evaluation: Gold Standard
Ambient Only Dataset (20 Subjects)
Stereo: Landmarks + Silhouette + Colour
Stereo: Landmarks + Silhouette
Stereo: Landmarks
Monocular
The model shape prior
A small number of landmarks for initialization
A monocular silhouette distance cost
A stereo colour cost
The optimisation consists of multiple runs of a non-
linear minimizer. During each run the visibility of
all sample points is assumed to stay constant. After
some iterations the minimizer is stopped and visi-
bility is reevaluated.
Model
The linear morphable face model was created by
registering 200 face scans and performing a PCA on
the data matrix to fit a Gaussian probability to the
data and reduce the dimensionality of the model.
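Building such a linear model amounts to a PCA of the stacked scan data. A minimal sketch, assuming one registered scan per matrix row (the layout and component count are illustrative assumptions):

```python
import numpy as np

def fit_morphable_model(scans, n_components):
    # scans: (n_scans, n_dims) matrix of registered face scans.
    # PCA of the mean-centred data yields a Gaussian model:
    # face ~ mean + basis.T @ (stdev * coefficients).
    mean = scans.mean(axis=0)
    Xc = scans - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components]                      # principal directions
    stdev = s[:n_components] / np.sqrt(len(scans) - 1)
    return mean, basis, stdev
```

Keeping only the leading components both regularizes the fit and reduces the number of shape coefficients the optimizer must estimate.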
Input Images
Multiview
Landmarks
Multiview
L.+Silhouette
Multiview
L.+S.+Colour
Ground Truth
Monocular [1]
Each cue increases the reconstruction accuracy, lead-
ing to significantly better result than possible with
the state of the art monocular system [1]. Recon-
structions of the face surface are compared to ground
truth data acquired with a structured light system.
The point wise distance from the reconstruction to
the ground truth is shown in the inset head render-
ings. Here green is a perfect match, and red denotes
a distance of 3mm or more.
The best of the three monocular results is shown.
Silhouette Cost
Directed Lighting
The silhouette cost measures
the distance of the silhouette
to image edges. An edge cost
surface is created from the im-
age, by combining the distance
transforms of edge detections
with different thresholds. The
cost is integrated over the pro-
jection of 3D sample points at
the silhouette of the hypotheses.
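A toy version of such an edge cost surface can be built with a brute-force distance transform. How the per-threshold transforms are combined is not specified here, so averaging is an assumption of this sketch:

```python
import numpy as np

def edge_cost_surface(grad_mag, thresholds):
    # Average the distance transforms of edge maps obtained at
    # several gradient thresholds, so the cost is low near strong
    # image edges and grows with distance from them.
    h, w = grad_mag.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cost = np.zeros((h, w))
    for t in thresholds:
        ey, ex = np.nonzero(grad_mag > t)          # edge pixels
        if len(ey) == 0:
            cost += np.hypot(h, w)                 # no edges: max penalty
            continue
        # Brute-force Euclidean distance transform (fine for a sketch).
        d = np.sqrt((ys[..., None] - ey) ** 2 + (xs[..., None] - ex) ** 2)
        cost += d.min(-1)
    return cost / len(thresholds)
```

Evaluating the fitted model's silhouette against this surface then rewards hypotheses whose projected silhouette lies on image edges.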
Edge Cost Surface
Colour Reprojection Cost
The colour
reprojection cost
measures the image colour dif-
ference between the projected
positions of sample points in
two images. The sample points
are spaced out regularly in the
projected images.
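In its simplest form the cost is a sum of squared colour differences at corresponding sample-point projections. This is an assumed sketch of that idea, not the poster's exact implementation; the (x, y) point layout is a convention chosen here:

```python
import numpy as np

def colour_reprojection_cost(img1, img2, pts1, pts2):
    # pts1/pts2: (N, 2) integer (x, y) pixel coordinates of the same
    # 3D sample points projected into each image. The cost compares
    # the image colours at corresponding projections.
    c1 = img1[pts1[:, 1], pts1[:, 0]].astype(np.float64)
    c2 = img2[pts2[:, 1], pts2[:, 0]].astype(np.float64)
    return ((c1 - c2) ** 2).sum()
```

Because the cost never needs a lighting or albedo model, it stays meaningful under the difficult illumination conditions the method targets.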
Directed Lighting
[Figures: input images with reconstructions — Multiview, Ground Truth, Monocular]
The new stereo algorithm is robust under directed lighting and yields significantly more accurate surface reconstructions than the monocular algorithm. Again the distance to the ground truth is shown for green = 0 mm and red = 3 mm in the insets. All cues were used. Future work will include a skin and lighting model, hopefully improving the speed and accuracy of the method.

Funding
This work was supported in part by Microsoft Research through the European PhD Scholarship Programme.
References
[1] S. Romdhani and T. Vetter. Estimating 3D Shape and Texture Using Pixel Intensity, Edges, Specular Highlights, Texture Constraints and a Prior. In CVPR, 2005.
[Histogram: frequency of residuals vs. Distance to Ground Truth (mm) on the Directed Light Dataset (5 Subjects), comparing Stereo: Landmarks + Silhouette + Colour, Stereo: Landmarks + Silhouette, Stereo: Landmarks, and Monocular]
The use of multi-view information results in a much higher accuracy than is achievable by the monocular method. A higher frequency of lower residuals is better.
Evaluation: Face Recognition
To test the method on a difficult dataset, a face recognition experiment was performed on the PIE dataset. The results show that the extracted surfaces are consistent over variations in viewpoint and that the reconstruction quality increases with an increasing number of images.
Viewpoints  Rank  Landmark  + Silhouette  + Colour
…           1st       10%           18%       50%
…           2nd       68%           63%       82%
…           1st        7%           18%       62%
…           2nd       74%           74%       85%
…           1st       19%           37%       76%
…           2nd       82%           87%       94%
The columns labelled “1st” show the frequency of correct results; “2nd” is the frequency with which the correct result was within the first two subjects returned. The angle between the shape coefficients was used as the distance measure. Texture information should be used to achieve state-of-the-art recognition results.
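The angle-based distance mentioned above can be written directly; since it depends only on the direction of the coefficient vectors, it is invariant to their overall scale.

```python
import numpy as np

def coeff_angle(a, b):
    """Angle between two shape-coefficient vectors, used as the recognition
    distance measure: a smaller angle means a more similar identity."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards against rounding
```

A probe face is then assigned the gallery identity whose coefficient vector forms the smallest angle with the probe's coefficients.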
[Figure: stereo setup — face, camera 1, camera 2, sample point]
('1994157', 'Brian Amberg', 'brian amberg')
('1745076', 'Andrew Blake', 'andrew blake')
('3293655', 'Sami Romdhani', 'sami romdhani')
('1687079', 'Thomas Vetter', 'thomas vetter')
2c5d1e0719f3ad7f66e1763685ae536806f0c23bAENet: Learning Deep Audio Features for Video
Analysis
('47893464', 'Naoya Takahashi', 'naoya takahashi')
('3037160', 'Michael Gygli', 'michael gygli')
('7329802', 'Luc van Gool', 'luc van gool')
2c8f24f859bbbc4193d4d83645ef467bcf25adc2
Classification in the Presence of
Label Noise: a Survey
('1786603', 'Benoît Frénay', 'benoît frénay')
('1782629', 'Michel Verleysen', 'michel verleysen')
2c1f8ddbfbb224271253a27fed0c2425599dfe47Understanding and Comparing Deep Neural Networks
for Age and Gender Classification
Fraunhofer Heinrich Hertz Institute
Singapore University of Technology and Design
10587 Berlin, Germany
Klaus-Robert Müller
Berlin Institute of Technology
10623 Berlin, Germany
Singapore 487372, Singapore
Fraunhofer Heinrich Hertz Institute
10587 Berlin, Germany
('3633358', 'Sebastian Lapuschkin', 'sebastian lapuschkin')
('40344011', 'Alexander Binder', 'alexander binder')
('1699054', 'Wojciech Samek', 'wojciech samek')
sebastian.lapuschkin@hhi.fraunhofer.de
alexander binder@sutd.edu.sg
klaus-robert.mueller@tu-berlin.de
wojciech.samek@hhi.fraunhofer.de
2ca43325a5dbde91af90bf850b83b0984587b3ccFor Your Eyes Only – Biometric Protection of PDF Documents
Faculty of ETI, Gdansk University of Technology, Gdansk, Poland
('2026734', 'J. Siciarek', 'j. siciarek')
2cfc28a96b57e0817cc9624a5d553b3aafba56f3P2F2: Privacy-Preserving Face Finder
New Jersey Institute of Technology
('9037517', 'Nora Almalki', 'nora almalki')
('1692516', 'Reza Curtmola', 'reza curtmola')
('34645435', 'Xiaoning Ding', 'xiaoning ding')
('1690806', 'Cristian Borcea', 'cristian borcea')
Email: {naa34, crix, xiaoning.ding, narain.gehani, borcea}@njit.edu
2cdd5b50a67e4615cb0892beaac12664ec53b81fTo appear in ACM TOG 33(6).
Mirror Mirror: Crowdsourcing Better Portraits
Jun-Yan Zhu1
Aseem Agarwala2
Jue Wang2
University of California, Berkeley1 Adobe
Figure 1: We collect thousands of portraits by capturing video of a subject while they watch movie clips designed to elicit a range of positive
emotions. We use crowdsourcing and machine learning to train models that can predict attractiveness scores of different expressions. These
models can be used to select a subject’s best expressions across a range of emotions, from more serious professional portraits to big smiles.
('1763086', 'Alexei A. Efros', 'alexei a. efros')
('2177801', 'Eli Shechtman', 'eli shechtman')
2cae619d0209c338dc94593892a787ee712d9db0Selective Hidden Random Fields: Exploiting Domain-Specific Saliency for Event
Classification
University of Massachusetts Amherst
Amherst MA USA
('2246870', 'Vidit Jain', 'vidit jain')vidit@cs.umass.edu
2c0acaec54ab2585ff807e18b6b9550c44651eabFace Quality Assessment for Face Verification in Video
Lomonosov Moscow State University, 2Video Analysis Technologies, LLC
('38982797', 'M. Nikitin', 'm. nikitin')
('2943115', 'V. Konushin', 'v. konushin')
('1934937', 'A. Konushin', 'a. konushin')
mnikitin@graphics.cs.msu.ru, vadim@tevian.ru, ktosh@graphics.cs.msu.ru
2cdde47c27a8ecd391cbb6b2dea64b73282c7491ORDER-AWARE CONVOLUTIONAL POOLING FOR VIDEO BASED ACTION RECOGNITION
Order-aware Convolutional Pooling for Video Based
Action Recognition
('1722767', 'Peng Wang', 'peng wang')
('2161037', 'Lingqiao Liu', 'lingqiao liu')
('1780381', 'Chunhua Shen', 'chunhua shen')
('1724393', 'Heng Tao Shen', 'heng tao shen')
2c62b9e64aeddf12f9d399b43baaefbca8e11148Evaluation of Dense 3D Reconstruction from 2D Face Images in the Wild
Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford GU2 7XH, UK
Faculty of Natural Sciences, University of Stirling, Stirling FK9 4LA, UK
School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China
Biometrics Research Lab, College of Computer Science, Sichuan University, Chengdu 610065, China
Image Understanding and Interactive Robotics, Reutlingen University, 72762 Reutlingen, Germany
('2976854', 'Zhen-Hua Feng', 'zhen-hua feng')
('1748684', 'Josef Kittler', 'josef kittler')
('7345195', 'Qijun Zhao', 'qijun zhao')
{z.feng, j.kittler, p.koppen}@surrey.ac.uk, patrikhuber@gmail.com,
wu_xiaojun@jiangnan.edu.cn, p.j.b.hancock@stir.ac.uk, qjzhao@scu.edu.cn
2c7c3a74da960cc76c00965bd3e343958464da45
2cf5f2091f9c2d9ab97086756c47cd11522a6ef3MPIIGaze: Real-World Dataset and Deep
Appearance-Based Gaze Estimation
('2520795', 'Xucong Zhang', 'xucong zhang')
('1751242', 'Yusuke Sugano', 'yusuke sugano')
('1739548', 'Mario Fritz', 'mario fritz')
('3194727', 'Andreas Bulling', 'andreas bulling')
2c19d3d35ef7062061b9e16d040cebd7e45f281dEnd-to-end Video-level Representation Learning for Action Recognition
Institute of Automation, Chinese Academy of Sciences (CASIA
University of Chinese Academy of Sciences (UCAS
('1696573', 'Jiagang Zhu', 'jiagang zhu')
('1726367', 'Wei Zou', 'wei zou')
('48147901', 'Zheng Zhu', 'zheng zhu')
{zhujiagang2015, wei.zou}@ia.ac.cn, zhuzheng14@mails.ucas.ac.cn
2c17d36bab56083293456fe14ceff5497cc97d75Unconstrained Face Alignment via Cascaded Compositional Learning
The Chinese University of Hong Kong
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
2SenseTime Group Limited
('2226254', 'Shizhan Zhu', 'shizhan zhu')
('40475617', 'Cheng Li', 'cheng li')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
zs014@ie.cuhk.edu.hk, chengli@sensetime.com, ccloy@ie.cuhk.edu.hk, xtang@ie.cuhk.edu.hk
2c4b96f6c1a520e75eb37c6ee8b844332bc0435cAutomatic Emotion Recognition in Robot-Children Interaction for ASD
Treatment
ISASI UOS Lecce
Campus Universitario via Monteroni sn, 73100 Lecce Italy
ISASI UOS Messina
Univerisita’ di Bari
Marine Institute, via Torre Bianca, 98164 Messina Italy
Via Orabona 4, 70126 Bari, Italy
('4730472', 'Marco Leo', 'marco leo')
('33097940', 'Marco Del Coco', 'marco del coco')
('1741861', 'Cosimo Distante', 'cosimo distante')
('3049247', 'Giovanni Pioggia', 'giovanni pioggia')
('2235498', 'Giuseppe Palestra', 'giuseppe palestra')
marco.leo@cnr.it
2cd7821fcf5fae53a185624f7eeda007434ae037Exploring the Geo-Dependence of Human Face Appearance
Computer Science
University of Kentucky
Computer Science
UNC Charlotte
Computer Science
University of Kentucky
('2142962', 'Mohammad T. Islam', 'mohammad t. islam')
('38792670', 'Scott Workman', 'scott workman')
('1873911', 'Hui Wu', 'hui wu')
('1690110', 'Richard Souvenir', 'richard souvenir')
('1990750', 'Nathan Jacobs', 'nathan jacobs')
{tarik,scott}@cs.uky.edu
{hwu13,souvenir}@uncc.edu
jacobs@cs.uky.edu
79581c364cefe53bff6bdd224acd4f4bbc43d6d4
794ddb1f3b7598985d4d289b5b0664be736a50c4Exploiting Competition Relationship for Robust Visual Recognition
Center for Data Analytics and Biomedical Informatics
Department of Computer and Information Science
Temple University
Philadelphia, PA, 19122, USA
('38909760', 'Liang Du', 'liang du')
('1805398', 'Haibin Ling', 'haibin ling')
{liang.du, hbling}@temple.edu
790aa543151312aef3f7102d64ea699a1d15cb29Confidence-Weighted Local Expression Predictions for
Occlusion Handling in Expression Recognition and Action
Unit detection
1 Sorbonne Universités, UPMC Univ Paris 06, CNRS, ISIR UMR 7222
4 place Jussieu 75005 Paris
('3190846', 'Arnaud Dapogny', 'arnaud dapogny')
('2521061', 'Kevin Bailly', 'kevin bailly')
('1701986', 'Séverine Dubuisson', 'séverine dubuisson')
arnaud.dapogny@isir.upmc.fr
kevin.bailly@isir.upmc.fr
severine.dubuisson@isir.upmc.fr
79f6a8f777a11fd626185ab549079236629431acCopyright
by
2013
('35788904', 'Sung Ju Hwang', 'sung ju hwang')
795ea140df2c3d29753f40ccc4952ef24f46576c
79dc84a3bf76f1cb983902e2591d913cee5bdb0e
79744fc71bea58d2e1918c9e254b10047472bd76Disentangling 3D Pose in A Dendritic CNN
for Unconstrained 2D Face Alignment
Department of Electrical and Computer Engineering, CFAR and UMIACS
University of Maryland-College Park, USA
('50333013', 'Amit Kumar', 'amit kumar')
('9215658', 'Rama Chellappa', 'rama chellappa')
akumar14@umiacs.umd.edu, rama@umiacs.umd.edu
79b669abf65c2ca323098cf3f19fa7bdd837ff31 Deakin Research Online
This is the published version:
Rana, Santu, Liu, Wanquan, Lazarescu, Mihai and Venkatesh, Svetha 2008, Efficient tensor
based face recognition, in ICPR 2008 : Proceedings of the 19th International Conference on
Pattern Recognition, IEEE, Washington, D. C., pp. 1-4.
Available from Deakin Research Online:
http://hdl.handle.net/10536/DRO/DU:30044585
Reproduced with the kind permissions of the copyright owner.
Personal use of this material is permitted. However, permission to reprint/republish this
material for advertising or promotional purposes or for creating new collective works for
resale or redistribution to servers or lists, or to reuse any copyrighted component of this work
in other works must be obtained from the IEEE.
Copyright : 2008, IEEE
794c0dc199f0bf778e2d40ce8e1969d4069ffa7bOdd Leaf Out
Improving visual recognition with games
Preece
School of Information
University of Maryland
College Park, United States
('6519022', 'Darcy Lewis', 'darcy lewis')
('2662457', 'Dana Rotman', 'dana rotman')
79c3a7131c6c176b02b97d368cd0cd0bc713ff7e
79dd787b2877cf9ce08762d702589543bda373beFace Detection Using SURF Cascade
Intel Labs China
('35423937', 'Jianguo Li', 'jianguo li')
('40279370', 'Tao Wang', 'tao wang')
('2470865', 'Yimin Zhang', 'yimin zhang')
799c02a3cde2c0805ea728eb778161499017396bPersonRank: Detecting Important People in Images
School of Electronics and Information Technology, Sun Yat-Sen University, GuangZhou, China
School of Data and Computer Science, Sun Yat-Sen University, GuangZhou, China
('9186191', 'Benchao Li', 'benchao li')
('3333315', 'Wei-Shi Zheng', 'wei-shi zheng')
7966146d72f9953330556baa04be746d18702047Harnessing Human Manipulation
NSF/ARL Workshop on Cloud Robotics: Challenges and Opportunities
February 27-28, 2013
The Robotics Institute Carnegie Mellon University
Georgia Institute of Technology
('1781040', 'Matthew T. Mason', 'matthew t. mason')
('1735665', 'Nancy Pollard', 'nancy pollard')
('1760708', 'Alberto Rodriguez', 'alberto rodriguez')
('38637733', 'Ryan Kerwin', 'ryan kerwin')
@cs.cmu.edu
ryankerwin@gatech.edu
79fa57dedafddd3f3720ca26eb41c82086bfb332Modeling Facial Expression Space for Recognition *
National Lab. on Machine Perception
Peking University
Beijing, China
National Lab. on Machine Perception
Peking University
Beijing, China
National Lab. on Machine Perception
Peking University
Beijing, China
('2086289', 'Hong Liu', 'hong liu')
('1687248', 'Hongbin Zha', 'hongbin zha')
('2976781', 'Yuwen Wu', 'yuwen wu')
wuyw@cis.pku.edu.cn
liuhong@cis.pku.edu.cn
zha@cis.pku.edu.cn
793e7f1ba18848908da30cbad14323b0389fd2a8
79db191ca1268dc88271abef3179c4fe4ee92aedFacial Expression Based Automatic Album
Creation
School of Computer Science, CECS, Australian National University, Canberra
School of Engineering, CECS, Australian National University, Canberra, Australia
3 Vision & Sensing, Faculty of Information Sciences and Engineering,
Australia
University of Canberra, Australia
('1735697', 'Abhinav Dhall', 'abhinav dhall')
('3183108', 'Akshay Asthana', 'akshay asthana')
('1717204', 'Roland Goecke', 'roland goecke')
abhinav.dhall@anu.edu.au, aasthana@rsise.anu.edu.au,
roland.goecke@ieee.org
2d990b04c2bd61d3b7b922b8eed33aeeeb7b9359Discriminative Dictionary Learning with
Pairwise Constraints
University of Maryland, College Park, MD
('2723427', 'Huimin Guo', 'huimin guo')
('34145947', 'Zhuolin Jiang', 'zhuolin jiang')
('1693428', 'Larry S. Davis', 'larry s. davis')
{hmguo,zhuolin,lsd}@umiacs.umd.edu
2d25045ec63f9132371841c0beccd801d3733908Sensors 2015, 15, 6719-6739; doi:10.3390/s150306719
OPEN ACCESS
sensors
ISSN 1424-8220
www.mdpi.com/journal/sensors
Article
Multi-Layer Sparse Representation for Weighted LBP-Patches
Based Facial Expression Recognition
School of Software, Dalian University of Technology, Dalian 116621, China
Tel.: +86-411-8757-1516.
Academic Editor: Vittorio M.N. Passaro
Received: 15 December 2014 / Accepted: 10 March 2015 / Published: 19 March 2015
('2235253', 'Qi Jia', 'qi jia')
('3459398', 'Xinkai Gao', 'xinkai gao')
('2736880', 'He Guo', 'he guo')
('7864960', 'Zhongxuan Luo', 'zhongxuan luo')
('1734275', 'Yi Wang', 'yi wang')
E-Mails: jiaqi7166@gmail.com (Q.J.); gaoxinkai@mail.dlut.edu.cn (X.G.); zxluo@dlut.edu.cn (Z.L.);
wangyi_dlut@126.com (Y.W.)
* Author to whom correspondence should be addressed; E-Mail: guohe@dlut.edu.cn;
2dd6c988b279d89ab5fb5155baba65ce4ce53c1e
2d080662a1653f523321974a57518e7cb67ecb41On Constrained Local Model Feature
Normalization for Facial Expression Recognition
School of Computing and Info. Sciences, Florida International University
11200 SW 8th St, Miami, FL 33199, USA
http://ascl.cis.fiu.edu/
('3489972', 'Zhenglin Pan', 'zhenglin pan')
('2008564', 'Mihai Polceanu', 'mihai polceanu')
zpan004@fiu.edu,{mpolcean,lisetti}@cs.fiu.edu
2d4b9fe3854ccce24040074c461d0c516c46baf4Temporal Action Localization by Structured Maximal Sums
State Key Laboratory for Novel Software Technology, Nanjing University, China
University of Michigan, Ann Arbor
('40188401', 'Jonathan C. Stroud', 'jonathan c. stroud')
('2285916', 'Tong Lu', 'tong lu')
('8342699', 'Jia Deng', 'jia deng')
2d294c58b2afb529b26c49d3c92293431f5f98d0
Maximum Margin Projection Subspace Learning
for Visual Data Analysis
('1793625', 'Symeon Nikitidis', 'symeon nikitidis')
('1737071', 'Anastasios Tefas', 'anastasios tefas')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
2d1f86e2c7ba81392c8914edbc079ac64d29b666
2d9e58ea582e054e9d690afca8b6a554c3687ce6Learning Local Feature Aggregation Functions
with Backpropagation
Multimedia Understanding Group
Aristotle University of Thessaloniki, Greece
('3493855', 'Angelos Katharopoulos', 'angelos katharopoulos')
('3493472', 'Despoina Paschalidou', 'despoina paschalidou')
('1789830', 'Christos Diou', 'christos diou')
('1708199', 'Anastasios Delopoulos', 'anastasios delopoulos')
{katharas, pdespoin}@auth.gr; diou@mug.ee.auth.gr; adelo@eng.auth.gr
2d164f88a579ba53e06b601d39959aaaae9016b7Dynamic Facial Expression Recognition Using
A Bayesian Temporal Manifold Model
Department of Computer Science
Queen Mary University of London
Mile End Road, London E1 4NS, UK
('10795229', 'Caifeng Shan', 'caifeng shan')
('2073354', 'Shaogang Gong', 'shaogang gong')
('2803283', 'Peter W. McOwan', 'peter w. mcowan')
{cfshan, sgg, pmco}@dcs.qmul.ac.uk
2d8001ffee6584b3f4d951d230dc00a06e8219f8Feature Agglomeration Networks for Single Stage Face Detection
School of Information Systems, Singapore Management University, Singapore
College of Computer Science and Technology, Zhejiang University, Hangzhou, China
§DeepIR Inc., Beijing, China
('1826176', 'Jialiang Zhang', 'jialiang zhang')
('2791484', 'Xiongwei Wu', 'xiongwei wu')
('1704030', 'Jianke Zhu', 'jianke zhu')
{chhoi,xwwu.2015@phdis}@smu.edu.sg;{zjialiang,jkzhu}@zju.edu.cn
2d23fa205acca9c21e3e1a04674f1e5a9528550eThe Fast and the Flexible:
Extended Pseudo Two-Dimensional Warping for
Face Recognition
1Computer Vision and Multimodal Computing
2 Computer Vision Laboratory
MPI Informatics, Saarbruecken
ETH Zurich
3Human Language Technology and Pattern Recognition Group,
RWTH Aachen University
('2299109', 'Leonid Pishchulin', 'leonid pishchulin')
('1948162', 'Tobias Gass', 'tobias gass')
('1967060', 'Philippe Dreuw', 'philippe dreuw')
('1685956', 'Hermann Ney', 'hermann ney')
leonid@mpi-inf.mpg.de
gasst@vision.ee.ethz.ch
@cs.rwth-aachen.de
2d244d70ed1a2ba03d152189f1f90ff2b4f16a79An Analytical Mapping for LLE and Its
Application in Multi-Pose Face Synthesis
State Key Lab of Intelligent Technology and Systems
Tsinghua University
Beijing, 100084, China
('1715001', 'Jun Wang', 'jun wang')wangjun00@mails.tsinghua.edu.cn
zcs@mail.tsinghua.edu.cn
kzb98@mails.tsinghua.edu.cn
2d88e7922d9f046ace0234f9f96f570ee848a5b5Building Better Detection with Privileged Information
Department of CSE
The Pennsylvania State
University
Department of CSE
The Pennsylvania State
University
Applied Communication
Sciences
Basking Ridge, NJ, US
Department of CSE
The Pennsylvania State
University
Army Research
Laboratory
Adelphi, MD, USA
('2950892', 'Z. Berkay Celik', 'z. berkay celik')
('4108832', 'Patrick McDaniel', 'patrick mcdaniel')
('1804289', 'Rauf Izmailov', 'rauf izmailov')
('1967156', 'Nicolas Papernot', 'nicolas papernot')
('1703726', 'Ananthram Swami', 'ananthram swami')
zbc102@cse.psu.edu
mcdaniel@cse.psu.edu
rizmailov@appcomsci.com
npg5056@cse.psu.edu
ananthram.swami.civ@mail.mil
2d31ab536b3c8a05de0d24e0257ca4433d5a7c75Materials Discovery: Fine-Grained Classification of X-ray Scattering Images
Kevin Yager†
University of North Carolina at Chapel Hill, NC, USA
†Brookhaven National Lab, NY, USA
('1772294', 'M. Hadi Kiapour', 'm. hadi kiapour')
('39668247', 'Alexander C. Berg', 'alexander c. berg')
('1685538', 'Tamara L. Berg', 'tamara l. berg')
{hadi,aberg,tlberg}@cs.unc.edu
kyager@bnl.gov
2dbde64ca75e7986a0fa6181b6940263bcd70684Pose Independent Face Recognition by Localizing
Local Binary Patterns via Deformation Components
MICC, University of Florence
Italy
http://www.micc.unifi.it/vim
Gérard Medioni
USC IRIS Lab, University of Southern California
Los Angeles, USA
http://iris.usc.edu/USC-Computer-Vision.html
('11269472', 'Iacopo Masi', 'iacopo masi')
('35220006', 'Claudio Ferrari', 'claudio ferrari')
('8196487', 'Alberto Del Bimbo', 'alberto del bimbo')
2d0363a3ebda56d91d704d5ff5458a527775b609Attribute2Image: Conditional Image Generation from Visual Attributes
1Computer Science and Engineering Division
2Adobe Research
3NEC Labs
University of Michigan, Ann Arbor
('3084614', 'Xinchen Yan', 'xinchen yan')
('1768964', 'Jimei Yang', 'jimei yang')
('1729571', 'Kihyuk Sohn', 'kihyuk sohn')
('1697141', 'Honglak Lee', 'honglak lee')
{xcyan,kihyuks,honglak}@umich.edu
jimyang@adobe.com
ksohn@nec-labs.com
2d93a9aa8bed51d0d1b940c73ac32c046ebf1eb8Perceptual Reward Functions
College of Computing, Georgia Institute of Technology, Atlanta, GA, USA
Waseda University, Tokyo, Japan
('1737432', 'Atsuo Takanishi', 'atsuo takanishi')aedwards8@gatech.edu, isbell@cc.gatech.edu
takanisi@waseda.jp
2dd2c7602d7f4a0b78494ac23ee1e28ff489be88Large Scale Metric Learning from Equivalence Constraints ∗
Institute for Computer Graphics and Vision, Graz University of Technology
('2918450', 'Martin Hirzer', 'martin hirzer')
('3202367', 'Paul Wohlhart', 'paul wohlhart')
('1791182', 'Peter M. Roth', 'peter m. roth')
('3628150', 'Horst Bischof', 'horst bischof')
{koestinger,hirzer,wohlhart,pmroth,bischof}@icg.tugraz.at
2d84e30c61281d3d7cdd11676683d6e66a68aea6Automatic Construction of Action Datasets
using Web videos with Density-based Cluster
Analysis and Outlier Detection
The University of Electro-Communications
Chofugaoka 1-5-1, Chofu, Tokyo 185-8585, Japan
('1681659', 'Keiji Yanai', 'keiji yanai')
2d98a1cb0d1a37c79a7ebcb727066f9ccc781703Coupled Support Vector Machines for Supervised
Domain Adaptation
∗Center for Cognitive Ubiquitous Computing, Arizona State Univeristy
† Bosch Research and Technology Center, Palo Alto
University of Michigan, Ann Arbor
('3151995', 'Hemanth Venkateswara', 'hemanth venkateswara')
('2929090', 'Prasanth Lade', 'prasanth lade')
('37513601', 'Jieping Ye', 'jieping ye')
('1743991', 'Sethuraman Panchanathan', 'sethuraman panchanathan')
hemanthv@asu.edu, prasanth.lade@us.bosch.com, jpye@umich.edu,
panch@asu.edu
2dced31a14401d465cd115902bf8f508d79de076ORIGINAL RESEARCH
published: 26 May 2015
doi: 10.3389/fbioe.2015.00064
Can a humanoid face be expressive?
A psychophysiological investigation
Research Center E. Piaggio , University of Pisa, Pisa, Italy, 2 Faculty of Psychology, University of Florence, Florence, Italy
University of Pisa, Pisa, Italy
Non-verbal signals expressed through body language play a crucial role in multi-modal
human communication during social relations. Indeed, in all cultures, facial expressions
are the most universal and direct signs to express innate emotional cues. A human face
conveys important information in social interactions and helps us to better understand
our social partners and establish empathic links. Recent research shows that humanoid
and social robots are becoming increasingly similar to humans, both aesthetically and
expressively. However, their visual expressiveness is a crucial issue that must be improved
to make these robots more realistic and intuitively perceivable by humans as not different
from them. This study concerns the capability of a humanoid robot to exhibit emotions
through facial expressions. More specifically, emotional signs performed by a humanoid
robot have been compared with corresponding human facial expressions in terms of
recognition rate and response time. The set of stimuli
included standardized human
expressions taken from an Ekman-based database and the same facial expressions
performed by the robot. Furthermore, participants’ psychophysiological responses have
been explored to investigate whether there could be differences induced by interpreting
robot or human emotional stimuli. Preliminary results show a trend to better recognize
expressions performed by the robot than 2D photos or 3D models. Moreover, no
significant differences in the subjects’ psychophysiological state have been found during
the discrimination of facial expressions performed by the robot in comparison with the
same task performed with 2D photos and 3D models.
Keywords: facial expressions, emotion perception, humanoid robot, expression recognition, social robots,
psychophysiological signals, affective computing
1. Introduction
Human beings communicate in a rich and sophisticated way through many different channels,
e.g., sound, vision, and touch. In human social relationships, visual information plays a crucial
role. Human faces convey important information both from static features, such as identity, age,
and gender, and from dynamic changes, such as expressions, eye blinking, and muscular micro-
movements. The ability to recognize and understand facial expressions of the social partner allows
us to establish and manage the empathic links that drive our social relationships.
Charles Darwin was the first to observe that basic expressions, such as anger, disgust, contempt,
fear, surprise, sadness, and happiness, are universal and innate (Darwin, 1872). Since the publication
of his book “The Expression of the Emotions in Man and Animals” in 1872, a strong debate over the
Edited by:
Cecilia Laschi,
Scuola Superiore Sant’Anna, Italy
Reviewed by:
John-John Cabibihan,
Qatar University, Qatar
Egidio Falotico,
Scuola Superiore Sant’Anna, Italy
*Correspondence:
Research Center E. Piaggio
University of Pisa, Largo Lucio
Lazzarino 1, Pisa 56122, Italy
Specialty section:
This article was submitted to Bionics
and Biomimetics, a section of the
journal Frontiers in Bioengineering and
Biotechnology
Received: 24 November 2014
Accepted: 27 April 2015
Published: 26 May 2015
Citation:
Lazzeri N, Mazzei D, Greco A, Rotesi
A, Lanatà A and De Rossi DE (2015)
Can a humanoid face be expressive?
A psychophysiological investigation.
Front. Bioeng. Biotechnol. 3:64.
doi: 10.3389/fbioe.2015.00064
Frontiers in Bioengineering and Biotechnology | www.frontiersin.org
May 2015 | Volume 3 | Article 64
('35440863', 'Nicole Lazzeri', 'nicole lazzeri')
('34573296', 'Daniele Mazzei', 'daniele mazzei')
('32070391', 'Alberto Greco', 'alberto greco')
('6284325', 'Annalisa Rotesi', 'annalisa rotesi')
('1730665', 'Antonio Lanatà', 'antonio lanatà')
('20115987', 'Danilo Emilio De Rossi', 'danilo emilio de rossi')
('34573296', 'Daniele Mazzei', 'daniele mazzei')
mazzei@di.unipi.it
2d05e768c64628c034db858b7154c6cbd580b2d5Available Online at www.ijcsmc.com
International Journal of Computer Science and Mobile Computing
A Monthly Journal of Computer Science and Information Technology
IJCSMC, Vol. 4, Issue. 8, August 2015, pg.431 – 446
RESEARCH ARTICLE
ISSN 2320–088X
FACIAL EXPRESSION RECOGNITION:
Machine Learning using C#
Author: Neda Firoz (nedafiroz1910@gmail.com)
Advisor: Dr. Prashant Ankur Jain (prashant.jain@shiats.edu.in)
2dfe0e7e81f65716b09c590652a4dd8452c10294ORIGINAL RESEARCH
published: 06 June 2018
doi: 10.3389/fpsyg.2018.00864
Incongruence Between Observers’
and Observed Facial Muscle
Activation Reduces Recognition of
Emotional Facial Expressions From
Video Stimuli
Centre for Applied Autism Research, University of Bath, Bath, United Kingdom, 2 Social and
Cognitive Neuroscience Laboratory, Centre of Biology and Health Sciences, Mackenzie Presbyterian University, São Paulo,
Brazil, University Hospital Zurich, Zürich,
Switzerland, Psychosomatic Medicine, and Psychotherapy, University Hospital Frankfurt
Frankfurt, Germany
According to embodied cognition accounts, viewing others’ facial emotion can elicit
the respective emotion representation in observers which entails simulations of sensory,
motor, and contextual experiences. In line with that, published research found viewing
others’
facial emotion to elicit automatic matched facial muscle activation, which
was further found to facilitate emotion recognition. Perhaps making congruent facial
muscle activity explicit produces an even greater recognition advantage. If there is
conflicting sensory information, i.e., incongruent facial muscle activity, this might impede
recognition. The effects of actively manipulating facial muscle activity on facial emotion
recognition from videos were investigated across three experimental conditions: (a)
explicit imitation of viewed facial emotional expressions (stimulus-congruent condition),
(b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing
(control condition). It was hypothesised that (1) experimental condition (a) and (b) result
in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion
recognition accuracy from others’ faces compared to (c), (3) experimental condition (b)
lowers recognition accuracy for expressions with a salient facial feature in the lower,
but not the upper face area, compared to (c). Participants (42 males, 42 females)
underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography
(EMG) was recorded from five facial muscle sites. The experimental conditions’ order
was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity
for expressions with facial feature saliency in the lower face region, which reduced
recognition of lower face region emotions. Explicit imitation caused stimulus-congruent
facial muscle activity without modulating recognition. Methodological implications are discussed.
Keywords: facial emotion recognition, imitation, facial muscle activity, facial EMG, embodiment, videos, dynamic
stimuli, facial expressions of emotion
Edited by:
Eva G. Krumhuber,
University College London
United Kingdom
Reviewed by:
Sebastian Korb,
Universität Wien, Austria
Michal Olszanowski,
SWPS University of Social Sciences
and Humanities, Poland
*Correspondence:
Tanja S. H. Wingenbach
Specialty section:
This article was submitted to
Emotion Science,
a section of the journal
Frontiers in Psychology
Received: 15 December 2017
Accepted: 14 May 2018
Published: 06 June 2018
Citation:
Wingenbach TSH, Brosnan M,
Pfaltz MC, Plichta MM and Ashwin C
(2018) Incongruence Between
Observers’ and Observed Facial
Muscle Activation Reduces
Recognition of Emotional Facial
Expressions From Video Stimuli.
Front. Psychol. 9:864.
doi: 10.3389/fpsyg.2018.00864
Frontiers in Psychology | www.frontiersin.org
June 2018 | Volume 9 | Article 864
('39455300', 'Mark Brosnan', 'mark brosnan')
('34495803', 'Monique C. Pfaltz', 'monique c. pfaltz')
('2976177', 'Michael M. Plichta', 'michael m. plichta')
('2708124', 'Chris Ashwin', 'chris ashwin')
tanja.wingenbach@bath.edu
2d072cd43de8d17ce3198fae4469c498f97c6277Random Cascaded-Regression Copse for Robust
Facial Landmark Detection
and Xiao-Jun Wu
('2976854', 'Zhen-Hua Feng', 'zhen-hua feng')
('39976184', 'Patrik Huber', 'patrik huber')
('1748684', 'Josef Kittler', 'josef kittler')
2dd5f1d69e0e8a95a10f3f07f2c0c7fa172994b3
Machine Analysis of Facial Expressions
Imperial College London
Inst. Neural Computation, University of California
1 UK, 2 USA
1. Human Face and Its Expression
The human face is the site for major sensory inputs and major communicative outputs. It
houses the majority of our sensory apparatus as well as our speech production apparatus. It
is used to identify other members of our species, to gather information about age, gender,
attractiveness, and personality, and to regulate conversation by gazing or nodding.
Moreover, the human face is our preeminent means of communicating and understanding
somebody’s affective state and intentions on the basis of the shown facial expression
(Keltner & Ekman, 2000). Thus, the human face is a multi-signal input-output
communicative system capable of tremendous flexibility and specificity (Ekman & Friesen,
1975). In general, the human face conveys information via four kinds of signals.
(a) Static facial signals represent relatively permanent features of the face, such as the bony
structure, the soft tissue, and the overall proportions of the face. These signals
contribute to an individual’s appearance and are usually exploited for person
identification.
(b) Slow facial signals represent changes in the appearance of the face that occur gradually
over time, such as development of permanent wrinkles and changes in skin texture.
These signals can be used for assessing the age of an individual. Note that these signals
might diminish the distinctness of the boundaries of the facial features and impede
recognition of the rapid facial signals.
(c) Artificial signals are exogenous features of the face such as glasses and cosmetics. These
signals provide additional information that can be used for gender recognition. Note
that these signals might obscure facial features or, conversely, might enhance them.
(d) Rapid facial signals represent temporal changes in neuromuscular activity that may lead
to visually detectable changes in facial appearance, including blushing and tears. These
(atomic facial) signals underlie facial expressions.
All four classes of signals contribute to person identification, gender recognition,
attractiveness assessment, and personality prediction. In Aristotle’s time, a theory was
proposed about mutual dependency between static facial signals (physiognomy) and
personality: “soft hair reveals a coward, strong chin a stubborn person, and a smile a happy
person”. Today, few psychologists share the belief about the meaning of soft hair and strong
chin, but many believe that rapid facial signals (facial expressions) communicate emotions
(Ekman & Friesen, 1975; Ambady & Rosenthal, 1992; Keltner & Ekman, 2000) and
personality traits (Ambady & Rosenthal, 1992). More specifically, types of messages
Source: Face Recognition, Book edited by: Kresimir Delac and Mislav Grgic, ISBN 978-3-902613-03-5, pp.558, I-Tech, Vienna, Austria, June 2007
('1694605', 'Maja Pantic', 'maja pantic')
('2218905', 'Marian Stewart Bartlett', 'marian stewart bartlett')
2d71e0464a55ef2f424017ce91a6bcc6fd83f6c3International Journal of Computer Applications (0975 – 8887)
National Conference on Advancements in Computer & Information Technology (NCACIT-2016)
A Survey on: Image Process using Two-Stage Crawler
Assistant Professor
SPPU, Pune
Department of Computer Engg
Department of Computer Engg
Department of Computer Engg
BE Student
SPPU, Pune
BE Student
SPPU, Pune
BE Student
Department of Computer Engg
SPPU, Pune
('15156505', 'Nilesh Wani', 'nilesh wani')
('1936852', 'Savita Gunjal', 'savita gunjal')
2d38fd1df95f5025e2cee5bc439ba92b369a93dfScalable Object-Class Search
via Sparse Retrieval Models and Approximate Ranking
Dartmouth Computer Science Technical Report TR2011-700
Computer Science Department
Dartmouth College
Hanover, NH 03755, U.S.A.
July 5, 2011
('2563325', 'Mohammad Rastegari', 'mohammad rastegari')
('2442612', 'Chen Fang', 'chen fang')
('1732879', 'Lorenzo Torresani', 'lorenzo torresani')
{mrastegari, chenfang, lorenzo}@cs.dartmouth.edu
2d83ba2d43306e3c0587ef16f327d59bf4888dc3Large-scale Video Classification with Convolutional Neural Networks
Stanford University
1Google Research
http://cs.stanford.edu/people/karpathy/deepvideo
('2354728', 'Andrej Karpathy', 'andrej karpathy')
('1805076', 'George Toderici', 'george toderici')
('24792872', 'Sanketh Shetty', 'sanketh shetty')
('1893833', 'Thomas Leung', 'thomas leung')
('1694199', 'Rahul Sukthankar', 'rahul sukthankar')
('3216322', 'Li Fei-Fei', 'li fei-fei')
karpathy@cs.stanford.edu
gtoderici@google.com
sanketh@google.com
leungt@google.com
sukthankar@google.com
feifeili@cs.stanford.edu
2d84c0d96332bb4fbd8acced98e726aabbf15591UNIVERSITY OF CALIFORNIA
RIVERSIDE
Investigating the Role of Saliency for Face Recognition
A Dissertation submitted in partial satisfaction
of the requirements for the degree of
Doctor of Philosophy
in
Electrical Engineering
by
March 2015
Dissertation Committee:
Professor Conrad Rudolph
('11012197', 'Ramya Malur Srinivasan', 'ramya malur srinivasan')
('1688416', 'Amit K Roy-Chowdhury', 'amit k roy-chowdhury')
('1686303', 'Ertem Tuncel', 'ertem tuncel')
('2357146', 'Tamar Shinar', 'tamar shinar')
2d8d089d368f2982748fde93a959cf5944873673Proceedings of NAACL-HLT 2018, pages 788–794
New Orleans, Louisiana, June 1 - 6, 2018. c(cid:13)2018 Association for Computational Linguistics
788
2d79d338c114ece1d97cde1aa06ab4cf17d38254iLab-20M: A large-scale controlled object dataset to investigate deep learning
Center for Research in Computer Vision, University of Central Florida
Amirkabir University of Technology, University of Southern California
('3177797', 'Ali Borji', 'ali borji')
('2391309', 'Saeed Izadi', 'saeed izadi')
('7326223', 'Laurent Itti', 'laurent itti')
aborji@crcv.ucf.edu, sizadi@aut.ac.ir, itti@usc.edu
2df4d05119fe3fbf1f8112b3ad901c33728b498aFacial landmark detection using structured output deep
neural networks
Soufiane Belharbi∗1, Clément Chatelain∗1, Romain Hérault∗1, and Sébastien
Adam∗2
1LITIS EA 4108, INSA de Rouen, Saint Étienne du Rouvray 76800, France
2LITIS EA 4108, UFR des Sciences, Université de Rouen, France.
September 24, 2015
2d3482dcff69c7417c7b933f22de606a0e8e42d4Labeled Faces in the Wild: Updates and
New Reporting Procedures
University of Massachusetts, Amherst Technical Report UM-CS
('3219900', 'Gary B. Huang', 'gary b. huang')
('1714536', 'Erik Learned-Miller', 'erik learned-miller')
2d4a3e9361505616fa4851674eb5c8dd18e0c3cfTowards Privacy-Preserving Visual Recognition
via Adversarial Training: A Pilot Study
Texas A&M University, College Station TX 77843, USA
2 Adobe Research, San Jose CA 95110, USA
('1733940', 'Zhenyu Wu', 'zhenyu wu')
('2969311', 'Zhangyang Wang', 'zhangyang wang')
('8056043', 'Zhaowen Wang', 'zhaowen wang')
('39909162', 'Hailin Jin', 'hailin jin')
{wuzhenyu sjtu,atlaswang}@tamu.edu
{zhawang,hljin}@adobe.com
2d748f8ee023a5b1fbd50294d176981ded4ad4eeTRIPLET SIMILARITY EMBEDDING FOR FACE VERIFICATION
Center for Automation Research, UMIACS, University of Maryland, College Park, MD
1Department of Electrical and Computer Engineering,
('2716670', 'Swami Sankaranarayanan', 'swami sankaranarayanan')
('2943431', 'Azadeh Alavi', 'azadeh alavi')
('9215658', 'Rama Chellappa', 'rama chellappa')
{swamiviv, azadeh, rama}@umiacs.umd.edu
2d3c17ced03e4b6c4b014490fe3d40c62d02e914COMPUTER ANIMATION AND VIRTUAL WORLDS
Comp. Anim. Virtual Worlds 2012; 23:167–178
Published online 30 May 2012 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/cav.1455
SPECIAL ISSUE PAPER
Video-driven state-aware facial animation
State Key Lab of CAD&CG, Zhejiang University, Hangzhou, Zhejiang, China
2 Microsoft Corporation, Seattle, WA, USA
('2894564', 'Ming Zeng', 'ming zeng')
('1680293', 'Lin Liang', 'lin liang')
('3227032', 'Xinguo Liu', 'xinguo liu')
('1679542', 'Hujun Bao', 'hujun bao')
41f26101fed63a8d149744264dd5aa79f1928265Spot On: Action Localization from
Pointly-Supervised Proposals
University of Amsterdam
Delft University of Technology
('2606260', 'Pascal Mettes', 'pascal mettes')
('1738975', 'Jan C. van Gemert', 'jan c. van gemert')
4188bd3ef976ea0dec24a2512b44d7673fd4ad261050
Nonlinear Non-Negative Component
Analysis Algorithms
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('2871609', 'Maria Petrou', 'maria petrou')
416b559402d0f3e2b785074fcee989d44d82b8e5Multi-View Super Vector for Action Recognition
1Shenzhen Key Lab of Computer Vision and Pattern Recognition,
Shenzhen Institutes of Advanced Technology, CAS, China
The Chinese University of Hong Kong, Hong Kong
('2985266', 'Zhuowei Cai', 'zhuowei cai')
('33345248', 'Limin Wang', 'limin wang')
('1766837', 'Xiaojiang Peng', 'xiaojiang peng')
('33427555', 'Yu Qiao', 'yu qiao')
{iamcaizhuowei, 07wanglimin, xiaojiangp}@gmail.com, yu.qiao@siat.ac.cn
416364cfdbc131d6544582e552daf25f585c557dSynthesis and Recognition of Facial Expressions in Virtual 3D Views
Queen Mary, University of London, E1 4NS, UK
('34780294', 'Lukasz Zalewski', 'lukasz zalewski')
('2073354', 'Shaogang Gong', 'shaogang gong')
[lukas|sgg]@dcs.qmul.ac.uk
41000c3a3344676513ef4bfcd392d14c7a9a7599A NOVEL APPROACH FOR GENERATING FACE
TEMPLATE USING BDA
1P.G. Student, Department of Computer Engineering, MCERC, Nashik (M.S.), India.
2Associate Professor, Department of Computer Engineering, MCERC, Nashik (M.S.),
India
('40075681', 'Shraddha S. Shinde', 'shraddha s. shinde')
('2590072', 'Anagha P. Khedkar', 'anagha p. khedkar')
shraddhashinde@gmail.com
anagha_p2@yahoo.com
411ee9236095f8f5ca3b9ef18fd3381c1c68c4b8Vol.59: e16161057, January-December 2016
http://dx.doi.org/10.1590/1678-4324-2016161057
ISSN 1678-4324 Online Edition
1
Biological and Applied Sciences
BRAZILIAN ARCHIVES OF
BIOLOGY AND TECHNOLOGY
A N I N T E R N A T I O N A L J O U R N A L
An Empirical Evaluation of the Local Texture Description
Framework-Based Modified Local Directional Number
Pattern with Various Classifiers for Face Recognition
St. Xavier's Catholic College of Engineering, Nagercoil, India
VelTech Dr. R.R. and Dr. S.R. Technical University, Chennai
Manonmaniam Sundaranar University, Tirunelveli
India.
('9375880', 'R. Reena Rose', 'r. reena rose')
411318684bd2d42e4b663a37dcf0532a48f0146dImproved Face Verification with Simple
Weighted Feature Combination
College of Electronics and Information Engineering, Tongji University
4800 Cao’an Highway, Shanghai 201804, People’s Republic of China
('1775391', 'Xinyu Zhang', 'xinyu zhang')
('48566761', 'Jiang Zhu', 'jiang zhu')
('34647494', 'Mingyu You', 'mingyu you')
{1510464,zhujiang,myyou}@tongji.edu.cn
4140498e96a5ff3ba816d13daf148fffb9a2be3f2017 IEEE 12th International Conference on Automatic Face & Gesture Recognition
Constrained Ensemble Initialization for Facial Landmark
Tracking in Video
Language Technology Institute, Carnegie Mellon University, Pittsburgh, PA, USA
('1767184', 'Louis-Philippe Morency', 'louis-philippe morency')
41f8477a6be9cd992a674d84062108c68b7a9520An Automated System for Visual Biometrics
Dept. of Electrical Engineering and Computer Science
Northwestern University
Evanston, IL 60208-3118
('2563314', 'Derek J. Shiell', 'derek j. shiell')
('3271105', 'Louis H. Terry', 'louis h. terry')
('2691927', 'Petar S. Aleksic', 'petar s. aleksic')
('1695338', 'Aggelos K. Katsaggelos', 'aggelos k. katsaggelos')
d-shiell@northwestern.edu, l-terry@northwestern.edu,
apetar@eecs.northwestern.edu, aggk@eecs.northwestern.edu
414715421e01e8c8b5743c5330e6d2553a08c16dPoTion: Pose MoTion Representation for Action Recognition
1Inria∗
2NAVER LABS Europe
('2492127', 'Philippe Weinzaepfel', 'philippe weinzaepfel')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
41aa8c1c90d74f2653ef4b3a2e02ac473af61e47Compositional Structure Learning for Action Understanding
1Department of Computer Science and Engineering, SUNY at Buffalo
2Department of Statistics, UCLA
University of Michigan
October 23, 2014
('1856629', 'Ran Xu', 'ran xu')
('1690235', 'Gang Chen', 'gang chen')
('2228109', 'Caiming Xiong', 'caiming xiong')
('1728624', 'Wei Chen', 'wei chen')
('3587688', 'Jason J. Corso', 'jason j. corso')
41ab4939db641fa4d327071ae9bb0df4a612dc89Interpreting Face Images by Fitting a Fast
Illumination-Based 3D Active Appearance
Model
Instituto Nacional de Astrofísica, Óptica y Electrónica,
Luis Enrique Erro #1, 72840 Sta Ma. Tonantzintla, Pue., México
Coordinación de Ciencias Computacionales
('2349309', 'Salvador E. Ayala-Raggi', 'salvador e. ayala-raggi'){saraggi, robles, jcruze}@ccc.inaoep.mx
41971dfbf404abeb8cf73fea29dc37b9aae12439Detection of Facial Feature Points Using
Anthropometric Face Model
Concordia University
1455 de Maisonneuve Blvd. West, Montréal, Québec H3G 1M8, Canada
('8018736', 'Abu Sayeed', 'abu sayeed')
('1715620', 'Prabir Bhattacharya', 'prabir bhattacharya')
E-mails: a_sohai@encs.concordia.ca, prabir@ciise.concordia.ca
4157e45f616233a0874f54a59c3df001b9646cd7elifesciences.org
RESEARCH ARTICLE
Diagnostically relevant facial gestalt
information from ordinary photos
University of Oxford, Oxford, United Kingdom
2Medical Research Council Functional Genomics Unit, Department of Physiology,
Anatomy and Genetics, University of Oxford, Oxford, United Kingdom; 3The Wellcome
Trust Centre for Human Genetics, University of Oxford, Oxford, United Kingdom
Medical Research Council Human Genetics Unit, Institute of Genetics and Molecular
Medicine, Edinburgh, United Kingdom
('4569459', 'Quentin Ferry', 'quentin ferry')
('1985983', 'Julia Steinberg', 'julia steinberg')
('39722750', 'Caleb Webber', 'caleb webber')
('1880309', 'David R FitzPatrick', 'david r fitzpatrick')
('2500371', 'Chris P Ponting', 'chris p ponting')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
('2204967', 'Christoffer Nellåker', 'christoffer nellåker')
41a6196f88beced105d8bc48dd54d5494cc156fb2015 International Conference on
Communications, Signal
Processing, and their Applications
(ICCSPA 2015)
Sharjah, United Arab Emirates
17-19 February 2015
IEEE Catalog Number:
ISBN:
CFP1574T-POD
978-1-4799-6533-5
41de109bca9343691f1d5720df864cdbeeecd9d0Article
Facial Emotion Recognition: A Survey and
Real-World User Experiences in Mixed Reality
Received: 10 December 2017; Accepted: 26 January 2018; Published: 1 February 2018
('38085139', 'Dhwani Mehta', 'dhwani mehta')
('3655354', 'Mohammad Faridul Haque Siddiqui', 'mohammad faridul haque siddiqui')
('39803999', 'Ahmad Y. Javaid', 'ahmad y. javaid')
EECS Department, The University of Toledo, Toledo, OH 43606, USA; dhwani.mehta@utoledo.edu (D.M.);
mohammadfaridulhaque.siddiqui@utoledo.edu (M.F.H.S.)
* Correspondence: ahmad.javaid@utoledo.edu; Tel.: +1-419-530-8260
41d9a240b711ff76c5448d4bf4df840cc5dad5fcJOURNAL DRAFT, VOL. X, NO. X, APR 2013
Image Similarity Using Sparse Representation
and Compression Distance
('1720741', 'Tanaya Guha', 'tanaya guha')
419a6fca4c8d73a1e43003edc3f6b610174c41d2A Component Based Approach Improves Classification of Discrete
Facial Expressions Over a Holistic Approach
('2370974', 'Kenny Hong', 'kenny hong')
('1716539', 'Stephan K. Chalup', 'stephan k. chalup')
4136a4c4b24c9c386d00e5ef5dffdd31ca7aea2cMULTI-MODAL PERSON-PROFILES FROM BROADCAST NEWS VIDEO
Beckman Institute for Advanced Science and Technology
University of Illinois at Urbana-Champaign
Urbana, IL 61801
('1804874', 'Charlie K. Dagli', 'charlie k. dagli')
('25639435', 'Sharad V. Rao', 'sharad v. rao')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
{dagli,svrao,huang}@ifp.uiuc.edu
4180978dbcd09162d166f7449136cb0b320adf1fReal-time head pose classification in uncontrolled environments
with Spatio-Temporal Active Appearance Models
∗ Matematica Aplicada i Analisi ,Universitat de Barcelona, Barcelona, Spain
+ Matematica Aplicada i Analisi, Universitat de Barcelona, Barcelona, Spain
+ Matematica Aplicada i Analisi, Universitat de Barcelona, Barcelona, Spain
('3276130', 'Miguel Reyes', 'miguel reyes')
('7855312', 'Sergio Escalera', 'sergio escalera')
('1724155', 'Petia Radeva', 'petia radeva')
E-mail:mreyes@cvc.uab.es
E-mail:sergio@maia.ub.es
E-mail:petia@cvc.uab.es
41b997f6cec7a6a773cd09f174cb6d2f036b36cd
41aa209e9d294d370357434f310d49b2b0baebebBEYOND CAPTION TO NARRATIVE:
VIDEO CAPTIONING WITH MULTIPLE SENTENCES
Grad. School of Information Science and Technology, The University of Tokyo, Japan
('2518695', 'Andrew Shin', 'andrew shin')
('8197937', 'Katsunori Ohnishi', 'katsunori ohnishi')
('1790553', 'Tatsuya Harada', 'tatsuya harada')
413a184b584dc2b669fbe731ace1e48b22945443Human Pose Co-Estimation and Applications ('31786895', 'Marcin Eichner', 'marcin eichner')
('1749692', 'Vittorio Ferrari', 'vittorio ferrari')
83b7578e2d9fa60d33d9336be334f6f2cc4f218fThe S-HOCK Dataset: Analyzing Crowds at the Stadium
University of Verona. 2Vienna Institute of Technology. 3ISTC CNR (Trento). 4University of Trento
The topic of crowd modeling in computer vision usually assumes a single generic
typology of crowd, which is very simplistic. In this paper we adopt a taxonomy
that is widely accepted in sociology, focusing on a particular category, the
spectator crowd, which is formed by people “interested in watching something
specific that they came to see” [1]. This can be found at stadiums,
amphitheaters, cinemas, etc. In particular, we propose a novel dataset, the
Spectators Hockey (S-HOCK), which covers 4 hockey matches during an
international tournament.
The dataset is unique in the crowd literature, and in general in the
surveillance realm. It analyzes the crowd at different levels of detail. At the
highest level, it models the network of social connections among the public
(who knows whom in the neighborhood), which team each spectator supports, and
what the best action of the match was; all of this was obtained through
interviews at the stadium. At a medium level, spectators are localized, and
information regarding the pose of their heads and bodies is given. Finally, at
the lowest level, a fine-grained specification of all the actions performed by
each single person is available. This information is summarized by a large
number of annotations collected over a year of work: more than 100 million
double-checked annotations. This potentially permits dealing with hundreds of
tasks, some of which are documented in the full paper.
Furthermore, the dataset is multidimensional, in the sense that it offers views
not only of the crowd (at different resolutions, with 4 cameras) but also of
the matches. This multiplies the number of possible applications that could be
assessed, investigating the reactions of the crowd to the actions of the game
and opening up applications of summarization and content analysis. Beyond
these figures, S-HOCK differs significantly from all other crowd datasets,
since the crowd as a whole is mostly static and the motion of each spectator
is constrained within a limited space surrounding his or her position.
Annotation             Typical Values
People detection       full body bounding box [x, y, width, height]
Head detection         head bounding box [x, y, width, height]
Head pose∗             left, frontal, right, away, down
Body position          sitting, standing, (locomotion)
Posture                crossed arms, hands in pocket, crossed legs, ...
Locomotion             walking, jumping (each jump), rising pelvis slightly up
Action / Interaction   waving arms, pointing toward game, applauding, ...
Supported team         the team supported in this game
Best action            the most exciting action of the game
Social relation        whether he/she knew the person seated at his/her right

Table 1: Some of the annotations provided for each person and each frame
of the videos.
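The per-person, per-frame annotations of Table 1 could be modeled as a simple record type. The following sketch is illustrative only: the class, field names, and types are assumptions for exposition, not the dataset's actual release format.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical record mirroring Table 1 (names/types are assumed,
# not taken from the S-HOCK distribution files).
@dataclass
class SpectatorAnnotation:
    frame: int
    person_id: int
    body_bbox: Tuple[int, int, int, int]  # [x, y, width, height]
    head_bbox: Tuple[int, int, int, int]  # [x, y, width, height]
    head_pose: str                        # left / frontal / right / away / down
    body_position: str                    # sitting / standing / (locomotion)
    posture: List[str] = field(default_factory=list)   # e.g. "crossed arms"
    locomotion: Optional[str] = None                   # e.g. "walking"
    actions: List[str] = field(default_factory=list)   # e.g. "applauding"

# Example: one spectator in one frame.
ann = SpectatorAnnotation(
    frame=0, person_id=17,
    body_bbox=(120, 80, 60, 140), head_bbox=(140, 80, 24, 28),
    head_pose="frontal", body_position="sitting",
    posture=["crossed arms"], actions=["applauding"],
)
```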
Together with the annotations, in the paper we discuss issues related to low-
and high-level detail of the crowd analysis, namely people detection and head
pose estimation for the low-level analysis, and spectator categorization for
the high-level analysis. For all of these applications, we define the
experimental protocols, promoting future comparisons.
For the people detection task we provide five different baselines, from the
simplest algorithms to the state-of-the-art method for object detection,
showing that in this scenario the simplest method achieves very high scores.
Regarding head pose estimation, we tested two state-of-the-art methods that
work in a low-resolution domain. Furthermore, we propose two novel approaches
based on deep learning: we evaluate the performance of a Convolutional Neural
Network and a Stacked Auto-encoder Neural Network architecture. The results
are comparable with the state of the art but obtainable at a much higher
speed.
Spectator categorization is a kind of crowd segmentation, where the goal
is to find the team supported by each spectator. This task is intuitively use-
('1843683', 'Davide Conigliaro', 'davide conigliaro')
('39337007', 'Paolo Rota', 'paolo rota')
('2793423', 'Francesco Setti', 'francesco setti')
('1919464', 'Chiara Bassetti', 'chiara bassetti')
('3058987', 'Nicola Conci', 'nicola conci')
('1703601', 'Nicu Sebe', 'nicu sebe')
('1723008', 'Marco Cristani', 'marco cristani')
839a2155995acc0a053a326e283be12068b35cb8Under review as a conference paper at ICLR 2016
HANDCRAFTED LOCAL FEATURES ARE CONVOLU-
TIONAL NEURAL NETWORKS
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213, USA
('2927024', 'Shoou-I Yu', 'shoou-i yu')
('2735055', 'Ming Lin', 'ming lin')
('1681921', 'Bhiksha Raj', 'bhiksha raj')
('7661726', 'Alexander G. Hauptmann', 'alexander g. hauptmann')
{lanzhzh, iyu, minglin, bhiksha, alex}@cs.cmu.edu
83fd2d2d5ad6e4e153672c9b6d1a3785f754b60eRESEARCH ARTICLE
Neuropsychiatric Genetics
Quantifying Naturalistic Social Gaze in Fragile X
Syndrome Using a Novel Eye Tracking Paradigm
and Allan L. Reiss1
1Center for Interdisciplinary Brain Sciences Research, Department of Psychiatry and Behavioral Sciences, Stanford, California
Stanford University, Stanford, California
Manuscript Received: 7 November 2014; Manuscript Accepted: 22 May 2015
A hallmark behavioral feature of fragile X syndrome (FXS) is
the propensity for individuals with the syndrome to exhibit
significant impairments in social gaze during interactions
with others. However, previous studies employing eye tracking
methodology to investigate this phenomenon have been limited
to presenting static photographs or videos of social interactions
rather than employing a real-life social partner. To improve
upon previous studies, we used a customized eye tracking
configuration to quantify the social gaze of 51 individuals
with FXS and 19 controls, aged 14–28 years, while they engaged
in a naturalistic face-to-face social interaction with a female
experimenter. Importantly, our control group was matched to
the FXS group on age, developmental functioning, and degree of
autistic symptomatology. Results showed that participants with
FXS spent significantly less time looking at the face and had
shorter episodes (and longer inter-episodes) of social gaze than
controls. Regression analyses indicated that communication
ability predicted higher levels of social gaze in individuals
with FXS, but not in controls. Conversely, degree of autistic
symptoms predicted lower levels of social gaze in controls, but
not in individuals with FXS. Taken together, these data indicate
that naturalistic social gaze in FXS can be measured objectively
using existing eye tracking technology during face-to-face social
interactions. Given that impairments in social gaze were specific
to FXS, this paradigm could be employed as an objective and
ecologically valid outcome measure in ongoing Phase II/Phase
III clinical trials of FXS-specific interventions.
2015 Wiley Periodicals, Inc
Key words: eye tracking; social gaze; autism; fragile X syndrome
INTRODUCTION
Children diagnosed with genetic syndromes associated with intel-
lectual and developmental disability (e.g., fragile X syndrome,
Williams syndrome) often engage in highly specific forms of aber-
rant social behavior that can interfere with everyday functioning. For
How to Cite this Article:
Hall SS, Frank MC, Pusiol GT, Farzin F,
Lightbody AA, Reiss AL. 2015. Quantifying
Naturalistic Social Gaze in Fragile X
Syndrome Using a Novel Eye Tracking
Paradigm.
Am J Med Genet Part B 9999:1–9.
example, individuals diagnosed with Williams syndrome show a
particular form of hypersociability in which they actively seek out
social interactions with others [Jones et al., 2000; Frigerio et al.,
2006]. Conversely, children with fragile X syndrome (FXS) com-
monly show deficits in social gaze behavior in which interactions
with others are actively avoided [Cohen et al., 1988; Cohen et al.,
1989; Cohen et al., 1991; Hall et al., 2006; Hall et al., 2009]. These
contrasting behavioral phenotypes have been considered useful
and important models for investigations examining the interplay
between genes and environment [Kennedy et al., 2001; Schroeder
et al., 2001].
FXS is a particularly interesting model of potential gene-envi-
ronment interactions because it is a “single-gene” disorder. The
disease affects approximately 1 in 3,000 individuals in the United
States (approx. 100,000 people) and is the most common known
form of inherited intellectual disability [Hagerman, 2008]. First
described by Martin and Bell in 1943 as a “pedigree of mental defect
showing sex linkage” [Martin and Bell, 1943], FXS is caused by
mutations to the FMR1 gene at locus 27.3 on the long arm of the X
chromosome [Verkerk et al., 1991]. Excessive methylation of the
gene results in reduced or absent Fragile X Mental Retardation
Protein (FMRP), a key protein involved in synaptic plasticity and
Grant sponsor: NIH grants; Grant numbers: MH050047, MH081998.
Correspondence to:
Scott S. Hall, PhD, Department of Psychiatry and Behavioral Sciences,
Rm 1365, Stanford University, 401 Quarry Road, Stanford, CA
Article first published online in Wiley Online Library
(wileyonlinelibrary.com): 00 Month 2015
DOI 10.1002/ajmg.b.32331
('4708625', 'Faraz Farzin', 'faraz farzin')E-mail: hallss@stanford.edu
83ca4cca9b28ae58f461b5a192e08dffdc1c76f3DETECTING EMOTIONAL STRESS FROM FACIAL EXPRESSIONS FOR DRIVING SAFETY
Signal Processing Laboratory (LTS5),
École Polytechnique Fédérale de Lausanne, Switzerland
('1697965', 'Hua Gao', 'hua gao')
('1710257', 'Jean-Philippe Thiran', 'jean-philippe thiran')
8356832f883207187437872742d6b7dc95b51fdeAdversarial Perturbations Against Real-Time Video
Classification Systems
University of California, Riverside
University of California, Riverside
University of California, Riverside
Riverside, California
Riverside, California
University of California, Riverside
Riverside, California
Riverside, California
University of California, Riverside
Riverside, California
Amit K. Roy Chowdhury
University of California, Riverside
Riverside, California
United States Army Research
Laboratory
('26576993', 'Shasha Li', 'shasha li')
('2252367', 'Chengyu Song', 'chengyu song')
('1718484', 'Ajaya Neupane', 'ajaya neupane')
('49616225', 'Sujoy Paul', 'sujoy paul')
('38774813', 'Srikanth V. Krishnamurthy', 'srikanth v. krishnamurthy')
('1703726', 'Ananthram Swami', 'ananthram swami')
sli057@ucr.edu
csong@cs.ucr.edu
ajaya@ucr.edu
spaul003@ucr.edu
krish@cs.ucr.edu
amitrc@ece.ucr.edu
ananthram.swami.civ@mail.mil
831fbef657cc5e1bbf298ce6aad6b62f00a5b5d9
835e510fcf22b4b9097ef51b8d0bb4e7b806bdfdUnsupervised Learning of Sequence Representations by
Autoencoders
aPattern Recognition Laboratory, Delft University of Technology
('1678473', 'Wenjie Pei', 'wenjie pei')
832e1d128059dd5ed5fa5a0b0f021a025903f9d5Pairwise Conditional Random Forests for Facial Expression Recognition
S´everine Dubuisson1
1 Sorbonne Universit´es, UPMC Univ Paris 06, CNRS, ISIR UMR 7222, 4 place Jussieu 75005 Paris
('3190846', 'Arnaud Dapogny', 'arnaud dapogny')
('2521061', 'Kevin Bailly', 'kevin bailly')
arnaud.dapogny@isir.upmc.fr
kevin.bailly@isir.upmc.fr
severine.dubuisson@isir.upmc.fr
83e093a07efcf795db5e3aa3576531d61557dd0dFacial Landmark Localization using Robust
Relationship Priors and Approximative Gibbs
Sampling
Institut für Informationsverarbeitung (tnt)
Leibniz Universität Hannover, Germany
('35033145', 'Karsten Vogt', 'karsten vogt'){vogt, omueller, ostermann}@tnt.uni-hannover.de
831d661d657d97a07894da8639a048c430c5536dWeakly Supervised Facial Analysis with Dense Hyper-column Features
CyLab Biometrics Center and the Department of Electrical and Computer Engineering,
Carnegie Mellon University, Pittsburgh, PA, USA
('3117715', 'Chenchen Zhu', 'chenchen zhu')
('3049981', 'Yutong Zheng', 'yutong zheng')
('1769788', 'Khoa Luu', 'khoa luu')
('6131978', 'T. Hoang Ngan Le', 't. hoang ngan le')
('2043374', 'Chandrasekhar Bhagavatula', 'chandrasekhar bhagavatula')
('1794486', 'Marios Savvides', 'marios savvides')
{chenchez, yutongzh, kluu, thihoanl, cbhagava}@andrew.cmu.edu, msavvid@ri.cmu.edu
83b4899d2899dd6a8d956eda3c4b89f27f1cd3081-4244-1437-7/07/$20.00 ©2007 IEEE
I - 377
ICIP 2007
83295bce2340cb87901499cff492ae6ff3365475Deep Multi-Center Learning for Face Alignment
Shanghai Jiao Tong University, China
School of Computer Science and Software Engineering, East China Normal University, China
('3403352', 'Zhiwen Shao', 'zhiwen shao')
('7296339', 'Hengliang Zhu', 'hengliang zhu')
('1767677', 'Xin Tan', 'xin tan')
('2107352', 'Yangyang Hao', 'yangyang hao')
('8452947', 'Lizhuang Ma', 'lizhuang ma')
{shaozhiwen, hengliang zhu, tanxin2017, haoyangyang2014}@sjtu.edu.cn, ma-lz@cs.sjtu.edu.cn
83e96ed8a4663edaa3a5ca90b7ce75a1bb595b05ARANDJELOVIĆ: RECOGNITION FROM APPEARANCE SUBSPACES ACROSS SCALE
Recognition from Appearance Subspaces
Across Image Sets of Variable Scale
Ognjen Arandjelović
http://mi.eng.cam.ac.uk/~oa214
Trinity College
University of Cambridge
CB2 1TQ, UK
830e5b1043227fe189b3f93619ef4c58868758a7
8323af714efe9a3cadb31b309fcc2c36c8acba8fAutomatic Real-Time
Facial Expression Recognition
for Signed Language Translation
A thesis submitted in partial fulfillment of the requirements for the degree
of Magister Scientiae in the Department of Computer Science,
University of the Western Cape
May 2006
('1775637', 'Jacob Richard Whitehill', 'jacob richard whitehill')
831226405bb255527e9127b84e8eaedd7eb8e9f9ORIGINAL RESEARCH
published: 04 January 2017
doi: 10.3389/fnins.2016.00594
A Motion-Based Feature for
Event-Based Pattern Recognition
Centre National de la Recherche Scientifique, Institut National de la Santé Et de la Recherche Médicale, Institut de la Vision,
Sorbonne Universités, UPMC University Paris 06, Paris, France
This paper introduces an event-based luminance-free feature from the output of
asynchronous event-based neuromorphic retinas. The feature consists in mapping the
distribution of the optical flow along the contours of the moving objects in the visual
scene into a matrix. Asynchronous event-based neuromorphic retinas are composed
of autonomous pixels, each of them asynchronously generating “spiking” events that
encode relative changes in pixels’ illumination at high temporal resolutions. The optical
flow is computed at each event, and is integrated locally or globally in a speed and
direction coordinate frame based grid, using speed-tuned temporal kernels. The latter
ensures that the resulting feature equitably represents the distribution of the normal
motion along the current moving edges, whatever their respective dynamics. The
usefulness and the generality of the proposed feature are demonstrated in pattern
recognition applications: local corner detection and global gesture recognition.
Keywords: neuromorphic sensor, event-driven vision, pattern recognition, motion-based feature, speed-tuned
integration time, histogram of oriented optical flow, corner detection, gesture recognition
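The accumulation scheme described above (per-event optical flow binned into a speed and direction grid, with integration time shrinking as speed grows) can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions (log-spaced speed bins, exponential speed-tuned decay, hypothetical parameter names), not the authors' implementation.

```python
import math

def accumulate_feature(events, n_speed=4, n_dir=8,
                       v_min=0.1, v_max=10.0, tau0=0.1):
    """Accumulate per-event flow measurements (t, speed, direction) into a
    speed x direction grid. Each bin decays with a speed-tuned exponential
    kernel: faster motion gets a shorter integration time (assumption)."""
    grid = [[0.0] * n_dir for _ in range(n_speed)]
    last_t = [[None] * n_dir for _ in range(n_speed)]
    for (t, v, theta) in events:          # events assumed time-ordered
        v = min(max(v, v_min), v_max)
        # log-spaced speed bin index
        si = min(int(n_speed * math.log(v / v_min) / math.log(v_max / v_min)),
                 n_speed - 1)
        # uniform direction bin index over [0, 2*pi)
        di = int((theta % (2 * math.pi)) / (2 * math.pi) * n_dir) % n_dir
        tau = tau0 * v_min / v            # speed-tuned time constant
        if last_t[si][di] is not None:
            grid[si][di] *= math.exp(-(t - last_t[si][di]) / tau)
        grid[si][di] += 1.0
        last_t[si][di] = t
    return grid

# Toy event stream: (timestamp [s], speed, direction [rad]).
events = [(0.00, 0.5, 0.0), (0.01, 0.5, 0.1), (0.02, 5.0, 3.2)]
g = accumulate_feature(events)
```

Because the decay constant shrinks with speed, a fast edge and a slow edge producing the same event count end up with comparable bin mass, which is the equitable-representation property the abstract emphasizes.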
1. INTRODUCTION
In computer vision, a feature is a more or less compact representation of visual information that is
relevant to solve a task related to a given application (see Laptev, 2005; Mikolajczyk and Schmid,
2005; Mokhtarian and Mohanna, 2006; Moreels and Perona, 2007; Gil et al., 2010; Dickscheid et al.,
2011; Gauglitz et al., 2011). Building a feature consists in encoding information contained in the
visual scene (global approach) or in a neighborhood of a point (local approach). It can represent
static information (e.g., shape of an object, contour, etc.), dynamic information (e.g., speed and
direction at the point, dynamic deformations, etc.) or both simultaneously.
In this article, we propose a motion-based feature computed on visual information provided by
asynchronous image sensors known as neuromorphic retinas (see Delbrück et al., 2010; Posch,
2015). These cameras provide visual information as asynchronous event-based streams while
conventional cameras output it as synchronous frame-based streams. The ATIS (“Asynchronous
Time-based Image Sensor,” Posch et al., 2010; Posch, 2015), one of the neuromorphic visual
sensors used in this work, is a time-domain encoding image sensor with QVGA resolution. It
contains an array of fully autonomous pixels that combine an illuminance change detector circuit,
associated to the PD1 photodiode, see Figure 1A and a conditional exposure measurement block,
associated to the PD2 photodiode. The change detector individually and asynchronously initiates
the measurement of an exposure/gray scale value only if a brightness change of a certain magnitude
has been detected in the field-of-view of the respective pixel, as shown in the functional diagram
of the ATIS pixel in Figures 1B, 2. The exposure measurement circuit encodes the absolute
instantaneous pixel illuminance into the timing of asynchronous event pulses, more precisely
Edited by:
Tobi Delbruck,
ETH Zurich, Switzerland
Reviewed by:
Dan Hammerstrom,
Portland State University, USA
Rodrigo Alvarez-Icaza,
IBM, USA
*Correspondence:
Specialty section:
This article was submitted to
Neuromorphic Engineering,
a section of the journal
Frontiers in Neuroscience
Received: 07 September 2016
Accepted: 13 December 2016
Published: 04 January 2017
Citation:
Clady X, Maro J-M, Barré S and
Benosman RB (2017) A Motion-Based
Feature for Event-Based Pattern
Recognition. Front. Neurosci. 10:594.
doi: 10.3389/fnins.2016.00594
Frontiers in Neuroscience | www.frontiersin.org
January 2017 | Volume 10 | Article 594
('1804748', 'Xavier Clady', 'xavier clady')
('24337536', 'Jean-Matthieu Maro', 'jean-matthieu maro')
('2133648', 'Sébastien Barré', 'sébastien barré')
('1750848', 'Ryad B. Benosman', 'ryad b. benosman')
('1804748', 'Xavier Clady', 'xavier clady')
xavier.clady@upmc.fr
83fd5c23204147844a0528c21e645b757edd7af9USDOT Number Localization and Recognition From Vehicle Side-View NIR
Images
Palo Alto Research Center (PARC
800 Phillips Rd. Webster NY 14580
('2415287', 'Orhan Bulan', 'orhan bulan')
('1732789', 'Safwan Wshah', 'safwan wshah')
('3195726', 'Ramesh Palghat', 'ramesh palghat')
('2978081', 'Vladimir Kozitsky', 'vladimir kozitsky')
('34801919', 'Aaron Burry', 'aaron burry')
orhan.bulan,safwan.wshah,ramesh.palghat,vladimir.kozitsky,aaron.burry@parc.com
8384e104796488fa2667c355dd15b65d6d5ff957A Discriminative Latent Model of Image Region and
Object Tag Correspondence
Department of Computer Science
University of Illinois at Urbana-Champaign
School of Computing Science
Simon Fraser University
('40457160', 'Yang Wang', 'yang wang')
('10771328', 'Greg Mori', 'greg mori')
yangwang@uiuc.edu
mori@cs.sfu.ca
8323529cf37f955fb3fc6674af6e708374006a28Evaluation of Face Resolution for Expression Analysis
IBM T. J. Watson Research Center
PO Box 704, Yorktown Heights, NY 10598
('40383812', 'Ying-li Tian', 'ying-li tian')
Email: yltian@us.ibm.com
8395cf3535a6628c3bdc9b8d0171568d551f5ff0Entropy Non-increasing Games for the
Improvement of Dataflow Programming
Norbert Bátfai, Renátó Besenczi, Gergő Bogacsovics,
February 16, 2017
('9544536', 'Fanny Monori', 'fanny monori')
83ac942d71ba908c8d76fc68de6173151f012b38
834f5ab0cb374b13a6e19198d550e7a32901a4b2Face Translation between Images and Videos using Identity-aware CycleGAN
†Computer Vision Lab, ETH Zurich, Switzerland
‡VISICS, KU Leuven, Belgium
('7945869', 'Zhiwu Huang', 'zhiwu huang')
('2208488', 'Bernhard Kratzwald', 'bernhard kratzwald')
('35268081', 'Danda Pani Paudel', 'danda pani paudel')
('1839268', 'Jiqing Wu', 'jiqing wu')
('1681236', 'Luc Van Gool', 'luc van gool')
{zhiwu.huang, paudel, jwu, vangool}@vision.ee.ethz.ch, bkratzwald@ethz.ch
8320dbdd3e4712cca813451cd94a909527652d63EAR BIOMETRICS
and Wilhelm Burger
Johannes Kepler University, Institute of Systems Science, Linz, Austria
burge@cast.uni-linz.ac.at
('12811570', 'Mark Burge', 'mark burge')
837e99301e00c2244023a8a48ff98d7b521c93acLocal Feature Evaluation for a Constrained
Local Model Framework
Graduate School of Engineering, Tottori University
101 Minami 4-chome, Koyama-cho, Tottori 680-8550, Japan
('1770332', 'Maiya Hori', 'maiya hori')
('48532779', 'Shogo Kawai', 'shogo kawai')
('2020088', 'Hiroki Yoshimura', 'hiroki yoshimura')
('1679437', 'Yoshio Iwai', 'yoshio iwai')
hori@ike.tottori-u.ac.jp
834b15762f97b4da11a2d851840123dbeee51d33Landmark-free smile intensity estimation
IMAGO Research Group - Universidade Federal do Paraná
Fig. 1. Overview of our method for smile intensity estimation
('1800955', 'Olga R. P. Bellon', 'olga r. p. bellon')
{julio.batista,olga,luciano}@ufpr.br
833f6ab858f26b848f0d747de502127406f06417978-1-4244-5654-3/09/$26.00 ©2009 IEEE
ICIP 2009
8334da483f1986aea87b62028672836cb3dc6205Fully Associative Patch-based 1-to-N Matcher for Face Recognition
Computational Biomedicine Lab
University of Houston
('39089616', 'Lingfeng Zhang', 'lingfeng zhang')
('1706204', 'Ioannis A. Kakadiaris', 'ioannis a. kakadiaris')
{lzhang34, ioannisk}@uh.edu
831b4d8b0c0173b0bac0e328e844a0fbafae6639Consensus-Driven Propagation in
Massive Unlabeled Data for Face Recognition
CUHK - SenseTime Joint Lab, The Chinese University of Hong Kong
2 SenseTime Group Limited
Nanyang Technological University
('31818765', 'Xiaohang Zhan', 'xiaohang zhan')
('3243969', 'Ziwei Liu', 'ziwei liu')
('1721677', 'Junjie Yan', 'junjie yan')
('1807606', 'Dahua Lin', 'dahua lin')
('1717179', 'Chen Change Loy', 'chen change loy')
{zx017, zwliu, dhlin}@ie.cuhk.edu.hk
yanjunjie@sensetime.com
ccloy@ieee.org
8309e8f27f3fb6f2ac1b4343a4ad7db09fb8f0ffGeneric versus Salient Region-based Partitioning
for Local Appearance Face Recognition
Computer Science Depatment, Universit¨at Karlsruhe (TH)
Am Fasanengarten 5, Karlsruhe 76131, Germany
http://isl.ira.uka.de/cvhci
('1742325', 'Rainer Stiefelhagen', 'rainer stiefelhagen')
{ekenel,stiefel}@ira.uka.de
1b02b9413b730b96b91d16dcd61b2420aef97414Detection of affective and attentional markers of
elderly people in interaction with a robot
To cite this version: Detection of affective and attentional markers of elderly people in interaction
with a robot. Artificial Intelligence [cs.AI]. Université Paris-Saclay, 2015. French. 2015SACLS081.
HAL Id: tel-01280505
https://tel.archives-ouvertes.fr/tel-01280505
Submitted on 29 Feb 2016
HAL is a multi-disciplinary open access
archive for the deposit and dissemination of sci-
entific research documents, whether they are pub-
lished or not. The documents may come from
teaching and research institutions in France or
abroad, or from public or private research centers
('47829802', 'Fan Yang', 'fan yang')
('47829802', 'Fan Yang', 'fan yang')
1b55c4e804d1298cbbb9c507497177014a923d22Incremental Class Representation
Learning for Face Recognition
Degree’s Thesis
Audiovisual Systems Engineering
Author:
Universitat Politècnica de Catalunya (UPC)
2016 - 2017
('2470219', 'Elisa Sayrol', 'elisa sayrol')
('2585946', 'Josep Ramon Morros', 'josep ramon morros')
1b635f494eff2e5501607ebe55eda7bdfa8263b8USC at THUMOS 2014
University of Southern California, Institute for Robotics and Intelligent Systems
Los Angeles, CA 90089, USA
('1726241', 'Chen Sun', 'chen sun')
('27735100', 'Ram Nevatia', 'ram nevatia')
1b6394178dbc31d0867f0b44686d224a19d61cf4EPML: Expanded Parts based Metric Learning for
Occlusion Robust Face Verification
To cite this version:
EPML: Expanded Parts based Metric Learning for Occlusion Robust Face Verification.
Asian Conference on Computer Vision, Nov 2014, Singapore. pp. 1-15, 2014.
HAL Id: hal-01070657
https://hal.archives-ouvertes.fr/hal-01070657
Submitted on 2 Oct 2014
('2515597', 'Gaurav Sharma', 'gaurav sharma')
('2515597', 'Gaurav Sharma', 'gaurav sharma')
1bd50926079e68a6e32dc4412e9d5abe331daefb
1bdef21f093c41df2682a07f05f3548717c7a3d1Towards Automated Classification of Emotional Facial Expressions
1Department of Mathematics and Computer Science, 2Department of Psychology
Rutgers University Newark, 101 Warren St., Newark, NJ, 07102 USA
Lewis J. Baker (lewis.j.baker@rutgers.edu)1, Vanessa LoBue (vlobue@rutgers.edu)2,
Elizabeth Bonawitz (elizabeth.bonawitz@rutgers.edu)2, & Patrick Shafto (patrick.shafto@gmail.com)1
1b150248d856f95da8316da868532a4286b9d58eAnalyzing 3D Objects in Cluttered Images
UC Irvine
UC Irvine
('1888731', 'Mohsen Hejrati', 'mohsen hejrati')
('1770537', 'Deva Ramanan', 'deva ramanan')
shejrati@ics.uci.edu
dramanan@ics.uci.edu
1be498d4bbc30c3bfd0029114c784bc2114d67c0Age and Gender Estimation of Unfiltered Faces
('2037829', 'Eran Eidinger', 'eran eidinger')
('1792038', 'Roee Enbar', 'roee enbar')
('1756099', 'Tal Hassner', 'tal hassner')
1bbec7190ac3ba34ca91d28f145e356a11418b67Action Recognition with Dynamic Image Networks
Citation for published version:
Bilen, H, Fernando, B, Gavves, E & Vedaldi, A 2017, 'Action Recognition with Dynamic Image Networks',
IEEE Transactions on Pattern Analysis and Machine Intelligence. DOI: 10.1109/TPAMI.2017.2769085
Digital Object Identifier (DOI):
10.1109/TPAMI.2017.2769085
Link:
Link to publication record in Edinburgh Research Explorer
Document Version:
Peer reviewed version
Published In:
IEEE Transactions on Pattern Analysis and Machine Intelligence
General rights
Copyright for the publications made accessible via the Edinburgh Research Explorer is retained by the author(s)
and / or other copyright owners and it is a condition of accessing these publications that users recognise and
abide by the legal requirements associated with these rights.
Take down policy
The University of Edinburgh has made every reasonable effort to ensure that Edinburgh Research Explorer
content complies with UK legislation. If you believe that the public display of this file breaches copyright please
contact openaccess@ed.ac.uk providing details, and we will remove access to the work immediately and
investigate your claim.
Download date: 25. Dec. 2017
Edinburgh Research Explorer
1b3587363d37dd197b6adbcfa79d49b5486f27d8Multimodal Grounding for Language Processing
Language Technology Lab, University of Duisburg-Essen
Ubiquitous Knowledge Processing Lab (UKP) and Research Training Group AIPHES
Department of Computer Science, Technische Universit¨at Darmstadt
www.ukp.tu-darmstadt.de
('2752573', 'Lisa Beinborn', 'lisa beinborn')
('25080314', 'Teresa Botschen', 'teresa botschen')
('1730400', 'Iryna Gurevych', 'iryna gurevych')
1b5875dbebc76fec87e72cee7a5263d325a77376Learnt Quasi-Transitive Similarity for Retrieval from Large Collections of Faces
Ognjen Arandjelovi´c
University of St Andrews, United Kingdom
ognjen.arandjelovic@gmail.com
1bdfb3deae6e6c0df6537efcd1d7edcb4d7a96e9Groupwise Constrained Reconstruction for Subspace Clustering
Ke Zhang†
School of Computer Science, Fudan University, Shanghai, 200433, China
QCIS Centre, FEIT, University of Technology, Sydney, NSW 2007, Australia
('1736607', 'Ruijiang Li', 'ruijiang li')
('1713520', 'Bin Li', 'bin li')
('1751513', 'Cheng Jin', 'cheng jin')
('1713721', 'Xiangyang Xue', 'xiangyang xue')
rjli@fudan.edu.cn
bin.li-1@uts.edu.au
k_zhang@fudan.edu.cn
jc@fudan.edu.cn
xyxue@fudan.edu.cn
1b300a7858ab7870d36622a51b0549b1936572d4This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TIP.2016.2537215, IEEE
Transactions on Image Processing
Dynamic Facial Expression Recognition with Atlas
Construction and Sparse Representation
('1734663', 'Yimo Guo', 'yimo guo')
('1757287', 'Guoying Zhao', 'guoying zhao')
1b90507f02967ff143fce993a5abbfba173b1ed0Image Processing Theory, Tools and Applications
Gradient-DCT (G-DCT) Descriptors
Technical University of Ostrava, FEECS
17. listopadu 15, 708 33 Ostrava-Poruba, Czech Republic
('2467747', 'Radovan Fusek', 'radovan fusek')
('2557877', 'Eduard Sojka', 'eduard sojka')
e-mail: radovan.fusek@vsb.cz, eduard.sojka@vsb.cz
1b794b944fd462a2742b6c2f8021fecc663004c9A Hierarchical Probabilistic Model for Facial Feature Detection
Rensselaer Polytechnic Institute
('1746738', 'Yue Wu', 'yue wu')
('2860279', 'Ziheng Wang', 'ziheng wang')
('1726583', 'Qiang Ji', 'qiang ji')
{wuy9,wangz10,jiq}@rpi.edu
1b7ae509c8637f3c123cf6151a3089e6b8a0d5b2From Few to Many: Generative Models for Recognition
Under Variable Pose and Illumination
Departments of Electrical Engineering
Beckman Institute
and Computer Science
Yale University
New Haven, CT -
University of Illinois, Urbana-Champaign
Urbana, IL 
('3230391', 'Athinodoros S. Georghiades', 'athinodoros s. georghiades')
('1765887', 'David J. Kriegman', 'david j. kriegman')
1b41d4ffb601d48d7a07dbbae01343f4eb8cc38cExploiting Temporal Information for DCNN-based Fine-Grained Object Classification
Australian Centre for Robotic Vision, Australia
Queensland University of Technology, Australia
Data61, CSIRO, Australia
University of Queensland, Australia
University of Adelaide, Australia
('1808390', 'ZongYuan Ge', 'zongyuan ge')
('1763662', 'Chris McCool', 'chris mccool')
('1781182', 'Conrad Sanderson', 'conrad sanderson')
('1722767', 'Peng Wang', 'peng wang')
('2161037', 'Lingqiao Liu', 'lingqiao liu')
1b1173a3fb33f9dfaf8d8cc36eb0bf35e364913dDICTA
#147
DICTA 2010 Submission #147. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.
Registration Invariant Representations for Expression Detection
Anonymous DICTA submission
Paper ID 147
1b0a071450c419138432c033f722027ec88846eaWindsor Oceanico Hotel, Rio de Janeiro, Brazil, November 1-4, 2016
978-1-5090-1889-5/16/$31.00 ©2016 IEEE
1b60b8e70859d5c85ac90510b370b501c5728620Using Detailed Independent 3D Sub-models to Improve
Facial Feature Localisation and Pose Estimation
Imaging Science and Biomedical Engineering, The University of Manchester, UK
('1753123', 'Angela Caunce', 'angela caunce')
1b3b01513f99d13973e631c87ffa43904cd8a821HMM RECOGNITION OF EXPRESSIONS IN UNRESTRAINED VIDEO INTERVALS
Universitat Politècnica de Catalunya, Barcelona, Spain
('3067467', 'José Luis Landabaso', 'josé luis landabaso')
('1767549', 'Montse Pardàs', 'montse pardàs')
('2868058', 'Antonio Bonafonte', 'antonio bonafonte')
1bc214c39536c940b12c3a2a6b78cafcbfddb59a
1bc9aaa41c08bbd0c01dd5d7d7ebf3e48ae78113Article
k-Same-Net: k-Anonymity with Generative Deep
Neural Networks for Face Deidentification †
Faculty of Computer and Information Science, University of Ljubljana, Večna pot 113, SI-1000 Ljubljana
Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, SI-1000 Ljubljana, Slovenia
† This paper is an extended version of our paper published in Meden B.; Emeršič Ž.; Štruc V.; Peer P.
k-Same-Net: Neural-Network-Based Face Deidentification. In the Proceedings of the International
Conference and Workshop on Bioinspired Intelligence (IWOBI), Funchal, Madeira, Portugal, 10–12 July 2017.
Received: 1 December 2017; Accepted: 9 January 2018; Published: 13 January 2018
('34862665', 'Peter Peer', 'peter peer')
Slovenia; ziga.emersic@fri.uni-lj.si (Z.E.); peter.peer@fri.uni-lj.si (P.P.)
vitomir.struc@fe.uni-lj.si
* Correspondence: blaz.meden@fri.uni-lj.si; Tel.: +386-1-479-8245
1be18a701d5af2d8088db3e6aaa5b9b1d54b6fd3ENHANCEMENT OF FAST FACE DETECTION ALGORITHM BASED ON A CASCADE OF
DECISION TREES
Commission II, WG II/5
KEY WORDS: Face Detection, Cascade Algorithm, Decision Trees.
('40293010', 'V. V. Khryashchev', 'v. v. khryashchev')
('32423989', 'A. A. Lebedev', 'a. a. lebedev')
('3414890', 'A. L. Priorov', 'a. l. priorov')
a YSU, Yaroslavl, Russia - lebedevdes@gmail.com, (vhr, andcat)@yandex.ru
1b79628af96eb3ad64dbb859dae64f31a09027d5
1bcbf2a4500d27d036e0f9d36d7af71c72f8ab61Computer Vision and Pattern Recognition 2005
Recognizing Facial Expression: Machine Learning and Application to
Spontaneous Behavior
Institute for Neural Computation, University of California, San Diego
Ian Fasel1, Javier Movellan1
Rutgers University, New Brunswick, NJ
('2218905', 'Marian Stewart Bartlett', 'marian stewart bartlett')
('2724380', 'Gwen Littlewort', 'gwen littlewort')
('2767464', 'Claudia Lainscsek', 'claudia lainscsek')
mbartlett@ucsd.edu
1b70bbf7cdfc692873ce98dd3c0e191580a1b041 International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395 -0056
Volume: 03 Issue: 10 | Oct -2016 www.irjet.net p-ISSN: 2395-0072
Enhancing Performance of Face Recognition
System Using Independent Component Analysis
Student, Computer Science, Shah and Anchor Kuttchi Engineering College, Mumbai, India
Guide, HOD, Computer Science, Shah and Anchor Kuttchi Engineering College, Mumbai, India
Co-Guide, Computer Science, Shah and Anchor Kuttchi Engineering College, Mumbai, India
cards, tokens and keys. Biometric based methods examine
('32330340', 'Manimala Mahato', 'manimala mahato')
1b71d3f30238cb6621021a95543cce3aab96a21bFine-grained Video Classification and Captioning
University of Toronto1, Twenty Billion Neurons
('2454800', 'Farzaneh Mahdisoltani', 'farzaneh mahdisoltani')
('40586522', 'Guillaume Berger', 'guillaume berger')
('3462264', 'Waseem Gharbieh', 'waseem gharbieh')
('1710604', 'Roland Memisevic', 'roland memisevic')
1 {farzaneh, fleet}@cs.toronto.edu, {firstname.lastname}@twentybn.com
1b4f6f73c70353869026e5eec1dd903f9e26d43fRobust Subjective Visual Property Prediction
from Crowdsourced Pairwise Labels
('35782003', 'Yanwei Fu', 'yanwei fu')
('1697755', 'Timothy M. Hospedales', 'timothy m. hospedales')
('1700927', 'Tao Xiang', 'tao xiang')
('3081531', 'Jiechao Xiong', 'jiechao xiong')
('2073354', 'Shaogang Gong', 'shaogang gong')
('1717863', 'Yizhou Wang', 'yizhou wang')
('1746280', 'Yuan Yao', 'yuan yao')
1bc23c771688109bed9fd295ce82d7e702726327
('1706007', 'Jianchao Yang', 'jianchao yang')
1bad8a9640cdbc4fe7de12685651f44c4cff35ceTHETIS: THree Dimensional Tennis Shots
A human action dataset
Sofia Gourgari
Konstantinos Karpouzis
Stefanos Kollias
National Technical University of Athens
Image Video and Multimedia Systems Laboratory
('2123731', 'Georgios Goudelis', 'georgios goudelis')
1b589016fbabe607a1fb7ce0c265442be9caf3a9
1be0ce87bb5ba35fa2b45506ad997deef6d6a0a8EXMOVES: Classifier-based Features for Scalable Action Recognition
Dartmouth College, NH 03755 USA
('1687325', 'Du Tran', 'du tran')
('1732879', 'Lorenzo Torresani', 'lorenzo torresani')
{DUTRAN,LORENZO}@CS.DARTMOUTH.EDU
1b4bc7447f500af2601c5233879afc057a5876d8Facial Action Unit Classification with Hidden Knowledge
under Incomplete Annotation
University of Science and
Technology of China
Hefei, Anhui
University of Science and
Technology of China
Hefei, Anhui
Rensselaer Polytechnic
Institute
Troy, NY
P.R.China, 230027
P.R.China, 230027
USA, 12180
('1715001', 'Jun Wang', 'jun wang')
('1791319', 'Shangfei Wang', 'shangfei wang')
('1726583', 'Qiang Ji', 'qiang ji')
junwong@mail.ustc.edu.cn
sfwang@ustc.edu.cn
qji@ecse.rpi.edu
1b27ca161d2e1d4dd7d22b1247acee5c53db5104
1badfeece64d1bf43aa55c141afe61c74d0bd25eOL ´E: Orthogonal Low-rank Embedding,
A Plug and Play Geometric Loss for Deep Learning
1Universidad de la Rep´ublica
Uruguay
Duke University
USA
('2077648', 'Qiang Qiu', 'qiang qiu')
('1699339', 'Guillermo Sapiro', 'guillermo sapiro')
7711a7404f1f1ac3a0107203936e6332f50ac30cAction Classification and Highlighting in Videos
Disney Research Pittsburgh
Disney Research Pittsburgh
('1730844', 'Atousa Torabi', 'atousa torabi')
('14517812', 'Leonid Sigal', 'leonid sigal')
atousa.torabi@disneyresearch.com
lsigal@disneyresearch.com
778c9f88839eb26129427e1b8633caa4bd4d275ePose Pooling Kernels for Sub-category Recognition
ICSI & UC Berkeley
ICSI & UC Berkeley
Trever Darrell
ICSI & UC Berkeley
('40565777', 'Ning Zhang', 'ning zhang')
('2071606', 'Ryan Farrell', 'ryan farrell')
nzhang@eecs.berkeley.edu
farrell@eecs.berkeley.edu
trevor@eecs.berkeley.edu
7735f63e5790006cb3d989c8c19910e40200abfcMultispectral Imaging For Face
Recognition Over Varying
Illumination
A Dissertation
Presented for the
Doctor of Philosophy Degree
The University of Tennessee, Knoxville
December 2008
('21051127', 'Hong Chang', 'hong chang')
7789a5d87884f8bafec8a82085292e87d4e2866fA Unified Tensor-based Active Appearance Face
Model
Member, IEEE
('2976854', 'Zhen-Hua Feng', 'zhen-hua feng')
('1748684', 'Josef Kittler', 'josef kittler')
77b1db2281292372c38926cc4aca32ef056011dc
SPECIAL SECTION: FACIAL EXPRESSIONS
Children’s Interpretation of Facial Expressions:
The Long Path from Valence-Based to Specific
Discrete Categories
Emotion Review
Vol. 0, No. 0 (2012) 1–6
© The Author(s) 2012
ISSN 1754-0739
DOI: 10.1177/1754073912451492
er.sagepub.com
Boston College, USA
('3947094', 'Sherri C. Widen', 'sherri c. widen')
776835eb176ed4655d6e6c308ab203126194c41e
77c53ec6ea448db4dad586e002a395c4a47ecf66Research Journal of Applied Sciences, Engineering and Technology 4(17): 2879-2886, 2012
ISSN: 2040-7467
© Maxwell Scientific Organization, 2012
Submitted: November 25, 2011
Accepted: January 13, 2012
Published: September 01, 2012
Face Recognition Based on Facial Features
COMSATS Institute of Information Technology Wah Cantt
47040, Pakistan
National University of Science and Technology
Peshawar Road, Rawalpindi, 46000, Pakistan
('33088042', 'Muhammad Sharif', 'muhammad sharif')
('3349608', 'Muhammad Younas Javed', 'muhammad younas javed')
('32805529', 'Sajjad Mohsin', 'sajjad mohsin')
778bff335ae1b77fd7ec67404f71a1446624331bHough Forest-based Facial Expression Recognition from
Video Sequences
BIWI, ETH Zurich http://www.vision.ee.ethz.ch
VISICS, K.U. Leuven http://www.esat.kuleuven.be/psi/visics
('3092828', 'Gabriele Fanelli', 'gabriele fanelli')
('2569989', 'Angela Yao', 'angela yao')
('40324831', 'Pierre-Luc Noel', 'pierre-luc noel')
('2946643', 'Juergen Gall', 'juergen gall')
('1681236', 'Luc Van Gool', 'luc van gool')
{gfanelli,yaoa,gall,vangool}@vision.ee.ethz.ch
noelp@student.ethz.ch
7726a6ab26a1654d34ec04c0b7b3dd80c5f84e0dCONTENT-AWARE COMPRESSION USING SALIENCY-DRIVEN IMAGE RETARGETING
*Disney Research Zurich
†ETH Zurich
('1782328', 'Yael Pritch', 'yael pritch')
('2893744', 'Alexander Sorkine-Hornung', 'alexander sorkine-hornung')
('1712877', 'Stefan Mangold', 'stefan mangold')
774cbb45968607a027ae4729077734db000a1ec5I. KWAK ET AL.: VISUAL RECOGNITION OF URBAN TRIBES
From Bikers to Surfers:
Visual Recognition of Urban Tribes
Ana C. Murillo2
David Kriegman1
Serge Belongie1
1 Dept. of Computer Science and
Engineering
University of California, San Diego
San Diego, CA, USA
2 Dpt. Informática e Ing. Sistemas - Inst.
Investigación en Ingeniería de Aragón.
University of Zaragoza, Spain
3 Department of Computer Science
Columbia University, USA
('2064392', 'Iljung S. Kwak', 'iljung s. kwak')
('1767767', 'Peter N. Belhumeur', 'peter n. belhumeur')
iskwak@cs.ucsd.edu
acm@unizar.es
belhumeur@cs.columbia.edu
kriegman@cs.ucsd.edu
sjb@cs.ucsd.edu
7754b708d6258fb8279aa5667ce805e9f925dfd0Facial Action Unit Recognition by Exploiting
Their Dynamic and Semantic Relationships
('1686235', 'Yan Tong', 'yan tong')
('2460793', 'Wenhui Liao', 'wenhui liao')
('1726583', 'Qiang Ji', 'qiang ji')
77db171a523fc3d08c91cea94c9562f3edce56e1Poursaberi et al. EURASIP Journal on Image and Video Processing 2012, 2012:17
http://jivp.eurasipjournals.com/content/2012/1/17
R ES EAR CH
Open Access
Gauss–Laguerre wavelet textural feature fusion
with geometrical information for facial expression
identification
('1786383', 'Ahmad Poursaberi', 'ahmad poursaberi')
('1870195', 'Hossein Ahmadi', 'hossein ahmadi')
77037a22c9b8169930d74d2ce6f50f1a999c1221Robust Face Recognition With Kernelized
Locality-Sensitive Group Sparsity Representation
('1907978', 'Shoubiao Tan', 'shoubiao tan')
('2796142', 'Xi Sun', 'xi sun')
('2710497', 'Wentao Chan', 'wentao chan')
('33306018', 'Lei Qu', 'lei qu')
779ad364cae60ca57af593c83851360c0f52c7bfSteerable Pyramids Feature Based Classification Using Fisher
Linear Discriminant for Face Recognition
EL HASSOUNI MOHAMMED12
GSCM-LRIT, Faculty of Sciences, Mohammed V University-Agdal, Rabat, Morocco
DESTEC, FLSHR Mohammed V University-Agdal, Rabat, Morocco
PO.Box 1014, Rabat, Morocco
('37917405', 'ABOUTAJDINE DRISS', 'aboutajdine driss')
moha387@yahoo.fr
7792fbc59f3eafc709323cdb63852c5d3a4b23e9Pose from Action: Unsupervised Learning of
Pose Features based on Motion
Robotics Institute
Carnegie Mellon University
('3234247', 'Senthil Purushwalkam', 'senthil purushwalkam')
('1737809', 'Abhinav Gupta', 'abhinav gupta')
{spurushw@andrew,abhinavg@cs}.cmu.edu
77fbbf0c5729f97fcdbfdc507deee3d388cd4889SMITH & DYER: 3D FACIAL LANDMARK ESTIMATION
Pose-Robust 3D Facial Landmark Estimation
from a Single 2D Image
http://www.cs.wisc.edu/~bmsmith
http://www.cs.wisc.edu/~dyer
Department of Computer Sciences
University of Wisconsin-Madison
Madison, WI USA
('2721523', 'Brandon M. Smith', 'brandon m. smith')
('1724754', 'Charles R. Dyer', 'charles r. dyer')
776362314f1479f5319aaf989624ac604ba42c65Attribute learning in large-scale datasets
Stanford University
('2192178', 'Olga Russakovsky', 'olga russakovsky')
('3216322', 'Li Fei-Fei', 'li fei-fei')
{olga,feifeili}@cs.stanford.edu
77d31d2ec25df44781d999d6ff980183093fb3deThe Multiverse Loss for Robust Transfer Learning
Supplementary
1. Omitted proofs
We provide proofs that were omitted from the paper for lack of space. We follow the same theorem
numbering as in the paper.
The joint loss

    J(F^1, b^1, \dots, F^m, b^m, D, y) = \sum_{r=1}^{m} L(F^r, b^r, D, y)    (2)

is bounded by

    m L^*(D, y) \le J(F^1, b^1, \dots, F^m, b^m, D, y)
                \le m L^*(D, y) + \sum_{l=1}^{m-1} A_l \lambda_{d-l+1}    (3)

where A_1, \dots, A_{m-1} are bounded parameters.

Lemma 1. The minimizers F^*, b^* of L are not unique, and it holds that for any vector v \in R^d
and scalar s, the solutions F^* + v 1_c^\top, b^* + s 1_c are also minimizers of L.

Proof. Denoting V = v 1_c^\top and s = s 1_c,

    L(F^* + V, b^* + s, D, y)
      = -\sum_{i=1}^{n} \log \frac{e^{d_i^\top f_{y_i} + d_i^\top v + b_{y_i} + s}}
                                  {\sum_{j=1}^{c} e^{d_i^\top f_j + d_i^\top v + b_j + s}}
      = -\sum_{i=1}^{n} \log \frac{e^{d_i^\top v + s}\, e^{d_i^\top f_{y_i} + b_{y_i}}}
                                  {e^{d_i^\top v + s} \sum_{j=1}^{c} e^{d_i^\top f_j + b_j}}
      = -\sum_{i=1}^{n} \log \frac{e^{d_i^\top f_{y_i} + b_{y_i}}}
                                  {\sum_{j=1}^{c} e^{d_i^\top f_j + b_j}}
      = L(F^*, b^*, D, y).

The following simple lemma was not part of the paper. However, it is the reasoning behind the
statement at the end of the proof of Thm. 1: "Since \forall i, j\ p_i(j) > 0 and since rank(D) is
full, \sum_{i=1}^{n} d_i d_i^\top p_i(j) p_i(j') is PD."

Lemma 2. Let K = \sum_{i=1}^{n} d_i d_i^\top be a full-rank d \times d matrix, i.e., it is PD and
not just PSD. Then for every vector q \in R^n such that \forall i\ q_i > 0, the matrix
\hat{K} = \sum_{i=1}^{n} q_i d_i d_i^\top is also full rank.

Proof. For every vector v \in R^d, v^\top \hat{K} v \ge (\min_i q_i)\, v^\top K v > 0.

Theorem 3. There exists a set of weights F^1 = [f_1^1, f_2^1, \dots, f_c^1], b^1,
F^2 = [f_1^2, f_2^2, \dots, f_c^2], b^2, ..., F^m = [f_1^m, f_2^m, \dots, f_c^m], b^m which are
orthogonal: \forall j, r \ne s,\ f_j^r \perp f_j^s.

Proof. We again prove the theorem by constructing such a solution. Denote by
v_{d-m+2}, \dots, v_d the eigenvectors of K corresponding to \lambda_{d-m+2}, \dots, \lambda_d.
Given F^1 = F^*, b^1 = b^*, we can construct each pair F^r, b^r as follows:

    \forall j, r:\quad f_j^r = f_j^1 + \sum_{l=1}^{m-1} \alpha_{jlr} v_{d-l+1}, \qquad b^r = b^1    (4)

The tensor of parameters \alpha_{jlr} is constructed to ensure the orthogonality condition.
Formally, \alpha_{jlr} has to satisfy:

    \forall j, r \ne s:\quad \Big( f_j^1 + \sum_{l=1}^{m-1} \alpha_{jlr} v_{d-l+1} \Big)^\top f_j^s = 0    (5)

Noticing that (5) constitutes a set of \frac{1}{2} m(m-1) equations, it can be satisfied by the
tensor \alpha_{jlr}, which contains m(m-1)c parameters. Defining
\Psi^r = [\psi_1^r, \psi_2^r, \dots, \psi_c^r] = F^r -
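Lemma 1's shift invariance, adding a common vector v to every column of F and a common scalar s to every bias leaves the softmax cross-entropy loss unchanged, can be checked numerically. This is a minimal sketch under our own naming (`softmax_nll` is not code from the paper):

```python
import math
import random

def softmax_nll(F, b, D, y):
    """Negative log-likelihood -sum_i log softmax(D_i^T F + b)[y_i].
    F: d x c weight matrix (columns f_j), b: length-c biases,
    D: n x d data rows d_i, y: length-n integer labels."""
    d, c, n = len(F), len(b), len(D)
    total = 0.0
    for i in range(n):
        logits = [sum(D[i][k] * F[k][j] for k in range(d)) + b[j] for j in range(c)]
        m = max(logits)  # stabilize log-sum-exp
        log_z = m + math.log(sum(math.exp(z - m) for z in logits))
        total -= logits[y[i]] - log_z
    return total

random.seed(0)
d, c, n = 3, 4, 5
F = [[random.gauss(0, 1) for _ in range(c)] for _ in range(d)]
b = [random.gauss(0, 1) for _ in range(c)]
D = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
y = [random.randrange(c) for _ in range(n)]

# Shift: F + v 1_c^T adds d_i^T v to every logit; b + s 1_c adds s.
v = [random.gauss(0, 1) for _ in range(d)]
s = 0.7
F_shift = [[F[k][j] + v[k] for j in range(c)] for k in range(d)]
b_shift = [bj + s for bj in b]
assert abs(softmax_nll(F, b, D, y) - softmax_nll(F_shift, b_shift, D, y)) < 1e-9
```

The common factor e^{d_i^T v + s} appears in both numerator and denominator of every softmax term and cancels, which is exactly the cancellation in the proof of Lemma 1.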
77fb9e36196d7bb2b505340b6b94ba552a58b01bDetecting the Moment of Completion:
Temporal Models for Localising Action Completion
University of Bristol, Bristol, BS8 1UB, UK
('10007321', 'Farnoosh Heidarivincheh', 'farnoosh heidarivincheh')
('1728108', 'Majid Mirmehdi', 'majid mirmehdi')
('1728459', 'Dima Damen', 'dima damen')
farnoosh.heidarivincheh@bristol.ac.uk
486840f4f524e97f692a7f6b42cd19019ee71533DeepVisage: Making face recognition simple yet with powerful generalization
skills
1Laboratoire LIRIS, École centrale de Lyon, 69134 Ecully, France.
2Safran Identity & Security, 92130 Issy-les-Moulineaux, France.
('34767162', 'Jonathan Milgram', 'jonathan milgram')
('34086868', 'Liming Chen', 'liming chen')
md-abul.hasnat@ec-lyon.fr, {julien.bohne, stephane.gentric, jonathan.milgram}@safrangroup.com,
liming.chen@ec-lyon.fr
48463a119f67ff2c43b7c38f0a722a32f590dfebInternational Journal of Computer Applications (0975 – 8887)
Volume 52– No.4, August 2012
Intelligent Method for Face Recognition of Infant
Department of Computer
Engineering
Indian Institute of Technology
Banaras Hindu University
Varanasi, India-221005
Department of Computer
Engineering
Indian Institute of Technology
Banaras Hindu University
Varanasi, India-221005

Department of Computer
Engineering
Indian Institute of Technology
Banaras Hindu University
Varanasi, India-221005
('2829597', 'Shrikant Tiwari', 'shrikant tiwari')
('1920426', 'Aruni Singh', 'aruni singh')
('32120516', 'Sanjay Kumar Singh', 'sanjay kumar singh')
488d3e32d046232680cc0ba80ce3879f92f35cacJournal of Information Systems and Telecommunication, Vol. 2, No. 4, October-December 2014
205
Facial Expression Recognition Using Texture Description of
Displacement Image
Amirkabir University of Technology, Tehran. Iran
Abolghasem-Asadollah Raie*
Amirkabir University of Technology, Tehran. Iran
Sharif University of Technology, Tehran. Iran
Received: 14/Sep/2013 Revised: 15/Mar/2014 Accepted: 10/Aug/2014
('3295771', 'Hamid Sadeghi', 'hamid sadeghi')
('1697809', 'Mohammad-Reza Mohammadi', 'mohammad-reza mohammadi')
hamid.sadeghi@aut.ac.ir
raie@aut.ac.ir
mrmohammadi@ee.sharif.edu
48186494fc7c0cc664edec16ce582b3fcb5249c0P-CNN: Pose-based CNN Features for Action Recognition
Guilhem Chéron
INRIA
('1785596', 'Ivan Laptev', 'ivan laptev')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
48499deeaa1e31ac22c901d115b8b9867f89f952Interim Report of Final Year Project
HKU-Face: A Large Scale Dataset for
Deep Face Recognition
3035140108
Haoyu Li
3035141841
COMP4801 Final Year Project
Project Code: 17007
('3347561', 'Haicheng Wang', 'haicheng wang')
486a82f50835ea888fbc5c6babf3cf8e8b9807bcMSU TECHNICAL REPORT MSU-CSE-15-11, JULY 24, 2015
Face Search at Scale: 80 Million Gallery
('7496032', 'Dayong Wang', 'dayong wang')
('40653304', 'Charles Otto', 'charles otto')
('6680444', 'Anil K. Jain', 'anil k. jain')
48fea82b247641c79e1994f4ac24cad6b6275972Mining Discriminative Components With Low-Rank And
Sparsity Constraints for Face Recognition
Computer Science and Engineering
Arizona State University
Tempe, AZ, 85281
('1689161', 'Qiang Zhang', 'qiang zhang')
('2913552', 'Baoxin Li', 'baoxin li')
qzhang53, baoxin.li@asu.edu
48734cb558b271d5809286447ff105fd2e9a6850Facial Expression Recognition Using Enhanced Deep 3D Convolutional Neural
Networks
Department of Electrical and Computer Engineering
University of Denver, Denver, CO
('3093835', 'Mohammad H. Mahoor', 'mohammad h. mahoor')
behzad.hasani@du.edu and mmahoor@du.edu
48a417cfeba06feb4c7ab30f06c57ffbc288d0b5Robust Dictionary Learning by Error Source Decomposition
Northwestern University
2145 Sheridan Road, Evanston, IL 60208
('2240134', 'Zhuoyuan Chen', 'zhuoyuan chen')
('39955137', 'Ying Wu', 'ying wu')
zhuoyuanchen2014@u.northwestern.edu,yingwu@eecs.northwestern.edu
4850af6b54391fc33c8028a0b7fafe05855a96ffDiscovering Useful Parts for Pose Estimation in Sparsely Annotated Datasets
1Department of Computer Science and 2Department of Biology
Boston University and 2University of North Carolina
('2025025', 'Mikhail Breslav', 'mikhail breslav')
('1711465', 'Tyson L. Hedrick', 'tyson l. hedrick')
('1749590', 'Stan Sclaroff', 'stan sclaroff')
('1723703', 'Margrit Betke', 'margrit betke')
breslav@bu.edu, thedrick@bio.unc.edu, sclaroff@bu.edu, betke@bu.edu
48c41ffab7ff19d24e8df3092f0b5812c1d3fb6eMulti-Modal Embedding for Main Product Detection in Fashion
1Institut de Robtica i Informtica Industrial (CSIC-UPC)
2Wide Eyes Technologies
Waseda University
('1737881', 'Antonio Rubio', 'antonio rubio')
('9072783', 'LongLong Yu', 'longlong yu')
('3114470', 'Edgar Simo-Serra', 'edgar simo-serra')
('1994318', 'Francesc Moreno-Noguer', 'francesc moreno-noguer')
arubio@iri.upc.edu, longyu@wide-eyes.it, esimo@aoni.waseda.jp, fmoreno@iri.upc.edu
488a61e0a1c3768affdcd3c694706e5bb17ae548FITTING A 3D MORPHABLE MODEL TO EDGES:
A COMPARISON BETWEEN HARD AND SOFT CORRESPONDENCES
Multimodal Computing and Interaction, Saarland University, Germany
University of York, UK
‡ Morpheo Team, INRIA Grenoble Rhône-Alpes, France
('39180407', 'Anil Bas', 'anil bas')
('1687021', 'William A. P. Smith', 'william a. p. smith')
('1780750', 'Timo Bolkart', 'timo bolkart')
('1792200', 'Stefanie Wuhrer', 'stefanie wuhrer')
48910f9b6ccc40226cd4f105ed5291571271b39eLearning Discriminative Fisher Kernels
Pattern Recognition and Bio-informatics Laboratory, Delft University of Technology, THE NETHERLANDS
('1803520', 'Laurens van der Maaten', 'laurens van der maaten')lvdmaaten@gmail.com
48a9241edda07252c1aadca09875fabcfee32871Convolutional Experts Network for Facial Landmark Detection
Carnegie Mellon University
Tadas Baltrušaitis∗
Carnegie Mellon University
5000 Forbes Ave, Pittsburgh, PA 15213, USA
5000 Forbes Ave, Pittsburgh, PA 15213, USA
Carnegie Mellon University
5000 Forbes Ave, Pittsburgh, PA 15213, USA
('1783029', 'Amir Zadeh', 'amir zadeh')
('1767184', 'Louis-Philippe Morency', 'louis-philippe morency')
abagherz@cs.cmu.edu
tbaltrus@cs.cmu.edu
morency@cs.cmu.edu
48f0055295be7b175a06df5bc6fa5c6b69725785International Journal of Computer Applications (0975 – 8887)
Volume 96– No.19, June 2014
Facial Action Unit Recognition from Video Streams
with Recurrent Neural Networks
University of the Witwatersrand
Braamfontein, Johannesburg
South Africa
('3122515', 'Hima Vadapalli', 'hima vadapalli')
48729e4de8aa478ee5eeeb08a72a446b0f5367d5COMPRESSED FACE HALLUCINATION
Electrical Engineering and Computer Science
University of California, Merced, CA 95344, USA
('2391885', 'Sifei Liu', 'sifei liu')
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
48e6c6d981efe2c2fb0ae9287376fcae59da9878Sidekick Policy Learning
for Active Visual Exploration
The University of Texas at Austin, Austin, TX
2 Facebook AI Research, 300 W. Sixth St. Austin, TX 78701
('21810992', 'Santhosh K. Ramakrishnan', 'santhosh k. ramakrishnan')
('1794409', 'Kristen Grauman', 'kristen grauman')
srama@cs.utexas.edu, grauman@fb.com(cid:63)
48174c414cfce7f1d71c4401d2b3d49ba91c5338Robust Performance-driven 3D Face Tracking in Long Range Depth Scenes
Rutgers University, USA
Hong Kong Polytechnic University, Hong Kong
School of Computer Engineering, Nanyang Technological University, Singapore
('1965812', 'Chongyu Chen', 'chongyu chen')
('40643777', 'Luc N. Dao', 'luc n. dao')
('1736042', 'Vladimir Pavlovic', 'vladimir pavlovic')
('1688642', 'Jianfei Cai', 'jianfei cai')
('1775268', 'Tat-Jen Cham', 'tat-jen cham')
{hxp1,vladimir}@cs.rutgers.edu
{nldao,asjfcai,astfcham}@ntu.edu.sg
cscychen@comp.polyu.edu.hk
48a5b6ee60475b18411a910c6084b3a32147b8cdPedestrian attribute recognition with part-based CNN
and combined feature representations
To cite this version:
Yiqiang Chen, Stefan Duffner, Andrei Stoian, Jean-Yves Dufour, Atilla Baskurt. Pedestrian attribute recognition with part-based CNN and combined feature representations. VISAPP 2018, Jan 2018, Funchal, Portugal.
HAL Id: hal-01625470
https://hal.archives-ouvertes.fr/hal-01625470
Submitted on 21 Jun 2018
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
('1705461', 'Yiqiang Chen', 'yiqiang chen')
('1762557', 'Stefan Duffner', 'stefan duffner')
('10469201', 'Andrei Stoian', 'andrei stoian')
('1733569', 'Jean-Yves Dufour', 'jean-yves dufour')
('1739898', 'Atilla Baskurt', 'atilla baskurt')
488375ae857a424febed7c0347cc9590989f01f7Convolutional neural networks for the analysis of broadcasted
tennis games
Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Crete, 73100, Greece
(cid:63) NantVision Inc., Culver City, CA, 90230, USA.
University of Crete, Crete, 73100, Greece
('2272443', 'Grigorios Tsagkatakis', 'grigorios tsagkatakis')
('40495798', 'Mustafa Jaber', 'mustafa jaber')
('1694755', 'Panagiotis Tsakalides', 'panagiotis tsakalides')
4836b084a583d2e794eb6a94982ea30d7990f663Cascaded Face Alignment via Intimacy Definition Feature
The Hong Kong Polytechnic University
Hong Kong Applied Science and Technology Research Institute Company Limited
Hong Kong, China
('2116302', 'Hailiang Li', 'hailiang li')
('1703078', 'Kin-Man Lam', 'kin-man lam')
('2233216', 'Kangheng Wu', 'kangheng wu')
('1982263', 'Zhibin Lei', 'zhibin lei')
harley.li@connect.polyu.hk,{harleyli, edmondchiu, khwu, lei}@astri.org, enkmlam@polyu.edu.hk
4866a5d6d7a40a26f038fc743e16345c064e9842
488e475eeb3bb39a145f23ede197cd3620f1d98aPedestrian Attribute Classification in Surveillance: Database and Evaluation
Center for Biometrics and Security Research & National Laboratory of Pattern Recognition
Institute of Automation, Chinese Academy of Sciences (CASIA
95 Zhongguancun East Road, 100190, Beijing, China
('1739258', 'Jianqing Zhu', 'jianqing zhu')
('40397682', 'Shengcai Liao', 'shengcai liao')
('1718623', 'Zhen Lei', 'zhen lei')
('1716143', 'Dong Yi', 'dong yi')
('34679741', 'Stan Z. Li', 'stan z. li')
{jqzhu, scliao, zlei, dyi, szli}@cbsr.ia.ac.cn
487df616e981557c8e1201829a1d0ec1ecb7d275Acoustic Echo Cancellation Using a Vector-Space-Based
Adaptive Filtering Algorithm
('1742704', 'Yu Tsao', 'yu tsao')
('1757214', 'Shih-Hau Fang', 'shih-hau fang')
('40466874', 'Yao Shiao', 'yao shiao')
48f211a9764f2bf6d6dda4a467008eda5680837a
4858d014bb5119a199448fcd36746c413e60f295
48319e611f0daaa758ed5dcf5a6496b4c6ef45f2Non Binary Local Gradient Contours for Face Recognition
P.A. College of Engineering, Mangalore
bSenior IEEE Member, Department of Electrical and Electronics Engineering, Aligarh Muslim
P A College of Engineering, Nadupadavu
As the features from the traditional Local Binary Patterns (LBP) and Local Directional Patterns (LDP) are found to be ineffective for face recognition, we propose a new approach derived from information sets, whereby the loss of information that occurs during binarization is eliminated. The information sets are obtained as a product of attribute values and their membership grades. Since the face has a smooth texture over a limited area, the extracted features must be highly discernible. To limit the number of features, we consider only non-overlapping windows. By applying information set theory, we can reduce the number of features of an image. The derived features are shown to work fairly well compared with the eigenface, Fisherface and LBP methods.
Keywords: Local Binary Pattern, Local Directional Pattern, Information Sets, Gradient Contour, Support
Vector Machine, KNN, Face Recognition.
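As context for the binarization loss mentioned above, here is a minimal sketch of the standard 3×3 LBP operator (a generic illustration, not the authors' code; the function name is hypothetical). Each neighbour is thresholded against the centre pixel, which is exactly the step that discards how far each neighbour lies from the centre value:

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours
    against the centre pixel and pack the resulting bits into one byte.
    The thresholding is the 'binarization' that loses magnitude
    information, which the information-set approach aims to retain."""
    assert patch.shape == (3, 3)
    center = patch[1, 1]
    # clockwise neighbour order starting at the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    return sum(b << i for i, b in enumerate(bits))

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))  # -> 241
```

Note that the neighbours 7, 8 and 9 all map to the same bit value 1, illustrating the loss of gradation that motivates the non-binary gradient contours proposed here.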
1. INTRODUCTION
In face recognition, the major issue to be addressed is the extraction of features that are discriminating in nature [1], [2]. The accuracy of classification depends on which texture features of the face are extracted, e.g., geometrical, statistical, local or global features, in addition to the representation of these features; the design of the extraction algorithm should produce little variance of features within a class and large variance between classes. There are typically two common approaches to extracting facial features: geometric-feature-based and appearance-based methods. Geometric-feature-based methods [3], [4] encode the shape and locations of different facial components, which are combined into a feature vector that represents the face. An illustration of this approach is the graph-based method [5], which uses several facial components to create a representation of the face and process it. The Local-Global Graph algorithm [5] makes use of Voronoi tessellation and Delaunay graphs to segment local features and build a graph. These features are combined into a local graph, and the skeleton (global graph) is then created by interrelating the local graphs to represent the topology of the face. The major requirement of geometric-feature-based methods is accurate and reliable facial feature detection and tracking, which is difficult to accommodate in many situations. In the case of appearance-based methods, there are many holistic methods, such as Eigenfaces [6] and Fisherfaces [7], which are built on Principal Component Analysis (PCA) [6]; the more recent 2D-PCA [8] and Linear Discriminant Analysis [9] are also examples of holistic methods. The methods in [10] and [11] make use of image filters, either on the whole face to create holistic features, or on some specific face region to create local features, to extract the
('1913846', 'Abdullah Gubbi', 'abdullah gubbi')
('2093112', 'Mohammad Fazle Azeem', 'mohammad fazle azeem')
Nadupadavu, Mangalore, India, Contact: abdullahgubbi@yahoo.com
University, India, Contact: mf.azeem@gmail.com
Mangalore, India. Contact: sharmilabp@gmail.com
4896909796f9bd2f70a2cb24bf18daacd6a12128Spatial Bag of Features Learning for Large Scale
Face Image Retrieval
Aristotle University of Thessaloniki, Thessaloniki, Greece
('3200630', 'Nikolaos Passalis', 'nikolaos passalis')
('1737071', 'Anastasios Tefas', 'anastasios tefas')
passalis@csd.auth.gr, tefas@aiia.csd.auth.gr
48cfc5789c246c6ad88ff841701204fc9d6577edJ Inf Process Syst, Vol.12, No.3, pp.392~409, September 2016
ISSN 1976-913X (Print)
ISSN 2092-805X (Electronic)
Age Invariant Face Recognition Based on DCT
Feature Extraction and Kernel Fisher Analysis
('17349931', 'Leila Boussaad', 'leila boussaad')
('2411455', 'Mohamed Benmohammed', 'mohamed benmohammed')
('2123013', 'Redha Benzid', 'redha benzid')
481fb0a74528fa7706669a5cce6a212ac46eaea3Recognizing RGB Images by Learning from RGB-D Data
Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore
School of Computer Engineering, Nanyang Technological University, Singapore
('39253009', 'Lin Chen', 'lin chen')
('38188040', 'Dong Xu', 'dong xu')
70f189798c8b9f2b31c8b5566a5cf3107050b349The Challenge of Face Recognition from Digital Point-and-Shoot Cameras
David Bolme‡
('1757322', 'J. Ross Beveridge', 'j. ross beveridge')
('1750370', 'Geof H. Givens', 'geof h. givens')
('2067993', 'W. Todd Scruggs', 'w. todd scruggs')
('32028519', 'P. Jonathon Phillips', 'p. jonathon phillips')
('1733571', 'Yui Man Lui', 'yui man lui')
('1799014', 'Kevin W. Bowyer', 'kevin w. bowyer')
('36903861', 'Mohammad Nayeem Teli', 'mohammad nayeem teli')
('1704876', 'Patrick J. Flynn', 'patrick j. flynn')
('1694404', 'Bruce A. Draper', 'bruce a. draper')
('40370804', 'Hao Zhang', 'hao zhang')
('9099328', 'Su Cheng', 'su cheng')
70580ed8bc482cad66e059e838e4a779081d1648Acta Polytechnica Hungarica
Vol. 10, No. 4, 2013
Gender Classification using Multi-Level
Wavelets on Real World Face Images
Shaheed Zulfikar Ali Bhutto Institute of
Science and Technology, Plot # 67, Street # 9, H/8-4 Islamabad, 44000, Pakistan
('35332495', 'Sajid Ali Khan', 'sajid ali khan')
('1723986', 'Muhammad Nazir', 'muhammad nazir')
('2521631', 'Naveed Riaz', 'naveed riaz')
sajid.ali@szabist-isb.edu.pk, nazir@szabist-isb.edu.pk, n.r.ansari@szabist-isb.edu.pk
70109c670471db2e0ede3842cbb58ba6be804561Noname manuscript No.
(will be inserted by the editor)
Zero-Shot Visual Recognition via Bidirectional Latent Embedding
('47599321', 'Qian Wang', 'qian wang')
703890b7a50d6535900a5883e8d2a6813ead3a03
703dc33736939f88625227e38367cfb2a65319feLabeling Temporal Bounds for Object Interactions in Egocentric Video
Trespassing the Boundaries:
University of Bristol, United Kingdom
Walterio Mayol-Cuevas
('3420479', 'Davide Moltisanti', 'davide moltisanti')
('2052236', 'Michael Wray', 'michael wray')
('1728459', 'Dima Damen', 'dima damen')
.@bristol.ac.uk
70db3a0d2ca8a797153cc68506b8650908cb0adaAn Overview of Research Activities in Facial
Age Estimation Using the FG-NET Aging
Database
Visual Media Computing Lab,
Dept. of Multimedia and Graphic Arts,
Cyprus University of Technology, Cyprus
('31950370', 'Gabriel Panis', 'gabriel panis')
('1830709', 'Andreas Lanitis', 'andreas lanitis')
gpanis@gmail.com, andreas.lanitis@cut.ac.cy
706236308e1c8d8b8ba7749869c6b9c25fa9f957Crowdsourced Data Collection of Facial Responses
MIT Media Lab
Cambridge
02139, USA
Rosalind Picard
MIT Media Lab
Cambridge
02139, USA
MIT Media Lab
Cambridge
02139, USA
('1801452', 'Daniel McDuff', 'daniel mcduff')
('1754451', 'Rana El Kaliouby', 'rana el kaliouby')
djmcduff@media.mit.edu
kaliouby@media.mit.edu
picard@media.mit.edu
701f56f0eac9f88387de1f556acef78016b05d52Direct Shape Regression Networks for End-to-End Face Alignment
University of Texas at Arlington, TX, USA, 2Beihang University, Beijing, China
3Xidian University, Xi'an, China, 4University of Pittsburgh, PA, USA
('6050999', 'Xin Miao', 'xin miao')
('34798935', 'Xiantong Zhen', 'xiantong zhen')
('1720747', 'Vassilis Athitsos', 'vassilis athitsos')
('6820648', 'Xianglong Liu', 'xianglong liu')
('1748032', 'Heng Huang', 'heng huang')
('50542664', 'Cheng Deng', 'cheng deng')
xin.miao@mavs.uta.edu, zhenxt@gmail.com, xlliu@nlsde.edu.cn, chdeng.xd@gmail.com
athitsos@uta.edu, heng.huang@pitt.edu
7002d6fc3e0453320da5c863a70dbb598415e7aaElectrical Engineering
University of California, Riverside
Date: Friday, October 21, 2011
Location: EBU2 Room 205/206
Time: 12:10am
Understanding Discrete Facial
Expression in Video Using Emotion
Avatar Image
('1803478', 'Songfan Yang', 'songfan yang')
7071cd1ee46db4bc1824c4fd62d36f6d13cad08aFace Detection through Scale-Friendly Deep Convolutional Networks
The Chinese University of Hong Kong
('1692609', 'Shuo Yang', 'shuo yang')
('3331521', 'Yuanjun Xiong', 'yuanjun xiong')
('1717179', 'Chen Change Loy', 'chen change loy')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
{ys014, yjxiong, ccloy, xtang}@ie.cuhk.edu.hk
706b9767a444de4fe153b2f3bff29df7674c3161Fast Metric Learning For Deep Neural Networks
University of Waikato, Hamilton, New Zealand
School of Engineering, University of Waikato, Hamilton, New Zealand
('2319565', 'Henry Gouk', 'henry gouk')
('1737420', 'Bernhard Pfahringer', 'bernhard pfahringer')
hgrg1@students.waikato.ac.nz, bernhard@waikato.ac.nz
cree@waikato.ac.nz
70c58700eb89368e66a8f0d3fc54f32f69d423e1INCORPORATING SCALABILITY IN UNSUPERVISED SPATIO-TEMPORAL FEATURE
LEARNING
University of California, Riverside, CA
('49616225', 'Sujoy Paul', 'sujoy paul')
('2177805', 'Sourya Roy', 'sourya roy')
('1688416', 'Amit K. Roy-Chowdhury', 'amit k. roy-chowdhury')
707a542c580bcbf3a5a75cce2df80d75990853ccDisentangled Variational Representation for Heterogeneous Face Recognition
1 Center for Research on Intelligent Perception and Computing (CRIPAC), CASIA, Beijing, China
2 National Laboratory of Pattern Recognition (NLPR), CASIA, Beijing, China
School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Johns Hopkins University, 3400 N. Charles St, Baltimore, MD 21218, USA
('2225749', 'Xiang Wu', 'xiang wu')
('32885778', 'Huaibo Huang', 'huaibo huang')
('1741177', 'Vishal M. Patel', 'vishal m. patel')
('1705643', 'Ran He', 'ran he')
('1757186', 'Zhenan Sun', 'zhenan sun')
alfredxiangwu@gmail.com, huaibo.huang@cripac.ia.ac.cn,
vpatel36@jhu.edu, {rhe, znsun}@nlpr.ia.ac.cn
70569810e46f476515fce80a602a210f8d9a2b95Apparent Age Estimation from Face Images Combining General and
Children-Specialized Deep Learning Models
1Orange Labs – France Telecom, 4 rue Clos Courtel, 35512 Cesson-S´evign´e, France
2Eurecom, 450 route des Chappes, 06410 Biot, France
('3116433', 'Grigory Antipov', 'grigory antipov')
('2341854', 'Moez Baccouche', 'moez baccouche')
('1708844', 'Sid-Ahmed Berrani', 'sid-ahmed berrani')
('1709849', 'Jean-Luc Dugelay', 'jean-luc dugelay')
{grigory.antipov,moez.baccouche,sidahmed.berrani}@orange.com, jean-luc.dugelay@eurecom.fr
704d88168bdfabe31b6ff484507f4a2244b8c52bMLtuner: System Support for Automatic Machine Learning Tuning
Carnegie Mellon University
('1874200', 'Henggang Cui', 'henggang cui')
('1707164', 'Gregory R. Ganger', 'gregory r. ganger')
('1974678', 'Phillip B. Gibbons', 'phillip b. gibbons')
70e79d7b64f5540d309465620b0dab19d9520df1International Journal of Scientific & Engineering Research, Volume 8, Issue 3, March-2017
ISSN 2229-5518
Facial Expression Recognition System
Using Extreme Learning Machine
('3274320', 'Firoz Mahmud', 'firoz mahmud')
('2376022', 'Md. Al Mamun', 'md. al mamun')
7003d903d5e88351d649b90d378f3fc5f211282bInternational Journal of Computer Applications (0975 – 8887)
Volume 68– No.23, April 2013
Facial Expression Recognition using Gabor Wavelet
ENTC SVERI’S COE (Poly),
Pandharpur,
Solapur, India
ENTC SVERI’S COE,
Pandharpur,
Solapur, India
ENTC SVERI’S COE (Poly),
Pandharpur,
Solapur, India
('2730988', 'Mahesh Kumbhar', 'mahesh kumbhar')
('10845943', 'Manasi Patil', 'manasi patil')
('2707920', 'Ashish Jadhav', 'ashish jadhav')
703c9c8f20860a1b1be63e6df1622b2021b003caFlip-Invariant Motion Representation
National Institute of Advanced Industrial Science and Technology
Umezono 1-1-1, Tsukuba, Japan
('1800592', 'Takumi Kobayashi', 'takumi kobayashi')takumi.kobayashi@aist.go.jp
70a69569ba61f3585cd90c70ca5832e838fa1584Friendly Faces:
Weakly supervised character identification
CVSSP, University of Surrey, UK
('2735914', 'Matthew Marter', 'matthew marter')
('1695195', 'Richard Bowden', 'richard bowden')
{m.marter, s.hadfield, r.bowden} @surrey.ac.uk
70bf1769d2d5737fc82de72c24adbb7882d2effdFace detection in intelligent ambiences with colored illumination
Department of Intelligent Systems
TU Delft
Delft, The Netherlands
('3137870', 'Christina Katsimerou', 'christina katsimerou')
('1728396', 'Ingrid Heynderickx', 'ingrid heynderickx')
70c9d11cad12dc1692a4507a97f50311f1689dbfVideo Frame Synthesis using Deep Voxel Flow
The Chinese University of Hong Kong
3Pony.AI Inc.
University of Illinois at Urbana-Champaign
4Google Inc.
('3243969', 'Ziwei Liu', 'ziwei liu'){lz013,xtang}@ie.cuhk.edu.hk
yiming@pony.ai
yeh17@illinois.edu
aseemaa@google.com
1e5ca4183929929a4e6f09b1e1d54823b8217b8eClassification in the Presence of Heavy
Label Noise: A Markov Chain Sampling
Framework
by
B.Eng., Nankai University
Thesis Submitted in Partial Fulfillment of the
Requirements for the Degree of
Master of Science
in the
School of Computing Science
Faculty of Applied Sciences
SIMON FRASER UNIVERSITY
Summer 2017
However, in accordance with the Copyright Act of Canada, this work may be reproduced without authorization under the conditions for “Fair Dealing.” Therefore, limited reproduction of this work for the purposes of private study, research, education, satire, parody, criticism, review and news reporting is likely to be in accordance with the law, particularly if cited appropriately.
All rights reserved.
('3440173', 'Zijin Zhao', 'zijin zhao')
('3440173', 'Zijin Zhao', 'zijin zhao')
1e058b3af90d475bf53b3f977bab6f4d9269e6e8Manifold Relevance Determination
University of Sheffield, UK
KTH Royal Institute of Technology, CVAP Lab, Stockholm, Sweden
Wellcome Trust Centre for Human Genetics, Roosevelt Drive, Oxford OX3 7BN, UK
University of Sheffield, UK
('3106771', 'Andreas C. Damianou', 'andreas c. damianou')
('2484138', 'Carl Henrik Ek', 'carl henrik ek')
('1722732', 'Michalis K. Titsias', 'michalis k. titsias')
('1739851', 'Neil D. Lawrence', 'neil d. lawrence')
ANDREAS.DAMIANOU@SHEFFIELD.AC.UK
CHEK@CSC.KTH.SE
MTITSIAS@WELL.OX.AC.UK
N.LAWRENCE@SHEFFIELD.AC.UK
1e799047e294267087ec1e2c385fac67074ee5c8IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 21, NO. 12, DECEMBER 1999
1357
Short Papers___________________________________________________________________________________________________
Automatic Classification of
Single Facial Images
('1709339', 'Michael J. Lyons', 'michael j. lyons')
('2240088', 'Julien Budynek', 'julien budynek')
('34801422', 'Shigeru Akamatsu', 'shigeru akamatsu')
1ef4815f41fa3a9217a8a8af12cc385f6ed137e1Rendering of Eyes for Eye-Shape Registration and Gaze Estimation ('34399452', 'Erroll Wood', 'erroll wood')
('2520795', 'Xucong Zhang', 'xucong zhang')
('1751242', 'Yusuke Sugano', 'yusuke sugano')
('39626495', 'Peter Robinson', 'peter robinson')
('3194727', 'Andreas Bulling', 'andreas bulling')
University of Cambridge, United Kingdom {eww23,tb346,pr10}@cam.ac.uk
Max Planck Institute for Informatics, Germany {xczhang,sugano,bulling}@mpi-inf.mpg.de
1eb4ea011a3122dc7ef3447e10c1dad5b69b0642Contextual Visual Recognition from Images and Videos
Jitendra Malik
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2016-132
http://www.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-132.html
July 19, 2016
('2082991', 'Georgia Gkioxari', 'georgia gkioxari')
1e7ae86a78a9b4860aa720fb0fd0bdc199b092c3Article
A Brief Review of Facial Emotion Recognition Based
on Visual Information
Byoung Chul Ko ID
Tel.: +82-10-3559-4564
Received: 6 December 2017; Accepted: 25 January 2018; Published: 30 January 2018
Department of Computer Engineering, Keimyung University, Daegu 42601, Korea; niceko@kmu.ac.kr;
1e8eee51fd3bf7a9570d6ee6aa9a09454254689dThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TPAMI.2016.2582166, IEEE
Transactions on Pattern Analysis and Machine Intelligence
Face Search at Scale
('7496032', 'Dayong Wang', 'dayong wang')
('40653304', 'Charles Otto', 'charles otto')
('6680444', 'Anil K. Jain', 'anil k. jain')
1ea8085fe1c79d12adffb02bd157b54d799568e4
1ea74780d529a458123a08250d8fa6ef1da47a25Videos from the 2013 Boston Marathon:
An Event Reconstruction Dataset for
Synchronization and Localization
CMU-LTI-018
Language Technologies Institute
School of Computer Science
Carnegie Mellon University
5000 Forbes Ave., Pittsburgh, PA 15213
www.lti.cs.cmu.edu
© October 1, 2016
('49252656', 'Jia Chen', 'jia chen')
('1915796', 'Junwei Liang', 'junwei liang')
('47896638', 'Han Lu', 'han lu')
('2927024', 'Shoou-I Yu', 'shoou-i yu')
('7661726', 'Alexander G. Hauptmann', 'alexander g. hauptmann')
1ebdfceebad642299e573a8995bc5ed1fad173e3
1eec03527703114d15e98ef9e55bee5d6eeba736UNIVERSITÄT KARLSRUHE (TH)
FAKULTÄT FÜR INFORMATIK
INTERACTIVE SYSTEMS LABS
DIPLOMA THESIS
Automatic identification
of persons in TV series
SUBMITTED BY
MAY 2008
ADVISORS
('12141635', 'A. Waibel', 'a. waibel')
('2284204', 'Mika Fischer', 'mika fischer')
('1742325', 'Rainer Stiefelhagen', 'rainer stiefelhagen')
1e07500b00fcd0f65cf30a11f9023f74fe8ce65cWHOLE SPACE SUBCLASS DISCRIMINANT ANALYSIS FOR FACE RECOGNITION
Institute for Infocomm Research, A*STAR, Singapore
('1709001', 'Bappaditya Mandal', 'bappaditya mandal')
('35718875', 'Liyuan Li', 'liyuan li')
('1802086', 'Vijay Chandrasekhar', 'vijay chandrasekhar')
Email: {bmandal, lyli, vijay, joohwee}@i2r.a-star.edu.sg
1e19ea6e7f1c04a18c952ce29386252485e4031eInternational Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
ISSN (Print): 2279-0047
ISSN (Online): 2279-0055
International Journal of Emerging Technologies in Computational
and Applied Sciences (IJETCAS)
www.iasir.net
MATLAB Based Face Recognition System Using PCA and Neural Network
1Faculty of Computer Science & Engineering, 2Research Scholar
University Institute of Engineering and Technology
Kurukshetra University, Kurukshetra-136 119, Haryana, INDIA
('1989126', 'Sanjeev Dhawan', 'sanjeev dhawan')
('7940433', 'Himanshu Dogra', 'himanshu dogra')
E-mail (s): rsdhawan@rediffmail.com, himanshu.dogra.13@gmail.com
1ec98785ac91808455b753d4bc00441d8572c416Curriculum Learning for Facial Expression Recognition
Language Technologies Institute, School of Computer Science
Carnegie Mellon University, USA
('1970583', 'Liangke Gui', 'liangke gui')
('1767184', 'Louis-Philippe Morency', 'louis-philippe morency')
1ed6c7e02b4b3ef76f74dd04b2b6050faa6e2177Face Detection with a 3D Model
Department of Statistics
Florida State University
National Institutes of Health
('2455529', 'Adrian Barbu', 'adrian barbu')
('2230628', 'Nathan Lay', 'nathan lay')
abarbu@stat.fsu.edu
nathan.lay@nih.gov
1efacaa0eaa7e16146c34cd20814d1411b35538eHEIDARIVINCHEHET AL: ACTIONCOMPLETION:A TEMPORALMODEL..
Action Completion:
A Temporal Model for Moment Detection
Department of Computer Science
University of Bristol
Bristol, UK
('10007321', 'Farnoosh Heidarivincheh', 'farnoosh heidarivincheh')
('1728108', 'Majid Mirmehdi', 'majid mirmehdi')
('1728459', 'Dima Damen', 'dima damen')
Farnoosh.Heidarivincheh@bristol.ac.uk
M.Mirmehdi@bristol.ac.uk
Dima.Damen@bristol.ac.uk
1eba6fc35a027134aa8997413647b49685f6fbd1Superpower Glass: Delivering
Unobtrusive Real-time Social Cues
in Wearable Systems
Dennis Wall
Stanford University
Stanford, CA 94305, USA
Permission to make digital or hard copies of part or all of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. Copyrights for third-
party components of this work must be honored. For all other uses, contact
the Owner/Author.
Copyright is held by the owner/author(s).
Ubicomp/ISWC'16 Adjunct , September 12-16, 2016, Heidelberg, Germany
ACM 978-1-4503-4462-3/16/09.
http://dx.doi.org/10.1145/2968219.2968310
('21701693', 'Catalin Voss', 'catalin voss')
('40026202', 'Peter Washington', 'peter washington')
('32551479', 'Nick Haber', 'nick haber')
('40494635', 'Aaron Kline', 'aaron kline')
('34240128', 'Jena Daniels', 'jena daniels')
('3407835', 'Azar Fazel', 'azar fazel')
('3457025', 'Titas De', 'titas de')
('3456914', 'Beth McCarthy', 'beth mccarthy')
('34925386', 'Carl Feinstein', 'carl feinstein')
('1699245', 'Terry Winograd', 'terry winograd')
catalin@cs.stanford.edu
peterwashington@stanford.edu
nhaber@stanford.edu
akline@stanford.edu
danielsj@stanford.edu
azarf@stanford.edu
titasde@stanford.edu
bethmac@stanford.edu
carlf@stanford.edu
winograd@cs.stanford.edu
dpwall@stanford.edu
1e1d7cbbef67e9e042a3a0a9a1bcefcc4a9adacfA Multi-Level Contextual Model For Person Recognition in Photo Albums
Stevens Institute of Technology
‡Adobe Research
(cid:92)Microsoft Research
('3131569', 'Haoxiang Li', 'haoxiang li')
('1721019', 'Jonathan Brandt', 'jonathan brandt')
('1720987', 'Xiaohui Shen', 'xiaohui shen')
('1745420', 'Gang Hua', 'gang hua')
†hli18@stevens.edu
‡{jbrandt, zlin, xshen}@adobe.com
(cid:92)ganghua@microsoft.com
1ef1f33c48bc159881c5c8536cbbd533d31b0e9aZ. ZHANG ET AL.: ADVERSARIAL TRAINING FOR ACTION UNIT RECOGNITION
Identity-based Adversarial Training of Deep
CNNs for Facial Action Unit Recognition
Department of Computer Science
State University of New York at
Binghamton
NY, USA.
('47294008', 'Zheng Zhang', 'zheng zhang')
('2443456', 'Shuangfei Zhai', 'shuangfei zhai')
('8072251', 'Lijun Yin', 'lijun yin')
zzhang27@cs.binghamton.edu
szhai2@cs.binghamton.edu
lijun@cs.binghamton.edu
1ef5ce743a44d8a454dbfc2657e1e2e2d025e366Global Journal of Computer Science & Technology
Volume 11 Issue Version 1.0 April 2011
Type: Double Blind Peer Reviewed International Research Journal
Publisher: Global Journals Inc. (USA)
Online ISSN: 0975-4172 & Print ISSN: 0975-4350

Accurate Corner Detection Methods using Two Step Approach
Thapar University
('1765523', 'Nitin Bhatia', 'nitin bhatia')
('9180065', 'Megha Chhabra', 'megha chhabra')
1e58d7e5277288176456c66f6b1433c41ca77415Bootstrapping Fine-Grained Classifiers:
Active Learning with a Crowd in the Loop
Brown University, 2University of California, San Diego, 3California Institute of Technology
('40541456', 'Genevieve Patterson', 'genevieve patterson'){gen, hays}@cs.brown.edu gvanhorn@ucsd.edu sjb@cs.ucsd.edu
perona@caltech.edu
1e5a1619fe5586e5ded2c7a845e73f22960bbf5aGroup Membership Prediction
Boston University
('7969330', 'Ziming Zhang', 'ziming zhang')
('9772059', 'Yuting Chen', 'yuting chen')
('1699322', 'Venkatesh Saligrama', 'venkatesh saligrama')
{zzhang14, yutingch, srv}@bu.edu
1e9f1bbb751fe538dde9f612f60eb946747defaaJournal of Systems Engineering and Electronics
Vol. 28, No. 4, August 2017, pp.784 – 792
Identity-aware convolutional neural networks for
facial expression recognition
The Big Data Research Center, Henan University, Kaifeng 475001, China
Tampere University of Technology, Tampere 33720, Finland
('34461878', 'Chongsheng Zhang', 'chongsheng zhang')
('39720477', 'Pengyou Wang', 'pengyou wang')
('40611812', 'Ke Chen', 'ke chen')
1e917fe7462445996837934a7e46eeec14ebc65fExpression Classification using
Wavelet Packet Method
on Asymmetry Faces
CMU-RI-TR-06-03
January 2006
Robotics Institute
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213
Carnegie Mellon University
('1689241', 'Yanxi Liu', 'yanxi liu')
1e8394cc9fe7c2392aa36fb4878faf7e78bbf2deTO APPEAR IN IEEE THMS
Zero-Shot Object Recognition System
based on Topic Model
('2800072', 'Wai Lam Hoo', 'wai lam hoo')
('2863960', 'Chee Seng Chan', 'chee seng chan')
1ef4aac0ebc34e76123f848c256840d89ff728d0
1ecb56e7c06a380b3ce582af3a629f6ef0104457List of Contents Vol.8
Contents of
Journal of Advanced Computational
Intelligence and Intelligent Informatics
Volume 8
Vol.8 No.1, January 2004
Editorial:
o Special Issue on Selected Papers from Humanoid,
Papers:
o Dynamic Color Object Recognition Using Fuzzy
Nano-technology, Information Technology,
Communication and Control, Environment, and
Management (HNICEM’03).
. 1
Elmer P. Dadios
Papers:
o A New Way of Discovery of Belief, Desire and
Intention in the BDI Agent-Based Software
Modeling .
. 2
o Integration of Distributed Robotic Systems
. 7
Fakhri Karray, Rogelio Soto, Federico Guedea,
and Insop Song
o A Searching and Tracking Framework for
Multi-Robot Observation of Multiple Moving
Targets .
. 14
Zheng Liu, Marcelo H. Ang Jr., and Winston
Khoon Guan Seah
Development Paper:
o Possibilistic Uncertainty Propagation and
Compromise Programming in the Life Cycle
Analysis of Alternative Motor Vehicle Fuels
Raymond R. Tan, Alvin B. Culaba, and
Michael R. I. Purvis
. 23
Logic .
Napoleon H. Reyes, and Elmer P. Dadios
. 29
o A Optical Coordinate Measuring Machine for
Nanoscale Dimensional Metrology .
. 39
Eric Kirkland, Thomas R. Kurfess, and Steven
Y. Liang
o Humanoid Robot HanSaRam: Recent Progress
and Developments .
. 45
Jong-Hwan Kim, Dong-Han Kim, Yong-Jae
Kim, Kui-Hong Park, Jae-Ho Park,
Choon-Kyoung Moon, Jee-Hwan Ryu, Kiam
Tian Seow, and Kyoung-Chul Koh
o Generalized Associative Memory Models: Their
Memory Capacities and Potential Application
. 56
Teddy N. Yap, Jr., and Arnulfo P. Azcarraga
o Hybrid Fuzzy Logic Strategy for Soccer Robot
Game.
. 65
Elmer A. Maravillas , Napoleon H. Reyes, and
Elmer P. Dadios
o Image Compression and Reconstruction Based on
Fuzzy Relation and Soft Computing
Technology .
. 72
Kaoru Hirota, Hajime Nobuhara, Kazuhiko
Kawamoto, and Shin’ichi Yoshida
Vol.8 No.2, March 2004
Editorial:
o Special Issue on Pattern Recognition .
. 83
Papers:
o Operation of Spatiotemporal Patterns Stored in
Osamu Hasegawa
Review:
o Support Vector Machine and Generalization . 84
Takio Kurita
o Bayesian Network: Probabilistic Reasoning,
Statistical Learning, and Applications .
. 93
Yoichi Motomura
Living Neuronal Networks Cultured on a
Microelectrode Array .
Suguru N. Kudoh, and Takahisa Taguchi
o Rapid Discriminative Learning .
. 100
. 108
Jun Rokui
o Robust Fuzzy Clustering Based on Similarity
between Data .
Kohei Inoue, and Kiichi Urahama
Vol.8 No.6, 2004
Journal of Advanced Computational Intelligence
and Intelligent Informatics
. 115
I-1
('33358236', 'Chang-Hyun Jo', 'chang-hyun jo')
1e64b2d2f0a8a608d0d9d913c4baee6973995952DOMINANT AND
COMPLEMENTARY MULTI-
EMOTIONAL FACIAL
EXPRESSION RECOGNITION
USING C-SUPPORT VECTOR
CLASSIFICATION
('19172816', 'Christer Loob', 'christer loob')
('2303909', 'Pejman Rasti', 'pejman rasti')
('7855312', 'Sergio Escalera', 'sergio escalera')
('2531522', 'Tomasz Sapinski', 'tomasz sapinski')
('34969391', 'Dorota Kaminska', 'dorota kaminska')
('3087532', 'Gholamreza Anbarjafari', 'gholamreza anbarjafari')
1e21b925b65303ef0299af65e018ec1e1b9b8d60Under review as a conference paper at ICLR 2017
UNSUPERVISED CROSS-DOMAIN IMAGE GENERATION
Facebook AI Research
Tel-Aviv, Israel
('2188620', 'Yaniv Taigman', 'yaniv taigman')
('33964593', 'Adam Polyak', 'adam polyak')
{yaniv,adampolyak,wolf}@fb.com
1ee27c66fabde8ffe90bd2f4ccee5835f8dedbb9Entropy Regularization
The problem of semi-supervised induction consists in learning a decision rule from labeled and unlabeled data. This task can be undertaken by discriminative methods, provided that learning criteria are adapted consequently. In this chapter, we motivate the use of entropy regularization as a means to benefit from unlabeled data in the framework of maximum a posteriori estimation. The learning criterion is derived from clearly stated assumptions and can be applied to any smoothly parametrized model of posterior probabilities. The regularization scheme favors low density separation, without any modeling of the density of input features. The contribution of unlabeled data to the learning criterion induces local optima, but this problem can be alleviated by deterministic annealing. For well-behaved models of posterior probabilities, deterministic annealing EM provides a decomposition of the learning problem in a series of concave subproblems. Other approaches to the semi-supervised problem are shown to be close relatives or limiting cases of entropy regularization. A series of experiments illustrates the good behavior of the algorithm in terms of performance and robustness with respect to the violation of the postulated low density separation assumption. The minimum entropy solution benefits from unlabeled data and is able to challenge mixture models and manifold learning in a number of situations.
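As a concrete illustration of the criterion summarized above, here is a minimal numpy sketch for a binary logistic model; the function names and the trade-off weight `lam` are illustrative choices, not notation from the chapter.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_entropy(p, eps=1e-12):
    # Shannon entropy (in nats) of a Bernoulli posterior
    return -(p * np.log(p + eps) + (1.0 - p) * np.log(1.0 - p + eps))

def semi_supervised_loss(w, X_lab, y_lab, X_unl, lam=0.5):
    """Negative log-likelihood on the labeled set plus lam times the
    average posterior entropy on the unlabeled set: the minimum-entropy
    regularizer favors decision boundaries that classify unlabeled
    points confidently, i.e. low-density separation."""
    p_lab = sigmoid(X_lab @ w)
    nll = -np.mean(y_lab * np.log(p_lab + 1e-12)
                   + (1.0 - y_lab) * np.log(1.0 - p_lab + 1e-12))
    ent = np.mean(binary_entropy(sigmoid(X_unl @ w)))
    return nll + lam * ent
```

A weight vector that separates the data sharply (confident posteriors on the unlabeled points) attains a lower value than a flat one that leaves them near 0.5, which is exactly the preference the chapter motivates.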
9.1 Introduction
This chapter addresses semi-supervised induction, which refers to the learning of a decision rule, on the entire input domain X, from labeled and unlabeled data. The objective is identical to the one of supervised classification: generalize from examples. The problem differs in the respect that the supervisor’s responses are missing for some training examples. This characteristic is shared with transduction, which has however a different goal, that is, of predicting labels on a set of predefined
('1802711', 'Yves Grandvalet', 'yves grandvalet')
('1751762', 'Yoshua Bengio', 'yoshua bengio')
1ee3b4ba04e54bfbacba94d54bf8d05fd202931dIndonesian Journal of Electrical Engineering and Computer Science
Vol. 12, No. 2, November 2018, pp. 476~481
ISSN: 2502-4752, DOI: 10.11591/ijeecs.v12.i2.pp476-481
 476
Celebrity Face Recognition using Deep Learning
1,2,3Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA (UiTM),
4Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA (UiTM),
Shah Alam, Selangor, Malaysia
Campus Jasin, Melaka, Malaysia
Article Info
Article history:
Received May 29, 2018
Revised Jul 30, 2018
Accepted Aug 3, 2018
Keywords:
AlexNet
Convolutional neural network
Deep learning
Face recognition
GoogLeNet
('2743254', 'Zaidah Ibrahim', 'zaidah ibrahim')
1e41a3fdaac9f306c0ef0a978ae050d884d77d2a411
Robust Object Recognition with
Cortex-Like Mechanisms
Tomaso Poggio, Member, IEEE
('1981539', 'Thomas Serre', 'thomas serre')
('1776343', 'Lior Wolf', 'lior wolf')
('1996960', 'Maximilian Riesenhuber', 'maximilian riesenhuber')
1e94cc91c5293c8fc89204d4b881552e5b2ce672Unsupervised Alignment of Actions in Video with Text Descriptions
University of Rochester, Rochester, NY, USA
Indian Institute of Technology Delhi, New Delhi, India
('3193978', 'Young Chol Song', 'young chol song')
('2296971', 'Iftekhar Naim', 'iftekhar naim')
('1782355', 'Abdullah Al Mamun', 'abdullah al mamun')
('38370357', 'Kaustubh Kulkarni', 'kaustubh kulkarni')
('35108153', 'Parag Singla', 'parag singla')
('33642939', 'Jiebo Luo', 'jiebo luo')
('1793218', 'Daniel Gildea', 'daniel gildea')
1e1e66783f51a206509b0a427e68b3f6e40a27c8SEMI-SUPERVISED ESTIMATION OF PERCEIVED AGE
FROM FACE IMAGES
VALWAY Technology Center, NEC Soft, Ltd., Tokyo, Japan
Keywords:
('2163491', 'Kazuya Ueki', 'kazuya ueki')
('1719221', 'Masashi Sugiyama', 'masashi sugiyama')
ueki@mxf.nes.nec.co.jp
1efaa128378f988965841eb3f49d1319a102dc36JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015
Hierarchical binary CNNs for landmark
localization with limited resources
('3458121', 'Adrian Bulat', 'adrian bulat')
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
1e8eec6fc0e4538e21909ab6037c228547a678baIMPERIAL COLLEGE
University of London
enVisage : Face Recognition in
Videos
Supervisor : Dr. Stefan Rüeger
June 14, 2006
('23558890', 'Ashwin Venkatraman', 'ashwin venkatraman')
('35805861', 'Ian Harries', 'ian harries')
(av102@doc.ic.ac.uk)
1e6ed6ca8209340573a5e907a6e2e546a3bf2d28Pooling Faces: Template based Face Recognition with Pooled Face Images
Prem Natarajan1
Gérard Medioni3
Information Sciences Institute, USC, CA, USA
The Open University of Israel, Israel
Institute for Robotics and Intelligent Systems, USC, CA, USA
('1756099', 'Tal Hassner', 'tal hassner')
('11269472', 'Iacopo Masi', 'iacopo masi')
('5911467', 'Jungyeon Kim', 'jungyeon kim')
('1689391', 'Jongmoo Choi', 'jongmoo choi')
('35840854', 'Shai Harel', 'shai harel')
8451bf3dd6bcd946be14b1a75af8bbb65a42d4b2Consensual and Privacy-Preserving Sharing of
Multi-Subject and Interdependent Data
EPFL, UNIL–HEC Lausanne
Kévin Huguenin
UNIL–HEC Lausanne
EPFL
EPFL
('1862343', 'Alexandra-Mihaela Olteanu', 'alexandra-mihaela olteanu')
('2461431', 'Italo Dacosta', 'italo dacosta')
('1757221', 'Jean-Pierre Hubaux', 'jean-pierre hubaux')
alexandramihaela.olteanu@epfl.ch
kevin.huguenin@unil.ch
italo.dacosta@epfl.ch
jean-pierre.hubaux@epfl.ch
841855205818d3a6d6f85ec17a22515f4f062882Low Resolution Face Recognition in the Wild
Patrick Flynn1
1Department of Computer Science and Engineering
University of Notre Dame
2Department of Computer Science
Pontificia Universidad Católica de Chile
('50492554', 'Pei Li', 'pei li')
('47522390', 'Loreto Prieto', 'loreto prieto')
('1797475', 'Domingo Mery', 'domingo mery')
84c0f814951b80c3b2e39caf3925b56a9b2e1733Manifesto from Dagstuhl Perspectives Workshop 12382
Computation and Palaeography: Potentials and Limits∗
Edited by
The Open University of
University of Nebraska Lincoln, USA
King’s College London, UK
The Blavatnik School of Computer Science, Tel Aviv University, IL
('1756099', 'Tal Hassner', 'tal hassner')
('34564710', 'Malte Rehbein', 'malte rehbein')
('34876976', 'Peter A. Stokes', 'peter a. stokes')
('1776343', 'Lior Wolf', 'lior wolf')
Israel, IL, hassner@openu.ac.il
malte.rehbein@unl.edu
peter.stokes@kcl.ac.uk
wolf@cs.tau.ac.il
84fe5b4ac805af63206012d29523a1e033bc827e
84e4b7469f9c4b6c9e73733fa28788730fd30379Duong et al. EURASIP Journal on Advances in Signal Processing (2018) 2018:10
DOI 10.1186/s13634-017-0521-9
EURASIP Journal on Advances
in Signal Processing
R ES EAR CH
Projective complex matrix factorization for
facial expression recognition
Open Access
('2345136', 'Viet-Hang Duong', 'viet-hang duong')
('2033188', 'Yuan-Shan Lee', 'yuan-shan lee')
('1782417', 'Jian-Jiun Ding', 'jian-jiun ding')
('34759060', 'Bach-Tung Pham', 'bach-tung pham')
('30065390', 'Manh-Quan Bui', 'manh-quan bui')
('35196812', 'Pham The Bao', 'pham the bao')
('3205648', 'Jia-Ching Wang', 'jia-ching wang')
84dcf04802743d9907b5b3ae28b19cbbacd97981
841bf196ee0086c805bd5d1d0bddfadc87e424ecInternational Journal of Signal Processing, Image Processing and Pattern Recognition
Vol. 5, No. 4, December, 2012
Locally Kernel-based Nonlinear Regression for Face Recognition
South Tehran Branch, Electrical Engineering Department, Tehran, Iran
Islamic Azad University
Amirkabir University of Technology
Electrical Engineering Department,Tehran, Iran
('3345810', 'Yaser Arianpour', 'yaser arianpour')
('2630546', 'Sedigheh Ghofrani', 'sedigheh ghofrani')
('1685153', 'Hamidreza Amindavar', 'hamidreza amindavar')
st_y_arianpour@azad.ac.ir, s_ghofrani@azad.ac.ir and hamidami@aut.ac.ir
842d82081f4b27ca2d4bc05c6c7e389378f0c7b8ELEKTROTEHNI ˇSKI VESTNIK 78(1-2): 12–17, 2011
ENGLISH EDITION
Usage of affective computing in recommender systems
Marko Tkalˇciˇc, Andrej Koˇsir, Jurij Tasiˇc
University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, 1000 Ljubljana, Slovenia
E-mail: marko.tkalcic@fe.uni-lj.si
84fa126cb19d569d2f0147bf6f9e26b54c9ad4f1Improved Boosting Performance by Explicit
Handling of Ambiguous Positive Examples
('1750517', 'Miroslav Kobetski', 'miroslav kobetski')
('1736906', 'Josephine Sullivan', 'josephine sullivan')
84508e846af3ac509f7e1d74b37709107ba48bdeUse of the Septum as a Reference Point in a Neurophysiologic Approach to
Facial Expression Recognition
Department of Computer Engineering, Faculty of Engineering,
Prince of Songkla University, Hat Yai, Songkhla, 90112 Thailand
Telephone: (66)080-7045015, (66)074-287-357
('38928684', 'Igor Stankovic', 'igor stankovic')
('2799130', 'Montri Karnjanadecha', 'montri karnjanadecha')
E-mail: bizmut@neobee.net, montri@coe.psu.ac.th
841a5de1d71a0b51957d9be9d9bebed33fb5d9fa5017
PCANet: A Simple Deep Learning Baseline for
Image Classification?
('1926757', 'Tsung-Han Chan', 'tsung-han chan')
('2370507', 'Kui Jia', 'kui jia')
('1702868', 'Shenghua Gao', 'shenghua gao')
('1697700', 'Jiwen Lu', 'jiwen lu')
('1920683', 'Zinan Zeng', 'zinan zeng')
('1700297', 'Yi Ma', 'yi ma')
84e6669b47670f9f4f49c0085311dce0e178b685Face frontalization for Alignment and Recognition
∗Department of Computing, Imperial College London, 180 Queens Gate, London SW7 2AZ, U.K.
†EEMCS, University of Twente, Drienerlolaan 5, 7522 NB Enschede, The Netherlands
('3320415', 'Christos Sagonas', 'christos sagonas')
('1780393', 'Yannis Panagakis', 'yannis panagakis')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1694605', 'Maja Pantic', 'maja pantic')
{c.sagonas, i.panagakis, s.zafeiriou, m.pantic}@imperial.ac.uk
847e07387142c1bcc65035109ccce681ef88362cFeature Synthesis Using Genetic Programming for Face
Expression Recognition
Center for research in intelligent systems
University of California, Riverside CA 92521-0425, USA
('1707159', 'Bir Bhanu', 'bir bhanu')
('1723555', 'Jiangang Yu', 'jiangang yu')
('1711543', 'Xuejun Tan', 'xuejun tan')
('1742735', 'Yingqiang Lin', 'yingqiang lin')
{bhanu, jyu, xtan, yqlin}@cris.ucr.edu
8411fe1142935a86b819f065cd1f879f16e77401International Journal of Artificial Intelligence & Applications (IJAIA), Vol. 4, No. 6, November 2013
Facial Recognition using Modified Local Binary
Pattern and Random Forest
Department of Computer Science,
North Carolina A&T State University
Greensboro, NC 27411
('3536162', 'Brian O’Connor', 'brian o’connor')
('34999544', 'Kaushik Roy', 'kaushik roy')
843e6f1e226480e8a6872d8fd7b7b2cd74b637a4Research Journal of Applied Sciences, Engineering and Technology 4(22): 4724-4728, 2012
ISSN: 2040-7467
© Maxwell Scientific Organization, 2012
Submitted: March 31, 2012
Accepted: April 30, 2012
Published: November 15, 2012
Palmprint Recognition Using Directional Representation and
Compressed Sensing
1Shandong Provincial Key Laboratory of computer Network, Shandong Computer
Science Center, Jinan 250014, China
School of Mechanical Engineering, Southwest Jiaotong University, Chengdu 610031, China
('2112738', 'Hengjian Li', 'hengjian li')
84f904a71bee129a1cf00dc97f6cdbe1011657e6Fashioning with Networks: Neural Style Transfer to Design
Clothes
University Of Maryland Baltimore County (UMBC), Baltimore, MD, USA
('30834050', 'Prutha Date', 'prutha date')
('2116290', 'Ashwinkumar Ganesan', 'ashwinkumar ganesan')
('1756624', 'Tim Oates', 'tim oates')
dprutha1@umbc.edu
gashwin1@umbc.edu
oates@cs.umbc.edu
849f891973ad2b6c6f70d7d43d9ac5805f1a1a5bDetecting Faces Using Region-based Fully
Convolutional Networks
Tencent AI Lab, China
('1996677', 'Yitong Wang', 'yitong wang'){yitongwang,denisji,encorezhou,hawelwang,michaelzfli}@tencent.com
846c028643e60fefc86bae13bebd27341b87c4d1Face Recognition Under Varying Illumination
Based on MAP Estimation Incorporating
Correlation Between Surface Points
1 Panasonic Tokyo (Matsushita Electric Industrial Co., Ltd.)
4–3–1 Tsunashima-higashi, Kohoku-ku, Yokohama City, Kanagawa 223–8639, Japan
Institute of Industrial Science, The University of Tokyo
4–6–1 Komaba, Meguro-ku Tokyo 153–8505, Japan
National Institute of Informatics
2–1–2 Hitotsubashi, Chiyoda-ku Tokyo 101–8430, Japan
('20877506', 'Mihoko Shimano', 'mihoko shimano')
('1977815', 'Kenji Nagao', 'kenji nagao')
('1706742', 'Takahiro Okabe', 'takahiro okabe')
('1746794', 'Imari Sato', 'imari sato')
('9467266', 'Yoichi Sato', 'yoichi sato')
shimano.mhk@jp.panasonic.com
{takahiro, ysato}@iis.u-tokyo.ac.jp
imarik@nii.ac.jp
4a14a321a9b5101b14ed5ad6aa7636e757909a7cLearning Semi-Supervised Representation Towards a Unified Optimization
Framework for Semi-Supervised Learning
School of Info. and Commu. Engineering, Beijing University of Posts and Telecommunications
Key Laboratory of Machine Perception (MOE), School of EECS, Peking University
Cooperative Medianet Innovation Center, Shanghai Jiaotong University
('9171002', 'Chun-Guang Li', 'chun-guang li')
('33383055', 'Zhouchen Lin', 'zhouchen lin')
('1720776', 'Honggang Zhang', 'honggang zhang')
('39954962', 'Jun Guo', 'jun guo')
{lichunguang, zhhg, guojun}@bupt.edu.cn; zlin@pku.edu.cn
4adca62f888226d3a16654ca499bf2a7d3d11b71Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 572–582,
Sofia, Bulgaria, August 4-9 2013. c(cid:13)2013 Association for Computational Linguistics
572
4aa286914f17cd8cefa0320e41800a99c142a1cdLeveraging Context to Support Automated Food Recognition in Restaurants
School of Interactive Computing
Georgia Institute of Technology, Atlanta, Georgia, USA
http://www.vbettadapura.com/egocentric/food
('3115428', 'Vinay Bettadapura', 'vinay bettadapura')
('39642711', 'Edison Thomaz', 'edison thomaz')
('2943897', 'Aman Parnami', 'aman parnami')
('9267108', 'Gregory D. Abowd', 'gregory d. abowd')
4a9d906935c9de019c61aedc10b77ee10e3aec63Cross Modal Distillation for Supervision Transfer
University of California, Berkeley
('3134457', 'Saurabh Gupta', 'saurabh gupta')
('4742485', 'Judy Hoffman', 'judy hoffman')
('1689212', 'Jitendra Malik', 'jitendra malik')
{sgupta, jhoffman, malik}@eecs.berkeley.edu
4a2d54ea1da851151d43b38652b7ea30cdb6dfb2Direct Recognition of Motion Blurred Faces ('39487011', 'Kaushik Mitra', 'kaushik mitra')
('2715270', 'Priyanka Vageeswaran', 'priyanka vageeswaran')
('9215658', 'Rama Chellappa', 'rama chellappa')
4ae59d2a28abd76e6d9fb53c9e7ece833dce7733A Survey on Mobile Affective Computing
Shengkai Zhang and Pan Hui
Department of Computer Science and Engineering
The Hong Kong University of Science and Technology
{szhangaj, panhui}@cse.ust.hk
4ab10174a4f98f7e2da7cf6ccfeb9bc64c8e7da8Graz University of Technology
Institute for Computer Graphics and Vision
Dissertation
Efficient Metric Learning for
Real-World Face Recognition
Graz, Austria, December 2013
Thesis supervisors
Prof. Dr. Horst Bischof
Prof. Dr. Fernando De la Torre
('1993853', 'Martin Köstinger', 'martin köstinger')
4ab84f203b0e752be83f7f213d7495b04b1c4c79CONCAVE LOSSES FOR ROBUST DICTIONARY LEARNING
University of São Paulo
Institute of Mathematics and Statistics
Rua do Matão, 1010 – 05508-090 – São Paulo-SP, Brazil
Universit´e de Rouen Normandie
LITIS EA 4108
76800 Saint- ´Etienne-du-Rouvray, France
('30146203', 'Rafael Will M. de Araujo', 'rafael will m. de araujo')
('1792962', 'Alain Rakotomamonjy', 'alain rakotomamonjy')
4a484d97e402ed0365d6cf162f5a60a4d8000ea0A Crowdsourcing Approach for Finding Misidentifications of Bibliographic Records
University of Tsukuba
2 National Diet Library
3 Doshisha Univeristy
('34573158', 'Atsuyuki Morishima', 'atsuyuki morishima')
('32857584', 'Takanori Kawashima', 'takanori kawashima')
('23161591', 'Takashi Harada', 'takashi harada')
('2406721', 'Sho Sato', 'sho sato')
4a3758f283b7c484d3f164528d73bc8667eb1591Attribute Enhanced Face Aging with Wavelet-based Generative Adversarial
Networks
Center for Research on Intelligent Perception and Computing, CASIA
National Laboratory of Pattern Recognition, CASIA
('1860829', 'Yunfan Liu', 'yunfan liu')
('1682467', 'Qi Li', 'qi li')
('1757186', 'Zhenan Sun', 'zhenan sun')
yunfan,liu@cripac.ia.ac.cn, {qli, znsun}@nlpr.ia.ac.cn
4a4da3d1bbf10f15b448577e75112bac4861620aFACE, EXPRESSION, AND IRIS RECOGNITION
USING LEARNING-BASED APPROACHES
by
A dissertation submitted in partial fulfillment of
the requirements for the degree of
Doctor of Philosophy
(Computer Sciences)
at the
UNIVERSITY OF WISCONSIN MADISON
2006
('1822413', 'Guodong Guo', 'guodong guo')
4abd49538d04ea5c7e6d31701b57ea17bc349412Recognizing Fine-Grained and Composite Activities
using Hand-Centric Features and Script Data
('34849128', 'Marcus Rohrbach', 'marcus rohrbach')
('40404576', 'Sikandar Amin', 'sikandar amin')
4aa093d1986b4ad9b073ac9edfb903f62c00e0b0Facial Recognition with
Encoded Local Projections
Mechanincal and Mechatronics Engineering
University of Waterloo
Waterloo, Canada
Kimia Lab
University of Waterloo
Waterloo, Canada
('34139904', 'Dhruv Sharma', 'dhruv sharma')
('7641396', 'Sarim Zafar', 'sarim zafar')
('38685017', 'Morteza Babaie', 'morteza babaie')
4a0f98d7dbc31497106d4f652968c708f7da6692Real-time Eye Gaze Direction Classification Using
Convolutional Neural Network
('3110004', 'Anjith George', 'anjith george')
('2680543', 'Aurobinda Routray', 'aurobinda routray')
4aabd6db4594212019c9af89b3e66f39f3108aacUniversity of Colorado, Boulder
CU Scholar
Undergraduate Honors Theses
Honors Program
Spring 2015
The Mere Exposure Effect and Classical
Conditioning
Follow this and additional works at: http://scholar.colorado.edu/honr_theses
Part of the Cognition and Perception Commons, and the Cognitive Psychology Commons
Recommended Citation
Wong, Rosalyn, "The Mere Exposure Effect and Classical Conditioning" (2015). Undergraduate Honors Theses. Paper 937.
('10191508', 'Rosalyn Wong', 'rosalyn wong')University of Colorado Boulder, Rosalyn.Wong@Colorado.EDU
This Thesis is brought to you for free and open access by Honors Program at CU Scholar. It has been accepted for inclusion in Undergraduate Honors Theses by an authorized administrator of CU Scholar. For more information, please contact cuscholaradmin@colorado.edu.
4adb97b096b700af9a58d00e45a2f980136fcbb5Exploring Temporal Preservation Networks for Precise Temporal Action
Localization
National Laboratory for Parallel and Distributed Processing,
National University of Defense Technology
Changsha, China
('40520103', 'Ke Yang', 'ke yang')
('2292038', 'Peng Qiao', 'peng qiao')
('40252278', 'Dongsheng Li', 'dongsheng li')
('1893776', 'Shaohe Lv', 'shaohe lv')
('1791001', 'Yong Dou', 'yong dou')
{yangke13,pengqiao,dongshengli,yongdou,shaohelv}@nudt.edu.cn
4a5592ae1f5e9fa83d9fa17451c8ab49608421e4Multi-modal Social Signal Analysis for Predicting
Agreement in Conversation Settings
IN3, Open University of Catalonia, Roc Boronat, 117, 08018 Barcelona, Spain.
University of Barcelona, Gran Via, 585, 08007 Barcelona, Spain.
Computer Vision Center, UAB, 08193 Barcelona, Spain.
EIMT, Open University of Catalonia, Rbla. Poblenou, 156, 08018 Barcelona, Spain.
('1960768', 'Víctor Ponce-López', 'víctor ponce-lópez')
('7855312', 'Sergio Escalera', 'sergio escalera')
('1857280', 'Xavier Baró', 'xavier baró')
vponcel@uoc.edu
sergio@maia.ub.es
xbaro@uoc.edu
4a1a5316e85528f4ff7a5f76699dfa8c70f6cc5c MVA2005 IAPR Conference on Machine VIsion Applications, May 16-18, 2005 Tsukuba Science City, Japan
3-22
Face Recognition using Local Features based on Two-layer Block M odel
W onjun Hwang1 Ji-Yeun Kim Seokcheol Kee
Computing Lab.,
Samsung Advanced Institute of Technology
combined by Yang et al. [7]. The sparsification of LFA helps reduce the image dimension in the LDA scheme, and the local topographic property is more useful for recognition than the holistic property of PCA; however, a structural problem remains because the feature-selection method is designed to minimize the reconstruction error, not to increase the discriminability of the face model.
In this paper, we propose a novel recognition algorithm that merges the LFA and LDA methods. We do not use the existing sparsification method for selecting features, but instead adopt a two-layer block model that gathers topographic local features at similar positions into groups. Each local block of flocked local features represents its own local property and, at the same time, holistic face information. Flocks of local features easily solve the small sample size problem in LDA without discarding unselected local features, and the LDA scheme can then extract the information important for recognition rather than for representation. Moreover, we can extract many feature vectors, viewed separately through the different layer models, from one face image; compared with a limited number of feature vectors, they are robust to environmental changes and to overfitting.
The rest of this paper is organized as follows: brief descriptions of LFA and LDA are given in Sections 2.1 and 2.2, respectively, and the proposed algorithm, local features based on a two-layer block model, is presented in Section 2.3. Experimental results are given in Section 3, and the conclusion is summarized in Section 4.
2 LFA and LDA Method based on Two-Layer Block Model
2.1 Theory of local feature analysis
A topographic representation based on second-order image dependencies, called local feature analysis (LFA), was developed by Penev and Atick [4]. Local feature analysis makes a set of topographic, local kernels that are optimally matched to the second-order statistics of the input ensemble. Local features are basically derived from the principal component eigenvectors, and are obtained by sphering the principal component eigenvalues to equalize their variance.
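The LFA construction just described (leading PCA eigenvectors with sphered eigenvalues) can be sketched in a few lines of numpy; the function name, argument shapes, and the symbol K below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def lfa_kernels(X, n_modes):
    """X: (M, n) matrix of raster-scanned training images (one per row).
    Returns the (n, n) LFA kernel matrix K = V diag(1/sqrt(lam)) V^T,
    where V holds the top n_modes PCA eigenvectors and lam their
    eigenvalues; the sphering equalizes the variance of every mode."""
    Xc = X - X.mean(axis=0)                  # center the ensemble
    C = Xc.T @ Xc / X.shape[0]               # second-order statistics
    lam, V = np.linalg.eigh(C)               # eigh returns ascending order
    lam = lam[::-1][:n_modes]                # top n_modes eigenvalues
    V = V[:, ::-1][:, :n_modes]              # matching eigenvectors
    K = V @ np.diag(1.0 / np.sqrt(lam)) @ V.T
    return K   # row r of K is the local kernel centered at pixel r
```

With this K, the matrix K C K reduces to the projector V V^T onto the retained modes, which is a quick way to check the sphering.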
Suppose that we are given a set of M training images, φ_i, i = 1, …, M, each represented by an n-dimensional vector obtained by a raster scan. The mean
4ae291b070ad7940b3c9d3cb10e8c05955c9e269Automatic Detection of Naturalistic Hand-over-Face
Gesture Descriptors
University of Cambridge, Computer Laboratory, UK
('2022940', 'Marwa Mahmoud', 'marwa mahmoud')
('39626495', 'Peter Robinson', 'peter robinson')
{marwa.mahmoud, tadas.baltrusaitis, peter.robinson}@cl.cam.ac.uk
4aa8db1a3379f00db2403bba7dade5d6e258b9e9Recognizing Combinations of Facial Action Units with
Different Intensity Using a Mixture of Hidden Markov
Models and Neural Network
DSP Lab, Sharif University of Technology, Tehran, Iran
('1736464', 'Mahmoud Khademi', 'mahmoud khademi')
('1702826', 'Mohammad Hadi Kiapour', 'mohammad hadi kiapour')
('1707281', 'Ali Akbar Kiaei', 'ali akbar kiaei')
{khademi@ce.,manzuri@,kiapour@ee.,kiaei@ce.}sharif.edu
4a2062ba576ca9e9a73b6aa6e8aac07f4d9344b9Fusing Deep Convolutional Networks for Large
Scale Visual Concept Classification
Department of Computer Engineering
Başkent University
06810 Ankara, TURKEY
('2140386', 'Hilal Ergun', 'hilal ergun')
('1700011', 'Mustafa Sert', 'mustafa sert')
21020005@mail.baskent.edu.tr, Bmsert@baskent.edu.tr
4ac4e8d17132f2d9812a0088594d262a9a0d339bRank Constrained Recognition under Unknown Illuminations
Center for Automation Research (CfAR)
Department of Electrical and Computer Engineering
University of Maryland, College Park, MD
('9215658', 'Rama Chellappa', 'rama chellappa'){shaohua, rama}@cfar.umd.edu
4ac3cd8b6c50f7a26f27eefc64855134932b39beRobust Facial Landmark Detection
via a Fully-Convolutional Local-Global Context Network
Technical University of Munich
('3044182', 'Daniel Merget', 'daniel merget')
('28096417', 'Matthias Rock', 'matthias rock')
('46343645', 'Gerhard Rigoll', 'gerhard rigoll')
daniel.merget@tum.de
matthias.rock@tum.de
mmk@ei.tum.de
4abaebe5137d40c9fcb72711cdefdf13d9fc3e62Dimension Reduction for Regression
with Bottleneck Neural Networks
BECS, Aalto University School of Science and Technology, Finland
('2504988', 'Elina Parviainen', 'elina parviainen')
4acd683b5f91589002e6f50885df51f48bc985f4BRIDGING COMPUTER VISION AND SOCIAL SCIENCE : A MULTI-CAMERA VISION
SYSTEM FOR SOCIAL INTERACTION TRAINING ANALYSIS
Peter Tu
GE Global Research, Niskayuna NY USA
('1713712', 'Jixu Chen', 'jixu chen')
('39643145', 'Ming-Ching Chang', 'ming-ching chang')
('2095482', 'Tai-Peng Tian', 'tai-peng tian')
('1689202', 'Ting Yu', 'ting yu')
4a1d640f5e25bb60bb2347d36009718249ce9230Towards Multi-view and Partially-occluded Face Alignment
National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing 100190, P. R. China
National University of Singapore, Singapore
('1757173', 'Junliang Xing', 'junliang xing')
('1773437', 'Zhiheng Niu', 'zhiheng niu')
('1753492', 'Junshi Huang', 'junshi huang')
('40506509', 'Weiming Hu', 'weiming hu')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
{jlxing,wmhu}@nlpr.ia.ac.cn
{niuzhiheng,junshi.huang,eleyans}@nus.edu.sg
4aeb87c11fb3a8ad603311c4650040fd3c088832Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)
1816
4a3d96b2a53114da4be3880f652a6eef3f3cc0352666
A Dictionary Learning-Based
3D Morphable Shape Model
('35220006', 'Claudio Ferrari', 'claudio ferrari')
('2973738', 'Giuseppe Lisanti', 'giuseppe lisanti')
('2507859', 'Stefano Berretti', 'stefano berretti')
('8196487', 'Alberto Del Bimbo', 'alberto del bimbo')
4a6fcf714f663618657effc341ae5961784504c7Scaling up Class-Specific Kernel Discriminant
Analysis for large-scale Face Verification
('9219875', 'Moncef Gabbouj', 'moncef gabbouj')
24b37016fee57057cf403fe2fc3dda78476a8262Automatic Recognition of Eye Blinking in Spontaneously Occurring Behavior
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA
University of Pittsburgh, Pittsburgh
('1683262', 'Tsuyoshi Moriyama', 'tsuyoshi moriyama')
('1724419', 'Jing Xiao', 'jing xiao')
24115d209e0733e319e39badc5411bbfd82c5133Long-term Recurrent Convolutional Networks for
Visual Recognition and Description
('7408951', 'Jeff Donahue', 'jeff donahue')
('2234342', 'Lisa Anne Hendricks', 'lisa anne hendricks')
('34849128', 'Marcus Rohrbach', 'marcus rohrbach')
('1811430', 'Subhashini Venugopalan', 'subhashini venugopalan')
('1687120', 'Sergio Guadarrama', 'sergio guadarrama')
('2903226', 'Kate Saenko', 'kate saenko')
('1753210', 'Trevor Darrell', 'trevor darrell')
24c442ac3f6802296d71b1a1914b5d44e48b4f29Pose and expression-coherent face recovery in the wild
Technicolor, Cesson-S´evign´e, France
François Le Clerc
Patrick Pérez
('2232848', 'Xavier P. Burgos-Artizzu', 'xavier p. burgos-artizzu')
('2045531', 'Joaquin Zepeda', 'joaquin zepeda')
xavier.burgos,joaquin.zepeda,francois.leclerc,patrick.perez@technicolor.com
247cab87b133bd0f4f9e8ce5e7fc682be6340eacRESEARCH ARTICLE
Robust Eye Center Localization through Face
Alignment and Invariant Isocentric Patterns
School of Physics and Engineering, Sun Yat-Sen University, Guangzhou, China, 2 School of Information
Science and Technology, Sun Yat-Sen University, Guangzhou, China, 3 SYSU-CMU Shunde International
Joint Research Institute, Foshan, China
☯ These authors contributed equally to this work.
('36721307', 'Zhiyong Pang', 'zhiyong pang')
('2940388', 'Chuansheng Wei', 'chuansheng wei')
('2127322', 'Dongdong Teng', 'dongdong teng')
('2547930', 'Dihu Chen', 'dihu chen')
('31912378', 'Hongzhou Tan', 'hongzhou tan')
* issthz@mail.sysu.edu.cn (HT); stspzy@mail.sysu.edu.cn (ZP)
245f8ec4373e0a6c1cae36cd6fed5a2babed1386J. Appl. Environ. Biol. Sci., 7(3S)1-10, 2017
© 2017, TextRoad Publication
ISSN: 2090-4274
Journal of Applied Environmental
and Biological Sciences
www.textroad.com
Lucas Kanade Optical Flow Computation from Superpixel based Intensity
Region for Facial Expression Feature Extraction
1Intelligent Biometric Group, School of Electrical and Electronics Engineering, Universiti Sains Malaysia, Engineering Campus, 14300 Nibong Tebal, Pulau Pinang, Malaysia
Electrical, Electronics and Automation Section, Universiti Kuala Lumpur Malaysian Spanish Institute, Kulim Hi-Tech Park, Kedah, Malaysia
Received: February 21, 2017
Accepted: May 14, 2017
('9114862', 'Halina Hassan', 'halina hassan')
('2583099', 'Abduljalil Radman', 'abduljalil radman')
('2612367', 'Shahrel Azmin Suandi', 'shahrel azmin suandi')
('1685966', 'Sazali Yaacob', 'sazali yaacob')
24cb375a998f4af278998f8dee1d33603057e525Projection Metric Learning on Grassmann Manifold with Application to Video based Face Recognition
1Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing, 100190, China
University of Chinese Academy of Sciences, Beijing, 100049, China
seek to learn a generic mapping f : G(q,D) → G(q,d) that is defined as
f (YYY iYYY T
i ) = WWW TYYY iYYY T
i WWW = (WWW TYYY i)(WWW TYYY i)T .
(1)
where WWW ∈ RD×d (d ≤ D), is a transformation matrix of column full rank.
With this mapping, the original Grassmann manifold G(q,D) can be trans-
formed into a lower-dimensional Grassmann manifold G(q,d). However,
except the case WWW is an orthogonal matrix, WWW TYYY i is not generally an or-
thonormal basis matrix. Note that only the linear subspaces spanned by or-
thonormal basis matrix can form a valid Grassmann manifold. To tackle this
problem, we temporarily use the orthonormal components of WWW TYYY i defined
(cid:48)
by WWW TYYY
i to represent an orthonormal basis matrix of the transformed pro-
(cid:48)
jection matrices. As for the approach to get the WWW TYYY
i, we give more details
in the original paper. Here, we briefly describe the formulation of the Pro-
jection Metric on the new Grassmann manifold and the proposed objection
function in the following.
Learned Projection Metric. The Projection Metric of any pair of trans-
formed projection operators WWW TYYY
(cid:48)T
j WWW is defined by:
(cid:48)
jYYY
(cid:48)
iYYY
(cid:48)
iYYY
(cid:48)
jYYY
(cid:48)T
i WWW ,WWW TYYY
(cid:48)T
i WWW , WWW TYYY
(cid:48)T
p(WWW TYYY
d2
j WWW )
= 2−1/2(cid:107)WWW TYYY
(cid:48)T
(cid:48)
i WWW −WWW TYYY
iYYY
= 2−1/2tr(PPPAAAi jAAAT
i jPPP).
i −YYY
(cid:48)T
(cid:48)
jYYY
(cid:48)T
j WWW(cid:107)2
(2)
(cid:48)
iYYY
(cid:48)
jYYY
(cid:48)T
j and PPP = WWWWWW T . Since WWW is required to be a
where AAAi j = YYY
matrix with column full rank, PPP is a rank-d symmetric positive semidefinite
matrix of size D× D, which has a similar form as Mahalanobis matrix.
Discriminant Function. The discriminant function is designed to minimize the projection distances of within-class subspace pairs while maximizing the projection distances of between-class subspace pairs. The matrix P is thus obtained from the objective function J(P) as:

P* = argmin_P J(P) = argmin_P (J_w(P) - α J_b(P)),   (3)

where α reflects the trade-off between the within-class compactness term J_w(P) and the between-class dispersion term J_b(P), which are measured by the average within-class scatter and the average between-class scatter respectively:

J_w(P) = (1/N_w) Σ_{i=1}^{m} Σ_{j: C_i = C_j} 2^{-1/2} tr(P A_ij A_ij^T P),   (4)

J_b(P) = (1/N_b) Σ_{i=1}^{m} Σ_{j: C_i ≠ C_j} 2^{-1/2} tr(P A_ij A_ij^T P),   (5)

where N_w is the number of pairs of samples from the same class, N_b is the number of pairs of samples from different classes, A_ij = Y_i' Y_i'^T - Y_j' Y_j'^T, and P is the PSD matrix that needs to be learned.
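The objective above can be sketched numerically. This is a toy evaluation only (the iterative optimization of P is in the original paper), and the orthonormal bases passed in stand in for the paper's Y_i':

```python
import numpy as np

def proj_dist_sq(P, Yi, Yj):
    # per-pair scatter term 2**-0.5 * tr(P A_ij A_ij^T P) from Eqs. (4)-(5)
    A = Yi @ Yi.T - Yj @ Yj.T
    return 2 ** -0.5 * float(np.trace(P @ A @ A.T @ P))

def J(P, bases, labels, alpha=1.0):
    # J(P) = J_w(P) - alpha * J_b(P): average within-class scatter minus
    # alpha times average between-class scatter, as in Eq. (3)
    within, between = [], []
    for i in range(len(bases)):
        for j in range(i + 1, len(bases)):
            d2 = proj_dist_sq(P, bases[i], bases[j])
            (within if labels[i] == labels[j] else between).append(d2)
    Jw = sum(within) / len(within) if within else 0.0
    Jb = sum(between) / len(between) if between else 0.0
    return Jw - alpha * Jb

# toy data: two identical same-class subspaces and one orthogonal subspace
P = np.eye(3)
e1 = np.array([[1.0], [0.0], [0.0]])
e2 = np.array([[0.0], [1.0], [0.0]])
val = J(P, [e1, e1, e2], [0, 0, 1])   # J_w = 0, J_b > 0, so val < 0
```

A good P makes within-class pairs close and between-class pairs far apart, driving J(P) down, which is exactly what the minimization in Eq. (3) seeks.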
[1] J. Hamm and D. D. Lee. Grassmann discriminant analysis: a unifying view on subspace-based learning. In ICML, 2008.
[2] J. Hamm and D. D. Lee. Extended Grassmann kernels for subspace-based learning. In NIPS, 2008.
[3] M. T. Harandi, C. Sanderson, S. Shirazi, and B. C. Lovell. Graph embedding discriminant analysis on Grassmannian manifolds for improved image set matching. In CVPR, 2011.
[4] M. T. Harandi, M. Salzmann, S. Jayasumana, R. Hartley, and H. Li. Expanding the family of Grassmannian kernels: an embedding perspective. In ECCV, 2014.
[5] R. Vemulapalli, J. Pillai, and R. Chellappa. Kernel learning for extrinsic classification of manifold features. In CVPR, 2013.
Figure 1: Conceptual illustration of the proposed Projection Metric Learning (PML) on the Grassmann manifold. Traditional Grassmann discriminant analysis methods take the path (a)-(b)-(d)-(e): they first embed the original Grassmann manifold G(q,D) (b) into a high-dimensional Hilbert space H (d) and then learn a map from the Hilbert space to a lower-dimensional, more discriminative space R^d (e). In contrast, the newly proposed approach takes the path (a)-(b)-(c), learning the metric/mapping from the original Grassmann manifold G(q,D) (b) to a new, more discriminative Grassmann manifold G(q,d) (c).
In video-based face recognition, great success has been achieved by representing videos as linear subspaces, which typically reside on a Grassmann manifold endowed with the well-studied projection metric. Under the projection metric framework, most recent studies [1, 2, 3, 4, 5] exploited a series of positive definite kernel functions on the Grassmann manifold to first embed the manifold into a high-dimensional Hilbert space, and then map the flattened manifold into a lower-dimensional Euclidean space (see Fig. 1 (a)-(b)-(d)-(e)). Although these methods can be employed for supervised classification, they are limited to Mercer kernels, which yield only implicit projections, and are thus restricted to kernel-based classifiers. Moreover, the computational complexity of these kernel-based methods increases with the number of training samples.

To overcome the limitations of existing Grassmann discriminant analysis methods, this paper attempts to learn a Mahalanobis-like matrix on the Grassmann manifold, endowed with the well-studied Projection Metric, without resorting to kernel Hilbert space embedding. In contrast to the kernelization scheme, our approach works directly on the original manifold and exploits its geometry to learn a representation that still benefits from useful properties of the Grassmann manifold. Furthermore, the learned Mahalanobis-like matrix can be decomposed into a transformation for dimensionality reduction, which maps the original Grassmann manifold to a lower-dimensional, more discriminative Grassmann manifold (see Fig. 1 (a)-(b)-(c)).
Formally, assume m video sequences are given as {X_1, X_2, ..., X_m}, where X_i ∈ R^{D×n_i} is the data matrix of the i-th video containing n_i frames, each frame being expressed as a D-dimensional feature vector. Each video belongs to one of the face classes, denoted by C_i. The i-th video X_i is represented by a q-dimensional linear subspace spanned by an orthonormal basis matrix Y_i ∈ R^{D×q}, s.t. X_i X_i^T ≈ Y_i Λ_i Y_i^T, where Λ_i and Y_i correspond to the matrices of the q largest eigenvalues and eigenvectors respectively.

Given a linear subspace span(Y_i) on the Grassmann manifold (as discussed in the original paper, we denote Y_i Y_i^T as the elements on the manifold), we
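The subspace construction above (top-q eigenvectors of X_i X_i^T) can be sketched directly; since the top-q eigenvectors of X X^T are the top-q left singular vectors of X, an SVD suffices:

```python
import numpy as np

def video_subspace(X, q):
    """Orthonormal basis Y_i (D x q) spanning the q-dim subspace of a video's
    frames X (D x n_i): the top-q eigenvectors of X X^T, computed here as the
    top-q left singular vectors of X."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :q]

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 30))   # D = 10 features, n_i = 30 frames
Y = video_subspace(X, q=3)      # Y has orthonormal columns: Y^T Y = I_q
```

Using the SVD rather than forming X X^T explicitly is the standard numerically stable route to the same basis.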
('7945869', 'Zhiwu Huang', 'zhiwu huang')
('3373117', 'Ruiping Wang', 'ruiping wang')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
24aac045f1e1a4c13a58eab4c7618dccd4c0e671
240d5390af19bb43761f112b0209771f19bfb696
24f9248f01df3020351347c2a3f632e01de72090Reconstructing a Fragmented Face from a
Cryptographic Identification Protocol
The University of Texas at Austin
('39573884', 'Andy Luong', 'andy luong')
('2499821', 'Michael Gerbush', 'michael gerbush')
('1715120', 'Brent Waters', 'brent waters')
('1794409', 'Kristen Grauman', 'kristen grauman')
aluong,mgerbush,bwaters,grauman@cs.utexas.edu
24e099e77ae7bae3df2bebdc0ee4e00acca71250Robust face alignment under occlusion via regional predictive power
estimation.
© 2015 IEEE
For additional information about this publication click this link.
http://qmro.qmul.ac.uk/xmlui/handle/123456789/22467
Information about this research object was correct at the time of download; we occasionally
make corrections to records, please therefore check the published record when citing. For
more information contact scholarlycommunications@qmul.ac.uk
('2966679', 'Heng Yang', 'heng yang')
24959d1a9c9faf29238163b6bcaf523e2b05a053High accuracy head pose tracking survey
Warsaw University of Technology, Poland
('1899063', 'Adam Strupczewski', 'adam strupczewski')
24f1febcdf56cd74cb19d08010b6eb5e7c81c362Synergistic Methods for using Language in Robotics
Ching L. Teo
University of Maryland
Dept of Computer Science
College Park, Maryland
+01 3014051762
University of Maryland
Dept of Computer Science
College Park, Maryland
+01 3014051762
University of Maryland
Institute for Advanced
Computer Studies
College Park, Maryland
+01 3014051743
University of Maryland
Dept of Computer Science
College Park, Maryland
+01 3014051768
('7607499', 'Yezhou Yang', 'yezhou yang')
('1759899', 'Cornelia Fermüller', 'cornelia fermüller')
('1697493', 'Yiannis Aloimonos', 'yiannis aloimonos')
cteo@cs.umd.edu
yzyang@cs.umd.edu
fer@umiacs.umd.edu
yiannis@cs.umd.edu
2450c618cca4cbd9b8cdbdb05bb57d67e63069b1A Connexionist Approach for Robust and Precise Facial Feature Detection in
Complex Scenes
Stefan Duffner and Christophe Garcia
France Telecom Research & Development
4, rue du Clos Courtel
35512 Cesson-Sévigné, France
{stefan.duffner, christophe.garcia}@francetelecom.com
244b57cc4a00076efd5f913cc2833138087e1258Warped Convolutions: Efficient Invariance to Spatial Transformations
('1687524', 'Andrea Vedaldi', 'andrea vedaldi')
24cf9fe9045f50c732fc9c602358af89ae40a9f7YANG et al.: ATTRIBUTE RECOGNITION FROM ADAPTIVE PARTS
Attribute Recognition from Adaptive Parts
Ligeng Zhu2
Simon Fraser University
Vancouver, Canada
Zhejiang University
Hangzhou, China
3 Microsoft Research Asia
Beijing, China
Tongji University
Shanghai, China
('3202074', 'Luwei Yang', 'luwei yang')
('1732264', 'Yichen Wei', 'yichen wei')
('1729017', 'Shuang Liang', 'shuang liang')
('37291674', 'Ping Tan', 'ping tan')
luweiy@sfu.ca
zhuligeng@zju.edu.cn
yichenw@microsoft.com
shuangliang@tongji.edu.cn
pingtan@sfu.ca
24f022d807352abf071880877c38e53a98254dcdAre screening methods useful in feature selection? An
empirical study
Florida State University, Tallahassee, Florida, U.S.A
('6693611', 'Mingyuan Wang', 'mingyuan wang')
('2455529', 'Adrian Barbu', 'adrian barbu')
* abarbu@stat.fsu.edu
241d2c517dbc0e22d7b8698e06ace67de5f26fdfOnline, Real-Time Tracking
Using a Category-to-Individual Detector(cid:2)
California Institute of Technology, USA
('1990633', 'David Hall', 'david hall')
('1690922', 'Pietro Perona', 'pietro perona')
{dhall,perona}@vision.caltech.edu
24869258fef8f47623b5ef43bd978a525f0af60eUniversité de Grenoble. Thesis submitted for the degree of Doctor of the Université de Grenoble, specialty: Mathematics and Computer Science, prepared at the Laboratoire Jean Kuntzmann within the École Doctorale Mathématiques, Sciences et Technologies de l'Information, Informatique. Presented and publicly defended by Matthieu Guillaumin on 27 September 2010. Exploiting Multimodal Data for Image Understanding (Données multimodales pour l'analyse d'image). Thesis advisors: Cordelia Schmid and Jakob Verbeek. Jury: Éric Gaussier (Université Joseph Fourier, president), Antonio Torralba (Massachusetts Institute of Technology, reviewer), Tinne Tuytelaars (Katholieke Universiteit Leuven, reviewer), Mark Everingham (University of Leeds, examiner), Cordelia Schmid (INRIA Grenoble, examiner), Jakob Verbeek (INRIA Grenoble, examiner).
24e6a28c133b7539a57896393a79d43dba46e0f6ROBUST BAYESIAN METHOD FOR SIMULTANEOUS BLOCK SPARSE SIGNAL
RECOVERY WITH APPLICATIONS TO FACE RECOGNITION
Department of Electrical and Computer Engineering
University of California, San Diego
('32352411', 'Igor Fedorov', 'igor fedorov')
('3291075', 'Ritwik Giri', 'ritwik giri')
('1748319', 'Bhaskar D. Rao', 'bhaskar d. rao')
('1690269', 'Truong Q. Nguyen', 'truong q. nguyen')
248db911e3a6a63ecd5ff6b7397a5d48ac15e77aEnriching Texture Analysis with Semantic Data
Communications, Signal Processing and Control Group
School of Electronics and Computer Science
University of Southampton
('28637223', 'Tim Matthews', 'tim matthews')
('1727698', 'Mark S. Nixon', 'mark s. nixon')
('1697360', 'Mahesan Niranjan', 'mahesan niranjan')
{tm1e10,msn,mn}@soton.ac.uk
24d376e4d580fb28fd66bc5e7681f1a8db3b6b78
24f1e2b7a48c2c88c9e44de27dc3eefd563f6d39Recognition of Action Units in the Wild
with Deep Nets and a New Global-Local Loss
C. Fabian Benitez-Quiroz
Aleix M. Martinez
Dept. Electrical and Computer Engineering
The Ohio State University
('1678691', 'Yan Wang', 'yan wang'){benitez-quiroz.1,wang.9021,martinez.158}@osu.edu
243e9d490fe98d139003bb8dc95683b366866c57Distinctive Parts for Relative attributes
Thesis submitted in partial fulfillment
of the requirements for the degree of
Master of science( by research)
in
Computer Science Engineering
by
Ramachandruni Naga Sandeep
201207582
Center for Visual Information Technology
International Institute of Information Technology
Hyderabad - 500 032, INDIA
December 2014
nsandeep.ramachandruni@research.iiit.ac.in
2465fc22e03faf030e5a319479a95ef1dfc46e14PROCEEDING OF THE 20TH CONFERENCE OF FRUCT ASSOCIATION
Influence of Different Feature Selection Approaches
on the Performance of Emotion Recognition
Methods Based on SVM
Ural Federal University (UrFU)
Yekaterinburg, Russia
('11063038', 'Daniil Belkov', 'daniil belkov')
('3457868', 'Konstantin Purtov', 'konstantin purtov')
d.d.belkov, k.s.purtov@gmail.com, kublanov@mail.ru
24ff832171cb774087a614152c21f54589bf7523Beat-Event Detection in Action Movie Franchises
Jerome Revaud
Zaid Harchaoui
('2319574', 'Danila Potapov', 'danila potapov')
('3271933', 'Matthijs Douze', 'matthijs douze')
('2462253', 'Cordelia Schmid', 'cordelia schmid')
247a6b0e97b9447850780fe8dbc4f94252251133Facial Action Unit Detection: 3D versus 2D Modality
Electrical and Electronics Engineering
Boğaziçi University, Istanbul, Turkey
Bülent Sankur
Electrical and Electronics Engineering
Boğaziçi University, Istanbul, Turkey
Department of Psychology
Boğaziçi University, Istanbul, Turkey
('1839621', 'Arman Savran', 'arman savran')
('27414819', 'M. Taha Bilge', 'm. taha bilge')
arman.savran@boun.edu.tr
bulent.sankur@boun.edu.tr
taha.bilge@boun.edu.tr
24bf94f8090daf9bda56d54e42009067839b20df
240eb0b34872c431ecf9df504671281f59e7da37Cutout-Search: Putting a Name to the Picture
Carnegie Mellon University
Cornell University
('1746610', 'Dhruv Batra', 'dhruv batra')
('2371390', 'Adarsh Kowdle', 'adarsh kowdle')
('1746230', 'Tsuhan Chen', 'tsuhan chen')
('1713589', 'Devi Parikh', 'devi parikh')
batradhruv@cmu.edu
apk64@cornell.edu dparikh@cmu.edu tsuhan@ece.cornell.edu
230527d37421c28b7387c54e203deda64564e1b7Person Re-identification: System Design and
Evaluation Overview
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('40156369', 'Rui Zhao', 'rui zhao')
23fdbef123bcda0f07d940c72f3b15704fd49a98
23ebbbba11c6ca785b0589543bf5675883283a57
23aef683f60cb8af239b0906c45d11dac352fb4eIncorporating Context Information into Deep
Neural Network Acoustic Models
July 2016
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Thesis Committee:
Florian Metze, Chair (Carnegie Mellon University)
Alan W Black (Carnegie Mellon University)
Alex Waibel (Carnegie Mellon University)
Jinyu Li (Microsoft)
Submitted in partial fulfillment of the requirements
for the degree of Doctor of Philosophy.
('37467623', 'Yajie Miao', 'yajie miao')
235d5620d05bb7710f5c4fa6fceead0eb670dec5Who’s Doing What: Joint Modeling of Names and
Verbs for Simultaneous Face and Pose Annotation
Luo Jie
Idiap and EPF Lausanne
Idiap Research Institute
ETH Zurich
('3033284', 'Barbara Caputo', 'barbara caputo')
('1749692', 'Vittorio Ferrari', 'vittorio ferrari')
jluo@idiap.ch
bcaputo@idiap.ch
ferrari@vision.ee.ethz.ch
23ce6f404c504592767b8bec7d844d87b462de71A Deep Face Identification Network Enhanced by Facial Attributes Prediction
West Virginia University
('34708406', 'Fariborz Taherkhani', 'fariborz taherkhani')
('8147588', 'Nasser M. Nasrabadi', 'nasser m. nasrabadi')
ft0009@mix.wvu.edu, nasser.nasrabadi@mail.wvu.edu, Jeremy.Dawson@mail.wvu.edu
23fd653b094c7e4591a95506416a72aeb50a32b5Emotion Recognition using Fuzzy Rule-based System
International Journal of Computer Applications (0975 – 8887)
Volume 93 – No.11, May 2014
Department of Computer Science
Amity University, Lucknow, India
Faculty in Department Of Computer Science
Amity University, Lucknow, India
('14559473', 'Akanksha Chaturvedi', 'akanksha chaturvedi')
23172f9a397f13ae1ecb5793efd81b6aba9b4537Proceedings of the 2015 Workshop on Vision and Language (VL’15), pages 10–17,
Lisbon, Portugal, 18 September 2015. c(cid:13)2015 Association for Computational Linguistics.
10
231a6d2ee1cc76f7e0c5912a530912f766e0b459Shape Primitive Histogram: A Novel Low-Level Face Representation for Face
Recognition
aCollege of Computer Science at Chongqing University, 400044, Chongqing, P.R.C
bSchool of Software Engineering at Chongqing Univeristy,400044,Chongqing,P.R.C
cSchool of Astronautics at Beihang University, 100191, Beijing, P.R.C
dState Key Laboratory of Management and Control for Complex Systems
Institute of Automation, Chinese Academy of Sciences, 100190, Beijing, P.R.C
eMinistry of Education Key Laboratory of Dependable Service Computing in Cyber Physical Society, 400044, Chongqing, P.R.C
('1786011', 'Sheng Huang', 'sheng huang')
('1698431', 'Dan Yang', 'dan yang')
('1737368', 'Haopeng Zhang', 'haopeng zhang')
236a4f38f79a4dcc2183e99b568f472cf45d27f4
Randomized Clustering Forests
for Image Classification
Frederic Jurie, Member, IEEE Computer Society
('3128253', 'Frank Moosmann', 'frank moosmann')
('1975110', 'Eric Nowak', 'eric nowak')
230c4a30f439700355b268e5f57d15851bcbf41fEM Algorithms for Weighted-Data Clustering
with Application to Audio-Visual Scene Analysis
('1780201', 'Xavier Alameda-Pineda', 'xavier alameda-pineda')
('1785817', 'Florence Forbes', 'florence forbes')
('1794229', 'Radu Horaud', 'radu horaud')
237fa91c8e8098a0d44f32ce259ff0487aec02cfIEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 36, NO. 4, AUGUST 2006
863
Bidirectional PCA With Assembled Matrix
Distance Metric for Image Recognition
('1724520', 'Wangmeng Zuo', 'wangmeng zuo')
('1711542', 'Kuanquan Wang', 'kuanquan wang')
23fc83c8cfff14a16df7ca497661264fc54ed746The Robotics Institute
Carnegie Mellon University
Pittsburgh, PA, USA 15213
http://www.cs.cmu.edu/~face
Department of Psychology
University of Pittsburgh
The Robotics Institute
Carnegie Mellon University
4015 O'Hara Street
Pittsburgh, PA, USA 15260
Yingli Tian
The Robotics Institute
Carnegie Mellon University
Pittsburgh, PA, USA 15213
Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition
(FG'00), pp. 484-490, Grenoble, France.
Comprehensive Database for Facial Expression Analysis
('1733113', 'Takeo Kanade', 'takeo kanade')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
tk@cs.cmu.edu
yltian@cs.cmu.edu
jeffcohn+@pitt.edu
2331df8ca9f29320dd3a33ce68a539953fa87ff5Extended Isomap for Pattern Classification
Honda Fundamental Research Labs
Mountain View, CA 94041
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
myang@hra.com
232b6e2391c064d483546b9ee3aafe0ba48ca519Optimization problems for fast AAM fitting in-the-wild
1. School of Computer Science
University of Lincoln, U.K
2. Department of Computing
Imperial College London, U.K
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
gtzimiropoulos@lincoln.ac.uk
23ba9e462151a4bf9dfc3be5d8b12dbcfb7fe4c3CS 229 Project, Fall 2014
Determining Mood from Facial Expressions
I Introduction
Facial expressions play an extremely important role in human communication. As
society continues to make greater use of human-machine interactions, it is important for
machines to be able to interpret facial expressions in order to improve their
authenticity. If machines can be trained to determine mood to a better extent than
humans can, especially for more subtle moods, then this could be useful in fields such as
counseling. This could also be useful for gauging reactions of large audiences in various
contexts, such as political talks.
The results of this project could also be applied to recognizing other features of facial
expressions, such as determining when people are purposefully suppressing emotions or
lying. The ability to recognize different facial expressions could also improve technology
that recognizes to whom specific faces belong. This could in turn be used to search a
large number of pictures for a specific photo, which is becoming increasingly difficult, as
storing photos digitally has been extremely common in the past decade. The possibilities
are endless.
II Data and Features
2.1 Data
Our data consists of 1166 frontal images of
people’s faces from three databases, with each
image labeled with one of eight emotions:
anger, contempt, disgust, fear, happiness,
neutral, sadness, and surprise. The TFEID [1],
CK+ [2], and JAFFE [3] databases primarily
consist of Taiwanese, Caucasian, and Japanese
subjects, respectively. The TFEID and JAFFE
images are both cropped with the faces
centered. Each image has a subject posing with
one of the emotions. The JAFFE database does
not have any images for contempt.
2.2 Features
On each face, there are many different facial landmarks. While some of these landmarks
(pupil position, nose tip, and face contour) are not as indicative of emotion, others
(eyebrow, mouth, and eye shape) are. To extract landmark data from images, we used
Figure 1: example face images labeled Happiness and Anger.
('34482382', 'Matthew Wang', 'matthew wang')
mmwang@stanford.edu
spencery@stanford.edu
237eba4822744a9eabb121fe7b50fd2057bf744cFacial Expression Synthesis Using PAD Emotional
Parameters for a Chinese Expressive Avatar
1 Department of Computer Science and Technology
Tsinghua University, 100084 Beijing, China
2 Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong, HKSAR, China
('2180849', 'Shen Zhang', 'shen zhang')
('3860920', 'Zhiyong Wu', 'zhiyong wu')
('1702243', 'Helen M. Meng', 'helen m. meng')
('7239047', 'Lianhong Cai', 'lianhong cai')
zhangshen05@mails.tsinghua.edu.cn, john.zy.wu@gmail.com
hmmeng@se.cuhk.edu.hk, clh-dcs@tsinghua.edu.cn
238fc68b2e0ef9f5ec043d081451902573992a03
Enhanced Local Gradient Order Features and
Discriminant Analysis for Face Recognition
role in robust face recognition [5]. Many algorithms have
been proposed to deal with the effectiveness of feature design
and extraction [6], [7]; however, the performance of many
existing methods is still highly sensitive to variations of
imaging conditions, such as outdoor illumination, exaggerated
expression, and continuous occlusion. These complex variations have significantly affected recognition accuracy in recent years [8]–[10].
Appearance-based subspace learning is one of the simplest approaches to feature extraction, and many methods are based on the linear correlation of pixel intensities. For example, Eigenface [11] uses the eigen-system of pixel intensities to estimate a lower-rank linear subspace of a set of training face images by minimizing the ℓ2 distance metric. The solution enjoys optimality properties only when the noise is independent and identically distributed Gaussian.
Fisherface [12] suffers more due to the estimation of the inverse within-class covariance matrix [13]; thus its performance degenerates rapidly in cases of occlusion and small sample size.
appearance-based approach which learns a locality preserv-
ing subspace and seeks to capture the intrinsic geometry
and local structure of the data. Other methods such as those
in [5] and [15] also provide valuable approaches to supervised
or unsupervised dimension reduction tasks.
A fundamental problem of appearance-based methods for
face recognition, however, is that they are sensitive to imag-
ing conditions [10]. As for data corrupted by illumination
changes, occlusions, and inaccurate alignment, the estimated
subspace will be biased, thus much of the efforts concentrate
on removing/shrinking the noise components. In contrast, local
feature descriptors [15]–[19] have certain advantages as they
are more stable to local changes. From the viewpoint of image processing and vision, the basic imaging system can be simply formulated as

I(x, y) = A(x, y) × L(x, y),   (1)
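Model (1) is why ratio-based local descriptors are robust to lighting: taking A as the intrinsic (reflectance) component and L as a slowly varying illumination, ratios of neighboring pixels cancel a locally constant L. A toy numpy illustration (not from the paper), using a single scanline under two global lighting levels:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.2, 1.0, size=8)   # intrinsic (albedo) values along a scanline
L1, L2 = 0.3, 3.0                   # two global illumination levels (10x apart)
I1, I2 = A * L1, A * L2             # observed intensities under model (1)

r1 = I1[1:] / I1[:-1]               # neighboring-pixel ratios, lighting 1
r2 = I2[1:] / I2[:-1]               # neighboring-pixel ratios, lighting 2
# r1 and r2 agree: the illumination factor cancels in each ratio
```

The raw intensities differ by a factor of ten, yet the local ratios are identical, which is the intuition behind gradient-order and other ratio-style features.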
('1688667', 'Chuan-Xian Ren', 'chuan-xian ren')
('1718623', 'Zhen Lei', 'zhen lei')
('1726138', 'Dao-Qing Dai', 'dao-qing dai')
('34679741', 'Stan Z. Li', 'stan z. li')
2322ec2f3571e0ddc593c4e2237a6a794c61251dJack, R. E. , Sun, W., Delis, I., Garrod, O. G. B. and Schyns, P. G. (2016)
Four not six: revealing culturally common facial expressions of
emotion.Journal of Experimental Psychology: General, 145(6), pp. 708-
730. (doi:10.1037/xge0000162)
This is the author’s final accepted version.
There may be differences between this version and the published version.
You are advised to consult the publisher’s version if you wish to cite from
it.
http://eprints.gla.ac.uk/116592/
Deposited on: 20 April 2016
Enlighten Research publications by members of the University of Glasgow
http://eprints.gla.ac.uk
23e75f5ce7e73714b63f036d6247fa0172d97cb6BioMed Central
Research
Facial expression (mood) recognition from facial images using
committee neural networks
Open Access
University of Akron, Akron
Engineering, University of Akron, Akron, OH 44325-3904, USA
* Corresponding author
Published: 5 August 2009
doi:10.1186/1475-925X-8-16
Received: 24 September 2008
Accepted: 5 August 2009
This article is available from: http://www.biomedical-engineering-online.com/content/8/1/16
© 2009 Kulkarni et al; licensee BioMed Central Ltd.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0),
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
('39890387', 'Saket S Kulkarni', 'saket s kulkarni')
('2484370', 'Narender P Reddy', 'narender p reddy')
('32173165', 'SI Hariharan', 'si hariharan')
Email: Saket S Kulkarni - saketkulkarni@gmail.com; Narender P Reddy* - npreddy@uakron.edu; SI Hariharan - hari@uakron.edu
23429ef60e7a9c0e2f4d81ed1b4e47cc2616522fA Domain Based Approach to Social Relation Recognition
Max Planck Institute for Informatics, Saarland Informatics Campus
Figure 1: We investigate the recognition of social relations in a domain-based approach. Our study is based on Bugental’s
social psychology theory [1] that partitions social life into 5 domains from which we derive 16 social relations.
('32222907', 'Qianru Sun', 'qianru sun')
('1697100', 'Bernt Schiele', 'bernt schiele')
('1739548', 'Mario Fritz', 'mario fritz')
{qsun, schiele, mfritz}@mpi-inf.mpg.de
23aba7b878544004b5dfa64f649697d9f082b0cfLocality-Constrained Discriminative Learning and Coding
1Department of Electrical & Computer Engineering,
College of Computer and Information Science
Northeastern University, Boston, MA, USA
('7489165', 'Shuyang Wang', 'shuyang wang')
('37771688', 'Yun Fu', 'yun fu')
{shuyangwang, yunfu}@ece.neu.edu
23120f9b39e59bbac4438bf4a8a7889431ae8adbAalborg Universitet
Improved RGB-D-T based Face Recognition
Nikisins, Olegs; Sun, Yunlian; Li, Haiqing; Sun, Zhenan; Moeslund, Thomas B.; Greitans,
Modris
Published in:
DOI (link to publication from Publisher):
10.1049/iet-bmt.2015.0057
Publication date:
2016
Document Version
Accepted manuscript, peer reviewed version
Link to publication from Aalborg University
Citation for published version (APA):
Oliu Simon, M., Corneanu, C., Nasrollahi, K., Guerrero, S. E., Nikisins, O., Sun, Y., ... Greitans, M. (2016).
General rights
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners
and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.
? Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
? You may not further distribute the material or use it for any profit-making activity or commercial gain
? You may freely distribute the URL identifying the publication in the public portal ?
Take down policy
If you believe that this document breaches copyright please contact us at vbn@aub.aau.dk providing details, and we will remove access to the work immediately and investigate your claim.
Downloaded from vbn.aau.dk on: October 11, 2016
('7855312', 'Sergio Escalera', 'sergio escalera')
2303d07d839e8b20f33d6e2ec78d1353cac256cfSqueeze-and-Excitation on Spatial and Temporal
Deep Feature Space for Action Recognition
Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China
Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing 100044, China
('2896701', 'Gaoyun An', 'gaoyun an')
('3027947', 'Wen Zhou', 'wen zhou')
('47095962', 'Yuxuan Wu', 'yuxuan wu')
('4464686', 'ZhenXing Zheng', 'zhenxing zheng')
('46398737', 'Yongwen Liu', 'yongwen liu')
Email:{gyan, 16125155, 16120307, zhxzheng, 17120314}@bjtu.edu.cn
23d55061f7baf2ffa1c847d356d8f76d78ebc8c1Solmaz et al. IPSJ Transactions on Computer Vision and
Applications (2017) 9:22
DOI 10.1186/s41074-017-0033-4
IPSJ Transactions on Computer
Vision and Applications
RESEARCH PAPER
Open Access
Generic and attribute-specific deep
representations for maritime vessels
('2827750', 'Berkan Solmaz', 'berkan solmaz')
('2131286', 'Erhan Gundogdu', 'erhan gundogdu')
('32499620', 'Aykut Koc', 'aykut koc')
23c3eb6ad8e5f18f672f187a6e9e9b0d94042970Deep Domain Adaptation for Describing People Based on
Fine-Grained Clothing Attributes
IBM Research, Australia, 2 IBM T.J. Watson Research Center, 3 National University of Singapore
Source domain
RCNN
body
detection
Alignment
cost layer
Multi-label
attributes
objective
Target domain
Alignment cost layer
Extra Info
(e.g. Labels)
Figure 1: Our proposed Deep Domain Adaptation Network (DDAN).
Source and target domains are modeled jointly with knowledge transfer oc-
curring at multiple levels of the hierarchy through alignment cost layers.
Describing people in detail is an important task for many applications.
For instance, criminal investigation processes often involve searching for
suspects based on detailed descriptions provided by eyewitnesses or com-
piled from images captured by surveillance cameras. The FBI list of na-
tionwide wanted bank robbers (https://bankrobbers.fbi.gov/) has clear exam-
ples of such fine-grained descriptions, including attributes covering detailed color information (e.g., "light blue", "khaki", "burgundy"), a variety of clothing types (e.g., "leather jacket", "polo-style shirt", "zip-up windbreaker"), and also detailed clothing patterns (e.g., "narrow horizontal stripes", "LA printed text", "checkered").
Traditional computer vision methods for describing people, however,
have only focused on a small set of coarse-grained attributes. As an exam-
ple, the recent work of Zhang et al. [7] achieves impressive attribute predic-
tion performance in unconstrained scenarios, but only considers nine human
attributes. Existing systems for fashion analysis [1, 4, 6] and people search
in surveillance videos [2, 5] also rely on a relatively small set of clothing
attributes. Our work instead addresses the problem of describing people
with very fine-grained clothing attributes. A natural question that arises in
this setting is how to obtain a sufficient number of training samples for each
attribute without significant annotation cost.
Data collection: We observe that online shopping stores such as Ama-
zon.com and TMALL.com have a large set of garment images with associ-
ated descriptions. We created a huge dataset of clothing images with fine-
grained attribute labels by crawling data from these shopping websites. Our
dataset contains 1,108,013 clothing images with 25 different kinds of attribute categories (e.g., type, color, pattern, season, occasion). The attribute labels are very fine-grained. For instance, we can find thousands of different
values for the “color” category. After data curation, we considered a subset
of this data that is meaningful from our application perspective.
Deep Domain Adaptation: Although we have collected a large-scale
dataset with fine-grained attributes, these images are taken in ideal pose /
lighting / background conditions, so it is unreliable to directly use them as
training data for attribute prediction in the domain of unconstrained images
captured, for example, by mobile phones or surveillance cameras. In or-
der to bridge this gap, we design a specific double-path deep convolutional
neural network for the domain adaptation problem. Each path receives one
domain image as the input, i.e., the street domain and the shop domain im-
ages. Each path consists of several convolutional layers which are stacked
layer-by-layer and normally higher layers represent higher-level concept ab-
stractions. Both of the two network paths share the same architecture, e.g.,
the same number of convolutional filters and number of middle layers. This
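The paper does not specify the alignment cost here; as a hedged stand-in, a common choice for penalizing domain discrepancy at a given layer is the squared distance between the domains' mean activations (an MMD-style term). A minimal numpy sketch, with `shop` and `street` as hypothetical activation batches from the two network paths:

```python
import numpy as np

def alignment_cost(feat_src, feat_tgt):
    """Toy alignment cost between two domains' activations at one layer:
    squared distance between the domain mean features (MMD-style penalty).
    feat_*: (batch, dim) arrays from the source / target network paths."""
    return float(np.sum((feat_src.mean(axis=0) - feat_tgt.mean(axis=0)) ** 2))

rng = np.random.default_rng(1)
shop = rng.normal(0.0, 1.0, size=(64, 16))      # shop-domain activations
street = rng.normal(0.5, 1.0, size=(64, 16))    # shifted street-domain activations
cost = alignment_cost(shop, street)             # positive: domains are misaligned
```

Minimizing such a term at several layers nudges the two paths toward domain-invariant intermediate representations, which is the role the alignment cost layers play in the figure.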
('35370244', 'Qiang Chen', 'qiang chen')
('1753492', 'Junshi Huang', 'junshi huang')
('2106286', 'Jian Dong', 'jian dong')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
23dd8d17ce09c22d367e4d62c1ccf507bcbc64daDeep Density Clustering of Unconstrained Faces
(Supplementary Material)
University of Maryland, College Park
A. Mathematical Details
Let S = {i | 0 < αi < C}. We have the following results:
nV(cid:88)
nV(cid:88)
i=1
c∗ =
w∗ =
αiΨθ(xi),
¯R∗ = (cid:107)Ψθ(xs) − c∗(cid:107)2 ,
αiΨθ(xi),
ρ∗ = w∗T Ψθ(xs),
where s ∈ S. Substituting into (3) and (4), we obtain
hSVDD(x) = 2 · hOC-SVM(x) = 2
αiK(xi, x) − ρ∗
(cid:34) nV(cid:88)
i=1
(1)
(2)
(5)
(6)
(cid:35)
(7)
A.2. Proof of Theorem 1
Theorem 1. If 1/nV < ν ≤ 1 and c∗T Ψθ(xs) (cid:54)= 0 for
some support vector xs, hSVDD(x) defined in (3) is asymp-
totically a Parzen window density estimator in the feature
space with Epanechnikov kernel.
Proof. Given the condition, according to Lemma 1,
hSVDD(x) is equivalent to hOC-SVM(x) with ρ∗ (cid:54)= 0. From
the results in [10] and the fact that(cid:80) αi = 1, we obtain:
(cid:21)
(cid:20)
hOC-SVM(x) =
αi
1 − 1
(cid:107)Ψθ(x) − Ψθ(xi)(cid:107)2
(cid:18)(cid:107)Ψθ(x) − Ψθ(xi)(cid:107)
(cid:19)
− ρ∗
− ρ∗ − 1,
αiKE
nV(cid:88)
nV(cid:88)
i=1
i=1
4 (1 − u2), |u| ≤ 1 is the Epanechnikov
where KE(u) = 3
kernel. As a consequence of Proposition 4 in [10] and the
proof of Proposition 1 in [11], as nV → ∞, the fraction
of support vector is ν, and the fraction of points with 0 <
αi < 1/(ν · nV ) vanishes. Therefore, either αi = 0 or
αi = 1/(ν · nV ). We introduce the notation ¯S = {i | αi =
ξ(z)
i=1
In this section, we first provide the two core mathematical formulations and then present detailed proofs for Lemma 1 and Theorem 1.

SVDD formulation:

  min_{c, ¯R, ξ}  ¯R + (1/(ν · nV)) Σ_{z∈V(x)} ξ(z)
  s.t.  ‖Ψθ(z) − c‖² ≤ ¯R + ξ(z),  ξ(z) ≥ 0, ∀z ∈ V(x),   (1)

OC-SVM formulation:

  min_{w, ρ, ξ}  (1/2)‖w‖² + (1/(ν · nV)) Σ_{z∈V(x)} ξz − ρ
  s.t.  wᵀΨθ(z) ≥ ρ − ξz,  ξz ≥ 0, ∀z ∈ V(x).   (2)
A.1. Proof of Lemma 1
Lemma 1. If 1/nV < ν ≤ 1, the SVDD formulation in (1)
is equivalent to the OC-SVM formulation in (2) when the
evaluation functions for the two are given by
hSVDD(x) = ¯R∗ − (cid:107)Ψθ(x) − c∗(cid:107)2 ,
hOC-SVM(x) = w∗T Ψθ(x) − ρ∗,
(3)
(4)
with the correspondence w∗ = c∗, and ρ∗ = c∗T Ψθ(xs),
where xs is a support vector in (1) that lies on the learned
enclosing sphere.
Proof. The condition corresponds to the case 1/nV ≤ C < 1 in [1] with C = 1/(ν · nV). We introduce the kernel function K(xi, xj) = Ψθ(xi)ᵀΨθ(xj). Since K(xi, xi) is constant in our setting, the same dual formulation for (1) and (2) can be written as:

  min_α  Σ_{ij} αi αj K(xi, xj)
  s.t.  0 ≤ αi ≤ C,  Σ_{i=1}^{nV} αi = 1.
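Given dual variables from this shared problem, both decision functions can be evaluated directly. The following numpy sketch mirrors Lemma 1 and Eq. (7); it is an illustration assuming a linear kernel on already-embedded, unit-norm features, not the authors' code:

```python
import numpy as np

def h_ocsvm(x, X, alpha, s):
    """h_OC-SVM(x) = sum_i alpha_i K(x_i, x) - rho*, where
    rho* = sum_i alpha_i K(x_i, x_s) for a support-vector index s."""
    rho = alpha @ (X @ X[s])     # rho* from the support vector x_s
    return alpha @ (X @ x) - rho

def h_svdd(x, X, alpha, s):
    """Lemma 1 / Eq. (7): h_SVDD(x) = 2 * h_OC-SVM(x)."""
    return 2.0 * h_ocsvm(x, X, alpha, s)

X = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]])  # unit-norm embeddings
alpha = np.array([0.3, 0.3, 0.4])                   # dual variables, sum to 1
assert abs(h_ocsvm(X[2], X, alpha, s=2)) < 1e-12    # zero at the support vector
```

By construction the decision function vanishes at the support vector used to define ρ∗, matching the boundary condition of the enclosing sphere.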
('3329881', 'Wei-An Lin', 'wei-an lin')
('36407236', 'Jun-Cheng Chen', 'jun-cheng chen')
walin@umd.edu pullpull@cs.umd.edu carlos@cs.umd.edu rama@umiacs.umd.edu
23a8d02389805854cf41c9e5fa56c66ee4160ce3Multimed Tools Appl
DOI 10.1007/s11042-013-1568-8
Influence of low resolution of images on reliability
of face detection and recognition
© The Author(s) 2013. This article is published with open access at SpringerLink.com
('2553748', 'Tomasz Marciniak', 'tomasz marciniak')
('2009993', 'Radoslaw Weychan', 'radoslaw weychan')
('40397247', 'Adam Dabrowski', 'adam dabrowski')
23b37c2f803a2d4b701e2f39c5f623b2f3e14d8eAvailable Online at www.ijcsmc.com
International Journal of Computer Science and Mobile Computing
A Monthly Journal of Computer Science and Information Technology
ISSN 2320–088X
IJCSMC, Vol. 2, Issue. 4, April 2013, pg.646 – 649
RESEARCH ARTICLE
Modified Approaches on Face Recognition
By using Multisensory Image
Bharath University, India
Bharath University, India
4f9e00aaf2736b79e415f5e7c8dfebda3043a97dMachine Audition:
Principles, Algorithms
and Systems
University of Surrey, UK
InformatIon scIence reference
Hershey • New York
('46314841', 'WenWu Wang', 'wenwu wang')
4fd29e5f4b7186e349ba34ea30738af7860cf21f
4f0d9200647042e41dea71c35eb59e598e6018a7
Experiments of Image Retrieval Using Weak Attributes
Columbia University, New York, NY
('1815972', 'Felix X. Yu', 'felix x. yu')
('1725599', 'Rongrong Ji', 'rongrong ji')
('3138710', 'Ming-Hen Tsai', 'ming-hen tsai')
('35984288', 'Guangnan Ye', 'guangnan ye')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
{yuxinnan, rrji, yegn}@ee.columbia.edu
{minghen, sfchang}@cs.columbia.edu
4f051022de100241e5a4ba8a7514db9167eabf6eFace Parsing via a Fully-Convolutional Continuous
CRF Neural Network
('48207414', 'Lei Zhou', 'lei zhou')
('36300239', 'Zhi Liu', 'zhi liu')
('1706670', 'Xiangjian He', 'xiangjian he')
4faded442b506ad0f200a608a69c039e92eaff11İSTANBUL TECHNICAL UNIVERSITY INSTITUTE OF SCIENCE AND TECHNOLOGY
FACE RECOGNITION UNDER VARYING
ILLUMINATION
Master Thesis by
Department : Computer Engineering
Programme: Computer Engineering
JUNE 2006
('1968256', 'Erald VUÇINI', 'erald vuçini')
('1766445', 'Muhittin GÖKMEN', 'muhittin gökmen')
4f7967158b257e86d66bdabfdc556c697d917d24Guaranteed Parameter Estimation of Discrete Energy
Minimization for 3D Scene Parsing
CMU-RI-TR-16-49
July 2016
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Thesis Committee:
Daniel Huber, Advisor
Submitted in partial fulfillment of the requirements
for the degree of Master of Science in Robotics.
('3439037', 'Mengtian Li', 'mengtian li')
('1691629', 'Alexander J. Smola', 'alexander j. smola')
('1786435', 'David Fouhey', 'david fouhey')
('3439037', 'Mengtian Li', 'mengtian li')
4fc936102e2b5247473ea2dd94c514e320375abbGuess Where? Actor-Supervision for Spatiotemporal Action Localization
KAUST1, University of Amsterdam2, Qualcomm Technologies, Inc
('2795139', 'Victor Escorcia', 'victor escorcia')
('3409955', 'Cuong D. Dao', 'cuong d. dao')
('40027484', 'Mihir Jain', 'mihir jain')
('2931652', 'Bernard Ghanem', 'bernard ghanem')
('1706203', 'Cees Snoek', 'cees snoek')
4f6adc53798d9da26369bea5a0d91ed5e1314df2IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. , NO. , 2016
Online Nonnegative Matrix Factorization with
General Divergences
('2345985', 'Renbo Zhao', 'renbo zhao')
('1678675', 'Huan Xu', 'huan xu')
4fbef7ce1809d102215453c34bf22b5f9f9aab26
4fa0d73b8ba114578744c2ebaf610d2ca9694f45
4fcd19b0cc386215b8bd0c466e42934e5baaa4b7Human Action Recognition using Factorized Spatio-Temporal
Convolutional Networks
Hong Kong University of Science and Technology
Hong Kong University of Science and Technology
Faculty of Science and Technology, University of Macau
§ Lenovo Corporate Research Hong Kong Branch
('1750501', 'Lin Sun', 'lin sun')
('2370507', 'Kui Jia', 'kui jia')
('1739816', 'Dit-Yan Yeung', 'dit-yan yeung')
('2131088', 'Bertram E. Shi', 'bertram e. shi')
lsunece@ust.hk, kuijia@gmail.com, dyyeung@cse.ust.hk, eebert@ust.hk
4f591e243a8f38ee3152300bbf42899ac5aae0a5SUBMITTED TO TPAMI
Understanding Higher-Order Shape
via 3D Shape Attributes
('1786435', 'David F. Fouhey', 'david f. fouhey')
('1737809', 'Abhinav Gupta', 'abhinav gupta')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
4f9958946ad9fc71c2299847e9ff16741401c591Facial Expression Recognition with Recurrent Neural Networks
Robotics and Embedded Systems Lab, Department of Computer Science
Image Understanding and Knowledge-Based Systems, Department of Computer Science
Technische Universität München, Germany
('1753223', 'Alex Graves', 'alex graves')
('1685773', 'Christoph Mayer', 'christoph mayer')
('32131501', 'Matthias Wimmer', 'matthias wimmer')
('1699132', 'Bernd Radig', 'bernd radig')
[graves,juergen.schmidhuber]@in.tum.de
[mayerc,wimmerm,radig]@informatik.tu-muenchen.de
4f773c8e7ca98ece9894ba3a22823127a70c6e6cA Real-Time System for Head Tracking
and Pose Estimation
Robotics Institute, Carnegie Mellon University
2 Electrical & Controls Integration Lab, General Motors R&D
('29915644', 'Zengyin Zhang', 'zengyin zhang')
('2918263', 'Minyoung Kim', 'minyoung kim')
('1707876', 'Fernando De la Torre', 'fernando de la torre')
('9399514', 'Wende Zhang', 'wende zhang')
4ff11512e4fde3d1a109546d9c61a963d4391addProceedings of the Twenty-Ninth International
Florida Artificial Intelligence Research Society Conference
Selecting Vantage Points for an Autonomous Quadcopter Videographer
Google
Mountain View, CA
Gita Sukthankar
University of Central Florida
Orlando, FL
Google
Mountain View, CA
('3391381', 'Rey Coaguila', 'rey coaguila')
('1694199', 'Rahul Sukthankar', 'rahul sukthankar')
reyc@google.com
gitars@eecs.ucf.edu
sukthankar@google.com
4f028efe6708fc252851eee4a14292b7ce79d378An Integrated Shape and Intensity Coding Scheme for Face Recognition
Department of Computer Science
George Mason University
Fairfax, VA 22030-4444
('39664966', 'Chengjun Liu', 'chengjun liu')
('1781577', 'Harry Wechsler', 'harry wechsler')
{cliu, wechsler}@cs.gmu.edu
4f0bf2508ae801aee082b37f684085adf0d06d23
4ff4c27e47b0aa80d6383427642bb8ee9d01c0acDeep Convolutional Neural Networks and Support
Vector Machines for Gender Recognition
Institute of Arti cial Intelligence and Cognitive Engineering
Faculty of Mathematics and Natural Sciences
University of Groningen, The Netherlands
('3405120', 'Jos van de Wolfshaar', 'jos van de wolfshaar')
4fefd1bc8dc4e0ab37ee3324ddfa43ad9d6a04a7Fashion Landmark Detection in the Wild
The Chinese University of Hong Kong
Shenzhen Key Lab of Comp. Vis. and Pat. Rec., Shenzhen Institutes of Advanced
Technology, CAS, China
('3243969', 'Ziwei Liu', 'ziwei liu')
('1979911', 'Sijie Yan', 'sijie yan')
('1693209', 'Ping Luo', 'ping luo')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
{lz013,siyan,pluo,xtang}@ie.cuhk.edu.hk, xgwang@ee.cuhk.edu.hk
4f4f920eb43399d8d05b42808e45b56bdd36a929International Journal of Computer Applications (0975 – 8887)
Volume 123 – No.4, August 2015
A Novel Method for 3D Image Segmentation with Fusion
of Two Images using Color K-means Algorithm
Neelam Kushwah
Dept. of CSE
ITM Universe
Gwalior
Priusha Narwariya
Dept. of CSE
ITM Universe
Gwalior
4f0b8f730273e9f11b2bfad2415485414b96299fBDD100K: A Diverse Driving Video Database with
Scalable Annotation Tooling
1UC Berkeley
Georgia Institute of Technology
Peking University
4Uber AI Labs
('1807197', 'Fisher Yu', 'fisher yu')
('32324034', 'Fangchen Liu', 'fangchen liu')
('8309711', 'Vashisht Madhavan', 'vashisht madhavan')
('1753210', 'Trevor Darrell', 'trevor darrell')
4f77a37753c03886ca9c9349723ec3bbfe4ee967Localizing Facial Keypoints with Global Descriptor Search,
Neighbour Alignment and Locally Linear Models
1 École Polytechnique de Montréal, Université de Montréal
University of Toronto and Recognyz Systems Technologies
('1972076', 'Christopher Pal', 'christopher pal')
('9422894', 'Sharon Moalem', 'sharon moalem')
md-kamrul.hasan@polymtl.ca, christohper.pal@polymtl.ca, sharon@recognyz.com
4f7b92bd678772552b3c3edfc9a7c5c4a8c60a8eDeep Density Clustering of Unconstrained Faces
University of Maryland, College Park
('3329881', 'Wei-An Lin', 'wei-an lin')
('36407236', 'Jun-Cheng Chen', 'jun-cheng chen')
walin@umd.edu pullpull@cs.umd.edu carlos@cs.umd.edu rama@umiacs.umd.edu
4f36c14d1453fc9d6481b09c5a09e91d8d9ee47aDU,CHELLAPPA: VIDEO-BASED FACE RECOGNITION
Video-Based Face Recognition Using the
Intra/Extra-Personal Difference Dictionary
Department of Electrical and Computer
Engineering
University of Maryland
College Park, USA
('35554856', 'Ming Du', 'ming du')
('9215658', 'Rama Chellappa', 'rama chellappa')
mingdu@umd.edu
rama@umiacs.umd.edu
8d71872d5877c575a52f71ad445c7e5124a4b174
8de06a584955f04f399c10f09f2eed77722f6b1cAuthor manuscript, published in "International Conference on Computer Vision Theory and Applications (VISAPP 2013) (2013)"
8d4f0517eae232913bf27f516101a75da3249d15ARXIV SUBMISSION, MARCH 2018
Event-based Dynamic Face Detection and
Tracking Based on Activity
('2500521', 'Gregor Lenz', 'gregor lenz')
('1773138', 'Sio-Hoi Ieng', 'sio-hoi ieng')
('1750848', 'Ryad Benosman', 'ryad benosman')
8de2dbe2b03be8a99628ffa000ac78f8b66a1028École Nationale Supérieure d'Informatique et de Mathématiques Appliquées de Grenoble
INP Grenoble – ENSIMAG
UFR Informatique et Mathématiques Appliquées de Grenoble
Master 2 internship and final-year project report
Carried out in the LEAR team, I.N.R.I.A., Grenoble
Action Recognition in Videos
3rd year ENSIMAG – Option I.I.I.
M2R Computer Science – specialty A.I.
February 4, 2008 – July 4, 2008
LEAR,
I.N.R.I.A., Grenoble
655 avenue de l'Europe
38 334 Montbonnot
France
Internship supervisor
Ms. Cordelia Schmid
School tutor
Jury
('16585941', 'Gaidon Adrien', 'gaidon adrien')
('31899928', 'M. Augustin Lux', 'm. augustin lux')
('12844736', 'Roger Mohr', 'roger mohr')
('40419740', 'M. James Crowley', 'm. james crowley')
8d3fbdb9783716c1832a0b7ab1da6390c2869c14
Discriminant Subspace Analysis for Uncertain
Situation in Facial Recognition
School of Computing and Communications University of Technology, Sydney
Australia
1. Introduction
Facial analysis and recognition have received substantial attention from researchers in the biometrics, pattern recognition, and computer vision communities. They have a large number of applications, such as security, communication, and entertainment. Although a great deal of effort has been devoted to automated face recognition systems, face recognition remains a challenging problem fraught with uncertainty. This is because human facial appearance potentially exhibits very large intra-subject variations in head pose, illumination, facial expression, occlusion due to other objects or accessories, facial hair, and aging. These misleading variations may cause classifiers to degrade in generalization performance.
It is important for face recognition systems to employ an effective feature extraction scheme that enhances separability between pattern classes, i.e., one that maintains and enhances the features of the input data that make distinct pattern classes separable (Jan, 2004). In general, a number of different feature extraction methods exist. The most common are subspace analysis methods such as principal component analysis (PCA) (Kirby & Sirovich, 1990) (Jolliffe, 1986) (Turk & Pentland, 1991b) and kernel principal component analysis (KPCA) (Schölkopf et al., 1998) (Kim et al., 2002), both of which extract the most informative features and reduce the feature dimensionality, and Fisher's linear discriminant analysis (FLD) (Duda et al., 2000) (Belhumeur et al., 1997) and kernel Fisher's discriminant analysis (KFLD) (Mika et al., 1999) (Scholkopf & Smola, 2002), which discriminate between different patterns; that is, they minimize the intra-class scatter while enhancing the extra-class separability. Discriminant analysis is necessary because the patterns may overlap in decision space.
Recently, Lu et al. (Lu et al., 2003) stated that PCA and LDA are the most widely used conventional tools for dimensionality reduction and feature extraction in appearance-based face recognition. However, because facial features are naturally non-linear while PCA and LDA are inherently linear, these methods face limitations when applied to facial data distributions (Bichsel & Pentland, 1994) (Lu et al., 2003). To overcome such problems, nonlinear methods can be applied to better construct the most discriminative subspace.
In real-world applications, overlapping classes and various environmental variations can significantly impact face recognition accuracy and robustness. Such misleading information makes it difficult for machine learning methods to model facial data. According to Adini et al. (Adini et al., 1997), it is desirable to have a recognition system that can recognize a face insensitively to these within-person variations.
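As a minimal illustration of the subspace-analysis idea discussed above, PCA can be implemented in a few lines of numpy via the SVD. This is a generic sketch, not any of the cited implementations:

```python
import numpy as np

def pca_fit(X, k):
    """Return the mean and the top-k principal directions of the rows of X."""
    mu = X.mean(axis=0)
    # Rows of Vt are right singular vectors; the first k span the subspace
    # of maximal variance.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T

def pca_transform(X, mu, W):
    """Project centered data onto the learned k-dimensional subspace."""
    return (X - mu) @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # 100 samples, 5 raw features
mu, W = pca_fit(X, k=2)
Z = pca_transform(X, mu, W)
assert Z.shape == (100, 2)               # dimensionality reduced to k
```

LDA-style methods differ in that they use class labels to choose directions that separate classes rather than directions of maximal variance, but the project-onto-a-learned-subspace mechanics are the same.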
('3333820', 'Pohsiang Tsai', 'pohsiang tsai')
('2184946', 'Tich Phuoc Tran', 'tich phuoc tran')
('1801256', 'Tom Hintz', 'tom hintz')
('2567343', 'Tony Jan', 'tony jan')
8d42a24d570ad8f1e869a665da855628fcb1378fCVPR
#987
CVPR 2009 Submission #987. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.
An Empirical Study of Context in Object Detection
Anonymous CVPR submission
Paper ID 987
8d8461ed57b81e05cc46be8e83260cd68a2ebb4dAge identification of Facial Images using Neural
Network
CSE Department,CSVTU
RIT, Raipur, Chhattisgarh , INDIA
('7530203', 'Sneha Thakur', 'sneha thakur')
8d4f12ed7b5a0eb3aa55c10154d9f1197a0d84f3Cascaded Pose Regression
Piotr Dollár
California Institute of Technology
('2930640', 'Peter Welinder', 'peter welinder')
('1690922', 'Pietro Perona', 'pietro perona')
{pdollar,welinder,perona}@caltech.edu
8de6deefb90fb9b3f7d451b9d8a1a3264b768482Multibiometric Systems: Fusion Strategies and
Template Security
By
A Dissertation
Submitted to
Michigan State University
in partial fulfillment of the requirements
for the degree of
Doctor of Philosophy
Department of Computer Science and Engineering
2008
('34633765', 'Karthik Nandakumar', 'karthik nandakumar')
8d2c0c9155a1ed49ba576ac0446ec67725468d87A Study of Two Image Representations for Head Pose Estimation
Dept. of Computer Science and Technology,
Tsinghua University, Beijing, China
('1968464', 'Ligeng Dong', 'ligeng dong')
('3265275', 'Linmi Tao', 'linmi tao')
('1797002', 'Guangyou Xu', 'guangyou xu')
dongligeng99@mails.thu.edu.cn,
{linmi, xgy-dcs}@tsinghua.edu.cn
8d384e8c45a429f5c5f6628e8ba0d73c60a51a89Temporal Dynamic Graph LSTM for Action-driven Video Object Detection
The Hong Kong University of Science and Technology 2 Carnegie Mellon University
('38937910', 'Yuan Yuan', 'yuan yuan')
yyuanad@ust.hk, xiaodan1@cs.cmu.edu, xiaolonw@cs.cmu.edu, dyyeung@cse.ust.hk, abhinavg@cs.cmu.edu
8d0243b8b663ca0ab7cbe613e3b886a5d1c8c152Development of Optical Computer Recognition (OCR) for Monitoring Stress and Emotions in Space
Center for Computational Biomedicine Imaging and Modeling Center, Rutgers University, New Brunswick, NJ
USA, 2Unit for Experimental Psychiatry, University of Pennsylvania School of Medicine
Philadelphia, PA, USA
INTRODUCTION. While in space, astronauts are required to perform mission-critical tasks on very expensive
equipment at a high level of functional capability. Stressors can compromise their ability to do so, thus it is very
important to have a system that can unobtrusively and objectively detect neurobehavioral problems involving
elevated levels of behavioral stress and negative emotions. Computerized approaches involving inexpensive cameras
offer an unobtrusive way to detect distress and to monitor observable emotions of astronauts during critical
operations in space, by tracking and analyzing facial expressions and body gestures in video streams. Such systems
can have applications beyond space flight, e.g., surveillance, law enforcement and human computer interaction.
TECHNOLOGY DEVELOPMENT. We developed a framework [1-9] that is capable of real time tracking of faces
and skin blobs of heads and hands. Face tracking uses a group of deformable statistical models of facial shape
variation and local texture distribution to robustly track facial landmarks (e.g., eyes, eyebrows, nose, mouth). The
model tolerates partial occlusions, it automatically detects and recovers from lost track, and it handles head rotations
up to full profile view. The skin blob tracker is initialized with a generic skin color model, dynamically learning the
specific color distribution online for adaptive tracking. Detected facial landmarks and blobs are filtered online, both
in terms of shape and motion, using eigenspace analysis and temporal dynamical models to prune false detections.
We then extract geometric and appearance features to learn models that detect relevant gestures and facial
expressions. In particular, our method utilizes the relative intensity ordering of facial expressions (i.e., neutral, onset,
apex, offset) found in the training set to learn a ranking model (Rankboost) for their recognition and intensity
estimation, which improves our average recognition rate (~87.5% on CMU benchmark database [4,10]). In relation
to stress detection, we piloted an experiment to learn subject-specific models of deception detection using behavioral
cues to discriminate stressed and relaxed behaviors. We video recorded 147 subjects in 12-question interviews after
a mock crime scenario, tracking their facial expressions and body gestures using our algorithm. Using leave-one-out
cross validation we acquired separate Nearest Neighbor models per subject, discriminating deceptive from truthful
responses with an average accuracy of 81.6% [7, 9]. We are currently experimenting with structured sparsity [14]
and super-resolution [11-13] techniques to obtain better quality image features to improve tracking and recognition
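The leave-one-out nearest-neighbor evaluation described above can be sketched as follows; the feature values are hypothetical and this is an illustration of the protocol, not the study's code:

```python
import numpy as np

def loo_nn_accuracy(X, y):
    """Leave-one-out accuracy of a 1-nearest-neighbor classifier on (X, y)."""
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)  # distances to every sample
        d[i] = np.inf                         # hold out the test sample itself
        correct += int(y[int(np.argmin(d))] == y[i])
    return correct / len(X)

# Toy behavioral-feature vectors: two well-separated response classes.
X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])                    # 0 = truthful, 1 = deceptive
assert loo_nn_accuracy(X, y) == 1.0
```

In the per-subject setting described above, this procedure would be run separately on each subject's responses.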
('11788023', 'N. Michael', 'n. michael')
('1748881', 'F. Yang', 'f. yang')
('29384491', 'D. Metaxas', 'd. metaxas')
8d6c4af9d4c01ff47fe0be48155174158a9a5e08Labeling, Discovering, and Detecting Objects in
Images
by
Bryan Christopher Russell
A.B., Computer Science
Dartmouth College
S.M., Electrical Engineering and Computer Science
Massachusetts Institute of Technology
Submitted to the Department of Electrical Engineering and Computer
in partial fulfillment of the requirements for the degree of
Doctor of Philosophy in Electrical Engineering and Computer Science
Science
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
February 2008
c(cid:13) Bryan Christopher Russell, MMVIII. All rights reserved.
The author hereby grants to MIT permission to reproduce and
distribute publicly paper and electronic copies of this thesis document
in whole or in part.
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Department of Electrical Engineering and Computer Science
January 28, 2007
Certified by . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
William T. Freeman
Professor
Thesis Supervisor
Accepted by . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Terry P. Orlando
Chairman, Department Committee on Graduate Students
8dcc95debd07ebab1721c53fa50d846fef265022MicroExpNet: An Extremely Small and Fast Model For Expression Recognition
From Frontal Face Images
İlke Çuğu, Eren Şener, Emre Akbaş
Middle East Technical University
06800 Ankara, Turkey
{cugu.ilke, sener.eren}@metu.edu.tr, emre@ceng.metu.edu.tr
8dbe79830713925affc48d0afa04ed567c54724b
8d1adf0ac74e901a94f05eca2f684528129a630aFacial Expression Recognition Using Facial
Movement Features
8d91f06af4ef65193f3943005922f25dbb483ee4Facial Expression Classification Using Rotation
Slepian-based Moment Invariants
Faculty of Science and Technology, University of Macau
Macao, China
('2888882', 'Cuiming Zou', 'cuiming zou')
('3369665', 'Kit Ian Kou', 'kit ian kou')
8dc9de0c7324d098b537639c8214543f55392a6bPose-invariant 3d object recognition using linear combination of 2d views and
evolutionary optimisation
Department of Computer Science,
University College London
Malet Place, London, WC1E 6BT
('1797883', 'Vasileios Zografos', 'vasileios zografos')
('31557997', 'Bernard F. Buxton', 'bernard f. buxton')
{v.zografos, b.buxton}@cs.ucl.ac.uk
8d712cef3a5a8a7b1619fb841a191bebc2a17f15
8d646ac6e5473398d668c1e35e3daa964d9eb0f6MEMORY-EFFICIENT GLOBAL REFINEMENT OF DECISION-TREE ENSEMBLES AND
ITS APPLICATION TO FACE ALIGNMENT
Nenad Markuš†
Ivan Gogić†
Igor S. Pandžić†
Jörgen Ahlberg‡
University of Zagreb, Faculty of Electrical Engineering and Computing, Unska 3, 10000 Zagreb, Croatia
Computer Vision Laboratory, Linköping University, SE-581 83 Linköping, Sweden
8dffbb6d75877d7d9b4dcde7665888b5675deee1Emotion Recognition with Deep-Belief
Networks
Introduction
For our CS229 project, we studied the problem of reliable computerized emotion recognition in images of human faces. First, we performed a preliminary exploration using SVM classifiers, and then developed an approach based on Deep Belief Nets. Deep Belief Nets, or DBNs, are probabilistic generative models composed of multiple layers of stochastic latent variables, where each "building block" layer is a Restricted Boltzmann Machine (RBM). DBNs have a greedy layer-wise unsupervised learning algorithm as well as a discriminative fine-tuning procedure for optimizing performance on classification tasks [1].
We trained our classifier on three databases: the
Cohn-Kanade Extended Database (CK+) [2], the Japanese
Female Facial Expression Database (JAFFE) [3], and the
Yale Face Database (YALE) [4]. We tested several
different database configurations, image pre-processing
settings, and DBN parameters, and obtained test errors as
low as 20% on a limited subset of the emotion labels.
Finally, we created a real-time system which takes
images of a single subject using a computer webcam and
classifies the emotion shown by the subject.
Part 1: Exploration of SVM-based approaches
To set a baseline for comparison, we applied an
SVM classifier to the emotion images in the CK+
database, using the LIBLINEAR library and its MATLAB
interface [5]. This database contains 593 image sequences
across 123 human subjects, beginning with a "neutral" expression and showing the progression to one of seven "peak" emotions. When given both a neutral and an expressive face to compare, the SVM obtained accuracy as high as 90%. This section summarizes the implementation of the SVM classifier. For additional details on this stage of the project, please see our Milestone document.
Part 1.1 Choice of labels (emotion numbers vs. FACS
features)
The CK+ database offers two sets of emotion
features: “emotion numbers” and FACS features. Emotion
numbers are integer values representing the main emotion
shown in the “peak emotion” image. The emotions are
coded as follows: 1=anger, 2=contempt, 3=disgust,
4=fear, 5=happiness, 6=sadness, and 7=surprise.
The other labeling option is called FACS, or the Facial Action Coding System. FACS decomposes every facial emotion into a set of Action Units (AUs), which describe the specific muscle groups involved in forming the emotion. We chose not to use FACS because accurate labeling currently requires trained human experts [8], and we are interested in creating an automated system.
Part 1.2 Features
Part 1.2.1 Norm of differences between neutral face
and full emotion
Each of the CK+ images has been hand-labeled with
68 standard Active Appearance Models (AAM) face
landmarks that describe the X and Y position of these
landmarks on the image (Figure 1).
Figure 1. AAM Facial Landmarks
We initially trained the SVM on the norm of the
vector differences in landmark positions between the
neutral and peak expressions. With this approach, the
training error was approximately 35% for hold out cross
validation (see Figure 2).
Figure 2. Accuracy of SVM with norm-displacement features.
Figure 3. Accuracy of SVM with separate X, Y displacement features.
Part 1.2.2 Separate X and Y differences between
neutral face and full emotion
Because the initial approach did not differentiate between displacements of landmarks in different directions, we also provided the differences in the X and Y components of each landmark separately. This doubled the size of our feature vector and resulted in a significant (about 20%) improvement in accuracy (Figure 3).
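The two feature constructions above can be sketched for (68, 2) arrays of AAM landmark coordinates; this is illustrative numpy, not the project's code:

```python
import numpy as np

def norm_displacement_features(neutral, peak):
    """One scalar per landmark: the Euclidean displacement magnitude."""
    return np.linalg.norm(peak - neutral, axis=1)   # shape (68,)

def xy_displacement_features(neutral, peak):
    """Separate X and Y displacements, doubling the feature length."""
    return (peak - neutral).ravel()                 # shape (136,)

neutral = np.zeros((68, 2))
peak = np.ones((68, 2))
assert norm_displacement_features(neutral, peak).shape == (68,)
assert xy_displacement_features(neutral, peak).shape == (136,)
```

The second construction preserves the direction of each landmark's motion, which is what the norm-based features discard.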
Part 1.2.3 Feature Selection
Finally, we visualized which features were the most
important for classifying each emotion; the results can be
seen in Figure 4. The figure shows the X and Y
('39818775', 'Tom McLaughlin', 'tom mclaughlin')
8d5998cd984e7cce307da7d46f155f9db99c6590ChaLearn Looking at People:
A Review of Events and Resources
1 Dept. Mathematics and Computer Science, UB, Spain,
2 Computer Vision Center, UAB, Barcelona, Spain,
EIMT, Open University of Catalonia, Barcelona, Spain
4 ChaLearn, California, USA, 5 INAOE, Puebla, Mexico,
6 Universit´e Paris-Saclay, Paris, France,
http://chalearnlap.cvc.uab.es
('7855312', 'Sergio Escalera', 'sergio escalera')
('1742688', 'Hugo Jair Escalante', 'hugo jair escalante')
('1743797', 'Isabelle Guyon', 'isabelle guyon')
sergio.escalera.guerrero@gmail.com
8dce38840e6cf5ab3e0d1b26e401f8143d2a6bffTowards large scale multimedia indexing:
A case study on person discovery in broadcast news
Idiap Research Institute and EPFL, 2 LIMSI, CNRS, Univ. Paris-Sud, Université Paris-Saclay
3 CNRS, Irisa & Inria Rennes, 4 PUC de Minas Gerais, Belo Horizonte,
Universitat Politècnica de Catalunya, 6 University of Vigo, 7 LIUM, University of Maine
('39560344', 'Nam Le', 'nam le')
('2578933', 'Hervé Bredin', 'hervé bredin')
('2710421', 'Gabriel Sargent', 'gabriel sargent')
('2613332', 'Miquel India', 'miquel india')
('1794658', 'Paula Lopez-Otero', 'paula lopez-otero')
('1802247', 'Claude Barras', 'claude barras')
('1804407', 'Camille Guinaudeau', 'camille guinaudeau')
('1708671', 'Guillaume Gravier', 'guillaume gravier')
('23556030', 'Gabriel Barbosa da Fonseca', 'gabriel barbosa da fonseca')
('32255257', 'Izabela Lyon Freire', 'izabela lyon freire')
('37401316', 'Gerard Martí', 'gerard martí')
('2585946', 'Josep Ramon Morros', 'josep ramon morros')
('1726311', 'Javier Hernando', 'javier hernando')
('2446815', 'Sylvain Meignier', 'sylvain meignier')
('1719610', 'Jean-Marc Odobez', 'jean-marc odobez')
nle@idiap.ch,bredin@limsi.fr,gabriel.sargent@irisa.fr,miquel.india@tsc.upc.edu,plopez@gts.uvigo.es
153f5ad54dd101f7f9c2ae17e96c69fe84aa9de4Overview of algorithms for face detection and
tracking
Nenad Markuš
155199d7f10218e29ddaee36ebe611c95cae68c4Towards Scalable Visual Navigation of
Micro Aerial Vehicles
Robotics Institute
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
April 2016
Thesis Supervisors:
Prof. Dr. Martial Hebert
Prof. Dr. J. Andrew Bagnell
Submitted in partial fulfillment of the requirements
for the degree of Master of Science in Robotics.
CMU-RI-TR-16-07
('2739544', 'Shreyansh Daftry', 'shreyansh daftry')
('2739544', 'Shreyansh Daftry', 'shreyansh daftry')
daftry@cmu.edu
15cd05baa849ab058b99a966c54d2f0bf82e7885Structured Sparse Subspace Clustering: A Unified Optimization Framework
SICE, Beijing University of Posts and Telecommunications. 2Center for Imaging Science, Johns Hopkins University
In many real-world applications, we need to deal with high-dimensional
datasets, such as images, videos, text, and more. In practice, such high-
dimensional datasets can be well approximated by multiple low-dimensional
subspaces corresponding to multiple classes or categories. For example, the
feature point trajectories associated with a rigidly moving object in a video
lie in an affine subspace (of dimension up to 4), and face images of a subject
under varying illumination lie in a linear subspace (of dimension up to 9).
Therefore, the task, known in the literature as subspace clustering [6], is
to segment the data into the corresponding subspaces and finds multiple
applications in computer vision.
State of the art approaches [1, 2, 3, 4, 5, 7] for solving this problem fol-
low a two-stage approach: a) Construct an affinity matrix between points by
exploiting the ‘self-expressiveness’ property of the data, which allows any
data point to be represented as a linear (or affine) combination of the other
data points; b) Apply spectral clustering on the affinity matrix to recover
the data segmentation. Dividing the problem in two steps is, on the one
hand, appealing because the first step can be solved using convex optimiza-
tion techniques, while the second one can be solved using existing spectral
clustering techniques. On the other hand, its major disadvantage is that the
natural relationship between the affinity matrix and the segmentation of the
data is not explicitly captured.
In this paper, we attempt to integrate the two separate stages into one
unified optimization framework. One important motivating observation is
that a perfect subspace clustering can often be obtained from an imperfect affinity matrix. In other words, the spectral clustering step can clean up
the disturbance in the affinity matrix – which can be viewed as a process of
information gain by denoising. Because of this, if we feed back the infor-
mation gain properly, it may help the self-expressiveness model to yield a
better affinity matrix.
To jointly estimate the clustering and the affinity matrix, we define a subspace structured ℓ1 norm as follows:

  ‖Z‖_{1,Q} = ‖(11ᵀ + αΘ) ⊙ Z‖_1,   (1)

where α > 0 is a tradeoff parameter, Θij ∈ {0,1} indicates whether two data points belong to different subspaces (Θij = 0 if points i and j lie in the same subspace and Θij = 1 otherwise), and 1 is the vector of all ones of appropriate dimension.
Equipped with the subspace structured ℓ1 norm of Z, we then define the unified optimization framework for subspace clustering as follows:

  min_{Z,E,Q}  ‖Z‖_{1,Q} + λ‖E‖_ℓ   s.t.  X = XZ + E,  diag(Z) = 0,  Q ∈ Q,   (2)

where Q is the set of all valid binary segmentation matrices defined as

  Q = {Q ∈ {0,1}^{N×k} : Q1 = 1 and rank(Q) = k},   (3)

and the norm ‖·‖_ℓ on the error term E depends upon the prior knowledge about the pattern of noise or corruptions. We call problem (2) Structured Sparse Subspace Clustering (SSSC or S3C).
The solution to the optimization problem in (2) is based on alternately
solving the following two subproblems: a) find Z and E given Q by solving
a weighted sparse representation problem; b) find Q given Z and E by
spectral clustering. We solve this problem efficiently via a combination of
the alternating direction method of multipliers with spectral clustering.
Experiments on synthetic data, the Hopkins 155 motion segmentation
database, and the Extended Yale B data set demonstrate its effectiveness.
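The alternation between the two subproblems can be sketched in a few dozen lines. The following is a minimal illustration, not the authors' implementation: the weighted sparse-representation subproblem is solved with a plain proximal-gradient (ISTA) loop as a stand-in for the paper's ADMM solver, the segmentation step uses unnormalized spectral clustering, and all function names (`weighted_sparse_coding`, `s3c`, ...) are our own.

```python
import numpy as np

def weighted_sparse_coding(X, Theta, alpha=1.0, lam=20.0, n_iter=300):
    """Subproblem (a): min_Z ||(1 1^T + alpha*Theta) .* Z||_1
    + (lam/2) ||X - X Z||_F^2  with diag(Z) = 0.
    Solved by proximal gradient (ISTA), a simple stand-in for ADMM."""
    N = X.shape[1]
    W = np.ones((N, N)) + alpha * Theta          # entrywise l1 weights
    Z = np.zeros((N, N))
    G = X.T @ X
    step = 1.0 / (lam * np.linalg.norm(G, 2) + 1e-12)
    for _ in range(n_iter):
        V = Z - step * lam * (G @ Z - G)         # gradient step on the fit term
        Z = np.sign(V) * np.maximum(np.abs(V) - step * W, 0.0)
        np.fill_diagonal(Z, 0.0)
    return Z

def tiny_kmeans(U, k, n_iter=50):
    """Deterministic k-means with farthest-point initialization."""
    centers = U[:1].copy()
    while len(centers) < k:
        d = ((U[:, None, :] - centers[None]) ** 2).sum(-1).min(1)
        centers = np.vstack([centers, U[d.argmax()]])
    for _ in range(n_iter):
        labels = ((U[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = U[labels == j].mean(0)
    return labels

def spectral_segmentation(Z, k):
    """Subproblem (b): spectral clustering on the affinity |Z| + |Z|^T."""
    A = np.abs(Z) + np.abs(Z).T
    L = np.diag(A.sum(1)) - A                    # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)
    return tiny_kmeans(vecs[:, :k], k)           # k smallest eigenvectors

def s3c(X, k, n_outer=3, **kw):
    """Alternate the two subproblems; Theta = 0 recovers plain SSC."""
    Theta = np.zeros((X.shape[1], X.shape[1]))
    for _ in range(n_outer):
        Z = weighted_sparse_coding(X, Theta, **kw)
        labels = spectral_segmentation(Z, k)
        Theta = (labels[:, None] != labels[None, :]).astype(float)
    return labels, Z
```

On a toy example with points drawn from two orthogonal 1-D subspaces, this alternation yields a block-diagonal affinity and the correct segmentation after a couple of outer iterations.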
Some results are presented in Figure 1 and Tables 1 and 2. Figure 1 shows
the improvement in both the affinity matrix and the subspace clustering us-
ing S3C over SSC on a subset of face images of three subjects from the
('9171002', 'Chun-Guang Li', 'chun-guang li')
('1745721', 'René Vidal', 'rené vidal')
15136c2f94fd29fc1cb6bedc8c1831b7002930a6Deep Learning Architectures for Face
Recognition in Video Surveillance
('2805645', 'Saman Bashbaghi', 'saman bashbaghi')
('1697195', 'Eric Granger', 'eric granger')
('1744351', 'Robert Sabourin', 'robert sabourin')
('3046171', 'Mostafa Parchami', 'mostafa parchami')
15affdcef4bb9d78b2d3de23c9459ee5b7a43fcbSemi-Supervised Classification Using Linear
Neighborhood Propagation
Tsinghua University, Beijing 100084, P.R.China
The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong
Semi-Supervised Classification
A Toy Example
Shape Ranking
Digits Ranking
Interactive Image Segmentation
[Figure: toy example in four panels: (a) 4-NN connected graph; (b) classification results by LNP; (c) classification results by nearest neighbor; (d) classification results by transductive SVM. Legend: class 1, class 2.]
Multi-Class Semi-Supervised Classification
• Label set: L = {1, 2, ..., c}
• M is the set of n × c matrices with non-negative real-valued entries
• F = [f_1, f_2, ..., f_c] ∈ M corresponds to a specific classification on X
• The entry F_ij can be regarded as the likelihood that x_i belongs to class j
• The label of x_i can be computed by y_i = arg max_{j ≤ c} F_ij
Induction
• plug a test example x_t into the cost function
• keep the labels of all x_i ∈ X fixed when inducing the label of x_t

\min \eta^{\star}(x_t) = \Big\| f_t - \sum_{x_j \in N(x_t)} w(x_t, x_j) f_j \Big\|^2    (5)
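With the training labels held fixed, minimizing (5) over f_t has the closed form f_t = Σ_j w(x_t, x_j) f_j, and the induced class is the argmax of f_t. A two-line illustration (the function name and matrix layout are our own convention, not from the poster):

```python
import numpy as np

def induce_label(w_t, F_neighbors):
    """Closed-form minimizer of (5): f_t = sum_j w(x_t, x_j) f_j.
    `w_t`: (k,) reconstruction weights of the test point w.r.t. its neighbors.
    `F_neighbors`: (k, c) label matrix of those neighbors (one row per neighbor)."""
    f_t = w_t @ F_neighbors
    return int(np.argmax(f_t))
```

The predicted label is simply the class whose weighted neighbor support is largest.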
Learning from partially labeled data
• Face/object recognition
• Image/video retrieval
• Interactive image segmentation
Graph-Based Semi-Supervised Classification
Represent the dataset as a weighted undirected graph G = ⟨V, E⟩
• V: the node set, corresponding to the dataset
• E: the edge set, corresponding to the pairwise relationships

w_{ij} = \exp\{-2\beta \|x_i - x_j\|^2\}    (1)
Cluster Assumption
• nearby points are likely to have the same label
• points on the same structure (such as a cluster or a submanifold) are
prone to have the same label
⟹ similar to manifold analysis (ISOMAP, LLE, Laplacian Eigenmap, ...)
⟹ incorporate the neighborhood information into graph construction
Linear Neighborhoods
The data point can be linearly reconstructed from its k-nearest neighbors:

\min \varepsilon_i = \Big\| x_i - \sum_{x_j \in N(x_i)} w_{ij} x_j \Big\|^2 \quad \text{s.t.} \quad \sum_j w_{ij} = 1,\ w_{ij} \ge 0    (2)

• w_ij reflects the similarity between x_j and x_i
• How to solve it? ⟹ quadratic programming
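As a concrete sketch, the constrained least-squares problem (2) for a single point can be handed to any QP solver; the toy version below instead uses projected gradient descent onto the probability simplex. This is our own simplification for illustration, not the poster's solver, and all names are hypothetical.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def reconstruction_weights(x, neighbors, n_iter=500):
    """Solve min_w ||x - N w||^2  s.t.  w >= 0, sum(w) = 1
    by projected gradient descent (a simple stand-in for a QP solver).
    `neighbors` is a (d, k) matrix whose columns are the k nearest neighbors."""
    k = neighbors.shape[1]
    w = np.full(k, 1.0 / k)
    G = neighbors.T @ neighbors
    b = neighbors.T @ x
    step = 1.0 / (np.linalg.norm(G, 2) + 1e-12)
    for _ in range(n_iter):
        w = project_simplex(w - step * (G @ w - b))
    return w
```

The simplex constraint (weights non-negative and summing to one) is exactly what makes the resulting w_ij usable as graph edge weights.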
Collaborative Label Prediction
The label of an unlabeled point can be linearly reconstructed from its
neighbors' labels:

\min \eta = \sum_{i=1}^{n} \Big\| f_i - \sum_{x_j \in N(x_i)} w_{ij} f_j \Big\|^2 \quad \text{s.t.} \quad f_i = l_i \ \text{for all labeled points } x_i    (3)

• w_ij is calculated through solving Eq. (2).
• The neighborhood information is incorporated into label prediction.
How to solve Eq. (3)?

\eta = \sum_{i=1}^{n} \Big\| f_i - \sum_{x_j \in N(x_i)} w_{ij} f_j \Big\|^2 = f^\top (I - W)^\top (I - W) f    (4)

⟹ minimize η ⟺ (I − W)f = 0, s.t. f_i = l_i
• Refer to Zhu, Ghahramani, & Lafferty (2003).
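One way to solve this constrained quadratic, in the spirit of the harmonic-function approach of Zhu et al.: fix the labeled entries of f and solve the stationarity condition on the unlabeled block of (4). A minimal sketch; the function name and the NaN-based label encoding are our own convention.

```python
import numpy as np

def propagate_labels(W, labels):
    """Minimize f^T (I - W)^T (I - W) f subject to f_i = l_i on labeled points.
    `W` is the (n, n) reconstruction-weight matrix from Eq. (2);
    `labels` holds class values for labeled points and np.nan for unlabeled."""
    lab = ~np.isnan(labels)
    unl = ~lab
    classes = np.unique(labels[lab])
    # one-hot targets for the labeled points
    F_l = (labels[lab][:, None] == classes[None, :]).astype(float)
    I = np.eye(len(labels))
    M = (I - W).T @ (I - W)
    # stationarity on the unlabeled block: M_uu f_u + M_ul f_l = 0
    F_u = np.linalg.solve(M[np.ix_(unl, unl)], -M[np.ix_(unl, lab)] @ F_l)
    F = np.zeros((len(labels), len(classes)))
    F[lab] = F_l
    F[unl] = F_u
    return classes[F.argmax(axis=1)]
```

Each unlabeled point thus receives the class whose propagated score is largest, with the labeled points acting as boundary conditions of the linear system.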
Recognition
[Figure: recognition accuracies on the ORL database, comparing LNP, Consistency, Kernel Eigenface, Fisherface, and Eigenface.]
[Figure: recognition accuracies on the COIL database, comparing LNP, Consistency, Kernel PCA, PCA+LDA, and PCA.]
References
• S.T. Roweis and L.K. Saul. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science, vol. 290, pp. 2323-2326, 2000.
• O. Chapelle, et al. (eds.). Semi-Supervised Learning. MIT Press: Cambridge, MA, 2006.
• A. Levin, D. Lischinski and Y. Weiss. Colorization using Optimization. SIGGRAPH, ACM Transactions on Graphics, Aug 2004.
Data Ranking
[Figure: (a) ranking result by Euclidean distance; (b) ranking result by LNP.]
• Zhu, X., Ghahramani, Z., & Lafferty, J. (2003). Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions. In Proceedings of the 20th International Conference on Machine Learning.
('34410258', 'Fei Wang', 'fei wang')
('1688516', 'Jingdong Wang', 'jingdong wang')
('1700883', 'Changshui Zhang', 'changshui zhang')
('7969645', 'Helen C. Shen', 'helen c. shen')
15d653972d176963ef0ad2cc582d3b35ca542673CSVideoNet: A Real-time End-to-end Learning Framework for
High-frame-rate Video Compressive Sensing
School of Computing, Informatics, and Decision Systems Engineering
Arizona State University, Tempe AZ
('47831601', 'Kai Xu', 'kai xu')
('40615963', 'Fengbo Ren', 'fengbo ren')
{kaixu, renfengbo}@asu.edu
159e792096756b1ec02ec7a980d5ef26b434ff78Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence
Signed Laplacian Embedding for Supervised Dimension Reduction
Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University
Centre for Quantum Computation and Intelligent Systems, University of Technology Sydney
('1710691', 'Chen Gong', 'chen gong')
('1692693', 'Dacheng Tao', 'dacheng tao')
('39264954', 'Jie Yang', 'jie yang')
('1847070', 'Keren Fu', 'keren fu')
{goodgongchen, jieyang, fkrsuper}@sjtu.edu.cn
dacheng.tao@uts.edu.au
153e5cddb79ac31154737b3e025b4fb639b3c9e7PREPRINT SUBMITTED TO IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Active Dictionary Learning in Sparse
Representation Based Classification
('1935596', 'Jin Xu', 'jin xu')
('2198278', 'Haibo He', 'haibo he')
('1881104', 'Hong Man', 'hong man')
1586871a1ddfe031b885b94efdbff647cf03eff1A Visual Historical Record of American High School Yearbooks
A Century of Portraits:
University of California Berkeley
Brown University
University of California Berkeley
('2361255', 'Shiry Ginosar', 'shiry ginosar')
('2660664', 'Kate Rakelly', 'kate rakelly')
('33385802', 'Sarah Sachs', 'sarah sachs')
('2130100', 'Brian Yin', 'brian yin')
('1763086', 'Alexei A. Efros', 'alexei a. efros')
15cf7bdc36ec901596c56d04c934596cf7b43115(IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 8, No. 9, 2017
Face Extraction from Image based on K-Means
Clustering Algorithms
Faculty of Computer, Khoy Branch, Islamic Azad University, Khoy, Iran
('2062871', 'Yousef Farhang', 'yousef farhang')
1576ed0f3926c6ce65e0ca770475bca6adcfdbb4Keep it Accurate and Diverse: Enhancing Action Recognition Performance by
Ensemble Learning
Faculty of Computer Science, Dalhousie University, Halifax, Canada
Computer Vision Center, UAB
Edificio O, Campus UAB, 08193, Bellaterra (Cerdanyola), Barcelona, Spain
University of Barcelona
Gran Via de les Corts Catalanes, 585, 08007, Barcelona
Visual Analysis of People (VAP) Laboratory
Rendsburggade 14, 9000 Aalborg, Denmark
('1921285', 'Mohammad Ali Bagheri', 'mohammad ali bagheri')
('3212432', 'Qigang Gao', 'qigang gao')
('7855312', 'Sergio Escalera', 'sergio escalera')
('1803459', 'Kamal Nasrollahi', 'kamal nasrollahi')
('1876184', 'Michael B. Holte', 'michael b. holte')
('1700569', 'Thomas B. Moeslund', 'thomas b. moeslund')
bagheri@cs.dal.ca
sergio@maia.ub.es, aclapes@cvc.uab.cat
{kn, mbh, tbm}@create.aau.dk
156cd2a0e2c378e4c3649a1d046cd080d3338bcaExemplar based approaches on Face Fiducial Detection and
Frontalization
Thesis submitted in partial fulfillment
of the requirements for the degree of
MS by Research
in
Computer Science & Engineering
by
Mallikarjun B R
201307681
International Institute of Information Technology
Hyderabad - 500 032, India
May 2017
mallikarjun.br@research.iiit.ac.in
157eb982da8fe1da4c9e07b4d89f2e806ae4ceb6MITSUBISHI ELECTRIC RESEARCH LABORATORIES
http://www.merl.com
Connecting the Dots in Multi-Class Classification: From
Nearest Subspace to Collaborative Representation
Chi, Y.; Porikli, F.
TR2012-043
June 2012
15e0b9ba3389a7394c6a1d267b6e06f8758ab82bXu et al. IPSJ Transactions on Computer Vision and
Applications (2017) 9:24
DOI 10.1186/s41074-017-0035-2
IPSJ Transactions on Computer
Vision and Applications
TECHNICAL NOTE
Open Access
The OU-ISIR Gait Database comprising the
Large Population Dataset with Age and
performance evaluation of age estimation
('7513255', 'Chi Xu', 'chi xu')
('1689334', 'Yasushi Makihara', 'yasushi makihara')
('12881056', 'Gakuto Ogi', 'gakuto ogi')
('1737850', 'Xiang Li', 'xiang li')
('1715071', 'Yasushi Yagi', 'yasushi yagi')
('6120396', 'Jianfeng Lu', 'jianfeng lu')
151481703aa8352dc78e2577f0601782b8c41b34Appearance Manifold of Facial Expression
Queen Mary, University of London, London E1 4NS, UK
Department of Computer Science
('10795229', 'Caifeng Shan', 'caifeng shan')
('2073354', 'Shaogang Gong', 'shaogang gong')
('2803283', 'Peter W. McOwan', 'peter w. mcowan')
{cfshan,sgg,pmco}@dcs.qmul.ac.uk
15aa6c457678e25f6bc0e818e5fc39e42dd8e533
15cf1f17aeba62cd834116b770f173b0aa614bf4International Journal of Computer Applications (0975 – 8887)
Volume 77 – No.5, September 2013
Facial Expression Recognition using Neural Network with
Regularized Back-propagation Algorithm
Research Scholar
Department of ECE,

Phagwara, India
Assistant Professor
Department of ECE,

Phagwara, India
Research Scholar
Department of ECE,
Gyan Ganga Institute of
Technology & Sciences,
Jabalpur, India
('35358999', 'Ashish Kumar Dogra', 'ashish kumar dogra')
('50227570', 'Nikesh Bajaj', 'nikesh bajaj')
1565721ebdbd2518224f54388ed4f6b21ebd26f3Face and Landmark Detection by Using Cascade of Classifiers
Eskisehir Osmangazi University
Eskisehir, Turkey
Laboratoire Jean Kuntzmann
Grenoble Cedex 9, France
Czech Technical University
Praha, Czech Republic
('2277308', 'Hakan Cevikalp', 'hakan cevikalp')
('1756114', 'Bill Triggs', 'bill triggs')
('1778663', 'Vojtech Franc', 'vojtech franc')
hakan.cevikalp@gmail.com
Bill.Triggs@imag.fr
xfrancv@cmp.felk.cvut.cz
15f3d47b48a7bcbe877f596cb2cfa76e798c6452Automatic face analysis tools for interactive digital games
Anonymised for blind review
Anonymous
Anonymous
Anonymous
15728d6fd5c9fc20b40364b733228caf63558c31('2831988', 'IAN N. ENDRES', 'ian n. endres')
15252b7af081761bb00535aac6bd1987391f9b79ESTIMATION OF EYE GAZE DIRECTION ANGLES BASED ON ACTIVE APPEARANCE
MODELS
School of E.C.E., National Technical University of Athens, 15773 Athens, Greece
('2539459', 'Petros Koutras', 'petros koutras')
('1750686', 'Petros Maragos', 'petros maragos')
Email: {pkoutras, maragos}@cs.ntua.gr
1513949773e3a47e11ab87d9a429864716aba42d
15ee80e86e75bf1413dc38f521b9142b28fe02d1Towards a Deep Learning Framework for
Unconstrained Face Detection
CyLab Biometrics Center and the Department of Electrical and Computer Engineering,
Carnegie Mellon University, Pittsburgh, PA, USA
('3049981', 'Yutong Zheng', 'yutong zheng')
('3117715', 'Chenchen Zhu', 'chenchen zhu')
('6131978', 'T. Hoang Ngan Le', 't. hoang ngan le')
('1769788', 'Khoa Luu', 'khoa luu')
('2043374', 'Chandrasekhar Bhagavatula', 'chandrasekhar bhagavatula')
('1794486', 'Marios Savvides', 'marios savvides')
{yutongzh, chenchez, kluu, cbhagava, thihoanl}@andrew.cmu.edu, msavvid@ri.cmu.edu
153c8715f491272b06dc93add038fae62846f498('33047058', 'JONGWOO LIM', 'jongwoo lim')
15e27f968458bf99dd34e402b900ac7b34b1d5758362
2014 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP)
University of Toronto
1. INTRODUCTION
('2030736', 'Mohammad Shahin Mahanta', 'mohammad shahin mahanta')
('1705037', 'Konstantinos N. Plataniotis', 'konstantinos n. plataniotis')
Email: {mahanta, kostas} @ece.utoronto.ca
15f70a0ad8903017250927595ae2096d8b263090Learning Robust Deep Face Representation
University of Science and Technology Beijing
Beijing, China
('2225749', 'Xiang Wu', 'xiang wu')alfredxiangwu@gmail.com
1564bf0a268662df752b68bee5addc4b08868739With Whom Do I Interact?
Detecting Social Interactions in Egocentric
Photo-streams
University of Barcelona
Barcelona, Spain
Computer Vision Center and
University of Barcelona
Barcelona, Spain
Computer Vision Center and
University of Barcelona
Barcelona, Spain
('2084534', 'Maedeh Aghaei', 'maedeh aghaei')
('2837527', 'Mariella Dimiccoli', 'mariella dimiccoli')
('1724155', 'Petia Radeva', 'petia radeva')
Email:maghaeigavari@ub.edu
158e32579e38c29b26dfd33bf93e772e6211e188Automated Real Time Emotion Recognition using
Facial Expression Analysis
by
A thesis submitted to the Faculty of Graduate and Postdoctoral
Affairs in partial fulfillment of the requirements for the degree of
Master
of
Computer Science
Carleton University
Ottawa, Ontario
122f51cee489ba4da5ab65064457fbe104713526Long Short Term Memory Recurrent Neural Network based
Multimodal Dimensional Emotion Recognition
National Laboratory of Pattern Recognition
Institute of Automation
Chinese Academy of Sciences
('1850313', 'Linlin Chao', 'linlin chao')
('37670752', 'Jianhua Tao', 'jianhua tao')
('2740129', 'Minghao Yang', 'minghao yang')
('1704841', 'Ya Li', 'ya li')
linlin.chao@nlpr.ia.ac.cn
jhtao@nlpr.ia.ac.cn
mhyang@nlpr.ia.ac.cn
yli@nlpr.ia.ac.cn
121503705689f46546cade78ff62963574b4750bWe don’t need no bounding-boxes:
Training object class detectors using only human verification
University of Edinburgh
('1749373', 'Dim P. Papadopoulos', 'dim p. papadopoulos')
('1823362', 'Jasper R. R. Uijlings', 'jasper r. r. uijlings')
('48716849', 'Frank Keller', 'frank keller')
('1749692', 'Vittorio Ferrari', 'vittorio ferrari')
dim.papadopoulos@ed.ac.uk
jrr.uijlings@ed.ac.uk
keller@inf.ed.ac.uk
vferrari@inf.ed.ac.uk
125d82fee1b9fbcc616622b0977f3d06771fc152Hierarchical Face Parsing via Deep Learning
The Chinese University of Hong Kong
The Chinese University of Hong Kong
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
('1693209', 'Ping Luo', 'ping luo')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
pluo.lhi@gmail.com
xgwang@ee.cuhk.edu.hk
xtang@ie.cuhk.edu.hk
1255afbf86423c171349e874b3ac297de19f00cdRobust Face Recognition by Computing Distances
from Multiple Histograms of Oriented Gradients
Institute of Artificial Intelligence and Cognitive Engineering (ALICE), University of Groningen
Nijenborgh 9, Groningen, The Netherlands
('3351361', 'Mahir Faik Karaaba', 'mahir faik karaaba')
('1728531', 'Olarik Surinta', 'olarik surinta')
('1799278', 'Lambert Schomaker', 'lambert schomaker')
Email: {m.f.karaaba, o.surinta, l.r.b.schomaker, m.a.wiering}@rug.nl
1275d6a800f8cf93c092603175fdad362b69c191Deep Face Recognition: A Survey
School of Information and Communication Engineering,
Beijing University of Posts and Telecommunications, Beijing, China
still have an inevitable limitation on robustness against complex nonlinear
facial appearance variations.
In general, traditional methods attempted to solve the FR problem with
one- or two-layer representations, such as filtering responses or histograms
of feature codes. The research community worked intensively on separately
improving the preprocessing, the local descriptors, and the feature
transformation, but these efforts improved face recognition accuracy only
slowly. After a decade of continuous improvement, "shallow" methods raised
accuracy on the LFW benchmark only to about 95% [26], which indicates that
"shallow" methods are insufficient to extract identity features that are
stable under unconstrained facial variations. Owing to this technical
insufficiency, facial recognition systems were often reported to perform
unstably or to fail with countless false alarms in real-world applications.
('2285767', 'Mei Wang', 'mei wang')
('1774956', 'Weihong Deng', 'weihong deng')
wm0245@126.com, whdeng@bupt.edu.cn
126535430845361cd7a3a6f317797fe6e53f5a3bRobust Photometric Stereo via Low-Rank Matrix
Completion and Recovery
School of Optics and Electronics, Beijing Institute of Technology, Beijing
Coordinated Science Lab, University of Illinois at Urbana-Champaign
Key Laboratory of Machine Perception, Peking University, Beijing
§Visual Computing Group, Microsoft Research Asia, Beijing
('2417838', 'Lun Wu', 'lun wu')
('1701028', 'Arvind Ganesh', 'arvind ganesh')
('35580784', 'Boxin Shi', 'boxin shi')
('1774618', 'Yasuyuki Matsushita', 'yasuyuki matsushita')
('1692621', 'Yongtian Wang', 'yongtian wang')
('1700297', 'Yi Ma', 'yi ma')
lun.wu@hotmail.com, abalasu2@illinois.edu, shiboxin@cis.pku.edu.cn,
yasumat@microsoft.com, wyt@bit.edu.cn, mayi@microsoft.com
122ee00cc25c0137cab2c510494cee98bd504e9fThe Application of
Active Appearance Models to
Comprehensive Face Analysis
Technical Report
TU München
April 5, 2007
('2866162', 'Simon Kriegel', 'simon kriegel')kriegel@mmer-systems.eu
1287bfe73e381cc8042ac0cc27868ae086e1ce3b
121fe33daf55758219e53249cf8bcb0eb2b4db4bCHAKRABARTI et al.: EMPIRICAL CAMERA MODEL
An Empirical Camera Model
for Internet Color Vision
http://www.eecs.harvard.edu/~ayanc/
http://www.cs.middlebury.edu/~schar/
Todd Zickler1
http://www.eecs.harvard.edu/~zickler/
1 Harvard School of Engineering and
Applied Sciences
Cambridge, MA, USA 02139
2 Department of Computer Science
Middlebury College
Middlebury, VT, USA 05753
('38534744', 'Ayan Chakrabarti', 'ayan chakrabarti')
('1709053', 'Daniel Scharstein', 'daniel scharstein')
12408baf69419409d228d96c6f88b6bcde303505Temporal Tessellation: A Unified Approach for Video Analysis
The Blavatnik School of Computer Science, Tel Aviv University, Israel
Information Sciences Institute, USC, CA, USA
The Open University of Israel, Israel
4Facebook AI Research
('48842639', 'Dotan Kaufman', 'dotan kaufman')
('36813724', 'Gil Levi', 'gil levi')
('1756099', 'Tal Hassner', 'tal hassner')
('1776343', 'Lior Wolf', 'lior wolf')
120bcc9879d953de7b2ecfbcd301f72f3a96fb87Report on the FG 2015 Video Person Recognition Evaluation
Zhenhua Feng
Colorado State University
Fort Collins, CO, USA
University of Notre Dame
Notre Dame, IN, USA
University of Surrey
United Kingdom
1 Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences
Institute of Computing Technology, CAS, Beijing, 100190, China
University of Chinese Academy of Sciences, Beijing, 100049, China
Stevens Institute of Technology
Hoboken, NJ, USA
Vitomir Štruc
Janez Križaj
University of Ljubljana
Ljubljana, Slovenia
University of Technology, Sydney
Sydney, Australia
National Institute of Standards and Technology
Gaithersburg, MD, USA
('1757322', 'J. Ross Beveridge', 'j. ross beveridge')
('1694404', 'Bruce A. Draper', 'bruce a. draper')
('1704876', 'Patrick J. Flynn', 'patrick j. flynn')
('39976184', 'Patrik Huber', 'patrik huber')
('1748684', 'Josef Kittler', 'josef kittler')
('7945869', 'Zhiwu Huang', 'zhiwu huang')
('1688086', 'Shaoxin Li', 'shaoxin li')
('38751558', 'Yan Li', 'yan li')
('1693589', 'Meina Kan', 'meina kan')
('3373117', 'Ruiping Wang', 'ruiping wang')
('1685914', 'Shiguang Shan', 'shiguang shan')
('3131569', 'Haoxiang Li', 'haoxiang li')
('37990555', 'Changxing Ding', 'changxing ding')
('32028519', 'P. Jonathon Phillips', 'p. jonathon phillips')
ross@cs.colostate.edu
12cb3bf6abf63d190f849880b1703ccc183692feGuess Who?: A game to crowdsource the labeling of affective facial
expressions is comparable to expert ratings.
Graduation research project, june 2012
Supervised by: Dr. Joost Broekens
mail@barryborsboom.nl
12095f9b35ee88272dd5abc2d942a4f55804b31eDenseReg: Fully Convolutional Dense Shape Regression In-the-Wild
Rıza Alp Güler1
1INRIA-CentraleSup´elec, France
Imperial College London, UK
Stefanos Zafeiriou2
3Amazon, Berlin, Germany
University College London, UK
('2814229', 'George Trigeorgis', 'george trigeorgis')
('2788012', 'Epameinondas Antonakos', 'epameinondas antonakos')
('2796644', 'Patrick Snape', 'patrick snape')
('48111527', 'Iasonas Kokkinos', 'iasonas kokkinos')
riza.guler@inria.fr
2{g.trigeorgis, p.snape, s.zafeiriou}@imperial.ac.uk
antonak@amazon.com
i.kokkinos@cs.ucl.ac.uk
12cd96a419b1bd14cc40942b94d9c4dffe5094d229
Proceedings of the 5th Workshop on Vision and Language, pages 29–38,
Berlin, Germany, August 12, 2016. ©2016 Association for Computational Linguistics
1275852f2e78ed9afd189e8b845fdb5393413614A Transfer Learning based Feature-Weak-Relevant Method for
Image Clustering
Dalian Maritime University
Dalian, China
('3852923', 'Bo Dong', 'bo dong')
('2860808', 'Xinnian Wang', 'xinnian wang')
{dukedong,wxn}@dlmu.edu.cn
1297ee7a41aa4e8499c7ddb3b1fed783eba19056University of Nebraska - Lincoln
US Army Research
2015
U.S. Department of Defense
Effects of emotional expressions on persuasion
Gale Lucas
University of Southern California
University of Southern California
University of Southern California
University of Southern California
Follow this and additional works at: http://digitalcommons.unl.edu/usarmyresearch
Wang, Yuqiong; Lucas, Gale; Khooshabeh, Peter; de Melo, Celso; and Gratch, Jonathan, "Effects of emotional expressions on
persuasion" (2015). US Army Research. 340.
http://digitalcommons.unl.edu/usarmyresearch/340
('49416640', 'Yuqiong Wang', 'yuqiong wang')
('2635945', 'Peter Khooshabeh', 'peter khooshabeh')
('1977901', 'Celso de Melo', 'celso de melo')
('1730824', 'Jonathan Gratch', 'jonathan gratch')
DigitalCommons@University of Nebraska - Lincoln
University of Southern California, wangyuqiong@ymail.com
This Article is brought to you for free and open access by the U.S. Department of Defense at DigitalCommons@University of Nebraska - Lincoln. It has
been accepted for inclusion in US Army Research by an authorized administrator of DigitalCommons@University of Nebraska - Lincoln.
12055b8f82d5411f9ad196b60698d76fbd07ac1e1475
Multiview Facial Landmark Localization in RGB-D
Images via Hierarchical Regression
With Binary Patterns
('3152448', 'Zhanpeng Zhang', 'zhanpeng zhang')
('40647981', 'Wei Zhang', 'wei zhang')
('7137861', 'Jianzhuang Liu', 'jianzhuang liu')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
126214ef0dcef2b456cb413905fa13160c73ec8eModelling human perception of static facial expressions
M.Sorci,J.Ph.Thiran
J.Cruz,T.Robin,M.Bierlaire
Electrical Engineering Institute, EPFL
Transport and Mobility Laboratory,EPFL
Station 11, CH-1015, Lausanne
Station 11, CH-1015, Lausanne
G.Antonini
IBM Zurich Lab
Saumerstrasse 4 ,Ruschlikon
B.Cerretani
University of Siena
DII,Siena
{matteo.sorci,JP.Thiran}@epfl.ch
{javier.cruz,thomas.robin,michel.bierlaire}@epfl.ch
gan@zurich.ibm.com
barbara.cerretani@gmail.com
120785f9b4952734818245cc305148676563a99bDiagnostic automatique de l’état dépressif
S. Cholet
H. Paugam-Moisy
Laboratoire de Mathématiques Informatique et Applications (LAMIA - EA 4540)
Université des Antilles, Campus de Fouillole - Guadeloupe
Abstract
Psychosocial disorders are a major public health problem and can have
serious short- and long-term consequences in professional, personal, and
family life. The diagnosis of these disorders must be established by a
professional. However, AI (Artificial Intelligence) can contribute by
providing the practitioner with diagnostic support, and the patient with
fast, inexpensive, continuous monitoring. We propose an approach toward a
method for automatically diagnosing the depressive state from real-time
observations of the face, using a simple webcam. From videos of the
AVEC'2014 challenge, we trained a neural classifier to extract face
prototypes according to different values of the Beck Depression Inventory
score (BDI-II).
Stephane.Cholet@univ-antilles.fr
12692fbe915e6bb1c80733519371bbb90ae07539Object Bank: A High-Level Image Representation for Scene
Classification & Semantic Feature Sparsification
Stanford University
Carnegie Mellon University
('33642044', 'Li-Jia Li', 'li-jia li')
('2888806', 'Hao Su', 'hao su')
('1752601', 'Eric P. Xing', 'eric p. xing')
('3216322', 'Li Fei-Fei', 'li fei-fei')
1251deae1b4a722a2155d932bdfb6fe4ae28dd22A Large-scale Attribute Dataset for Zero-shot Learning
1 National Engineering Laboratory for Video Technology,
Key Laboratory of Machine Perception (MoE),
Cooperative Medianet Innovation Center, Shanghai,
School of EECS, Peking University, Beijing, 100871, China
School of Data Science, Fudan University
3 Sinovation Ventures
('49217762', 'Bo Zhao', 'bo zhao')
('35782003', 'Yanwei Fu', 'yanwei fu')
('1705512', 'Rui Liang', 'rui liang')
('3165417', 'Jiahong Wu', 'jiahong wu')
('47904050', 'Yonggang Wang', 'yonggang wang')
('36637369', 'Yizhou Wang', 'yizhou wang')
bozhao, yizhou.wang@pku.edu.cn, yanweifu@fudan.edu.cn
liangrui, wujiahong, wangyonggang@chuangxin.com
12ccfc188de0b40c84d6a427999239c6a379cd66Sparse Adversarial Perturbations for Videos
1 Tsinghua National Lab for Information Science and Technology
1 State Key Lab of Intelligent Technology and Systems
Tsinghua University
1 Center for Bio-Inspired Computing Research
('2769710', 'Xingxing Wei', 'xingxing wei')
('40062221', 'Jun Zhu', 'jun zhu')
('37409747', 'Hang Su', 'hang su')
{xwei11, dcszj, suhangss}@mail.tsinghua.edu.cn
12c713166c46ac87f452e0ae383d04fb44fe4eb2
12ebeb2176a5043ad57bc5f3218e48a96254e3e9International Journal of Computer Applications (0975 – 8887)
Volume 120 – No.24, June 2015
Traffic Road Sign Detection and Recognition for
Automotive Vehicles
Zakir Hyder
Department of Electrical Engineering and Computer Science
North South University, Dhaka, Bangladesh
1270044a3fa1a469ec2f4f3bd364754f58a1cb56Video-Based Face Recognition Using Probabilistic Appearance Manifolds
yComputer Science
Urbana, IL 61801
zComputer Science & Engineering
University of Illinois, Urbana-Champaign University of California, San Diego
La Jolla, CA 92093
David Kriegmanz
Honda Research Institute
800 California Street
Mountain View, CA 94041
('2457452', 'Kuang-chih Lee', 'kuang-chih lee')
('1788818', 'Jeffrey Ho', 'jeffrey ho')
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
klee10@uiuc.edu
jho@cs.ucsd.edu myang@honda-ri.com kriegman@cs.ucsd.edu
12150d8b51a2158e574e006d4fbdd3f3d01edc93Deep End2End Voxel2Voxel Prediction
Presented by: Ahmed Osman
Ahmed Osman
('1687325', 'Du Tran', 'du tran')
('2276554', 'Rob Fergus', 'rob fergus')
('2210374', 'Manohar Paluri', 'manohar paluri')
12003a7d65c4f98fb57587fd0e764b44d0d10125Face Recognition in the Wild with the Probabilistic Gabor-Fisher
Classifier
Simon Dobrišek, Vitomir Štruc, Janez Križaj, France Mihelič
Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, SI-1000 Ljubljana, Slovenia
124538b3db791e30e1b62f81d4101be435ee12efORIGINAL RESEARCH ARTICLE
published: 29 August 2013
doi: 10.3389/fpsyg.2013.00506
Basic level scene understanding: categories, attributes and
structures
Computer Science, Princeton University, Princeton, NJ, USA
Computer Science, Brown University, Providence, RI, USA
Computer Science and Engineering, University of Washington, Seattle, WA, USA
Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
Edited by:
Tamara Berg, Stony Brook
University, USA
Reviewed by:
Andrew M. Haun, Harvard Medical
School, USA
Devi Parikh, Virginia Tech, USA
*Correspondence:
Brown University
115 Waterman Street, Box 1910,
Providence, RI 02912, USA
A longstanding goal of computer vision is to build a system that can automatically
understand a 3D scene from a single image. This requires extracting semantic concepts
and 3D information from 2D images which can depict an enormous variety of
environments that comprise our visual world. This paper summarizes our recent efforts
toward these goals. First, we describe the richly annotated SUN database which is a
collection of annotated images spanning 908 different scene categories with object,
attribute, and geometric labels for many scenes. This database allows us to systematically
study the space of scenes and to establish a benchmark for scene and object recognition.
We augment the categorical SUN database with 102 scene attributes for every image and
explore attribute recognition. Finally, we present an integrated system to extract the 3D
structure of the scene and objects depicted in an image.
Keywords: SUN database, basic level scene understanding, scene recognition, scene attributes, geometry
recognition, 3D context
1. INTRODUCTION
The ability to understand a 3D scene depicted in a static 2D image
goes to the very heart of the computer vision problem. By “scene”
we mean a place in which a human can act within or navigate.
What does it mean to understand a scene? There is no univer-
sal answer as it heavily depends on the task involved, and this
seemingly simple question hides a lot of complexity.
The dominant view in the current computer vision literature
is to name the scene and objects present in an image. However,
this level of understanding is rather superficial. If we can reason
about a larger variety of semantic properties and structures of
scenes it will enable richer applications. Furthermore, working on
an over-simplified task may distract us from exploiting the natu-
ral structures of the problem (e.g., relationships between objects
and 3d surfaces or the relationship between scene attributes and
object presence), which may be critical for a complete scene
understanding solution.
What is the ultimate goal of computational scene under-
standing? One goal might be to pass the Turing test for scene
understanding: Given an image depicting a static scene, a human
judge will ask a human or a machine questions about the picture.
If the judge cannot reliably tell the machine from the human, the
machine is said to have passed the test. This task is beyond the
current state-of-the-art as humans could ask a huge variety of
meaningful visual questions about an image, e.g., Is it safe to cross
this road? Who ate the last cupcake? Is this a fun place to vacation?
Are these people frustrated? Where can I set these groceries? etc.
Therefore, we propose a set of goals that are suitable for the
current state of research in computer vision that are not too
simplistic nor challenging and lead to a natural representation of
scenes. Based on these considerations, we define the task of scene
understanding as predicting the scene category, scene attributes,
the 3D enclosure of the space, and all the objects in the images.
For each object, we want to know its category and 3D bound-
ing box, as well as its 3D orientation relative to the scene. As an
image is a viewer-centric observation of the space, we also want
to recover the camera parameters, such as observer viewpoint
and field of view. We call this task basic level scene understanding,
with analogy to basic level in cognitive categorization (Rosch,
1978). It has practical applications for providing sufficient infor-
mation for simple interaction with the scene, such as navigation
and object manipulation.
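For concreteness, the output of basic level scene understanding can be pictured as a small structured record. The sketch below is only an illustration of the quantities listed above (scene category, attributes, 3D room box, camera parameters, and per-object 3D boxes with orientation); all class and field names here are invented for this example and are not from the paper.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Illustrative container for a "basic level scene understanding" parse.
# All names are hypothetical; the paper defines the task, not this schema.

@dataclass
class Object3D:
    category: str                       # e.g., "bed"
    center: Tuple[float, float, float]  # 3D box center in room coordinates
    size: Tuple[float, float, float]    # width, height, depth of the 3D box
    yaw: float                          # orientation relative to the scene (radians)

@dataclass
class SceneParse:
    scene_category: str                                   # e.g., "bedroom"
    attributes: List[str] = field(default_factory=list)   # e.g., ["enclosed"]
    room_box: Optional[Tuple[float, ...]] = None          # 3D enclosure of the space
    camera_height: float = 0.0                            # viewer-centric parameters
    field_of_view: float = 0.0
    objects: List[Object3D] = field(default_factory=list)

parse = SceneParse(
    scene_category="bedroom",
    attributes=["enclosed", "cluttered"],
    objects=[Object3D("bed", (1.0, 0.5, 2.0), (2.0, 0.6, 1.5), 0.0)],
)
print(parse.scene_category, len(parse.objects))  # bedroom 1
```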
1.1. OUTLINE
In this paper we discuss several aspects of basic level scene under-
standing. First, we briefly review our recent work on categorical
(section 2) and attribute-based scene representations (section 3).
Then, we go into greater detail about novel work in 3D scene
understanding, using structured learning to simultaneously rea-
son about many aspects of scenes (section 4).
Supporting these research efforts is the Scene UNderstanding
(SUN) database. By modern standards, the SUN database is not
especially large, containing on the order of 100,000 scenes. But
the SUN database is, instead, richly annotated with scene cat-
egories, scene attributes, geometric properties, “memorability”
measurements (Isola et al., 2011), and object segmentations.
There are 326,582 manually segmented objects for the 5650
object categories labeled (Barriuso and Torralba, 2012). Object
www.frontiersin.org
August 2013 | Volume 4 | Article 506 | 1
('40599257', 'Jianxiong Xiao', 'jianxiong xiao')
('12532254', 'James Hays', 'james hays')
('2537592', 'Bryan C. Russell', 'bryan c. russell')
('40541456', 'Genevieve Patterson', 'genevieve patterson')
('1865091', 'Krista A. Ehinger', 'krista a. ehinger')
('38611723', 'Antonio Torralba', 'antonio torralba')
('31735139', 'Aude Oliva', 'aude oliva')
e-mail: hays@cs.brown.edu
12d8730da5aab242795bdff17b30b6e0bac82998Persistent Evidence of Local Image Properties in Generic ConvNets
CVAP, KTH (Royal Institute of Technology), Stockholm, SE
('2835963', 'Ali Sharif Razavian', 'ali sharif razavian')
('2622491', 'Hossein Azizpour', 'hossein azizpour')
('1801052', 'Atsuto Maki', 'atsuto maki')
('1736906', 'Josephine Sullivan', 'josephine sullivan')
('2484138', 'Carl Henrik Ek', 'carl henrik ek')
('1826607', 'Stefan Carlsson', 'stefan carlsson')
{razavian,azizpour,atsuto,sullivan,chek,stefanc}@csc.kth.se
8c13f2900264b5cf65591e65f11e3f4a35408b48A GENERIC FACE REPRESENTATION APPROACH FOR
LOCAL APPEARANCE BASED FACE VERIFICATION
Interactive Systems Labs, Universität Karlsruhe (TH)
76131 Karlsruhe, Germany
web: http://isl.ira.uka.de/face_recognition/
('3025777', 'Hazim Kemal Ekenel', 'hazim kemal ekenel')
('1742325', 'Rainer Stiefelhagen', 'rainer stiefelhagen')
{ekenel, stiefel}@ira.uka.de
8c955f3827a27e92b6858497284a9559d2d0623aBuletinul Ştiinţific al Universităţii "Politehnica" din Timişoara
Seria ELECTRONICĂ şi TELECOMUNICAŢII
TRANSACTIONS on ELECTRONICS and COMMUNICATIONS
Tom 53(67), Fascicola 1-2, 2008
Facial Expression Recognition under Noisy Environment
Using Gabor Filters
('2336758', 'Ioan Buciu', 'ioan buciu')
('2526319', 'I. Nafornita', 'i. nafornita')
('29835181', 'I. Pitas', 'i. pitas')
8c8525e626c8857a4c6c385de34ffea31e7e41d1Cross-domain Image Retrieval with a Dual Attribute-aware Ranking Network
National University of Singapore, Singapore
2IBM Research
('1753492', 'Junshi Huang', 'junshi huang')
('35370244', 'Qiang Chen', 'qiang chen')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
{a0092558, eleyans}@nus.edu.sg
rsferis@us.ibm.com
qiangchen@au1.ibm.com
8c66378df977606d332fc3b0047989e890a6ac76Hierarchical-PEP Model for Real-world Face Recognition
Stevens Institute of Technology
Pose variation remains one of the major factors adversely affecting the accuracy
of real-world face recognition systems. The same face can look drastically
different across poses. Belhumeur et al. [1] empiri-
cally demonstrate that frontal faces can be projected to a low-dimensional
subspace invariant to variation in illumination and facial expressions. This
observation highlights the importance of addressing pose variation because
it can greatly help relieve the adverse effects of the other visual variations.
A set of methods build pose-invariant face representations by locating
the facial landmarks. For example, Chen et al. [2] concatenate dense fea-
tures around the facial landmarks to build the face representation. The pose-
invariance is achieved in this way, because it always extracts features from
the face part surrounded around the facial landmarks regardless of their loca-
tions in the image. The elastic matching methods [5] generalize this design
with a probabilistic elastic part (PEP) model learned without supervision
from face image patches.
While this procedure – locating the face parts and stacking the face part
features to build face representation – is empirically demonstrated to be ef-
fective by both Chen et al. [2] and Li et al. [5], we argue that directly de-
scribing the face parts with naive dense extraction of low-level features may
not be optimal.
In this work, we propose to build a better face part model to construct
an improved face representation. Inspired by the probabilistic elastic part
(PEP) model and the success of the deep hierarchical architecture in a num-
ber of visual tasks, we propose the Hierarchical-PEP model to approach the
unconstrained face recognition problem.
As shown in Figure 1, we apply the PEP model hierarchically to decom-
pose a face image into face parts at different levels of details to build pose-
invariant part-based face representations. Following the hierarchy from bottom-
up, we stack the face part representations at each layer, discriminatively re-
duce its dimensionality, and hence aggregate the face part representations
layer-by-layer to build a compact and invariant face representation. The
Hierarchical-PEP model exploits the fine-grained structures of the face parts
at different levels of details to address the pose variations. It is also guided
by supervised information in constructing the face part/face representations.
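As a rough sketch of the layer-wise aggregation described above, the code below stacks per-part descriptors and reduces each layer to d dimensions. This is a simplification: plain PCA is used where the paper applies a discriminative (supervised) reduction, and the random arrays stand in for descriptors a PEP model would produce.

```python
import numpy as np

def pca_reduce(X, d):
    """Project rows of X onto the top-d principal directions (plain PCA)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T

def aggregate_layer(part_descriptors, d):
    """Stack per-part descriptors for each face, then reduce dimensionality.

    part_descriptors: array of shape (n_faces, n_parts, dim).
    Returns an (n_faces, d) compact representation for the layer.
    """
    n_faces = part_descriptors.shape[0]
    stacked = part_descriptors.reshape(n_faces, -1)  # concatenate part features
    return pca_reduce(stacked, d)

rng = np.random.default_rng(0)
layer1 = rng.normal(size=(8, 4, 16))       # 8 faces, 4 parts, 16-D descriptors
face_repr = aggregate_layer(layer1, d=5)   # compact per-face representation
print(face_repr.shape)                     # (8, 5)
```

Repeating `aggregate_layer` on the outputs of successive part decompositions gives the bottom-up, layer-by-layer aggregation sketched in Figure 1.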
We empirically verify the Hierarchical-PEP model on two public bench-
marks and a face recognition challenge for image-based and video-based
face verification. The state-of-the-art performance demonstrates the poten-
tial of our method. We show the performance comparison on the YouTube
Faces dataset [9] in Table 1.
Table 1: Performance comparison on the YouTube Faces dataset under the
restricted, no-outside-data protocol.

Algorithm                           Accuracy ± Error (%)
MBGS [9]                            76.4 ± 1.8
MBGS+SVM- [8]                       78.9 ± 1.9
STFRD+PMML [10]                     79.5 ± 2.5
VF2 [7]                             84.7 ± 1.4
DDML (combined) [3]                 82.3 ± 1.5
Eigen-PEP [6]                       84.8 ± 1.4
LM3L [4]                            81.3 ± 1.2
Hierarchical-PEP (layers fusion)    87.00 ± 1.50
[1] Peter N. Belhumeur, João P. Hespanha, and David J. Kriegman. Eigenfaces
vs. Fisherfaces: Recognition using class specific linear projection.
PAMI, 1997.
[2] Dong Chen, Xudong Cao, Fang Wen, and Jian Sun. Blessing of di-
mensionality: High dimensional feature and its efficient compression
for face verification. In CVPR, 2013.
[3] Junlin Hu, Jiwen Lu, and Yap-Peng Tan. Discriminative deep metric
learning for face verification in the wild. In CVPR, 2014.
[4] Junlin Hu, Jiwen Lu, Junsong Yuan, and Yap-Peng Tan. Large margin
multi-metric learning for face and kinship verification in the wild. In
ACCV, 2014.
[5] Haoxiang Li, Gang Hua, Zhe Lin, Jonathan Brandt, and Jianchao Yang.
Probabilistic elastic matching for pose variant face verification. In
CVPR, 2013.
[6] Haoxiang Li, Gang Hua, Xiaohui Shen, Zhe Lin, and Jonathan Brandt.
Eigen-PEP for video face recognition. In ACCV, 2014.
[7] O. M. Parkhi, K. Simonyan, A. Vedaldi, and A. Zisserman. A compact
and discriminative face track descriptor. In CVPR, 2014.
[8] Lior Wolf and Noga Levy. The svm-minus similarity score for video
face recognition. In CVPR, 2013.
[9] Lior Wolf, Tal Hassner, and Itay Maoz. Face recognition in uncon-
strained videos with matched background similarity. In CVPR, 2011.
[10] Cui Zhen, Wen Li, Dong Xu, Shiguang Shan, and Xilin Chen. Fus-
ing robust face region descriptors via multiple metric learning for face
recognition in the wild. In CVPR, 2013.
Figure 1: Construction of the face representation with an example 2-layer Hierarchical-PEP model: PCA at layer t keeps d_t dimensions.
('3131569', 'Haoxiang Li', 'haoxiang li')
('1745420', 'Gang Hua', 'gang hua')
8c9c8111e18f8798a612e7386e88536dfe26455eCOMPARING BAYESIAN NETWORKS TO CLASSIFY FACIAL
EXPRESSIONS
Institute of Systems and Robotics
University of Coimbra, Portugal
Institute Polythechnic of Leiria, Portugal
Jorge Dias
Institute of Systems and Robotics
University of Coimbra, Portugal
Institute of Systems and Robotics
University of Coimbra, Portugal
('2700157', 'Carlos Simplício', 'carlos simplício')
('40031257', 'José Prado', 'josé prado')
carlos.simplicio@ipleiria.pt
jaugusto@isr.uc.pt
jorge@isr.uc.pt
8c7f4c11b0c9e8edf62a0f5e6cf0dd9d2da431faDataset Augmentation for Pose and Lighting
Invariant Face Recognition
Vision Systems, Inc
†Systems and Technology Research
('2103732', 'Octavian Biris', 'octavian biris')
('3390731', 'Nate Crosswhite', 'nate crosswhite')
('36067742', 'Jeffrey Byrne', 'jeffrey byrne')
('3453447', 'Joseph L. Mundy', 'joseph l. mundy')
8c81705e5e4a1e2068a5bd518adc6955d49ae4343D Object Recognition with Enhanced
Grassmann Discriminant Analysis
Graduate School of Systems and Information Engineering,
University of Tsukuba, Japan
('9641567', 'Lincon Sales de Souza', 'lincon sales de souza')
('34581814', 'Hideitsu Hino', 'hideitsu hino')
('1770128', 'Kazuhiro Fukui', 'kazuhiro fukui')
lincons@cvlab.cs.tsukuba.ac.jp, {hinohide, kfukui}@cs.tsukuba.ac.jp
8cb403c733a5f23aefa6f583a17cf9b972e35c90Learning the semantic structure of objects
from Web supervision
David Novotny1
1Visual Geometry Group
University of Oxford
2Computer Vision Group
Xerox Research Centre Europe
('2295553', 'Diane Larlus', 'diane larlus')
('1687524', 'Andrea Vedaldi', 'andrea vedaldi')
{david,andrea}@robots.ox.ac.uk
diane.larlus@xrce.xerox.com
8ccde9d80706a59e606f6e6d48d4260b60ccc736RotDCF: Decomposition of Convolutional Filters for
Rotation-Equivariant Deep Networks
Duke University
Duke University
('1823644', 'Xiuyuan Cheng', 'xiuyuan cheng')
('2077648', 'Qiang Qiu', 'qiang qiu')
('1699339', 'Guillermo Sapiro', 'guillermo sapiro')
8c6b9c9c26ead75ce549a57c4fd0a12b46142848Facial expression recognition using shape and
texture information
I. Kotsia1 and I. Pitas1
Aristotle University of Thessaloniki
Department of Informatics
Box 451 54124
Thessaloniki, Greece
Summary. A novel method based on shape and texture information is proposed in
this paper for facial expression recognition from video sequences. The Discriminant
Non-negative Matrix Factorization (DNMF) algorithm is applied to the image cor-
responding to the greatest intensity of the facial expression (the last frame of the
video sequence), thereby extracting the texture information. A Support Vector
Machine (SVM) system is used for the classification of the shape information derived
from tracking the Candide grid over the video sequence. The shape information
consists of the differences of the node coordinates between the first (neutral) and
last (fully expressed facial expression) video frames. Subsequently, fusion of the
texture and shape information is performed using Radial Basis Function (RBF)
Neural Networks (NNs). The accuracy achieved is 98.2% when recognizing the six
basic facial expressions.
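The shape feature described in the summary reduces to a per-node displacement vector between the neutral and apex frames. A minimal sketch, with stubbed grid coordinates standing in for an actual Candide tracker:

```python
import numpy as np

def shape_feature(neutral_nodes, apex_nodes):
    """Flatten the (dx, dy) displacement of every grid node into one vector."""
    return (apex_nodes - neutral_nodes).ravel()

# Stub data: a Candide-style grid of 104 (x, y) nodes; a real system would
# obtain these by tracking the grid over the video sequence.
rng = np.random.default_rng(1)
neutral = rng.uniform(size=(104, 2))                     # first (neutral) frame
apex = neutral + rng.normal(scale=0.01, size=(104, 2))   # last (apex) frame

feat = shape_feature(neutral, apex)   # input to the SVM classifier
print(feat.shape)                     # (208,)
```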
1.1 Introduction
During the past two decades, many studies regarding facial expression recog-
nition, which plays a vital role in human-centered interfaces, have been
conducted. Psychologists have defined the following basic facial expressions:
anger, disgust, fear, happiness, sadness and surprise [?]. A set of muscle move-
ments, known as Action Units, was created. These movements form the so-
called Facial Action Coding System (FACS) [?]. A survey on automatic
facial expression recognition can be found in [?].
In the current paper, a novel method for video-based facial expression
recognition that fuses texture and shape information is proposed. The texture
information is obtained by applying the DNMF algorithm [?] to the last
frame of the video sequence, i.e., the one that corresponds to the greatest
intensity of the depicted facial expression. The shape information is calculated
as the difference of the Candide facial model grid node coordinates between the
first and the last frame of a video sequence [?]. The decision made regarding
pitas@aiia.csd.auth.gr
8ce9b7b52d05701d5ef4a573095db66ce60a7e1cStructured Sparse Subspace Clustering: A Joint
Affinity Learning and Subspace Clustering
Framework
('9171002', 'Chun-Guang Li', 'chun-guang li')
('1878841', 'Chong You', 'chong you')
8cb6daba2cb1e208e809633133adfee0183b8dd2Know Before You Do: Anticipating Maneuvers
via Learning Temporal Driving Models
Cornell University and Stanford University
('1726066', 'Ashesh Jain', 'ashesh jain')
('3282281', 'Bharad Raghavan', 'bharad raghavan')
('1681995', 'Ashutosh Saxena', 'ashutosh saxena')
{ashesh,hema,asaxena}@cs.cornell.edu {bharad,shanesoh}@stanford.edu
8c4ea76e67a2a99339a8c4decd877fe0aa2d8e82Article
Gated Convolutional Neural Network for Semantic
Segmentation in High-Resolution Images
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
University of Chinese Academy of Sciences, Beijing 101408, China
Academic Editors: Qi Wang, Xiaofeng Li and Prasad S. Thenkabail
Received: 2 April 2017; Accepted: 1 May 2017; Published: 5 May 2017
('2206625', 'Hongzhen Wang', 'hongzhen wang')
('1738352', 'Ying Wang', 'ying wang')
('1737486', 'Qian Zhang', 'qian zhang')
('1683738', 'Shiming Xiang', 'shiming xiang')
('3364363', 'Chunhong Pan', 'chunhong pan')
95 Zhongguancun East Road, Beijing 100190, China; hongzhen.wang@nlpr.ia.ac.cn (H.W.);
ywang@nlpr.ia.ac.cn (Y.W.); chpan@nlpr.ia.ac.cn (C.P.)
3 Alibaba Group, Beijing 100102, China; zhangqiancsuia@163.com
* Correspondence: smxiang@nlpr.ia.ac.cn; Tel.: +86-136-7118-9070
8c6c0783d90e4591a407a239bf6684960b72f34eSESSION
KNOWLEDGE ENGINEERING AND
MANAGEMENT + KNOWLEDGE ACQUISITION
Chair(s)
TBA
Int'l Conf. Information and Knowledge Engineering | IKE'13
8cb55413f1c5b6bda943697bba1dc0f8fc880d28Video-based Face Recognition on Real-World Data
Hazım K. Ekenel
Interactive System Labs
University of Karlsruhe, Germany
('1842921', 'Johannes Stallkamp', 'johannes stallkamp')
('1742325', 'Rainer Stiefelhagen', 'rainer stiefelhagen')
{jstallkamp,ekenel,stiefel}@ira.uka.de
8cc07ae9510854ec6e79190cc150f9f1fe98a238Article
Using Deep Learning to Challenge Safety Standard
for Highly Autonomous Machines in Agriculture
Aarhus University, Finlandsgade 22 8200 Aarhus N, Denmark
† These authors contributed equally to this work.
Academic Editors: Francisco Rovira-Más and Gonzalo Pajares Martinsanz
Received: 18 December 2015; Accepted: 2 February 2016; Published: 15 February 2016
('32688812', 'Kim Arild Steen', 'kim arild steen')
('2139204', 'Peter Christiansen', 'peter christiansen')
('2550309', 'Henrik Karstoft', 'henrik karstoft')
pech@eng.au.dk (P.C.); hka@eng.au.dk (H.K.); rnj@eng.au.dk (R.N.J.)
* Correspondence: kim.steen@eng.au.dk; Tel.: +45-3116-8628
8509abbde2f4b42dc26a45cafddcccb2d370712fImproving precision and recall of face recognition in SIPP with combination of
modified mean search and LSH
Xihua.Li
lixihua9@126.com
858ddff549ae0a3094c747fb1f26aa72821374ecSurvey on RGB, 3D, Thermal, and Multimodal
Approaches for Facial Expression Recognition:
History, Trends, and Affect-related Applications
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
('7855312', 'Sergio Escalera', 'sergio escalera')
85041e48b51a2c498f22850ce7228df4e2263372Subspace Regression: Predicting
a Subspace from One Sample
Robotics Institute, Carnegie Mellon University
‡ Electrical & Controls Integration Lab, General Motors R&D
('34299925', 'Minyoung Kim', 'minyoung kim')
85fd2bda5eb3afe68a5a78c30297064aec1361f6Are You Smiling, or Have I Seen You
Before? Familiarity Makes Faces Look Happier
Research Article
Psychological Science, 2017, Vol. 28(8), 1087–1102
© The Author(s) 2017
DOI: 10.1177/0956797617702003
Columbia Business School, University of California, San Diego
Behavioural Science Group, Warwick Business School, University of Warwick; and 4Faculty of Psychology
SWPS University of Social Sciences and Humanities
('5907729', 'Evan W. Carr', 'evan w. carr')
('3122131', 'Piotr Winkielman', 'piotr winkielman')
857ad04fca2740b016f0066b152bd1fa1171483fSample Images can be Independently Restored from
Face Recognition Templates
School of Information Technology and Engineering, University of Ottawa, Ontario, Canada
are being piloted or implemented at airports, for
government identification systems such as passports
and drivers licenses, and in surveillance applications.
In this paper, we consider the identifiability of stored
biometric information, and its implications for
biometric privacy and security.
Biometric authentication is typically performed by
a sophisticated software application, which manages
the user interface and database, and interacts with a
vendor specific, proprietary biometric algorithm.
Algorithms undertake the following processing steps:
1) acquisition of a biometric sample image, 2)
conversion of the sample image to a biometric
template, 3) comparison of the new (or "live")
template to previously stored templates, to calculate a
match score. High match scores indicate a likelihood
that the corresponding images are from the same
individual. The biometric template is a (typically
vendor specific) compact digital representation of the
essential features of the sample image. Biometric
algorithm vendors have uniformly claimed that it is
impossible or infeasible to recreate the image from the
template. [2, 3, 4, 7] These claims are supported by: 1)
the template records features (such as fingerprint
minutiae) and not image primitives, 2) templates are
typically calculated using only a small portion of the
image, 3) templates are small − a few hundred bytes −
much smaller than the sample image, and 4) the
proprietary nature of
the storage format makes
templates infeasible to "hack". For these reasons,
biometric templates are considered to be effectively
non-identifiable data, much like a password hash [7].
In fact, these arguments are not valid: this paper
demonstrates a simple algorithm to recreate sample
images from templates using only match score results.
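The attack outlined above needs nothing but a match-score oracle. The toy hill climb below illustrates the principle on a synthetic "image"; the quadratic stand-in matcher and all parameters are invented for this sketch and are not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)
target = rng.random(64)            # hidden enrolled "image" (8x8, flattened)

def match_score(candidate):
    """Black-box comparator: higher score means a closer match."""
    return -np.linalg.norm(candidate - target)

# Hill climbing: perturb one pixel at a time, keep a change only when the
# black-box score improves. The attacker never reads `target` directly.
estimate = np.full(64, 0.5)
for _ in range(2000):
    i = rng.integers(64)
    trial = estimate.copy()
    trial[i] = np.clip(trial[i] + rng.normal(scale=0.1), 0.0, 1.0)
    if match_score(trial) > match_score(estimate):
        estimate = trial

print(match_score(estimate) > match_score(np.full(64, 0.5)))  # True
```

Because only improving perturbations are kept, the estimate's score rises monotonically toward that of the enrolled image, which is the core observation behind the paper's demonstration.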
2. METHODS
A software application (figure 1) was designed with
the goal of recreating a face image of a specific person
in a face recognition database. The application has
local access to a database of face images, and has
network access to a Face Recognition Server (FRS)
('2478519', 'Andy Adler', 'andy adler')aadler@uottawa.ca
858901405086056361f8f1839c2f3d65fc86a748ON TENSOR TUCKER DECOMPOSITION: THE CASE FOR AN
ADJUSTABLE CORE SIZE
('2424633', 'BILIAN CHEN', 'bilian chen')
('1792785', 'ZHENING LI', 'zhening li')
('1789588', 'SHUZHONG ZHANG', 'shuzhong zhang')
85188c77f3b2de3a45f7d4f709b6ea79e36bd0d9Author manuscript, published in "Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition, Marseille :
France (2008)"
858b51a8a8aa082732e9c7fbbd1ea9df9c76b013Can Computer Vision Problems Benefit from
Structured Hierarchical Classification?
Sandor Szedmak2
INTELSIG, Montefiore Institute, University of Liège, Belgium
Intelligent and Interactive Systems, Institute of Computer Science, University of
Innsbruck, Austria
('3104165', 'Thomas Hoyoux', 'thomas hoyoux')
('1772389', 'Justus H. Piater', 'justus h. piater')
856317f27248cdb20226eaae599e46de628fb696A Method Based on Convex Cone Model for
Image-Set Classification with CNN Features
Graduate School of Systems and Information Engineering, University of Tsukuba
1-1-1 Tennodai, Tsukuba, Ibaraki, 305-8573, Japan
('46230115', 'Naoya Sogi', 'naoya sogi')
('2334316', 'Taku Nakayama', 'taku nakayama')
('1770128', 'Kazuhiro Fukui', 'kazuhiro fukui')
Email: {sogi, nakayama}@cvlab.cs.tsukuba.ac.jp, kfukui@cs.tsukuba.ac.jp
8518b501425f2975ea6dcbf1e693d41e73d0b0afRelative Hidden Markov Models for Evaluating Motion Skills
Computer Science and Engineering
Arizona State Univerisity, Tempe, AZ 85281
('1689161', 'Qiang Zhang', 'qiang zhang')
('2913552', 'Baoxin Li', 'baoxin li')
qzhang53,baoxin.li@asu.edu
855184c789bca7a56bb223089516d1358823db0bAutomatic Procedure to Fix Closed-Eyes Image
University of California, Berkeley
Figure 1: Pipeline to Fix Closed-Eyes Image
('31781046', 'Hung Vu', 'hung vu')
853bd61bc48a431b9b1c7cab10c603830c488e39Learning Face Representation from Scratch
Center for Biometrics and Security Research & National Laboratory of Pattern Recognition
Institute of Automation, Chinese Academy of Sciences (CASIA
('1716143', 'Dong Yi', 'dong yi')
('1718623', 'Zhen Lei', 'zhen lei')
('40397682', 'Shengcai Liao', 'shengcai liao')
('34679741', 'Stan Z. Li', 'stan z. li')
dong.yi, zlei, scliao, szli@nlpr.ia.ac.cn
85639cefb8f8deab7017ce92717674d6178d43ccAutomatic Analysis of Spontaneous Facial Behavior:
A Final Project Report
(UCSD MPLab TR 2001.08, October 31 2001)
Institute for Neural Computation
Department of Cognitive Science
University of California, San Diego
The Salk Institute and Howard Hughes Medical Institute
('2218905', 'Marian S. Bartlett', 'marian s. bartlett')
('33937541', 'Bjorn Braathen', 'bjorn braathen')
('2039025', 'Ian Fasel', 'ian fasel')
('1714528', 'Terrence J. Sejnowski', 'terrence j. sejnowski')
('1741200', 'Javier R. Movellan', 'javier r. movellan')
854dbb4a0048007a49df84e3f56124d387588d99JOURNAL OF LATEX CLASS FILES, VOL. 13, NO. 9, SEPTEMBER 2014
Spatial-Temporal Recurrent Neural Network for
Emotion Recognition
('38144094', 'Tong Zhang', 'tong zhang')
('40608983', 'Wenming Zheng', 'wenming zheng')
('10338111', 'Zhen Cui', 'zhen cui')
('2378869', 'Yuan Zong', 'yuan zong')
('1678662', 'Yang Li', 'yang li')
85674b1b6007634f362cbe9b921912b697c0a32cOptimizing Facial Landmark Detection by
Facial Attribute Learning
The Chinese University of Hong Kong
('3152448', 'Zhanpeng Zhang', 'zhanpeng zhang')
('1693209', 'Ping Luo', 'ping luo')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
1d21e5beef23eecff6fff7d4edc16247f0fd984aFace Recognition from Video using the Generic
Shape-Illumination Manifold
Department of Engineering
University of Cambridge
Cambridge, CB2 1PZ, UK
('1745672', 'Roberto Cipolla', 'roberto cipolla'){oa214,cipolla}@eng.cam.ac.uk
1d7ecdcb63b20efb68bcc6fd99b1c24aa6508de9
The Hidden Sides of Names—Face Modeling
with First Name Attributes
('2896700', 'Huizhong Chen', 'huizhong chen')
('39460815', 'Andrew C. Gallagher', 'andrew c. gallagher')
('1739786', 'Bernd Girod', 'bernd girod')
1d19c6857e798943cd0ecd110a7a0d514c671fecDo Deep Neural Networks Learn Facial Action Units
When Doing Expression Recognition?
Beckman Institute for Advanced Science and Technology
University of Illinois at Urbana-Champaign
('1911177', 'Pooya Khorrami', 'pooya khorrami')
('40470211', 'Tom Le Paine', 'tom le paine')
('1739208', 'Thomas S. Huang', 'thomas s. huang')
{pkhorra2,paine1,t-huang1}@illinois.edu
1d1a7ef193b958f9074f4f236060a5f5e7642fc1Int'l Conf. IP, Comp. Vision, and Pattern Recognition | IPCV'13
Ensemble of Patterns of Oriented Edge Magnitudes
Descriptors For Face Recognition
Computer Information Systems, Missouri State University, 901 S. National, Springfield, MO 65804, USA
faces; and 3) face tagging, which is a particular case of face
identification.
('1804258', 'Loris Nanni', 'loris nanni')
('1707759', 'Alessandra Lumini', 'alessandra lumini')
('2292370', 'Sheryl Brahnam', 'sheryl brahnam')
*DEI, University o f Padua, viale Gradenigo 6, Padua, Italy, {loris.nanni, mauro.migliardi}@unipd.it;
2DEI, Universita di Bologna, Via Venezia 52, 47521 Cesena, Italy, alessandra.lumini@ unibo.it;
sbrahnam@missouristate.edu
1d696a1beb42515ab16f3a9f6f72584a41492a03Deeply learned face representations are sparse, selective, and robust
The Chinese University of Hong Kong
The Chinese University of Hong Kong
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
('1681656', 'Yi Sun', 'yi sun')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
sy011@ie.cuhk.edu.hk
xgwang@ee.cuhk.edu.hk
xtang@ie.cuhk.edu.hk
1d1caaa2312390260f7d20ad5f1736099818d358Resource-Allocating Codebook for Patch-based Face Recognition
School of Electronics and Computer Science
University of Southampton, SO17 1BJ, UK
('34672932', 'Amirthalingam Ramanan', 'amirthalingam ramanan')
('1697360', 'Mahesan Niranjan', 'mahesan niranjan')
{ar07r,mn}@ecs.soton.ac.uk
1dc241ee162db246882f366644171c11f7aed96dDeep Action- and Context-Aware Sequence Learning for Activity Recognition
and Anticipation
Australian National University, 2Smart Vision Systems, CSIRO, 3CVLab, EPFL
('35441838', 'Fatemehsadat Saleh', 'fatemehsadat saleh')
('1688071', 'Basura Fernando', 'basura fernando')
('2862871', 'Mathieu Salzmann', 'mathieu salzmann')
('2370776', 'Lars Petersson', 'lars petersson')
('34234277', 'Lars Andersson', 'lars andersson')
firstname.lastname@data61.csiro.au, basura.fernando@anu.edu.au, mathieu.salzmann@epfl.ch
1d0128b9f96f4c11c034d41581f23eb4b4dd7780Automatic Construction Of Robust Spherical Harmonic Subspaces
Imperial College London
('2796644', 'Patrick Snape', 'patrick snape')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1780393', 'Yannis Panagakis', 'yannis panagakis')
{p.snape,i.panagakis,s.zafeiriou}@imperial.ac.uk
1d3dd9aba79a53390317ec1e0b7cd742cba43132A Maximum Entropy Feature Descriptor for Age Invariant Face Recognition
1Shenzhen Key Lab of Computer Vision and Pattern Recognition
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
Centre for Quantum Computation and Intelligent Systems, Faculty of Engineering and IT, University of
Technology, Sydney, NSW 2007, Australia
the Chinese University of Hong Kong
4Media Lab, Huawei Technologies Co. Ltd., China
Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences
('2856494', 'Dihong Gong', 'dihong gong')
('7137861', 'Jianzhuang Liu', 'jianzhuang liu')
('1911510', 'Zhifeng Li', 'zhifeng li')
('1692693', 'Dacheng Tao', 'dacheng tao')
('1720243', 'Xuelong Li', 'xuelong li')
dh.gong@siat.ac.cn
zhifeng.li@siat.ac.cn
dacheng.tao@uts.edu.au
liu.jianzhuang@huawei.com
xuelong_li@opt.ac.cn
1d0dd20b9220d5c2e697888e23a8d9163c7c814bNEGREL ET AL.: BOOSTED METRIC LEARNING FOR FACE RETRIEVAL
Boosted Metric Learning for Efficient
Identity-Based Face Retrieval
Frederic Jurie
GREYC, CNRS UMR 6072, ENSICAEN
Université de Caen Basse-Normandie
France
('2838835', 'Romain Negrel', 'romain negrel')
('2504258', 'Alexis Lechervy', 'alexis lechervy')
romain.negrel@unicaen.fr
alexis.lechervy@unicaen.fr
frederic.jurie@unicaen.fr
1d5aad4f7fae6d414ffb212cec1f7ac876de48bfFace Retriever: Pre-filtering the Gallery via Deep Neural Net
Department of Computer Science and Engineering
Michigan State University, East Lansing, MI 48824, U.S.A
('7496032', 'Dayong Wang', 'dayong wang')
('6680444', 'Anil K. Jain', 'anil k. jain')
{dywang, jain}@msu.edu
1db23a0547700ca233aef9cfae2081cd8c5a04d7www.ijecs.in
International Journal Of Engineering And Computer Science ISSN:2319-7242
Volume 4 Issue 5 May 2015, Page No. 11945-11951
Comparative study and evaluation of various data classification
techniques in data mining
1Research scholar
Department of computer science
Raipur institute of technology
Raipur, India
2Asst. professor
Department of computer science
Raipur institute of technology
Raipur, India
('1977125', 'Vivek Verma', 'vivek verma')E-mail: vivekverma.exe@gmail.com
1d776bfe627f1a051099997114ba04678c45f0f5Deployment of Customized Deep Learning based
Video Analytics On Surveillance Cameras
AitoeLabs (www.aitoelabs.com)
('46175439', 'Pratik Dubal', 'pratik dubal')
('22549601', 'Rohan Mahadev', 'rohan mahadev')
('9745898', 'Suraj Kothawade', 'suraj kothawade')
('46208440', 'Kunal Dargan', 'kunal dargan')
1d97735bb0f0434dde552a96e1844b064af08f62Weber Binary Pattern and Weber Ternary Pattern
for Illumination-Robust Face Recognition
Tsinghua University, China
Shenzhen Key Laboratory of Information Science and Technology, Guangdong, China
('35160104', 'Zuodong Yang', 'zuodong yang')
('2312541', 'Yinyan Jiang', 'yinyan jiang')
('40398990', 'Yong Wu', 'yong wu')
('2265693', 'Zongqing Lu', 'zongqing lu')
('1718891', 'Weifeng Li', 'weifeng li')
('2883861', 'Qingmin Liao', 'qingmin liao')
(cid:3) E-mail: yangzd13@mails.tsinghua.edu.cn
y E-mail: Li.Weifeng@sz.tsinghua.edu.cn
1dff919e51c262c22630955972968f38ba385d8aToward an Affect-Sensitive Multimodal
Human–Computer Interaction
Invited Paper
The ability to recognize affective states of a person we are com-
municating with is the core of emotional intelligence. Emotional
intelligence is a facet of human intelligence that has been argued to be
indispensable and perhaps the most important for successful inter-
personal social interaction. This paper argues that next-generation
human–computer interaction (HCI) designs need to include the
essence of emotional intelligence—the ability to recognize a user’s
affective states—in order to become more human-like, more effec-
tive, and more efficient. Affective arousal modulates all nonverbal
communicative cues (facial expressions, body movements, and vocal
and physiological reactions). In a face-to-face interaction, humans
detect and interpret those interactive signals of their communicator
with little or no effort. Yet design and development of an automated
system that accomplishes these tasks is rather difficult. This paper
surveys the past work in solving these problems by a computer
and provides a set of recommendations for developing the first
part of an intelligent multimodal HCI—an automatic personalized
analyzer of a user’s nonverbal affective feedback.
Keywords—Affective computing, affective states, automatic
analysis of nonverbal communicative cues, human–computer
interaction (HCI), multimodal human–computer interaction,
personalized human–computer interaction.
I. INTRODUCTION
The exploration of how we as human beings react to the
world and interact with it and each other remains one of
the greatest scientific challenges. Perceiving, learning, and
adapting to the world around us are commonly labeled as
“intelligent” behavior. But what does it mean being intelli-
gent? Is IQ a good measure of human intelligence and the
best predictor of somebody’s success in life? There is now
growing research in the fields of neuroscience, psychology,
and cognitive science which argues that our common view of
intelligence is too narrow, ignoring a crucial range of abilities
Manuscript received October 25, 2002; revised March 5, 2003. The work
of M. Pantic was supported by the Netherlands Organization for Scientific
Research (NWO) Grant EW-639.021.202.
The authors are with the Delft University of Technology, Data and Knowledge
Systems Group, Mediamatics Department, 2600 AJ Delft, The Netherlands.
Digital Object Identifier 10.1109/JPROC.2003.817122
that matter immensely to how we do in life. This range
of abilities is called emotional intelligence [44], [96] and
includes the ability to have, express, and recognize affective
states, coupled with the ability to regulate them, employ them
for constructive purpose, and skillfully handle the affective
arousal of others. The skills of emotional intelligence have
been argued to be a better predictor than IQ for measuring
aspects of success in life [44], especially in interpersonal
communication, and learning and adapting to what is
important [10], [96].
When it comes to the world of computers, not all of them
will need emotional skills and probably none will need all
of the skills that humans need. Yet there are situations where
the man–machine interaction could be improved by having
machines capable of adapting to their users and where the in-
formation about how, when, and how important it is to adapt
involves information on the user’s affective state. In addition,
it seems that people regard computers as social agents with
whom “face-to-(inter)face” interaction may be most easy and
serviceable [11], [75], [90], [101], [110]. Human–computer
interaction (HCI) systems capable of sensing and responding
appropriately to the user’s affective feedback are, therefore,
likely to be perceived as more natural [73], more efficacious
and persuasive [93], and more trustworthy [14], [78].
These findings, together with recent advances in sensing,
tracking, analyzing, and animating human nonverbal com-
municative signals, have produced a surge of interest in
affective computing by researchers of advanced HCI. This
intriguing new field focuses on computational modeling of
human perception of affective states, synthesis/animation of
affective expressions, and design of affect-sensitive HCI.
Indeed, the first step toward an intelligent HCI having the
abilities to sense and respond appropriately to the user’s af-
fective feedback is to detect and interpret affective states
shown by the user in an automatic way. This paper focuses
further on surveying the past work done on solving these
problems and providing an advanced HCI with one of the
key skills of emotional intelligence: the ability to recognize
the user’s nonverbal affective feedback.
0018-9219/03$17.00 © 2003 IEEE
PROCEEDINGS OF THE IEEE, VOL. 91, NO. 9, SEPTEMBER 2003
lands (e-mail: M.Pantic@cs.tudelft.nl; L.J.M.Rothkrantz@cs.tudelft.nl).
('1694605', 'MAJA PANTIC', 'maja pantic')
1de8f38c35f14a27831130060810cf9471a62b45Int J Comput Vis
DOI 10.1007/s11263-017-0989-7
A Branch-and-Bound Framework for Unsupervised Common
Event Discovery
Received: 3 June 2016 / Accepted: 12 January 2017
© Springer Science+Business Media New York 2017
('39336289', 'Wen-Sheng Chu', 'wen-sheng chu')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
('1874236', 'Daniel S. Messinger', 'daniel s. messinger')
('1707876', 'Fernando De la Torre', 'fernando de la torre')
1da83903c8d476c64c14d6851c85060411830129Iterated Support Vector Machines for Distance
Metric Learning
('1724520', 'Wangmeng Zuo', 'wangmeng zuo')
('6292353', 'Faqiang Wang', 'faqiang wang')
('1698371', 'David Zhang', 'david zhang')
('1737218', 'Liang Lin', 'liang lin')
('2224875', 'Yuchi Huang', 'yuchi huang')
('1803714', 'Deyu Meng', 'deyu meng')
('36685537', 'Lei Zhang', 'lei zhang')
1d6068631a379adbcff5860ca2311b790df3a70f
1dacc2f4890431d867a038fd81c111d639cf4d7e2016, Vol. 125, No. 2, 310 –321
0021-843X/16/$12.00
© 2016 American Psychological Association
http://dx.doi.org/10.1037/abn0000139
Using Social Outcomes to Inform Decision-Making in Schizophrenia:
Relationships With Symptoms and Functioning
Timothy R. Campellone, Aaron J. Fisher, and Ann M. Kring
University of California, Berkeley
The outcomes of the decisions we make can be used to inform subsequent choices and behavior. We
investigated whether and how people with and without schizophrenia use positive and negative social
outcomes and emotional displays to inform decisions to place trust in social partners. We also investi-
gated the impact of reversals in social partners’ behavior on decisions to trust. Thirty-two people with
schizophrenia and 29 control participants completed a task in which they decided how much trust to place
in social partners showing either a dynamic emotional (smiling, scowling) or neutral display. Interac-
tions were predetermined to result in positive (trust reciprocated) or negative (trust abused) outcomes,
and we modeled changes in trust decisions over the course of repeated interactions. Compared to
controls, people with schizophrenia were less sensitive to positive social outcomes in that they placed less
trust in trustworthy social partners during initial interactions. By contrast, people with schizophrenia were
more sensitive to negative social outcomes during initial interactions with untrustworthy social partners,
placing less trust in these partners compared to controls. People with schizophrenia did not differ from
controls in detecting social partner behavior reversals from trustworthy to untrustworthy; however, they
had difficulties detecting reversals from untrustworthy to trustworthy. Importantly, decisions to trust
were associated with real-world social functioning. We discuss the implications of these findings for
understanding social engagement among people with schizophrenia and the development of psychosocial
interventions for social functioning.
General Scientific Summary
People with schizophrenia can have difficulties using decision outcomes to guide subsequent
decision-making and behavior. This study extends previous work by showing that people with
schizophrenia also have difficulties using social interaction outcomes to guide subsequent social
decision-making and behavior. These findings have implications for understanding decreased social
networks common among people with schizophrenia.
Keywords: schizophrenia, decision-making, social interactions, trust
Decision-making is an important part of daily life, with the
outcomes of decisions influencing subsequent choices and deci-
sions. While prior research has shown that people with schizo-
phrenia have difficulty using monetary outcomes to guide subse-
quent decisions (Heerey & Gold, 2007; Barch & Dowd, 2010), we
know considerably less about whether people with schizophrenia
have difficulty using social outcomes to inform decision-making
in the context of social interactions. We investigated the extent to
Timothy R. Campellone, Aaron J. Fisher, and Ann M. Kring, Depart-
ment of Psychology, University of California, Berkeley
Funding was provided by the U.S. National Institutes of Mental
Health (Grant 5T32MH089919 to Timothy R. Campellone and Grant
1R01MH082890 to Ann M. Kring). We are grateful to Janelle Painter,
Erin Moran, and Jasmine Mote for their help in collecting this data. We are
also grateful to Stephen Hinshaw for reading a previous version of this
article. We would also like to thank all the participants in this study.
Correspondence concerning this article should be addressed to Timothy
R. Campellone, 3210 Tolman Hall, University of California, Berkeley
which people with schizophrenia use social outcomes to inform
decision-making, and how this is related to motivation/pleasure
negative symptoms and psychosocial functioning. Because social
interactions often involve emotion, we also examined whether and
how people with schizophrenia use social partners’ emotional
displays to guide learning from social outcomes and inform sub-
sequent decision-making.
Monetary Decision-Making and Reversal Learning
in Schizophrenia
Studies using reward-learning paradigms with monetary out-
comes have consistently shown that compared to controls, people
with schizophrenia have difficulty using positive outcomes to
inform decision-making (Strauss et al., 2011; Gold et al., 2012).
These difficulties are associated with poorer functioning (Somlai,
Moustafa, Kéri, Myers, & Gluck, 2011) as well as greater moti-
vation/pleasure negative symptoms (Strauss et al., 2011; Gold et
al., 2012), which are part of the two-factor solution of negative
symptoms and refer to diminished engagement in and/or pleasure
derived from social, vocational, and recreational life domains
(Kring, Gur, Blanchard, Horan, & Reise, 2013). By contrast,
Berkeley, CA 94720-1690. E-mail: tcampellone@berkeley.edu
1de690714f143a8eb0d6be35d98390257a3f4a47Face Detection Using Spectral Histograms and SVMs
The Florida State University
Tallahassee, FL 32306
('3209925', 'Christopher A. Waring', 'christopher a. waring')
('1800002', 'Xiuwen Liu', 'xiuwen liu')
chwaring@cs.fsu.edu liux@cs.fsu.edu
1d7df3df839a6aa8f5392310d46b2a89080a3c25Large-Margin Softmax Loss for Convolutional Neural Networks
Meng Yang4
School of ECE, Peking University 2School of EIE, South China University of Technology
Carnegie Mellon University 4College of CS and SE, Shenzhen University
('36326884', 'Weiyang Liu', 'weiyang liu')
('2512949', 'Yandong Wen', 'yandong wen')
('1751019', 'Zhiding Yu', 'zhiding yu')
WYLIU@PKU.EDU.CN
WEN.YANDONG@MAIL.SCUT.EDU.CN
YZHIDING@ANDREW.CMU.EDU
YANG.MENG@SZU.EDU.CN
1d6c09019149be2dc84b0c067595f782a5d17316Encoding Video and Label Priors for Multi-label Video Classification
on YouTube-8M dataset
Seoul National University
Seoul National University
Seoul National University
SK Telecom Video Tech. Lab
Seoul National University
('19255603', 'Seil Na', 'seil na')
('7877122', 'Youngjae Yu', 'youngjae yu')
('1693291', 'Sangho Lee', 'sangho lee')
('2077253', 'Jisung Kim', 'jisung kim')
('1743920', 'Gunhee Kim', 'gunhee kim')
seil.na@vision.snu.ac.kr
yj.yu@vision.snu.ac.kr
sangho.lee@vision.snu.ac.kr
joyful.kim@sk.com
gunhee@snu.ac.kr
1d58d83ee4f57351b6f3624ac7e727c944c0eb8dEnhanced Local Texture
Feature Sets for Face
Recognition under Difficult
Lighting Conditions
INRIA & Laboratoire Jean
Kuntzmann,
655 avenue de l'Europe, Montbonnot 38330, France
('2248421', 'Xiaoyang Tan', 'xiaoyang tan')
('1756114', 'Bill Triggs', 'bill triggs')
1d729693a888a460ee855040f62bdde39ae273afPhotorealistic Face de-Identification by Aggregating
Donors’ Face Components
To cite this version: Photorealistic Face de-Identification by Aggregating Donors’ Face Components. Asian Conference on Computer Vision, Nov 2014, Singapore. pp.1-16, 2014.
HAL Id: hal-01070658
https://hal.archives-ouvertes.fr/hal-01070658
Submitted on 2 Oct 2014
HAL is a multi-disciplinary open access
archive for the deposit and dissemination of sci-
entific research documents, whether they are pub-
lished or not. The documents may come from
teaching and research institutions in France or
abroad, or from public or private research centers
('3095534', 'Saleh Mosaddegh', 'saleh mosaddegh')
1d4c25f9f8f08f5a756d6f472778ab54a7e6129dInternational Journal of Science and Research (IJSR)
ISSN (Online): 2319-7064
Index Copernicus Value (2014): 6.14 | Impact Factor (2014): 4.438
An Innovative Mean Approach for Plastic Surgery
Face Recognition
1 Student of M.E., Department of Electronics & Telecommunication Engineering,
P. R. Patil College of Engineering, Amravati Maharashtra India
2 Assistant Professor, Department of Electronics & Telecommunication Engineering,
P. R. Patil College of Engineering, Amravati Maharashtra India
('2936550', 'Umesh W. Hore', 'umesh w. hore')
71b376dbfa43a62d19ae614c87dd0b5f1312c966The Temporal Connection Between Smiles and Blinks ('2048839', 'Laura C. Trutoiu', 'laura c. trutoiu')
('1788773', 'Jessica K. Hodgins', 'jessica k. hodgins')
('1737918', 'Jeffrey F. Cohn', 'jeffrey f. cohn')
71b07c537a9e188b850192131bfe31ef206a39a0Image and Vision Computing 47 (2016) 3–18
Contents lists available at ScienceDirect
Image and Vision Computing
journal homepage: www.elsevier.com/locate/imavis
300 Faces In-The-Wild Challenge: database and results†,††
aImperial College London, London, UK
bUniversity of Nottingham, School of Computer Science, Nottingham, UK
cFaculty of Electrical Engineering, Mathematics, and Computer Science, University of Twente, The Netherlands
ARTICLE INFO
ABSTRACT
Article history:
Received 19 March 2015
Received in revised form 2 October 2015
Accepted 4 January 2016
Available online 25 January 2016
Keywords:
Facial landmark localization
Challenge
Semi-automatic annotation tool
Facial database
Computer Vision has recently witnessed great research advances towards automatic facial point detection.
Numerous methodologies have been proposed during the last few years that achieve accurate and efficient
performance. However, fair comparison between these methodologies is infeasible mainly due to two issues.
(a) Most existing databases, captured under both constrained and unconstrained (in-the-wild) conditions
have been annotated using different mark-ups and, in most cases, the accuracy of the annotations is low. (b)
Most published works report experimental results using different training/testing sets, different error met-
rics and, of course, landmark points with semantically different locations. In this paper, we aim to overcome
the aforementioned problems by (a) proposing a semi-automatic annotation technique that was employed
to re-annotate most existing facial databases under a unified protocol, and (b) presenting the 300 Faces In-
The-Wild Challenge (300-W), the first facial landmark localization challenge that was organized twice, in
2013 and 2015. To the best of our knowledge, this is the first effort towards a unified annotation scheme
of massive databases and a fair experimental comparison of existing facial landmark localization systems.
The images and annotations of the new testing database that was used in the 300-W challenge are available
from http://ibug.doc.ic.ac.uk/resources/300-W_IMAVIS/.
© 2016 Elsevier B.V. All rights reserved.
1. Introduction
During the last decades, there has been a wealth of scientific research in computer vision on the problem of facial landmark point localization using visual deformable models. The main reason behind this is the countless applications that the problem has in human-computer
interaction and facial expression recognition. Numerous methodolo-
gies have been proposed that are shown to achieve great accuracy
and efficiency. They can be roughly divided into two categories:
generative and discriminative. The generative techniques, which aim
to find the parameters that maximize the probability of the test
image being generated by the model, include Active Appearance
Models (AAMs) [1,2], their improved extensions [3–10] and Pictorial
† The contribution of the first two authors on writing this paper is equal, with
various steps needed to run 300-W successfully including data annotation, annotation
tool development, and running the experiments.
†† This paper has been recommended for acceptance by Richard Bowden, PhD.
* Corresponding author.
http://dx.doi.org/10.1016/j.imavis.2016.01.002
0262-8856/© 2016 Elsevier B.V. All rights reserved.
Structures [11–13]. The discriminative techniques can be further
divided into those that use discriminative response map functions,
such as Active Shape Models (ASMs) [14], Constrained Local Models
(CLMs) [15–17] and Deformable Part Models (DPMs) [18], those that
learn a cascade of regression functions, such as Supervised Descent
Method (SDM) [19] and others [20–22], and, finally, those that
employ random forests [23,24].
Arguably, the main reason why many researchers of the field
focus on the problem of face alignment is the plethora of publicly
available annotated facial databases. These databases can be sepa-
rated in two major categories: (a) those captured under controlled
conditions, e.g. Multi-PIE [25], XM2VTS [26], FRGC-V2 [27], and
AR [28], and (b) those captured under totally unconstrained condi-
tions (in-the-wild), e.g. LFPW [29], HELEN [30], AFW[18], AFLW[31],
and IBUG [32]. All of them cover large variations, including different
subjects, poses, illumination conditions, expressions and occlusions.
However, for most of them, the provided annotations appear to have
several limitations. Specifically:
• The majority of them provide annotations for a relatively small
subset of images.
('3320415', 'Christos Sagonas', 'christos sagonas')
('2788012', 'Epameinondas Antonakos', 'epameinondas antonakos')
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1694605', 'Maja Pantic', 'maja pantic')
('3320415', 'Christos Sagonas', 'christos sagonas')
E-mail address: c.sagonas@imperial.ac.uk (C. Sagonas).
71fd29c2ae9cc9e4f959268674b6b563c06d9480End-to-end 3D shape inverse rendering of different classes
of objects from a single input image
1Computer Science and Engineering and Information Technology, Shiraz
university, Shiraz, Iran
November 17, 2017
('34649340', 'Shima Kamyab', 'shima kamyab')
('2014752', 'Zohreh Azimifar', 'zohreh azimifar')
7142ac9e4d5498037aeb0f459f278fd28dae8048Semi-Supervised Learning for Optical Flow
with Generative Adversarial Networks
University of California, Merced
2Virginia Tech
3Nvidia Research
('2268189', 'Wei-Sheng Lai', 'wei-sheng lai')
('3068086', 'Jia-Bin Huang', 'jia-bin huang')
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
1{wlai24|mhyang}@ucmerced.edu
2jbhuang@vt.edu
71f36c8e17a5c080fab31fce1ffea9551fc49e47Predicting Failures of Vision Systems
1Virginia Tech
2Univ. of Texas at Austin
3Univ. of Washington
Carnegie Mellon University
('40409467', 'Peng Zhang', 'peng zhang')
('2537394', 'Jiuling Wang', 'jiuling wang')
1{zhangp, parikh}@vt.edu
2jiuling@utexas.edu
3ali@cs.uw.edu
4hebert@ri.cmu.edu
7117ed0be436c0291bc6fb6ea6db18de74e2464aUnder review as a conference paper at ICLR 2017
WARPED CONVOLUTIONS: EFFICIENT INVARIANCE TO
SPATIAL TRANSFORMATIONS
Visual Geometry Group
University of Oxford
('36478254', 'João F. Henriques', 'joão f. henriques'){joao,vedaldi}@robots.ox.ac.uk
71e6a46b32a8163c9eda69e1badcee6348f1f56aVisually Interpreting Names as Demographic Attributes
by Exploiting Click-Through Data
National Taiwan University, Taipei, Taiwan
FX Palo Alto Laboratory, Inc., California, USA
('35081710', 'Yan-Ying Chen', 'yan-ying chen')
('1692811', 'Yin-Hsi Kuo', 'yin-hsi kuo')
('2580465', 'Chun-Che Wu', 'chun-che wu')
('1716836', 'Winston H. Hsu', 'winston h. hsu')
{yanying,kuonini,kenwu0721}@gmail.com, whsu@ntu.edu.tw
713594c18978b965be87651bb553c28f8501df0aFast Proximal Linearized Alternating Direction Method of Multiplier with
Parallel Splitting
National University of Singapore
Key Laboratory of Machine Perception (MOE), School of EECS, Peking University
Cooperative Medianet Innovation Center, Shanghai Jiaotong University
('33224509', 'Canyi Lu', 'canyi lu')
('1775194', 'Huan Li', 'huan li')
('33383055', 'Zhouchen Lin', 'zhouchen lin')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
canyilu@gmail.com, lihuan ss@126.com, zlin@pku.edu.cn, eleyans@nus.edu.sg
718824256b4461d62d192ab9399cfc477d3660b4Selecting Training Data for Cross-Corpus Speech Emotion Recognition:
Prototypicality vs. Generalization
Institute for Human-Machine Communication, Technische Universit at M unchen, Germany
('30512170', 'Zixing Zhang', 'zixing zhang')
('1740602', 'Felix Weninger', 'felix weninger')
('1705843', 'Gerhard Rigoll', 'gerhard rigoll')
{schuller|zixing.zhang|weninger|rigoll}@tum.de
718d3137adba9e3078fa1f698020b666449f3336(IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 8, No. 10, 2017
Accuracy Based Feature Ranking Metric for
Multi-Label Text Classification
Al-Khwarizmi Institute of Computer Science
University of Engineering and Technology
Department of Computer
Science,
Department of Computer
Science,
Lahore, Pakistan
University of Gujrat, Pakistan
University of Gujrat, Pakistan
('35637737', 'Muhammad Nabeel Asim', 'muhammad nabeel asim')
('3245405', 'Abdur Rehman', 'abdur rehman')
('1981732', 'Umar Shoaib', 'umar shoaib')
714d487571ca0d676bad75c8fa622d6f50df953beBear: An Expressive Bear-Like Robot ('49470290', 'Xiao Zhang', 'xiao zhang')
('2314025', 'Ali Mollahosseini', 'ali mollahosseini')
('29764067', 'Evan Boucher', 'evan boucher')
('1783240', 'Richard M. Voyles', 'richard m. voyles')
716d6c2eb8a0d8089baf2087ce9fcd668cd0d4c0SMITH & DYER: 3D FACIAL LANDMARK ESTIMATION
Pose-Robust 3D Facial Landmark Estimation
from a Single 2D Image
http://www.cs.wisc.edu/~bmsmith
http://www.cs.wisc.edu/~dyer
Department of Computer Sciences
University of Wisconsin-Madison
Madison, WI USA
('2721523', 'Brandon M. Smith', 'brandon m. smith')
('1724754', 'Charles R. Dyer', 'charles r. dyer')
7143518f847b0ec57a0ff80e0304c89d7e924d9aSpeeding-up Age Estimation in Intelligent
Demographics System via Network Optimization
School of Computer and Information, Hefei University of Technology, Hefei, China
School of Computer Science and Engineering, Nanyang Technological University, Singapore
('49941674', 'Zhenzhen Hu', 'zhenzhen hu')
('7739626', 'Peng Sun', 'peng sun')
('40096128', 'Yonggang Wen', 'yonggang wen')
huzhen.ice@gmail.com, {sunp0003, ygwen}@ntu.edu.sg
710011644006c18291ad512456b7580095d628a2Learning Residual Images for Face Attribute Manipulation
Fujitsu Research & Development Center, Beijing, China.
('48157627', 'Wei Shen', 'wei shen')
('2113095', 'Rujie Liu', 'rujie liu')
{shenwei, rjliu}@cn.fujitsu.com
713db3874b77212492d75fb100a345949f3d3235Deep Semantic Face Deblurring
Beijing Institute of Technology
University of California, Merced
3Nvidia
4Google Cloud
https://sites.google.com/site/ziyishenmi/cvpr18_face_deblur
('2182388', 'Ziyi Shen', 'ziyi shen')
('2268189', 'Wei-Sheng Lai', 'wei-sheng lai')
('39001620', 'Tingfa Xu', 'tingfa xu')
('1690538', 'Jan Kautz', 'jan kautz')
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
715b69575dadd7804b4f8ccb419a3ad8b7b7ca891
Testing separability and independence of perceptual
dimensions with general recognition theory: A tutorial and
new R package (grtools)1
Florida International University
University of California, Santa Barbara
Florida International University
University of California, Santa Barbara
('2850756', 'Fabian A. Soto', 'fabian a. soto')
('33897174', 'Johnny Fonseca', 'johnny fonseca')
('5854837', 'F. Gregory Ashby', 'f. gregory ashby')
71e56f2aebeb3c4bb3687b104815e09bb4364102Video Co-segmentation for Meaningful Action Extraction
National University of Singapore, Singapore
National University of Singapore Research Institute, Suzhou, China
('3036190', 'Jiaming Guo', 'jiaming guo')
('3119455', 'Zhuwen Li', 'zhuwen li')
('1809333', 'Steven Zhiying Zhou', 'steven zhiying zhou')
{guo.jiaming, lizhuwen, eleclf, elezzy}@nus.edu.sg
711bb5f63139ee7a9b9aef21533f959671a7d80eHelsinki University of Technology Laboratory of Computational Engineering Publications
Teknillisen korkeakoulun Laskennallisen tekniikan laboratorion julkaisuja
Espoo 2007
REPORT B68
OBJECTS EXTRACTION AND RECOGNITION FOR
CAMERA-BASED INTERACTION: HEURISTIC AND
STATISTICAL APPROACHES
TEKNILLINEN KORKEAKOULU
TEKNISKA HÖGSKOLAN
HELSINKI UNIVERSITY OF TECHNOLOGY
TECHNISCHE UNIVERSITÄT HELSINKI
UNIVERSITE DE TECHNOLOGIE D'HELSINKI
('37522511', 'Hao Wang', 'hao wang')
76fd801981fd69ff1b18319c450cb80c4bc78959Proceedings of the 11th International Conference on Computational Semantics, pages 76–81,
London, UK, April 15-17 2015. c(cid:13)2015 Association for Computational Linguistics
76
76dc11b2f141314343d1601635f721fdeef86fdbWeighted Decoding ECOC for Facial
Action Unit Classification
('1732556', 'Terry Windeatt', 'terry windeatt')
76673de6d81bedd6b6be68953858c5f1aa467e61Discovering a Lexicon of Parts and Attributes
Toyota Technological Institute at Chicago
Chicago, IL 60637, USA
('35208858', 'Subhransu Maji', 'subhransu maji')smaji@ttic.edu
76cd5e43df44e389483f23cb578a9015d1483d70BORGHI ET AL.: FACE VERIFICATION FROM DEPTH
Face Verification from Depth using
Privileged Information
Department of Engineering
"Enzo Ferrari"
University of Modena and Reggio
Emilia
Modena, Italy
('12010968', 'Guido Borghi', 'guido borghi')
('2035969', 'Stefano Pini', 'stefano pini')
('32044032', 'Filippo Grazioli', 'filippo grazioli')
('1723285', 'Roberto Vezzani', 'roberto vezzani')
('1741922', 'Rita Cucchiara', 'rita cucchiara')
guido.borghi@unimore.it
stefano.pini@unimore.it
filippo.grazioli@unimore.it
roberto.vezzani@unimore.it
rita.cucchiara@unimore.it
7643861bb492bf303b25d0306462f8fb7dc29878Speeding up 2D-Warping for Pose-Invariant Face Recognition
Human Language Technology and Pattern Recognition Group, RWTH Aachen University, Germany
('1804963', 'Harald Hanselmann', 'harald hanselmann')
('1685956', 'Hermann Ney', 'hermann ney')
surname@cs.rwth-aachen.de
760a712f570f7a618d9385c0cee7e4d0d6a78ed2
76b11c281ac47fe6d95e124673a408ee9eb568e3International Journal of Latest Engineering and Management Research (IJLEMR)
ISSN: 2455-4847
www.ijlemr.com || Volume 02 - Issue 03 || March 2017 || PP. 59-71
REAL-TIME MULTI VIEW FACE DETECTION AND POSE
ESTIMATION
U. G STUDENTS, DEPT OF CSE, ALPHA COLLEGE OF ENGINEERING, CHENNAI
ALPHA COLLEGE OF ENGINEERING, CHENNAI
76ce3d35d9370f0e2e27cfd29ea0941f1462895fHindawi Publishing Corporation
e Scientific World Journal
Volume 2014, Article ID 528080, 13 pages
http://dx.doi.org/10.1155/2014/528080
Research Article
Efficient Parallel Implementation of Active Appearance
Model Fitting Algorithm on GPU
School of Computer Science and Technology, Tianjin University, Tianjin 300072, China
College of Computer and Information Engineering, Tianjin Normal University, Tianjin 300387, China
Received 25 August 2013; Accepted 19 January 2014; Published 2 March 2014
Academic Editors: I. Lanese and G. Wei
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which
has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming
computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing
units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the
computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs.
Our design idea is fine-grained parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU
threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the
compute unified device architecture (CUDA) on Nvidia’s GTX 650 GPU, which has the latest Kepler architecture. To compare
the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures.
The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very
high-dimensional textures.
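The fine-grained design described in this abstract works because the model-reconstruction residual is computed independently at every texture pixel, followed by a sum-of-squares reduction. A minimal vectorized sketch of that pixel-parallel error computation (toy data and dimensions of my own choosing, not the paper's actual model; NumPy vectorization stands in for one-GPU-thread-per-pixel):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix = 10_000                     # one texture value per pixel (toy size)

# Toy linear appearance model: mean texture plus a few variation modes.
mean_tex = rng.random(n_pix)
modes = 0.01 * rng.standard_normal((4, n_pix))
params = np.array([0.5, -0.2, 0.1, 0.3])

# A "frame" sampled from the model plus small noise.
frame = mean_tex + params @ modes + rng.normal(0.0, 1e-3, n_pix)

# Each pixel's residual depends on no other pixel, so this maps
# naturally onto one GPU thread per pixel; the squared-error sum is
# then a parallel reduction over the per-pixel results.
synthesized = mean_tex + params @ modes
residual = frame - synthesized     # independent per-pixel work
sse = float(residual @ residual)   # reduction step
```

In the CUDA setting each thread would compute one entry of `residual`, and `sse` would be accumulated with a tree reduction in shared memory.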
1. Introduction
Detecting and tracking moving deformable objects in a video
sequence is a complex and difficult task and has been a
very important part of many applications, such as human
computer interaction [1], automated surveillance [2], and
emotion recognition [3]. This task allows us to determine the
state of objects and helps us analyze their behaviors.
The active appearance model (AAM) [4], first proposed
by Cootes et al. [5], is one of the most powerful model-based
object detecting and tracking algorithms. It is a nonlinear,
generative, and parametric model and can be traced back
to the active contour model (or “snakes,” [6]) and the active
shape model (ASM) [7]. Particularly, the AAM decouples and
models the shape and the texture of the deformable object
to generate a variety of photorealistic instances. Therefore,
the AAM has been widely used in various situations [8–10].
The most frequent application of AAMs to date has been face
modeling and tracking [11].
Although the AAM possesses powerful modeling and
efficient fitting ability, the high computational complexity
caused by the high-dimensional texture representation limits
its application in many conditions, for example, real-time
systems. To make the AAM more applicable to practical
applications, additional effort must be spent to accelerate the
computation of the AAM. Therefore, several improvements
are proposed to achieve this aim. Some methods are proposed
to reduce the dimension of the texture, such as the Haar
wavelet [12], the wedgelet-based regression tree [13], and
the local sampling [14]. However, these methods improve
efficiency at the expense of decreasing accuracy or losing
detail information. From another perspective, researchers
[15, 16] suggest reformulating the AAM in an analytic way to
speed up the model fitting. A famous method is the inverse
compositional image alignment (ICIA) [17] algorithm that
avoids updating texture parameters every frame and is a very
fast-fitting algorithm for the AAM. However, the limitation
of this algorithm is that it cannot be applied to the AAMs
('1762397', 'Jinwei Wang', 'jinwei wang')
('2518530', 'Xirong Ma', 'xirong ma')
('34854285', 'Yuanping Zhu', 'yuanping zhu')
('35900806', 'Jizhou Sun', 'jizhou sun')
('1762397', 'Jinwei Wang', 'jinwei wang')
Correspondence should be addressed to Jinwei Wang; wangjinwei@tju.edu.cn
76b9fe32d763e9abd75b427df413706c4170b95c
768c332650a44dee02f3d1d2be1debfa90a3946cBayesian Face Recognition Using Support Vector Machine and Face Clustering
Department of Information Engineering
The Chinese University of Hong Kong
Shatin, Hong Kong
('1911510', 'Zhifeng Li', 'zhifeng li')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
{zli0, xtang}@ie.cuhk.edu.hk
769461ff717d987482b28b32b1e2a6e46570e3ffMIC-TJU in MediaEval 2017 Emotional Impact of Movies Task
Gannan Normal University, Ganzhou 341000, China
Tongji University, Shanghai 201804, China
('40290178', 'Yun Yi', 'yun yi')
('2774427', 'Hanli Wang', 'hanli wang')
('28933059', 'Jiangchuan Wei', 'jiangchuan wei')
76d9f5623d3a478677d3f519c6e061813e58e833FAST ALGORITHMS FOR THE GENERALIZED FOLEY-SAMMON
DISCRIMINANT ANALYSIS
('35789819', 'Lei-Hong Zhang', 'lei-hong zhang')
('14372428', 'Li-Zhi Liao', 'li-zhi liao')
('1678715', 'Michael K. Ng', 'michael k. ng')
76e2d7621019bd45a5851740bd2742afdcf62837Article
Real-Time Detection and Measurement of Eye
Features from Color Images
Technical University of Cluj Napoca, 28 Memorandumului Street
Babes Bolyai University, 58-60 Teodor Mihali, C333, Cluj Napoca
Academic Editors: Changzhi Li, Roberto Gómez-García and José-María Muñoz-Ferreras
Received: 28 April 2016; Accepted: 14 July 2016; Published: 16 July 2016
('31630857', 'Diana Borza', 'diana borza')
('1821352', 'Adrian Sergiu Darabant', 'adrian sergiu darabant')
('3331727', 'Radu Danescu', 'radu danescu')
Cluj Napoca 400114, Romania; borza_diana@yahoo.com
Romania; adrian.darabant@tvarita.ro
* Correspondence: Radu.Danescu@cs.utcluj.ro; Tel.: +40-740-502-223
765b2cb322646c52e20417c3b44b81f89860ff71PoseShop: Human Image Database
Construction and Personalized
Content Synthesis
('29889388', 'Tao Chen', 'tao chen')
('37291674', 'Ping Tan', 'ping tan')
('1678872', 'Li-Qian Ma', 'li-qian ma')
('37535930', 'Ming-Ming Cheng', 'ming-ming cheng')
('2947946', 'Ariel Shamir', 'ariel shamir')
('1686809', 'Shi-Min Hu', 'shi-min hu')
7644d90efef157e61fe4d773d8a3b0bad5feccec
763158cef9d1e4041f24fce4cf9d6a3b7a7f08ffHierarchical Modeling and
Applications to Recognition Tasks
Thesis submitted for the degree of
”Doctor of Philosophy”
by
Submitted to the Senate of the Hebrew University
August / 2013
('39161025', 'Alon Zweig', 'alon zweig')
764882e6779fbee29c3d87e00302befc52d2ea8dDeep Approximately Orthogonal Nonnegative
Matrix Factorization for Clustering
School of Automation
School of Automation
School of Automation
Guangdong University of Technology
Guangdong University of Technology
Guangdong University of Technology
Guangzhou, China
Guangzhou, China
Guangzhou, China
('30185240', 'Yuning Qiu', 'yuning qiu')
('1764724', 'Guoxu Zhou', 'guoxu zhou')
('2454506', 'Kan Xie', 'kan xie')
yn.qiu@foxmail.com
guoxu.zhou@qq.com
kanxiegdut@gmail.com
76d939f73a327bf1087d91daa6a7824681d76ea1A Thermal Facial Emotion Database
and Its Analysis
Japan Advanced Institute of Science and Technology
1-1 Asahidai, Nomi, Ishikawa, Japan
University of Science, Ho Chi Minh city
227 Nguyen Van Cu, Ho Chi Minh city, Vietnam
('2319415', 'Hung Nguyen', 'hung nguyen')
('1791753', 'Kazunori Kotani', 'kazunori kotani')
('1753878', 'Fan Chen', 'fan chen')
{nvhung,ikko,chen-fan}@jaist.ac.jp
lhbac@hcmuns.edu.vn
760ba44792a383acd9ca8bef45765d11c55b48d4
I. INTRODUCTION AND BACKGROUND
The purpose of this article is to introduce the
reader to the basic principles of classification with
class-specific features. It is written both for readers
interested in only the basic concepts as well as those
interested in getting started in applying the method.
For in-depth coverage, the reader is referred to a more
detailed article [1].
Class-Specific Classifier:
Avoiding the Curse of
Dimensionality
PAUL M. BAGGENSTOSS, Member, IEEE
U.S. Naval Undersea Warfare Center
This article describes a new probabilistic method called the
"class-specific method" (CSM). CSM has the potential to avoid
the "curse of dimensionality" which plagues most classifiers
which attempt to determine the decision boundaries in a
high-dimensional feature space. In contrast, in CSM, it is possible
to build classifiers without a common feature space. Separate
low-dimensional feature sets may be defined for each class, while
the decision functions are projected back to the common raw data
space. CSM effectively extends the classical classification theory
to handle multiple feature spaces. It is completely general, and
requires no simplifying assumption such as Gaussianity or that
data lies in linear subspaces.
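As a toy illustration of the idea (not Baggenstoss's estimator; the features, densities and parameter values below are simplifying choices made only for the sketch), each class is given a single scalar feature, its likelihood ratio is evaluated in that class-specific feature space against a common reference hypothesis H0 (i.i.d. standard Gaussian raw data), and the ratios are then directly comparable without any joint high-dimensional feature space:

```python
import math

N = 64
# deterministic toy input, roughly "class 1"-like: mean near 1, small wiggle
x = [1.0 + 0.1 * math.sin(i) for i in range(N)]

def norm_logpdf(z, mu, sd):
    return -0.5 * ((z - mu) / sd) ** 2 - math.log(sd * math.sqrt(2.0 * math.pi))

def chi2_logpdf(z, k):
    return ((0.5 * k - 1.0) * math.log(z) - 0.5 * z
            - 0.5 * k * math.log(2.0) - math.lgamma(0.5 * k))

# Class 1: feature z1 = sample mean (shifted mean mu under H1).
# Under H0 (i.i.d. N(0,1)), z1 ~ N(0, 1/N); under H1, z1 ~ N(mu, 1/N).
def log_lr_class1(x, mu=1.0):
    z = sum(x) / N
    sd = 1.0 / math.sqrt(N)
    return norm_logpdf(z, mu, sd) - norm_logpdf(z, 0.0, sd)

# Class 2: feature z2 = total energy (variance scaled by s2 under H2).
# Under H0, z2 ~ chi2(N); under H2, z2 / s2 ~ chi2(N).
def log_lr_class2(x, s2=4.0):
    z = sum(v * v for v in x)
    return (chi2_logpdf(z / s2, N) - math.log(s2)) - chi2_logpdf(z, N)

# Each ratio lives in its own one-dimensional feature space, yet both are
# commensurable because both are ratios against the same raw-data hypothesis.
scores = [log_lr_class1(x), log_lr_class2(x)]
print("chosen class:", scores.index(max(scores)))
```

Because each decision statistic is one-dimensional, no high-dimensional joint density ever has to be estimated; the projection back to raw-data space is what makes the two ratios comparable.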
Manuscript received September 26, 2002; revised February 12, 2003.
This work was supported by the Office of Naval Research.
Author’s address: U.S. Naval Undersea Warfare Center, Newport
Classification is the process of assigning data
to one of a set of pre-determined class labels [2].
Classification is a fundamental problem that has
to be solved if machines are to approximate the
human functions of recognizing sounds, images, or
other sensory inputs. This is why classification is so
important for automation in today’s commercial and
military arenas.
Many of us have first-hand knowledge of
successful automated recognition systems from
cameras that recognize faces in airports to computers
that can scan and read printed and handwritten text,
or systems that can recognize human speech. These
systems are becoming more and more reliable and
accurate. Given reasonably clean input data, the
performance is often quite good if not perfect. But
many of these systems fail in applications where
clean, uncorrupted data is not available or if the
problem is complicated by variability of conditions
or by proliferation of inputs from unknown sources.
In military environments, the targets to be recognized
are often uncooperative and hidden in clutter and
interference. In short, military uses of such systems
still fall far short of what a well-trained alert human
operator can achieve.
We are often perplexed by the wide gap of
performance between humans and automated systems.
Allow a human listener to hear two or three examples
of a sound, such as a car door slamming. From
these few examples, the human can recognize
the sound again and not confuse it with similar
interfering sounds. But try the same experiment with
general-purpose classifiers using neural networks
and the story is quite different. Depending on the
problem, the automated system may require hundreds,
thousands, even millions of examples for training
before it becomes both robust and reliable.
Why? The answer lies in what is known as the
“curse of dimensionality.” General-purpose classifiers
need to extract a large number of measurements,
or features, from the data to account for all the
different possibilities of data types. The large
collection of features form a high-dimensional space
that the classifier has to sub-divide into decision
boundaries. It is well-known that the complexity of
a high-dimensional space increases exponentially
with the number of measurements [3], and so does
the difficulty of finding the best decision boundaries
from a fixed amount of training data. Unless a lot
IEEE A&E SYSTEMS MAGAZINE VOL. 19, NO. 1 JANUARY 2004 PART 2: TUTORIALS-BAGGENSTOSS
RI 02841. E-mail: p.m.baggenstoss@ieee.org.
766728bac030b169fcbc2fbafe24c6e22a58ef3cA survey of deep facial landmark detection
Yongzhe Yan1,2
Thierry Chateau1
1 Université Clermont Auvergne, France
2 Wisimage, France
3 Université de Lyon, CNRS, INSA Lyon, LIRIS, UMR5205, Lyon, France
Abstract
Landmark detection plays a crucial role in many face
analysis applications such as identity and expression
recognition, avatar animation, and 3D face reconstruction,
as well as in augmented-reality applications such as
virtual masks or virtual makeup. The advent of deep
learning has brought major progress in this field, including
on unconstrained (in-the-wild) corpora. We present here a
state of the art focused on 2D detection in a still image,
and on methods specific to video. We then present the
existing corpora for these tasks, as well as the associated
evaluation metrics. We finally report some results, along
with some research directions.
Keywords
Facial landmark detection, face alignment, deep learning
('3015472', 'Xavier Naturel', 'xavier naturel')
('50493659', 'Christophe Garcia', 'christophe garcia')
('48601809', 'Christophe Blanc', 'christophe blanc')
('1762557', 'Stefan Duffner', 'stefan duffner')
yongzhe.yan@etu.uca.fr
7697295ee6fc817296bed816ac5cae97644c2d5bDetecting and Recognizing Human-Object Interactions
Facebook AI Research (FAIR)
('2082991', 'Georgia Gkioxari', 'georgia gkioxari')
('39353098', 'Kaiming He', 'kaiming he')
7636f94ddce79f3dea375c56fbdaaa0f4d9854aaAppl. Math. Inf. Sci. 6 No. 2S pp. 403S-408S (2012)
An International Journal
© 2012 NSP
Applied Mathematics & Information Sciences
Robust Facial Expression Recognition Using
a Smartphone Working against Illumination Variation
Natural Sciences Publishing Cor.
Sejong University, 98 Kunja-Dong, Kwangjin-Gu, Seoul, Korea
Received June 22, 2010; Revised March 21, 2011; Accepted 11 June 2011
Published online: 1 January 2012
('2413560', 'Kyoung-Sic Cho', 'kyoung-sic cho')
('9270794', 'In-Ho Choi', 'in-ho choi')
('2706430', 'Yong-Guk Kim', 'yong-guk kim')
Corresponding author: Email: ykim@sejong.ac.kr
1c80bc91c74d4984e6422e7b0856cf3cf28df1fbNoname manuscript No.
(will be inserted by the editor)
Hierarchical Adaptive Structural SVM for Domain Adaptation
Received: date / Accepted: date
('2470198', 'Jiaolong Xu', 'jiaolong xu')
1ce3a91214c94ed05f15343490981ec7cc810016Exploring Photobios
University of Washington
2Adobe Systems†
3Google Inc.
('2419955', 'Ira Kemelmacher-Shlizerman', 'ira kemelmacher-shlizerman')
('2177801', 'Eli Shechtman', 'eli shechtman')
('9748713', 'Rahul Garg', 'rahul garg')
('1679223', 'Steven M. Seitz', 'steven m. seitz')
1c9efb6c895917174ac6ccc3bae191152f90c625Unifying Identification and Context Learning for Person Recognition
CUHK-SenseTime Joint Lab, The Chinese University of Hong Kong
('39360892', 'Qingqiu Huang', 'qingqiu huang')
('50446092', 'Yu Xiong', 'yu xiong')
('1807606', 'Dahua Lin', 'dahua lin')
{hq016, xy017, dhlin}@ie.cuhk.edu.hk
1c2724243b27a18a2302f12dea79d9a1d4460e35Fisher+Kernel Criterion for Discriminant Analysis*
National Laboratory on Machine Perception, Peking University, Beijing, P.R. China
the Chinese University of Hong Kong, Shatin, Hong Kong
3 MOE-Microsoft Key Laboratory of Multimedia Computing and Communication & Department of EEIS,
University of Science and Technology of China, Hefei, Anhui, P. R. China
4Microsoft Research Asia, Beijing, P.R. China
('1718245', 'Shu Yang', 'shu yang')
('1698982', 'Shuicheng Yan', 'shuicheng yan')
('38188040', 'Dong Xu', 'dong xu')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
('1720735', 'Chao Zhang', 'chao zhang')
Contact: yangshu@cis.pku.edu.cn
1ca8c09abb73a02519d8db77e4fe107acfc589b6Automatic Understanding of Image and Video Advertisements
University of Pittsburgh
IEEE 2017 Conference on
Computer Vision and Pattern
Recognition
Introduction
Dataset Overview
Answering Questions about Ads
• Advertisements implicitly persuade viewers to take certain actions.
• Understanding ads requires more than recognizing physical content.
Recognized Concepts (Clarifai):
Car, Street, Transportation System, Traffic, Road, City,
Pavement, Crossing, …
Image Caption (Vinyals et al.):
A red car driving down a street next to a traffic light.
True Meaning in Advertisement:
Automobile drivers should be cautious to avoid crashing
into cyclists as they share the road.
• We propose the novel problem of automatic advertisement
understanding, and provide two datasets with rich annotations.
• We analyze the common persuasive strategies: symbolism, atypical
objects, physical processes, cultural knowledge, surprise/shock, etc.
• We present baseline experiment results for several prediction tasks.
Dataset Collection
• 38 topics including commercials and public service announcements
• 30 sentiments indicating how ads emotionally impress viewers
• Questions and answers revealing the messages behind the visual ads
I should stop smoking because my
lungs are extremely sensitive and
could go up in smoke.
I should buy this candy because it
is unique and rises above the rest,
like the Swiss Alps.
• Our dataset contains 64,832 image ads and 3,477 video ads, each
annotated by 3-5 human workers from Amazon Mechanical Turk.
Symbolism Detection
Annotation counts:
Image annotations — Topic: 204,340; Sentiment: 102,340; Q + A Pairs: 202,090; Symbol: 64,131; Strategy: 20,000; Slogan: 11,130
Video annotations — Topic: 17,345; Sentiment: 17,345; Q + A Pairs: 17,345; Fun/Exciting: 17,374; English?: 15,380; Effectiveness: 16,721
('1996796', 'Zaeem Hussain', 'zaeem hussain')
('2365530', 'Mingda Zhang', 'mingda zhang')
('3186356', 'Xiaozhong Zhang', 'xiaozhong zhang')
('9085797', 'Keren Ye', 'keren ye')
('40540691', 'Christopher Thomas', 'christopher thomas')
('6004292', 'Zuha Agha', 'zuha agha')
('34493995', 'Nathan Ong', 'nathan ong')
('1770205', 'Adriana Kovashka', 'adriana kovashka')
1cfe3533759bf95be1fce8ce1d1aa2aeb5bfb4ccRecognition of Facial Gestures based on Support
Vector Machines
Faculty of Informatics, University of Debrecen, Hungary
H-4010 Debrecen P.O.Box 12.
('47547897', 'Attila Fazekas', 'attila fazekas')
Attila.Fazekas@inf.unideb.hu
1ce4587e27e2cf8ba5947d3be7a37b4d1317fbeeDeep fusion of visual signatures
for client-server facial analysis
Normandie Univ, UNICAEN,
ENSICAEN, CNRS, GREYC
Computer Sc. & Engg.
IIT Kanpur, India
Frederic Jurie
Normandie Univ, UNICAEN,
ENSICAEN, CNRS, GREYC
Facial analysis is a key technology for enabling human-
machine interaction.
In this context, we present a client-
server framework, where a client transmits the signature of
a face to be analyzed to the server, and, in return, the server
sends back various information describing the face e.g. is the
person male or female, is she/he bald, does he have a mus-
tache, etc. We assume that a client can compute one (or a
combination) of visual features; from very simple and effi-
cient features, like Local Binary Patterns, to more complex
and computationally heavy, like Fisher Vectors and CNN
based, depending on the computing resources available. The
challenge addressed in this paper is to design a common uni-
versal representation such that a single merged signature is
transmitted to the server, whatever be the type and num-
ber of features computed by the client, ensuring nonetheless
an optimal performance. Our solution is based on learn-
ing of a common optimal subspace for aligning the different
face features and merging them into a universal signature.
We have validated the proposed method on the challenging
CelebA dataset, on which our method outperforms existing
state-of-art methods when rich representation is available at
test time, while giving competitive performance when only
simple signatures (like LBP) are available at test time due
to resource constraints on the client.
1.
INTRODUCTION
We propose a novel method in a heterogeneous server-
client framework for the challenging and important task of
analyzing images of faces. Facial analysis is a key ingredient
for assistive computer vision and human-machine interaction
methods, and systems and incorporating high-performing
methods in daily life devices is a challenging task. The ob-
jective of the present paper is to develop state-of-the-art
technologies for recognizing facial expressions and facial at-
tributes on mobile and low cost devices. Depending on their
computing resources, the clients (i.e. the devices on which
the face image is taken) are capable of computing different
types of face signatures, from the simplest ones (e.g. LBP)
to the most complex ones (e.g. very deep CNN features), and
should be able to eventually combine them into a single rich
signature. Moreover, it is convenient if the face analyzer,
which might require significant computing resources, is im-
plemented on a server receiving face signatures and comput-
ing facial expressions and attributes from these signatures.
Keeping the computation of the signatures on the client is
safer in terms of privacy, as the original images are not trans-
mitted, and keeping the analysis part on the server is also
beneficial for easy model upgrades in the future. To limit
the transmission costs, the signatures have to be made as
compact as possible.
In summary, the technology needed
for this scenario has to be able to merge the different avail-
able features – the number of features available at test time
is not known in advance but is dependent on the computing
resources available on the client – producing a unique rich
and compact signature of the face, which can be transmitted
and analyzed by a server. Ideally, we would like the univer-
sal signature to have the following properties: when all the
features are available, we would like the performance of the
signature to be better than the one of a system specifically
optimized for any single type of feature.
In addition, we
would like to have reasonable performance when only one
type of feature is available at test time.
For developing such a system, we propose a hybrid deep
neural network and give a method to carefully fine-tune the
network parameters while learning with all or a subset of
features available. Thus, the proposed network can process
a wide range of feature types such as hand-crafted LBP and
FV, or even CNN features which are learned end-to-end.
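A minimal sketch of such a universal signature (the dimensions, feature names and random projection matrices below are illustrative assumptions; in the paper the alignment is learned by fine-tuning a hybrid network):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 128  # common-subspace dimensionality (illustrative)
feat_dims = {"lbp": 59, "fv": 512, "cnn": 256}  # illustrative feature sizes

# one alignment matrix per feature type; random stand-ins for learned weights
W = {name: rng.normal(0.0, 0.01, size=(D, d)) for name, d in feat_dims.items()}

def universal_signature(features):
    """Merge whatever subset of features the client computed into one
    fixed-size signature that can be sent to the server."""
    projected = [W[name] @ vec for name, vec in features.items()]
    z = np.mean(projected, axis=0)          # merge by averaging in the subspace
    return z / (np.linalg.norm(z) + 1e-12)  # L2-normalize before transmission

# a low-power client sends only LBP; a rich client sends every feature,
# yet the server receives the same D-dimensional signature either way
sig_poor = universal_signature({"lbp": rng.normal(size=59)})
sig_rich = universal_signature({name: rng.normal(size=d)
                                for name, d in feat_dims.items()})
print(sig_poor.shape, sig_rich.shape)
```

The fixed output size is what keeps the server-side analyzer independent of which features the client could afford to compute.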
While CNNs have been quite successful in computer vi-
sion [1], representing images with CNN features is relatively
time consuming, much more than some simple hand-crafted
features such as LBP. Thus, the use of CNN in real-time ap-
plications is still not feasible. In addition, the use of robust
hand-crafted features such as FV in hybrid architectures can
give performance comparable to Deep CNN features [2]. The
main advantage of learning hybrid architectures is to avoid
having large numbers of convolutional and pooling layers.
Again from [2], we can also observe that hybrid architec-
tures improve the performance of hand-crafted features e.g.
FVs. Therefore, hybrid architectures are useful for the cases
where only hand-crafted features, and not the original im-
ages, are available during training and testing time. This
scenario is useful when it is not possible to share training
images due to copyright or privacy issues.
Hybrid networks are particularly adapted to our client-
('2078892', 'Binod Bhattarai', 'binod bhattarai')
('2515597', 'Gaurav Sharma', 'gaurav sharma')
binod.bhattarai@unicaen.fr
grv@cse.iitk.ac.in
frederic.jurie@unicaen.fr
1c30bb689a40a895bd089e55e0cad746e343d1e2Learning Spatiotemporal Features with 3D Convolutional Networks
Facebook AI Research, 2Dartmouth College
('1687325', 'Du Tran', 'du tran')
('2276554', 'Rob Fergus', 'rob fergus')
('1732879', 'Lorenzo Torresani', 'lorenzo torresani')
('2210374', 'Manohar Paluri', 'manohar paluri')
{dutran,lorenzo}@cs.dartmouth.edu
{lubomir,robfergus,mano}@fb.com
1c4ceae745fe812d8251fda7aad03210448ae25eEURASIP Journal on Applied Signal Processing 2004:4, 522–529
© 2004 Hindawi Publishing Corporation
Optimization of Color Conversion for Face Recognition
Virginia Polytechnic Institute and State University
Blacksburg, VA 24061-0111, USA
Seattle Pacific University, Seattle, WA 98119-1957, USA
Virginia Polytechnic Institute and State University
Blacksburg, VA 24061-0111, USA
Received 5 November 2002; Revised 16 October 2003
This paper concerns the conversion of color images to monochromatic form for the purpose of human face recognition. Many
face recognition systems operate using monochromatic information alone even when color images are available. In such cases,
simple color transformations are commonly used that are not optimal for the face recognition task. We present a framework
for selecting the transformation from face imagery using one of three methods: Karhunen-Loève analysis, linear regression of
color distribution, and a genetic algorithm. Experimental results are presented for both the well-known eigenface method and for
extraction of Gabor-based face features to demonstrate the potential for improved overall system performance. Using a database
of 280 images, our experiments using these methods resulted in performance improvements of approximately 4% to 14%.
Keywords and phrases: face recognition, color image analysis, color conversion, Karhunen-Loève analysis.
1.
INTRODUCTION
Most single-view face recognition systems operate using in-
tensity (monochromatic) information alone. This is true
even for systems that accept color imagery as input. The
reason for this is not
that multispectral data is lack-
ing in information content, but often because of practical
considerations—difficulties associated with illumination and
color balancing, for example, as well as compatibility with
legacy systems. Associated with this is a lack of color image
databases with which to develop and test new algorithms. Al-
though work is in progress that will eventually aid in color-
based tasks (e.g., through color constancy [1]), those efforts
are still in the research stage.
When color information is present, most of today’s face
recognition systems convert the image to monochromatic
form using simple transformations. For example, a common
mapping [2, 3] produces an intensity value Ii by taking the
average of red, green, and blue (RGB) values (Ir, Ig, and Ib,
resp.):
Ii(x, y) = [Ir(x, y) + Ig(x, y) + Ib(x, y)] / 3.   (1)
The resulting image is then used for feature extraction and
analysis.
We argue that more effective system performance is pos-
sible if a color transformation is chosen that better matches
the task at hand. For example, the mapping in (1) implic-
itly assumes a uniform distribution of color values over the
entire color space. For a task such as face recognition, color
values tend to be more tightly confined to a small portion of
the color space, and it is possible to exploit this narrow con-
centration during color conversion. If the transformation is
selected based on the expected color distribution, then it is
reasonable to expect improved recognition accuracies.
This paper presents a task-oriented approach for select-
ing the color-to-grayscale image transformation. Our in-
tended application is face recognition, although the frame-
work that we present is applicable to other problem domains.
We assume that frontal color views of the human face
are available, and we develop a method for selecting alter-
nate weightings of the separate color values in computing a
single monochromatic value. Given the rich color content
of the human face, it is desirable to maximize the use of
this content even when full-color computation and match-
ing is not used. As an illustration of this framework, we
have used the Karhunen-Loève (KL) transformation (also
known as principal components analysis) of observed distri-
butions in the color space to determine the improved map-
ping.
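The contrast between the fixed average of (1) and a KL-derived mapping can be sketched as follows (the synthetic skin-tone distribution is an assumption made only to have data to decompose; real use would pool RGB values from training face images):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic stand-in for face pixels: red-dominant, correlated channels,
# confined to a small region of the RGB color space
base = rng.normal(0.0, 1.0, size=(5000, 1))
pixels = np.clip(
    np.hstack([180 + 25 * base + rng.normal(0, 8, (5000, 1)),   # R
               120 + 20 * base + rng.normal(0, 8, (5000, 1)),   # G
               100 + 15 * base + rng.normal(0, 8, (5000, 1))]), # B
    0, 255)

# Eq. (1): fixed equal weighting of R, G, B
gray_avg = pixels.mean(axis=1)

# KL mapping: weight the channels by the leading eigenvector of the color
# covariance, i.e. the direction of maximum variation of the face colors
eigvals, eigvecs = np.linalg.eigh(np.cov(pixels, rowvar=False))
w = eigvecs[:, -1]   # leading principal direction (sign is arbitrary)
w = w / w.sum()      # convention: weights sum to 1, like (1/3, 1/3, 1/3)
gray_kl = pixels @ w

print(gray_avg.shape, gray_kl.shape, np.round(w, 2))
```

Because the channels here are positively correlated, the KL weighting concentrates on the direction along which face colors actually vary, rather than treating the three channels as equally informative.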
('1719681', 'Creed F. Jones', 'creed f. jones')
('1731164', 'A. Lynn Abbott', 'a. lynn abbott')
Email: crjones4@vt.edu
Email: abbott@vt.edu
1c3073b57000f9b6dbf1c5681c52d17c55d60fd7THÈSEprésentéepourl’obtentiondutitredeDOCTEURDEL’ÉCOLENATIONALEDESPONTSETCHAUSSÉESSpécialité:InformatiqueparCharlotteGHYSAnalyse,Reconstruction3D,&AnimationduVisageAnalysis,3DReconstruction,&AnimationofFacesSoutenancele19mai2010devantlejurycomposéde:Rapporteurs:MajaPANTICDimitrisSAMARASExaminateurs:MichelBARLAUDRenaudKERIVENDirectiondethèse:NikosPARAGIOSBénédicteBASCLE
1cee993dc42626caf5dbc26c0a7790ca6571d01aOptimal Illumination for Image and Video Relighting
Shree K. Nayar
Peter N. Belhumeur
Columbia University
It has been shown in the literature that image-based relighting of
scenes with unknown geometry can be achieved through linear
combinations of a set of pre-acquired reference images. Since the
placement and brightness of the light sources can be controlled, it
is natural to ask: what is the optimal way to illuminate the scene to
reduce the number of reference images that are needed?
In this work we show that the best way to light the scene (i.e., the
way that minimizes the number of reference images) is not using
a sequence of single, compact light sources as is most commonly
done, but rather to use a sequence of lighting patterns as given by an
object-dependent lighting basis. While this lighting basis, which we
call the optimal lighting basis (OLB), depends on camera and scene
properties, we show that it can be determined as a simple calibration
procedure before acquisition, through the SVD decomposition of
the images of the object lighted by single light sources (Fig. 1).
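That calibration step can be sketched as follows (image size, source count and the random data are placeholders; real single-source reference images would replace the random matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
P, L = 1024, 12          # pixels per image, number of single-source references
A = rng.random((P, L))   # column j: the object lit by single light source j

# SVD of the single-source images: the right singular vectors give weightings
# over the L source positions, i.e. the optimal lighting patterns (OLB)
U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 4
olb = Vt[:k]             # k lighting patterns, ordered by captured energy

# the patterns mix positive and negative weights, so offset and scale each
# into [0, 1] before driving physical light sources (cf. Fig. 1, third row)
lo = olb.min(axis=1, keepdims=True)
hi = olb.max(axis=1, keepdims=True)
olb_phys = (olb - lo) / (hi - lo)

# relighting check: k basis images approximate all L reference images
approx = (U[:, :k] * S[:k]) @ Vt[:k]
rel_err = np.linalg.norm(A - approx) / np.linalg.norm(A)
print("relative reconstruction error:", round(float(rel_err), 3))
```

The rank-k truncation is exactly why fewer reference images suffice: the k retained patterns capture the dominant lighting-dependent variation of this particular object.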
Figure 1: Computing the optimal lighting basis using SVD. First row: Images of the
object illuminated by a single light source in different positions. Second row: Lighting
patterns from the optimal lighting basis, containing both positive values, shown in
grey, and negative values, shown in blue. Third row: Offset and scaling of the optimal
lighting basis in order to make all its values positive.
We demonstrate with experiments on real and synthetic data that
the optimal lighting basis significantly reduces the number of refer-
ence images that are needed to achieve a desired level of accuracy
in the relit images. In particular, we show that the scene-dependent
optimal lighting basis (OLB) performs much better than the Fourier
lighting basis (FLB), Haar lighting basis (HaLB) and spherical har-
monic lighting basis (SHLB).
In Fig. 2 we show some reconstructed images of synthetic objects
which have been illuminated by SHLB and OLB. Observe how
when we reconstruct from images illuminated by OLB, the error is
significantly smaller. In Fig. 3 we plot the gains of the optimal light-
ing basis with respect to the other bases, as a function of the number
of basis images used, and for a set of four experiments (relighting
of a sphere, a face, a buddha statue, and a dragon). For any given
number of optimal lighting basis images, the corresponding num-
ber of images of any other lighting basis that are needed to achieve
the same reconstruction error equals the gain value. For instance, in
the ‘buddha’ experiment instead of 6 optimal basis images, we will
need to use 6× 1.8 ≈ 11 SHLB images, 6× 1.5 ≈ 9 FLB images or
6× 2.3 ≈ 14 HaLB images.
Figure 3: Gains of the OLB with respect to all the other lighting bases (for a set of 4
experiments), plotted as a function of the number of basis images used.
This reduction in the number of needed images is particularly criti-
cal in the problem of relighting in video, as corresponding points on
moving objects must be aligned from frame to frame during each
cycle of the lighting basis. We show, however, that the efficiencies
gained by the optimal lighting basis makes relighting in video pos-
sible using only a simple optical flow alignment. Furthermore, in
our experiments we verify that although the optimal lighting basis
is computed for an initial orientation of the object, the reconstruc-
tion error does not increase noticeably as the object changes its pose
along the video sequence.
We have performed several relighting experiments on real video se-
quences of moving objects, moving faces, and scenes containing
both. In each case, although a single video clip was captured, we
are able to relight again and again, controlling the lighting direc-
tion, extent, and color. Fig. 4 shows some frames of one of these
sequences.
Figure 2: Examples of reconstructed images and reconstruction errors, for different
lighting bases. Note that OLB performs much better.
Figure 4: Two frames of a video sequence, illuminated with the optimal lighting
basis (first row), and relighted with a point light source (second row) and with an
environmental light (third row).
('1994318', 'Francesc Moreno-Noguer', 'francesc moreno-noguer')
1c147261f5ab1b8ee0a54021a3168fa191096df8Journal of Information Security, 2016, 7, 141-151
Published Online April 2016 in SciRes. http://www.scirp.org/journal/jis
http://dx.doi.org/10.4236/jis.2016.73010
Face Recognition across Time Lapse Using
Convolutional Neural Networks
George Mason University, Fairfax, VA, USA
Received 12 February 2016; accepted 8 April 2016; published 11 April 2016
Copyright © 2016 by authors and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/
('2710867', 'Hachim El Khiyari', 'hachim el khiyari')
('1781577', 'Harry Wechsler', 'harry wechsler')
1c17450c4d616e1e1eece248c42eba4f87de9e0dYANG, LIN, CHANG, CHEN: AUTOMATIC AGE ESTIMATION VIA DEEP RANKING
Automatic Age Estimation from Face Images
via Deep Ranking
Research Center for Information
Technology Innovation
Academia Sinica
Taipei, Taiwan
Institute of Information Science
Academia Sinica
Taipei, Taiwan
('35436145', 'Huei-Fang Yang', 'huei-fang yang')
('36181124', 'Bo-Yao Lin', 'bo-yao lin')
('34692779', 'Kuang-Yu Chang', 'kuang-yu chang')
('1720473', 'Chu-Song Chen', 'chu-song chen')
hfyang@citi.sinica.edu.tw
boyaolin@iis.sinica.edu.tw
kuangyu@iis.sinica.edu.tw
song@iis.sinica.edu.tw
1c93b48abdd3ef1021599095a1a5ab5e0e020dd5JOURNAL OF LATEX CLASS FILES, VOL. *, NO. *, JANUARY 2009
A Compositional and Dynamic Model for Face Aging
('3133970', 'Song-Chun Zhu', 'song-chun zhu')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
1c41965c5e1f97b1504c1bdde8037b5e0417da5eInteraction-aware Spatio-temporal Pyramid
Attention Networks for Action Classification
University of Chinese Academy of Sciences
2 CAS Center for Excellence in Brain Science and Intelligence Technology, National
Laboratory of Pattern Recognition, Institute of Automation, CAS
3 Meitu, 4 National Computer network Emergency Response technical
Team/Coordination Center of China
('1807325', 'Yang Du', 'yang du')
('2034987', 'Chunfeng Yuan', 'chunfeng yuan')
('46708348', 'Bing Li', 'bing li')
('40027215', 'Lili Zhao', 'lili zhao')
('2082374', 'Yangxi Li', 'yangxi li')
('40506509', 'Weiming Hu', 'weiming hu')
duyang2014@ia.ac.cn,{cfyuan,bli,wmhu}@nlpr.ia.ac.cn,
lili.zhao@meitu.com, liyangxi@outlook.com
1cbd3f96524ca2258fd2d5c504c7ea8da7fb1d16Fusion of audio-visual features using hierarchical classifier systems for
the recognition of affective states and the state of depression
Institute of Neural Information Processing, Ulm University, Ulm, Germany
Keywords:
Emotion Recognition, Multiple Classifier Systems, Affective Computing, Information Fusion
('1860319', 'Michael Glodek', 'michael glodek')
('3243891', 'Sascha Meudt', 'sascha meudt')
('1685857', 'Friedhelm Schwenker', 'friedhelm schwenker')
firstname.lastname@uni-ulm.de
1cad5d682393ffbb00fd26231532d36132582bb4Spatio-Temporal Action Detection with
Cascade Proposal and Location Anticipation
Institute for Robotics and Intelligent
Systems
University of Southern California
Los Angeles, CA, USA
('3469030', 'Zhenheng Yang', 'zhenheng yang')
('3029956', 'Jiyang Gao', 'jiyang gao')
('27735100', 'Ram Nevatia', 'ram nevatia')
('3469030', 'Zhenheng Yang', 'zhenheng yang')
('3029956', 'Jiyang Gao', 'jiyang gao')
('27735100', 'Ram Nevatia', 'ram nevatia')
zhenheny@usc.edu
jiyangga@usc.edu
nevatia@usc.edu
1c1a98df3d0d5e2034ea723994bdc85af45934dbGuided Unsupervised Learning of Mode Specific Models for Facial Point
Detection in the Wild
School of Computer Science, The University of Nottingham
('2736086', 'Shashank Jaiswal', 'shashank jaiswal')
('2449665', 'Timur R. Almaev', 'timur r. almaev')
{psxsj3,psxta4,michel.valstar}@nottingham.ac.uk
1ca815327e62c70f4ee619a836e05183ef629567Global Supervised Descent Method
Carnegie Mellon University, Pittsburgh PA
('3182065', 'Xuehan Xiong', 'xuehan xiong')
('1707876', 'Fernando De la Torre', 'fernando de la torre')
{xxiong,ftorre}@andrew.cmu.edu
1c6be6874e150898d9db984dd546e9e85c85724e
1c65f3b3c70e1ea89114f955624d7adab620a013
1c530de1a94ac70bf9086e39af1712ea8d2d2781Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)
Sparsity Conditional Energy Label
Distribution Learning for Age Estimation
Key Lab of Computer Network and Information Integration (Ministry of Education)
School of Computer Science and Engineering, Southeast University, Nanjing 211189, China
('2442058', 'Xu Yang', 'xu yang')
('1735299', 'Xin Geng', 'xin geng')
('1725992', 'Deyu Zhou', 'deyu zhou')
{x.yang,xgeng,d.zhou}@seu.edu.cn
1c6e22516ceb5c97c3caf07a9bd5df357988ceda
82f8652c2059187b944ce65e87bacb6b765521f6Discriminative Object Categorization with
External Semantic Knowledge
Dissertation Proposal
by
Department of Computer Science
University of Texas at Austin
Committee:
Prof. Kristen Grauman (Advisor)
Prof. Fei Sha
Prof. J. K. Aggarwal
('35788904', 'Sung Ju Hwang', 'sung ju hwang')
('1797655', 'Raymond Mooney', 'raymond mooney')
('2302443', 'Pradeep Ravikumar', 'pradeep ravikumar')
82bef8481207de9970c4dc8b1d0e17dced706352
825f56ff489cdd3bcc41e76426d0070754eab1a8Making Convolutional Networks Recurrent for Visual Sequence Learning
NVIDIA
('40058797', 'Xiaodong Yang', 'xiaodong yang')
{xiaodongy,pmolchanov,jkautz}@nvidia.com
82d2af2ffa106160a183371946e466021876870dA Novel Space-Time Representation on the Positive Semidefinite Cone
for Facial Expression Recognition
1IMT Lille Douai, Univ. Lille, CNRS, UMR 9189 – CRIStAL –
Centre de Recherche en Informatique Signal et Automatique de Lille, F-59000 Lille, France
2Univ. Lille, CNRS, UMR 8524, Laboratoire Paul Painlevé, F-59000 Lille, France.
('37809060', 'Anis Kacem', 'anis kacem')
('2909056', 'Mohamed Daoudi', 'mohamed daoudi')
('2125606', 'Boulbaba Ben Amor', 'boulbaba ben amor')
824d1db06e1c25f7681e46199fd02cb5fc343784Representing Relative Visual Attributes
with a Reference-Point-Based Decision Model
Marc T. Law
University of Toronto
Shanghai Jiao Tong University
University of Michigan-Shanghai Jiao Tong University Joint Institute
('38481975', 'Paul Weng', 'paul weng')
82ccd62f70e669ec770daf11d9611cab0a13047eSparse Variation Pattern for Texture Classification
Electrical Engineering Department
Computer Science and Software Engineering
Electrical Engineering Department
Tafresh University
Tafresh, Iran
The University of Western Australia
Central Tehran Branch, Azad University
WA 6009, Australia
Tehran, Iran
('2014145', 'Mohammad Tavakolian', 'mohammad tavakolian')
('3046235', 'Farshid Hajati', 'farshid hajati')
('1747500', 'Ajmal S. Mian', 'ajmal s. mian')
('2997971', 'Soheila Gheisari', 'soheila gheisari')
{m.tavakolian,hajati}@tafreshu.ac.ir
ajmal.mian@uwa.edu.au
gheisari.s@iauctb.ac.ir
82eff71af91df2ca18aebb7f1153a7aed16ae7ccMSU-AVIS dataset:
Fusing Face and Voice Modalities for Biometric
Recognition in Indoor Surveillance Videos
Michigan State University, USA
Yarmouk University, Jordan
('39617163', 'Anurag Chowdhury', 'anurag chowdhury')
('2447931', 'Yousef Atoum', 'yousef atoum')
('1849929', 'Luan Tran', 'luan tran')
('49543771', 'Xiaoming Liu', 'xiaoming liu')
('1698707', 'Arun Ross', 'arun ross')
82c303cf4852ad18116a2eea31e2291325bc19c3Journal of Image and Graphics, Volume 2, No.1, June, 2014
Fusion Based FastICA Method: Facial Expression
Recognition
Computer Science, Engineering and Mathematics School, Flinders University, Australia
('3105876', 'Humayra B. Ali', 'humayra b. ali')
('1739260', 'David M W Powers', 'david m w powers')
Email: {ali0041, david.powers}@flinders.edu.au
8210fd10ef1de44265632589f8fc28bc439a57e6Single Sample Face Recognition via Learning Deep
Supervised Auto-Encoders
Shenghua Gao, Yuting Zhang, Kui Jia, Jiwen Lu, Yingying Zhang
82a4a35b2bae3e5c51f4d24ea5908c52973bd5beReal-time emotion recognition for gaming using
deep convolutional network features
Sébastien Ouellet
82a610a59c210ff77cfdde7fd10c98067bd142daUC San Diego
UC San Diego Electronic Theses and Dissertations
Title
Human attention and intent analysis using robust visual cues in a Bayesian framework
Permalink
https://escholarship.org/uc/item/1cb8d7vw
Author
McCall, Joel Curtis
Publication Date
2006-01-01
Peer reviewed|Thesis/dissertation
eScholarship.org
Powered by the California Digital Library
University of California
829f390b3f8ad5856e7ba5ae8568f10cee0c7e6aInternational Journal of Computer Applications (0975 – 8887)
Volume 57– No.20, November 2012
A Robust Rotation Invariant Multiview Face Detection in
Erratic Illumination Condition
G. Nirmala Priya
Associate Professor, Department of ECE
Sona College of Technology
('48201570', 'Salem', 'salem')
82f4e8f053d20be64d9318529af9fadd2e3547efTechnical Report:
Multibiometric Cryptosystems
('2743820', 'Abhishek Nagar', 'abhishek nagar')
('34633765', 'Karthik Nandakumar', 'karthik nandakumar')
('40437942', 'Anil K. Jain', 'anil k. jain')
82b43bc9213230af9db17322301cbdf81e2ce8ccAttention-Set based Metric Learning for Video Face Recognition
Center for Research on Intelligent Perception and Computing,
Institute of Automation, Chinese Academy of Sciences
('33079499', 'Yibo Hu', 'yibo hu')
('33680526', 'Xiang Wu', 'xiang wu')
('1705643', 'Ran He', 'ran he')
yibo.hu@cripac.ia.ac.cn, alfredxiangwu@gmail.com, rhe@nlpr.ia.ac.cn
82d781b7b6b7c8c992e0cb13f7ec3989c8eafb3d141
('1689298', 'Ahmed', 'ahmed')
('1689298', 'Ahmed', 'ahmed')
('29977973', 'Angle', 'angle')
('20765969', 'Bolle', 'bolle')
('16848439', 'Bourel', 'bourel')
82417d8ec8ac6406f2d55774a35af2a1b3f4b66eSome faces are more equal than others:
Hierarchical organization for accurate and
efficient large-scale identity-based face retrieval
GREYC, CNRS UMR 6072, Université de Caen Basse-Normandie, France1
Technicolor, Rennes, France2
('48467774', 'Binod Bhattarai', 'binod bhattarai')
('2515597', 'Gaurav Sharma', 'gaurav sharma')
82e66c4832386cafcec16b92ac88088ffd1a1bc9OpenFace: A general-purpose face recognition
library with mobile applications
June 2016
CMU-CS-16-118
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Poznan University of Technology
('1773498', 'Brandon Amos', 'brandon amos')
('1747303', 'Mahadev Satyanarayanan', 'mahadev satyanarayanan')
82eb267b8e86be0b444e841b4b4ed4814b6f1942Single Image 3D Interpreter Network
Massachusetts Institute of Technology
Stanford University
3Facebook AI Research
4Google Research
('3045089', 'Jiajun Wu', 'jiajun wu')
('3222730', 'Tianfan Xue', 'tianfan xue')
('35198686', 'Joseph J. Lim', 'joseph j. lim')
('39402399', 'Yuandong Tian', 'yuandong tian')
('1763295', 'Joshua B. Tenenbaum', 'joshua b. tenenbaum')
('1690178', 'Antonio Torralba', 'antonio torralba')
('1768236', 'William T. Freeman', 'william t. freeman')
826c66bd182b54fea3617192a242de1e4f16d020978-1-5090-4117-6/17/$31.00 ©2017 IEEE
1602
ICASSP 2017
499f1d647d938235e9186d968b7bb2ab20f2726dFace Recognition via Archetype Hull Ranking
The Chinese University of Hong Kong, Hong Kong
IBM T. J. Watson Research Center, Yorktown Heights, NY, USA
('3331521', 'Yuanjun Xiong', 'yuanjun xiong'){yjxiong,xtang}@ie.cuhk.edu.hk
weiliu@us.ibm.com
zhaodeli@gmail.com
4919663c62174a9bc0cc7f60da8f96974b397ad2HUMAN AGE ESTIMATION USING ENHANCED BIO-INSPIRED FEATURES (EBIF)
Faculty of Computers and Information, Cairo University, Cairo, Egypt
('3144122', 'Motaz El-Saban', 'motaz el-saban'){mohamed.y.eldib,motaz.elsaban}@gmail.com
49f70f707c2e030fe16059635df85c7625b5dc7ewww.ietdl.org
Received on 29th May 2014
Revised on 29th August 2014
Accepted on 23rd September 2014
doi: 10.1049/iet-bmt.2014.0033
ISSN 2047-4938
Face recognition under illumination variations based
on eight local directional patterns
Utah State University, Logan, UT 84322-4205, USA
('2147212', 'Mohammad Reza Faraji', 'mohammad reza faraji')
('1725739', 'Xiaojun Qi', 'xiaojun qi')
E-mail: Mohammadreza.Faraji@aggiemail.usu.edu
4967b0acc50995aa4b28e576c404dc85fefb0601 Vol. 4, No. 1, Jan 2013, ISSN 2079-8407
Journal of Emerging Trends in Computing and Information Sciences
©2009-2013 CIS Journal. All rights reserved.
An Automatic Face Detection and Gender Classification from
Color Images using Support Vector Machine
http://www.cisjournal.org
1, 2, 3 Department of Electrical & Electronic Engineering, International
University of Business Agriculture and Technology, Dhaka-1230, Bangladesh
('2832495', 'Md. Hafizur Rahman', 'md. hafizur rahman')
('2226529', 'Suman Chowdhury', 'suman chowdhury')
('36231591', 'Md. Abul Bashar', 'md. abul bashar')
49820ae612b3c0590a8a78a725f4f378cb605cd1Evaluation of Smile Detection Methods with
Images in Real-world Scenarios
Beijing University of Posts and Telecommunications, Beijing, China
('22550265', 'Zhoucong Cui', 'zhoucong cui')
('1678529', 'Shuo Zhang', 'shuo zhang')
('23224233', 'Jiani Hu', 'jiani hu')
('1774956', 'Weihong Deng', 'weihong deng')
4972aadcce369a8c0029e6dc2f288dfd0241e144Multi-target Unsupervised Domain Adaptation
without Exactly Shared Categories
('2076460', 'Huanhuan Yu', 'huanhuan yu')
('27096523', 'Menglei Hu', 'menglei hu')
('1680768', 'Songcan Chen', 'songcan chen')
49dd4b359f8014e85ed7c106e7848049f852a304
49e975a4c60d99bcc42c921d73f8d89ec7130916Human and computer recognition of facial expressions of emotion
J.M. Susskind a, G. Littlewort b, M.S. Bartlett b, J. Movellan b, A.K. Anderson a,c,∗
b Machine Perception Laboratory, Institute of Neural Computation, University of California, San Diego, United States
c Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ont. M6A 2E1, Canada
University of Toronto, Canada
Available online 12 June 2006
49e85869fa2cbb31e2fd761951d0cdfa741d95f3253
Adaptive Manifold Learning
('2923061', 'Zhenyue Zhang', 'zhenyue zhang')
('1697912', 'Jing Wang', 'jing wang')
('1750350', 'Hongyuan Zha', 'hongyuan zha')
49659fb64b1d47fdd569e41a8a6da6aa76612903
490a217a4e9a30563f3a4442a7d04f0ea34442c8International Journal on Soft Computing, Artificial Intelligence and Applications (IJSCAI), Vol.2, No.4, August 2013
An SOM-based Automatic Facial Expression
Recognition System
Hsieh1 and Pa-Chun Wang2
1Department of Computer Science & Information Engineering, National Central
University, Taiwan, R.O.C.
2Cathay General Hospital, Taiwan, R.O.C.
('1720774', 'Mu-Chun Su', 'mu-chun su')
('4226881', 'Chun-Kai Yang', 'chun-kai yang')
('40179526', 'Shih-Chieh Lin', 'shih-chieh lin')
E-mail: muchun@csie.ncu.edu.tw
49a7949fabcdf01bbae1c2eb38946ee99f491857A CONCATENATING FRAMEWORK OF SHORTCUT
CONVOLUTIONAL NEURAL NETWORKS
Yujian Li (liyujian@bjut.edu.cn), Ting Zhang, Zhaoying Liu, Haihe Hu
4934d44aa89b6d871eb6709dd1d1eebf16f3aaf1A Deep Sum-Product Architecture for Robust Facial Attributes Analysis
The Chinese University of Hong Kong
The Chinese University of Hong Kong
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
('1693209', 'Ping Luo', 'ping luo')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
pluo.lhi@gmail.com
xgwang@ee.cuhk.edu.hk
xtang@ie.cuhk.edu.hk
499343a2fd9421dca608d206e25e53be84489f44Anil Kumar.C, et.al, International Journal of Technology and Engineering Science [IJTES]TM
Volume 1[9], pp: 1371-1375, December 2013
Face Recognition with Name Using Local Weber's
Law Descriptor
1C. Anil Kumar, 2A. Rajani, 3I. Suneetha
1M.Tech Student, 2Assistant Professor, 3Associate Professor
Annamacharya Institute of Technology and Sciences, Tirupati, India
on FERET
1Anilyadav.kumar7@gmail.com,2rajanirevanth446@gmail.com,3iralasuneetha.aits@gmail.com
498fd231d7983433dac37f3c97fb1eafcf065268LINEAR DISENTANGLED REPRESENTATION LEARNING FOR FACIAL ACTIONS
1Dept. of Computer Science
2Dept. of Electrical & Computer Engineering
Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
Fig. 1. The separability of the neutral face yn and expression
component ye. We find yn is better for identity recognition
than y and ye is better for expression recognition than y.
('40031188', 'Xiang Xiang', 'xiang xiang')
('1709073', 'Trac D. Tran', 'trac d. tran')
49e1aa3ecda55465641b2c2acc6583b32f3f1fc6International Journal of Emerging Technology and Advanced Engineering
Website: www.ijetae.com (ISSN 2250-2459, Volume 2, Issue 5, May 2012)
Support Vector Machine for age classification
1Assistant Professor, CSE, RSR RCET, Kohka Bhilai
2,3 Sr. Assistant Professor, CSE, SSCET, Junwani Bhilai
('6552360', 'Sangeeta Agrawal', 'sangeeta agrawal')
('40618181', 'Rohit Raja', 'rohit raja')
('40323262', 'Sonu Agrawal', 'sonu agrawal')
1agrawal.sans@gmail.com
2rohitraja4u@gmail.com
3agrawalsonu@gmail.com
499f2b005e960a145619305814a4e9aa6a1bba6aRobust human face recognition based on locality preserving
sparse overcomplete block approximation
University of Geneva
7 Route de Drize, Geneva, Switzerland
('36133844', 'Dimche Kostadinov', 'dimche kostadinov')
('8995309', 'Sviatoslav Voloshynovskiy', 'sviatoslav voloshynovskiy')
('1682792', 'Sohrab Ferdowsi', 'sohrab ferdowsi')
497bf2df484906e5430aa3045cf04a40c9225f94Sensors 2013, 13, 16682-16713; doi:10.3390/s131216682
OPEN ACCESS
sensors
ISSN 1424-8220
www.mdpi.com/journal/sensors
Article
Hierarchical Recognition Scheme for Human Facial Expression
Recognition Systems
UC Lab, Kyung Hee University, Yongin-Si 446-701, Korea
Division of Information and Computer Engineering, Ajou University, Suwon 443-749, Korea
Tel.: +82-31-201-2514.
Received: 28 October 2013; in revised form: 30 November 2013 / Accepted: 2 December 2013 /
Published: 5 December 2013
('1711083', 'Muhammad Hameed Siddiqi', 'muhammad hameed siddiqi')
('1700806', 'Sungyoung Lee', 'sungyoung lee')
('1750915', 'Young-Koo Lee', 'young-koo lee')
('1714762', 'Adil Mehmood Khan', 'adil mehmood khan')
('34601872', 'Phan Tran Ho Truc', 'phan tran ho truc')
E-Mails: siddiqi@oslab.khu.ac.kr (M.H.S.); sylee@oslab.khu.ac.kr (S.L.); yklee@khu.ac.kr (Y.-K.L.)
E-Mail: amtareen@ajou.ac.kr
* Author to whom correspondence should be addressed; E-Mail: pthtruc@oslab.khu.ac.kr;
492f41e800c52614c5519f830e72561db205e86cA Deep Regression Architecture with Two-Stage Re-initialization for
High Performance Facial Landmark Detection
Jiangjing Lv1
Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences
University of Chinese Academy of Sciences
Institute of Automation, Chinese Academy of Sciences
('3492237', 'Xiaohu Shao', 'xiaohu shao')
('1757173', 'Junliang Xing', 'junliang xing')
('2095535', 'Cheng Cheng', 'cheng cheng')
('39959302', 'Xi Zhou', 'xi zhou')
{lvjiangjing,shaoxiaohu,chengcheng,zhouxi}@cigit.ac.cn
jlxing@nlpr.ia.ac.cn
49df381ea2a1e7f4059346311f1f9f45dd9971642018
On the Use of Client-Specific Information for Face
Presentation Attack Detection Based on Anomaly
Detection
('1690611', 'Shervin Rahimzadeh Arashloo', 'shervin rahimzadeh arashloo')
('1748684', 'Josef Kittler', 'josef kittler')
493ec9e567c5587c4cbeb5f08ca47408ca2d6571You et al. Complex Adapt Syst Model (2016) 4:22
DOI 10.1186/s40294‑016‑0034‑7
RESEARCH
Combining graph embedding
and sparse regression with structure low‑rank
representation for semi‑supervised learning
Open Access
*Correspondence:
1 School of IoT Engineering,
Jiangnan University, Wuxi
China
Full list of author information
is available at the end of the
article
('1766488', 'Vasile Palade', 'vasile palade')youcongzhe@gmail.com
49570b41bd9574bd9c600e24b269d945c645b7bdA Framework for Performance Evaluation
of Face Recognition Algorithms
Visual Computing and Communications Lab, Arizona State University
('40401270', 'John A. Black', 'john a. black')
('1743991', 'Sethuraman Panchanathan', 'sethuraman panchanathan')
496074fcbeefd88664b7bd945012ca22615d812eReview
Driver Distraction Using Visual-Based Sensors
and Algorithms
1 Grupo TSK, Technological Scientific Park of Gijón, 33203 Gijón, Asturias, Spain;
University of Oviedo, Campus de Viesques, 33204 Gijón
Academic Editor: Gonzalo Pajares Martinsanz
Received: 14 July 2016; Accepted: 24 October 2016; Published: 28 October 2016
('8306548', 'Rubén Usamentiaga', 'rubén usamentiaga')
('27666409', 'Juan Luis Carús', 'juan luis carús')
juanluis.carus@grupotsk.com
Asturias, Spain; rusamentiaga@uniovi.es (R.U.); rcasado@lsi.uniovi.es (R.C.)
* Correspondence: alberto.fernandez@grupotsk.com; Tel.: +34-984-29-12-12; Fax: +34-984-39-06-12
40205181ed1406a6f101c5e38c5b4b9b583d06bcUsing Context to Recognize People in Consumer Images ('39460815', 'Andrew C. Gallagher', 'andrew c. gallagher')
('1746230', 'Tsuhan Chen', 'tsuhan chen')
40dab43abef32deaf875c2652133ea1e2c089223Noname manuscript No.
(will be inserted by the editor)
Facial Communicative Signals
Valence Recognition in Task-Oriented Human-Robot Interaction
Received: date / Accepted: date
('33734208', 'Christian Lang', 'christian lang')
40b0fced8bc45f548ca7f79922e62478d2043220Do Convnets Learn Correspondence?
University of California Berkeley
('1753210', 'Trevor Darrell', 'trevor darrell')
('34703740', 'Jonathan Long', 'jonathan long')
('40565777', 'Ning Zhang', 'ning zhang')
{jonlong, nzhang, trevor}@cs.berkeley.edu
405b43f4a52f70336ac1db36d5fa654600e9e643What can we learn about CNNs from a large scale controlled object dataset?
UWM
AUT
USC
('3177797', 'Ali Borji', 'ali borji')
('2391309', 'Saeed Izadi', 'saeed izadi')
('7326223', 'Laurent Itti', 'laurent itti')
borji@uwm.edu
sizadi@aut.ac.ir
itti@usc.edu
40b86ce698be51e36884edcc8937998979cd02ecYüz ve İsim İlişkisi kullanarak Haberlerdeki Kişilerin Bulunması
Finding Faces in News Photos Using Both Face and Name Information
Derya Ozkan, Pınar Duygulu
Department of Computer Engineering, Bilkent University, 06800, Ankara
Abstract
In this work, a method is presented for querying persons in large datasets of news photographs. The method is based on associating names with faces. Under the assumption that if a person's name appears in the news caption then that person's face will also appear in the photograph, all faces in photographs associated with the queried name are first selected. Among these faces there may be, besides many images of the query person taken under different conditions, poses and times, faces of other people named in the news, or non-face images caused by errors of the face detector used. Nevertheless, images of the query person are usually the most numerous, and they will resemble each other more than they resemble the others. Therefore, when the similarities between faces are represented as a graph, the most similar faces will form the densest component of this graph. In this work, a graph-based method is presented that finds, among the faces associated with the query name, the subset whose members are most similar to one another.
deryao@cs.bilkent.edu.tr, duygulu@cs.bilkent.edu.tr
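The densest-component idea described above can be sketched with a simple greedy peeling heuristic on a thresholded similarity graph. This is a minimal illustration, not the paper's algorithm: the similarity matrix is synthetic, and the threshold and peeling strategy are assumptions.

```python
import numpy as np

# Greedy peeling: repeatedly drop the node with the fewest similar neighbours,
# keeping the densest subset (edges per node) seen along the way.
def densest_subset(sim, thresh=0.5):
    adj = (sim > thresh).astype(float)
    np.fill_diagonal(adj, 0)
    alive = list(range(len(sim)))
    best, best_density = list(alive), 0.0
    while len(alive) > 1:
        deg = adj[np.ix_(alive, alive)].sum(axis=1)
        density = deg.sum() / (2 * len(alive))   # average edges per node
        if density > best_density:
            best, best_density = list(alive), density
        alive.pop(int(np.argmin(deg)))           # peel the weakest node
    return best

# Synthetic data: 5 mutually similar "query person" faces + 3 unrelated faces.
rng = np.random.default_rng(1)
sim = rng.uniform(0.0, 0.4, size=(8, 8))
sim[:5, :5] = rng.uniform(0.7, 1.0, size=(5, 5))
sim = (sim + sim.T) / 2
print(densest_subset(sim))  # recovers the clique of 5 similar faces
```

The heuristic returns the indices of the tightly connected face cluster, which plays the role of the "densest component" in the abstract's graph formulation.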
40a74eea514b389b480d6fe8b359cb6ad31b644aDiscrete Deep Feature Extraction: A Theory and New Architectures
Aleksandar Stanić1
Helmut Bölcskei1
1Dept. IT & EE, ETH Zurich, Switzerland
University of Vienna, Austria
('2076040', 'Thomas Wiatowski', 'thomas wiatowski')
('2208878', 'Michael Tschannen', 'michael tschannen')
('1690644', 'Philipp Grohs', 'philipp grohs')
403a108dec92363fd1f465340bd54dbfe65af870describing images with statistics of local non-binarized pixel patterns
Local Higher-Order Statistics (LHS)
aGREYC CNRS UMR 6072, Université de Caen Basse-Normandie, France
bMax Planck Institute for Informatics, Germany
('2515597', 'Gaurav Sharma', 'gaurav sharma')
40ee38d7ff2871761663d8634c3a4970ed1dc058Three-Dimensional Face Recognition: A Fishersurface
Approach
The University of York, United Kingdom
('2023950', 'Thomas Heseltine', 'thomas heseltine')
('1737428', 'Nick Pears', 'nick pears')
('2405628', 'Jim Austin', 'jim austin')
402f6db00251a15d1d92507887b17e1c50feebca3D Facial Action Units Recognition for Emotional
Expression
1Department of Information Technology and Communication, Politeknik Kuching, Sarawak, Malaysia
2Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, Kota Samarahan, Sarawak, Malaysia
Muscular activity activates certain AUs for each facial expression over particular intervals of time
throughout the expression. This paper presents methods to recognise facial Action Units (AUs) using distances
between the facial features that activate the muscles. The seven facial action units involved are AU1, AU4, AU6, AU12, AU15,
AU17 and AU25, which characterise the happy and sad expressions. Recognition is performed on each AU according to rules
defined on the distances between facial points. The chosen facial distances are extracted from twelve facial features.
The facial distances are then trained using a Support Vector Machine (SVM) and a Neural Network (NN). Classification results
using the SVM are presented for several different kernels, while results using the NN are presented for the training, validation
and testing phases.
Keywords: Facial action units recognition, 3D AU recognition, facial expression
('2801456', 'Hamimah Ujir', 'hamimah ujir')
('3310557', 'Jacey-Lynn Minoi', 'jacey-lynn minoi')
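The two-stage pipeline in the abstract above (rule-based AU activation from facial distances, then expression classification on distance features) can be sketched as follows. Everything here is illustrative: the threshold, the two synthetic features, and the nearest-centroid classifier, which stands in for the paper's SVM/NN.

```python
import numpy as np

# Stage 1: a rule fires an AU when a normalized facial distance crosses a
# threshold. AU12 (lip-corner puller) as an example; 1.15 is a made-up value.
def au12_active(mouth_width, neutral_width, thresh=1.15):
    """AU12 fires when the mouth widens past a ratio of its neutral width."""
    return mouth_width / neutral_width > thresh

# Stage 2: classify expressions from distance features. Synthetic features:
# [mouth-width ratio, lip-corner-drop ratio].
rng = np.random.default_rng(0)
happy = rng.normal([1.25, 0.90], 0.05, size=(50, 2))
sad = rng.normal([0.95, 1.15], 0.05, size=(50, 2))

centroids = {"happy": happy.mean(axis=0), "sad": sad.mean(axis=0)}

def classify(x):
    # Nearest-centroid stand-in for the SVM/NN classifiers in the paper.
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

print(au12_active(1.3, 1.0), classify(np.array([1.25, 0.9])))
```

A real implementation would replace the synthetic ratios with distances measured between tracked 3D facial landmarks.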
404042a1dcfde338cf24bc2742c57c0fb1f48359中国图象图形学报 vol.8, no.8, pp.849-859, 2003.
脸部特征定位方法综述1
林维训 潘纲 吴朝晖 潘云鹤
(浙江大学计算机系 310027)
摘 要 脸部特征定位是人脸分析技术的一个重要组成部分,其目标是在图像或图像序列中的指定
区域内搜索人脸特征(如眼、鼻、嘴、耳等)的位置。它可广泛应用于人脸检测和定位、人脸识别、
姿态识别、表情识别、头部像压缩及重构、脸部动画等领域。近年来该领域的研究有了较大的发展,
为了让相关领域内的理论研究和开发人员对目前的进展有一个全面的了解,本文将近年来提出的脸
部特征定位方法根据其所依据的基本信息类型分为基于先验知识、几何形状、色彩、外观和关联信
息等五类并分别作了介绍,对各类方法的性能作了一些比较和讨论,对未来的发展作了展望。
关键词 脸部特征定位 脸部特征提取
中图法分类号:TP391.41
A Survey on Facial Features Localization
College of Computer Science, Zhejiang University
4015e8195db6edb0ef8520709ca9cb2c46f29be7UNIVERSITY OF TARTU
FACULTY OF MATHEMATICS AND COMPUTER SCIENCE
Institute of Computer Science
Computer Science Curriculum
Smile Detector Based on the Motion of
Face Reference Points
Bachelor’s Thesis (6 ECTS)
Supervisor: Gholamreza Anbarjafari, PhD
Tartu 2014
('3168586', 'Andres Traumann', 'andres traumann')
407bb798ab153bf6156ba2956f8cf93256b6910aFisher Pruning of Deep Nets for Facial Trait
Classification
McGill University
University Street, Montreal, QC H3A 0E9, Canada
('1992537', 'Qing Tian', 'qing tian')
('1699104', 'Tal Arbel', 'tal arbel')
('1713608', 'James J. Clark', 'james j. clark')
40fb4e8932fb6a8fef0dddfdda57a3e142c3e823A Mixed Generative-Discriminative Framework for Pedestrian Classification
Dariu M. Gavrila2,3
1 Image & Pattern Analysis Group, Dept. of Math. and Comp. Sc., Univ. of Heidelberg, Germany
2 Environment Perception, Group Research, Daimler AG, Ulm, Germany
3 Intelligent Systems Lab, Faculty of Science, Univ. of Amsterdam, The Netherlands
('1765022', 'Markus Enzweiler', 'markus enzweiler'){uni-heidelberg.enzweiler,dariu.gavrila}@daimler.com
40dd2b9aace337467c6e1e269d0cb813442313d7This thesis has been submitted in fulfilment of the requirements for a postgraduate degree
e.g. PhD, MPhil, DClinPsychol) at the University of Edinburgh. Please note the following
terms and conditions of use:
This work is protected by copyright and other intellectual property rights, which are
retained by the thesis author, unless otherwise stated.
A copy can be downloaded for personal non-commercial research or study, without
prior permission or charge.
This thesis cannot be reproduced or quoted extensively from without first obtaining
permission in writing from the author.
The content must not be changed in any way or sold commercially in any format or
medium without the formal permission of the author.
When referring to this work, full bibliographic details including the author, title,
awarding institution and date of the thesis must be given.
407de9da58871cae7a6ded2f3a6162b9dc371f38TraMNet - Transition Matrix Network for
Efficient Action Tube Proposals
Oxford Brookes University, UK
('1931660', 'Gurkirt Singh', 'gurkirt singh')
('49348905', 'Suman Saha', 'suman saha')
('1754181', 'Fabio Cuzzolin', 'fabio cuzzolin')
gurkirt.singh-2015@brookes.ac.uk
405526dfc79de98f5bf3c97bf4aa9a287700f15dMegaFace: A Million Faces for Recognition at Scale
D. Miller
E. Brossard
S. Seitz
Dept. of Computer Science and Engineering
University of Washington
I. Kemelmacher-Shlizerman
Figure 1: We evaluate how recognition performs with increasing numbers of faces in the database: (a) shows rank-1 iden-
tification rates, and (b) rank-10. Recognition rates drop as the number of distractors increases. We also present the first
large-scale human recognition results (up to 10K distractors). Interestingly, Google’s deep learning based FaceNet is more
robust at scale than humans. See http://megaface.cs.washington.edu to participate in the challenge.
40cd062438c280c76110e7a3a0b2cf5ef675052c
40b7e590dfd1cdfa1e0276e9ca592e02c1bd2b5bBeyond Trade-off: Accelerate FCN-based Face Detector with Higher Accuracy
Beihang University, 2The Chinese University of Hong Kong, 3Sensetime Group Limited
('12920342', 'Guanglu Song', 'guanglu song')
('1715752', 'Yu Liu', 'yu liu')
('40452812', 'Ming Jiang', 'ming jiang')
('33598672', 'Yujie Wang', 'yujie wang')
('1721677', 'Junjie Yan', 'junjie yan')
('2858789', 'Biao Leng', 'biao leng')
{guanglusong,jiangming1406,yujiewang,lengbiao}@buaa.edu.cn,
yuliu@ee.cuhk.edu.hk, yanjunjie@sensetime.com
40a5b32e261dc5ccc1b5df5d5338b7d3fe10370dFeedback-Controlled Sequential Lasso Screening
Department of Electrical Engineering
Princeton University
('1719525', 'Yun Wang', 'yun wang')
('1734498', 'Xu Chen', 'xu chen')
('1693135', 'Peter J. Ramadge', 'peter j. ramadge')
40a1935753cf91f29ffe25f6c9dde2dc49bf2a3a80
40a9f3d73c622cceee5e3d6ca8faa56ed6ebef60AUTOMATIC LIP TRACKING AND ACTION UNITS CLASSIFICATION USING
TWO-STEP ACTIVE CONTOURS AND PROBABILISTIC NEURAL NETWORKS
Faculty of Electrical and
Computer Engineering
University of Tabriz, Tabriz, Iran
WonSook LEE
School of Information Technology
and Engineering (SITE)
Faculty of Engineering,
University of Ottawa, Canada
Faculty of Electrical and
Computer Engineering
University of Tabriz, Tabriz, Iran
('3210269', 'Hadi Seyedarabi', 'hadi seyedarabi')
('2488201', 'Ali Aghagolzadeh', 'ali aghagolzadeh')
email: hadis@discover.uottawa.ca
email: wslee@uottawa.ca
email: aghagol@tabrizu.ac.ir
40a34d4eea5e32dfbcef420ffe2ce7c1ee0f23cdBridging Heterogeneous Domains With Parallel Transport For Vision and
Multimedia Applications
Dept. of Video and Multimedia Technologies Research
AT&T Labs-Research
San Francisco, CA 94108
('33692583', 'Raghuraman Gopalan', 'raghuraman gopalan')
40389b941a6901c190fb74e95dc170166fd7639dAutomatic Facial Expression Recognition
Emotient
http://emotient.com
February 12, 2014
Imago animi vultus est, indices oculi. (Cicero)
Introduction
The face is innervated by two different brain systems that compete for control of its muscles:
a cortical system related to voluntary, controllable behavior, and a sub-cortical
system responsible for involuntary expressions. The interplay between these two systems
generates a wealth of information that humans constantly use to read the emotions,
intentions, and interests [25] of others.
Given the critical role that facial expressions play in our daily life, technologies that can
interpret and respond to facial expressions automatically are likely to find a wide range of
applications. For example, in pharmacology, the effect of new antidepressant drugs could
be assessed more accurately from daily records of patients' facial expressions than by
asking the patients to fill out a questionnaire, as is currently done [7]. Facial expression
recognition may enable a new generation of teaching systems that adapt to the expressions
of their students the way good teachers do [61]. Expression recognition could be used
to assess the fatigue of drivers and pilots [58, 59]. Daily-life robots with automatic
expression recognition will be able to assess the states and intentions of humans and respond
accordingly [41]. Smartphones with expression analysis may help people prepare for
important meetings and job interviews.
Thanks to the introduction of machine learning methods, recent years have seen great
progress in the field of automatic facial expression recognition. Commercial real-time
expression recognition systems are starting to be used in consumer applications, e.g., smile
detectors embedded in digital cameras [62]. Nonetheless, considerable progress has yet to be
made: methods for face detection and tracking (the first step of automated face analysis)
work well for frontal views of adult Caucasian and Asian faces [50], but their performance
('1775637', 'Jacob Whitehill', 'jacob whitehill')
('40648952', 'Marian Stewart', 'marian stewart')
('1741200', 'Javier R. Movellan', 'javier r. movellan')
40e1743332523b2ab5614bae5e10f7a7799161f4Wing Loss for Robust Facial Landmark Localisation with Convolutional Neural
Networks
Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford GU2 7XH, UK
School of IoT Engineering, Jiangnan University, Wuxi 214122, China
('2976854', 'Zhen-Hua Feng', 'zhen-hua feng')
('1748684', 'Josef Kittler', 'josef kittler')
{z.feng, j.kittler, m.a.rana}@surrey.ac.uk, patrikhuber@gmail.com, wu xiaojun@jiangnan.edu.cn
40c8cffd5aac68f59324733416b6b2959cb668fdPooling Facial Segments to Face: The Shallow and Deep Ends
Department of Electrical and Computer Engineering and the Center for Automation Research,
UMIACS, University of Maryland, College Park, MD
('3152615', 'Upal Mahbub', 'upal mahbub')
('40599829', 'Sayantan Sarkar', 'sayantan sarkar')
('9215658', 'Rama Chellappa', 'rama chellappa')
{umahbub, ssarkar2, rama}@umiacs.umd.edu
40273657e6919455373455bd9a5355bb46a7d614Anonymizing k-Facial Attributes via Adversarial Perturbations
1 IIIT Delhi, New Delhi, India
2 Ministry of Electronics and Information Technology, New Delhi, India
('24380882', 'Saheb Chhabra', 'saheb chhabra')
('39129417', 'Richa Singh', 'richa singh')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
('50046315', 'Gaurav Gupta', 'gaurav gupta')
{sahebc, rsingh, mayank@iiitd.ac.in}, gauravg@gov.in
40b10e330a5511a6a45f42c8b86da222504c717fImplementing the Viola-Jones
Face Detection Algorithm
Kongens Lyngby 2008
IMM-M.Sc.-2008-93
('24007383', 'Ole Helvig Jensen', 'ole helvig jensen')
40bb090a4e303f11168dce33ed992f51afe02ff7Marginal Loss for Deep Face Recognition
Imperial College London
Imperial College London
Imperial College London
UK
UK
UK
('3234063', 'Jiankang Deng', 'jiankang deng')
('2321938', 'Yuxiang Zhou', 'yuxiang zhou')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
j.deng16@imperial.ac.uk
yuxiang.zhou10@imperial.ac.uk
s.zafeiriou@imperial.ac.uk
40ca925befa1f7e039f0cd40d57dbef6007b4416Sampling Matters in Deep Embedding Learning
UT Austin
A9/Amazon
Amazon
Philipp Krähenbühl
UT Austin
('2978413', 'Chao-Yuan Wu', 'chao-yuan wu')
('1758550', 'R. Manmatha', 'r. manmatha')
('1691629', 'Alexander J. Smola', 'alexander j. smola')
cywu@cs.utexas.edu
manmatha@a9.com
smola@amazon.com
philkr@cs.utexas.edu
4042bbb4e74e0934f4afbedbe92dd3e37336b2f4
4026dc62475d2ff2876557fc2b0445be898cd380An Affective User Interface Based on Facial Expression
Recognition and Eye-Gaze Tracking
School of Computer Engineering, Sejong University, Seoul, Korea
('7236280', 'Soo-Mi Choi', 'soo-mi choi')
('2706430', 'Yong-Guk Kim', 'yong-guk kim')
{smchoi,ykim}@sejong.ac.kr
40f127fa4459a69a9a21884ee93d286e99b54c5fOptimizing Apparent Display Resolution
Enhancement for Arbitrary Videos
('2267017', 'Michael Stengel', 'michael stengel')
('1701306', 'Martin Eisemann', 'martin eisemann')
('34751565', 'Stephan Wenger', 'stephan wenger')
('2765149', 'Benjamin Hell', 'benjamin hell')
401e6b9ada571603b67377b336786801f5b54eeeActive Image Clustering: Seeking Constraints from
Humans to Complement Algorithms
November 22, 2011
406431d2286a50205a71f04e0b311ba858fc7b6c3D FACIAL EXPRESSION CLASSIFICATION USING
A STATISTICAL MODEL OF SURFACE NORMALS
AND A MODULAR APPROACH
A thesis submitted to
University of Birmingham
for the degree of
DOCTOR OF PHILOSOPHY
School of Electronic, Electrical & Computer Engineering
University of Birmingham
August 2012
('2801456', 'Hamimah Ujir', 'hamimah ujir')
40217a8c60e0a7d1735d4f631171aa6ed146e719Part-Pair Representation for Part Localization
Columbia University
('2454675', 'Jiongxin Liu', 'jiongxin liu')
('3173493', 'Yinxiao Li', 'yinxiao li')
('1767767', 'Peter N. Belhumeur', 'peter n. belhumeur')
{liujx09, yli, belhumeur}@cs.columbia.edu
2e20ed644e7d6e04dd7ab70084f1bf28f93f75e9
2e8e6b835e5a8f55f3b0bdd7a1ff765a0b7e1b87International Journal of Computer Vision manuscript No.
(will be inserted by the editor)
Pointly-Supervised Action Localization
Received: date / Accepted: date
('2606260', 'Pascal Mettes', 'pascal mettes')
2eb37a3f362cffdcf5882a94a20a1212dfed25d94
Local Feature Based Face Recognition
R.I.T., Rajaramnagar and S.G.G.S. COE &T, Nanded
India
1. Introduction
A reliable automatic face recognition (AFR) system is a pressing need because, in today's
networked world, maintaining the security of private information or physical property is
becoming increasingly important and increasingly difficult. Criminals often exploit a
fundamental flaw of conventional access control systems: systems based on credit cards,
ATM cards, etc. grant access not by "who we are" but by "what we have". Biometric access
control systems have the potential to overcome most of these deficiencies and have been
gaining importance in recent years. Such systems can be built on biometric traits such as
fingerprint, face, iris, signature and hand geometry. A comparison of these traits shows
that the face is a very attractive biometric because of its non-intrusiveness and social
acceptability. It provides automated methods of verifying or recognizing the identity of a
living person based on facial characteristics.
In the last decade, major advances have occurred in face recognition, with many systems
achieving recognition rates greater than 90%. However, real-world scenarios remain a
challenge, because the face acquisition process can undergo a wide range of variations. AFR
can thus be viewed as a very complex object recognition problem, where the object to be
recognized is the face. The problem becomes even more difficult because the search is done
among objects belonging to the same class, and very few images of each class are available to
train the system. Moreover, further problems arise when images are acquired under
uncontrolled conditions, such as illumination variations, pose changes, occlusion, appearance
changes with age, expression changes and face deformations. Numerous approaches have been
proposed to deal with these problems, but the reported results still cannot satisfy the need
for a reliable AFR system in the presence of all facial image variations. A recent survey
(Abate et al., 2007) states that the sensitivity of AFR systems to illumination and pose
variations remains the main problem researchers face.
2. Face recognition methods
The existing face recognition methods can be divided into two categories: holistic matching
methods and local matching methods. Holistic matching methods use the complete face region
as input to the face recognition system and construct a lower-dimensional subspace
using principal component analysis (PCA) (Turk & Pentland, 1991), linear discriminant
www.intechopen.com
('2321206', 'Sanjay A. Pardeshi', 'sanjay a. pardeshi')
('3092481', 'Sanjay N. Talbar', 'sanjay n. talbar')
2e0addeffba4be98a6ad0460453fbab52616b139Face View Synthesis
Using A Single Image
Thesis Proposal
May 2006
Committee Members
Henry Schneiderman (Chair)
Alexei (Alyosha) Efros
Robotics Institute
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213
© Carnegie Mellon University
('2989714', 'Jiang Ni', 'jiang ni')
('1709305', 'Martial Hebert', 'martial hebert')
('38998440', 'David Kriegman', 'david kriegman')
2e5cfa97f3ecc10ae8f54c1862433285281e6a7c
2e091b311ac48c18aaedbb5117e94213f1dbb529Collaborative Facial Landmark Localization
for Transferring Annotations Across Datasets
University of Wisconsin Madison
http://www.cs.wisc.edu/~lizhang/projects/collab-face-landmarks/
('1893050', 'Brandon M. Smith', 'brandon m. smith')
('40396555', 'Li Zhang', 'li zhang')
2e1415a814ae9abace5550e4893e13bd988c7ba1International Journal of Engineering Trends and Technology (IJETT) – Volume 21 Number 3 – March 2015
Dictionary Based Face Recognition in Video Using
Fuzzy Clustering and Fusion
#1IInd year M.E. Student, #2Assistant Professor
Dhanalakshmi Srinivasan College of Engineering
Coimbatore,Tamilnadu,India.
Anna University
2e0e056ed5927a4dc6e5c633715beb762628aeb0
2e8a0cc071017845ee6f67bd0633b8167a47abedSpatio-Temporal Covariance Descriptors for Action and Gesture Recognition
NICTA, PO Box 6020, St Lucia, QLD 4067, Australia ∗
University of Queensland, School of ITEE, QLD 4072, Australia
('2706642', 'Andres Sanin', 'andres sanin')
('1781182', 'Conrad Sanderson', 'conrad sanderson')
('2270092', 'Brian C. Lovell', 'brian c. lovell')
2e68190ebda2db8fb690e378fa213319ca915cf8Generating Videos with Scene Dynamics
MIT
UMBC
MIT
('1856025', 'Carl Vondrick', 'carl vondrick')
('2367683', 'Hamed Pirsiavash', 'hamed pirsiavash')
('1690178', 'Antonio Torralba', 'antonio torralba')
vondrick@mit.edu
hpirsiav@umbc.edu
torralba@mit.edu
2e0d56794379c436b2d1be63e71a215dd67eb2caImproving precision and recall of face recognition in SIPP with combination of
modified mean search and LSH
Xihua.Li
lixihua9@126.com
2ee8900bbde5d3c81b7ed4725710ed46cc7e91cd
2e475f1d496456831599ce86d8bbbdada8ee57edGroupsourcing: Team Competition Designs for
Crowdsourcing
L3S Research Center, Hannover, Germany
('2993225', 'Markus Rokicki', 'markus rokicki')
('2553718', 'Sergej Zerr', 'sergej zerr')
('1745880', 'Stefan Siersdorfer', 'stefan siersdorfer')
{rokicki,siersdorfer,zerr}@L3S.de
2ef51b57c4a3743ac33e47e0dc6a40b0afcdd522Leveraging Billions of Faces to Overcome
Performance Barriers in Unconstrained Face
Recognition
face.com
('2188620', 'Yaniv Taigman', 'yaniv taigman')
('1776343', 'Lior Wolf', 'lior wolf')
{yaniv, wolf}@face.com
2e231f1e7e641dd3619bec59e14d02e91360ac01FUSION NETWORK FOR FACE-BASED AGE ESTIMATION
The University of Warwick, Coventry, UK
School of Management, University of Bath, Bath, UK
School of Computing and Mathematics, Charles Sturt University, Wagga Wagga, Australia
('1750506', 'Haoyi Wang', 'haoyi wang')
('40655450', 'Xingjie Wei', 'xingjie wei')
('1901920', 'Victor Sanchez', 'victor sanchez')
('1799504', 'Chang-Tsun Li', 'chang-tsun li')
{h.wang.16, vsanchez, C-T.Li}@warwick.ac.uk, x.wei@bath.ac.uk
2e6cfeba49d327de21ae3186532e56cadeb57c02Real Time Eye Gaze Tracking with 3D Deformable Eye-Face Model
Rensselaer Polytechnic Institute
110 8th Street, Troy, NY, USA
('1771700', 'Kang Wang', 'kang wang')
('1726583', 'Qiang Ji', 'qiang ji')
{wangk10, jiq}@rpi.edu
2ee817981e02c4709d65870c140665ed25b005ccSparse Representations and Random Projections for
Robust and Cancelable Biometrics
(Invited Paper)
Center for Automation Research
University of Maryland
College Park, MD 20742 USA
DAP - University of Sassari
piazza Duomo, 6
Alghero 07041 Italy
robust and secure physiological biometrics recognition such
as face and iris [6], [7], [9], [1]. In this paper, we categorize
approaches to biometrics based on sparse representations.
('1741177', 'Vishal M. Patel', 'vishal m. patel')
('9215658', 'Rama Chellappa', 'rama chellappa')
('1725688', 'Massimo Tistarelli', 'massimo tistarelli')
{pvishalm,rama}@umiacs.umd.edu
tista@uniss.it
2e98329fdec27d4b3b9b894687e7d1352d828b1dUsing Affect Awareness to Modulate Task Experience:
A Study Amongst Pre-Elementary School Kids
Carnegie Mellon University
5000 Forbes Avenue,
Pittsburgh, PA 15213
('29120285', 'Vivek Pai', 'vivek pai')
('1760345', 'Raja Sooriamurthi', 'raja sooriamurthi')
2e19371a2d797ab9929b99c80d80f01a1fbf9479
2ed4973984b254be5cba3129371506275fe8a8eb
THE EFFECTS OF MOOD ON
EMOTION RECOGNITION AND
ITS RELATIONSHIP WITH THE
GLOBAL VS LOCAL
INFORMATION PROCESSING
STYLES
BASIC RESEARCH PROGRAM
WORKING PAPERS
SERIES: PSYCHOLOGY
WP BRP 60/PSY/2016
This Working Paper is an output of a research project implemented at the National Research
University Higher School of Economics (HSE). Any opinions or claims contained in this
Working Paper do not necessarily reflect the views of HSE
('15615673', 'Victoria Ovsyannikova', 'victoria ovsyannikova')
2e9c780ee8145f29bd1a000585dd99b14d1f5894Simultaneous Adversarial Training - Learn from
Others’ Mistakes
Lite-On Singapore Pte. Ltd, 2Imperial College London
('9949538', 'Zukang Liao', 'zukang liao')
2ebc35d196cd975e1ccbc8e98694f20d7f52faf3This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication.
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Towards Wide-angle Micro Vision Sensors
('2724462', 'Sanjeev J. Koppal', 'sanjeev j. koppal')
('2407724', 'Ioannis Gkioulekas', 'ioannis gkioulekas')
('2140759', 'Kenneth B. Crozier', 'kenneth b. crozier')
2e3d081c8f0e10f138314c4d2c11064a981c1327
2e86402b354516d0a8392f75430156d629ca6281
2ea78e128bec30fb1a623c55ad5d55bb99190bd2Residual vs. Inception vs. Classical Networks for
Low-Resolution Face Recognition
Vision and Fusion Lab, Karlsruhe Institute of Technology KIT, Karlsruhe, Germany
2Fraunhofer IOSB, Karlsruhe, Germany
{christian.herrmann,dieter.willersinn,
('37646107', 'Christian Herrmann', 'christian herrmann')
('1783486', 'Dieter Willersinn', 'dieter willersinn')
juergen.beyerer}@iosb.fraunhofer.de
2e8eb9dc07deb5142a99bc861e0b6295574d1fbdAnalysis by Synthesis: 3D Object Recognition by Object Reconstruction
University of California, Irvine
University of California, Irvine
('1888731', 'Mohsen Hejrati', 'mohsen hejrati')
('1770537', 'Deva Ramanan', 'deva ramanan')
shejrati@ics.uci.edu
dramanan@ics.uci.edu
2e0f5e72ad893b049f971bc99b67ebf254e194f7Apparel Classification with Style
1ETH Z¨urich, Switzerland 2Microsoft, Austria 3Kooaba AG, Switzerland
4KU Leuven, Belgium
('1696393', 'Lukas Bossard', 'lukas bossard')
('1727791', 'Matthias Dantone', 'matthias dantone')
('1695579', 'Christian Leistner', 'christian leistner')
('1793359', 'Christian Wengert', 'christian wengert')
('1726249', 'Till Quack', 'till quack')
('1681236', 'Luc Van Gool', 'luc van gool')
2e3c893ac11e1a566971f64ae30ac4a1f36f5bb5Simultaneous Object Detection and Ranking with
Weak Supervision
Department of Engineering Science
University of Oxford
United Kingdom
('1758219', 'Matthew B. Blaschko', 'matthew b. blaschko')
('1687524', 'Andrea Vedaldi', 'andrea vedaldi')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
2ed3ce5cf9e262bcc48a6bd998e7fb70cf8a971cPreprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 26 January 2017 doi:10.20944/preprints201701.0120.v1
Peer-reviewed version available at Sensors 2017, 17, 275; doi:10.3390/s17020275
Article
Active AU Based Patch Weighting for Facial
Expression Recognition
School of Computer Science and Software Engineering, Shenzhen University, Nanhai Ave 3688, Shenzhen
Guangdong 518060, China
('34181727', 'Weicheng Xie', 'weicheng xie')
('1687690', 'LinLin Shen', 'linlin shen')
('5828998', 'Meng Yang', 'meng yang')
('5383601', 'Zhihui Lai', 'zhihui lai')
* Correspondence: llshen@szu.edu.cn; Tel.: +86-755-8693-5089
2edc6df161f6aadbef9c12408bdb367e72c3c967Improved Spatiotemporal Local Monogenic Binary Pattern
for Emotion Recognition in The Wild
Center for Machine Vision
Research
Department of Computer
Science and Engineering
University of Oulu, Finland
Center for Machine Vision
Research
Department of Computer
Science and Engineering
University of Oulu, Finland
Center for Machine Vision
Research
Department of Computer
Science and Engineering
University of Oulu, Finland
Center for Machine Vision
Research
Department of Computer
Science and Engineering
University of Oulu, Finland
Matti Pietikäinen
Center for Machine Vision
Research
Department of Computer
Science and Engineering
University of Oulu, Finland
('18780812', 'Xiaohua Huang', 'xiaohua huang')
('2512942', 'Qiuhai He', 'qiuhai he')
('1836646', 'Xiaopeng Hong', 'xiaopeng hong')
('1757287', 'Guoying Zhao', 'guoying zhao')
huang.xiaohua@ee.oulu.fi
qiuhai.he@ee.oulu.fi
xhong@ee.oulu.fi
gyzhao@ee.oulu.fi
mkp@ee.oulu.fi
2ec7d6a04c8c72cc194d7eab7456f73dfa501c8cInternational Journal of Scientific Research and Management Studies (IJSRMS)
ISSN: 2349-3771
Volume 3 Issue 4, pg: 164-169
A REVIEW ON TEXTURE BASED EMOTION RECOGNITION
FROM FACIAL EXPRESSION
1U.G. Scholars, 2Assistant Professor,
Dept. of E & C Engg., MIT Moradabad, Ram Ganga Vihar, Phase II, Moradabad, India.
('5255436', 'Shubham Kashyap', 'shubham kashyap')
('2036732', 'Pankaj Pandey', 'pankaj pandey')
('36216996', 'Prashant Kumar', 'prashant kumar')
2eb9f1dbea71bdc57821dedbb587ff04f3a25f07Face for Ambient Interface
Imperial College, 180 Queens Gate
London SW7 2AZ, U.K.
('1694605', 'Maja Pantic', 'maja pantic')m.pantic@imperial.ac.uk
2e1fd8d57425b727fd850d7710d38194fa6e2654Learning Structured Appearance Models
from Captioned Images of Cluttered Scenes ∗
University of Toronto
Bielefeld University
('37894231', 'Michael Jamieson', 'michael jamieson')
('1724954', 'Sven Wachsmuth', 'sven wachsmuth')
{jamieson, afsaneh, sven, suzanne}@cs.toronto.edu
swachsmu@techfak.uni-bielefeld.de
2e1b1969ded4d63b69a5ec854350c0f74dc4de36
2e832d5657bf9e5678fd45b118fc74db07dac9daRunning head: RECOGNITION OF FACIAL EXPRESSIONS OF EMOTION 

Recognition of Facial Expressions of Emotion: The Effects of Anxiety, Depression, and Fear of Negative 
Evaluation 
Rachel Merchak 
Wittenberg University
Rachel Merchak, Wittenberg University
Author Note 
This research was conducted in collaboration with Dr. Stephanie Little, Psychology Department, 
Wittenberg University, and Dr. Michael Anes, Wittenberg University
Correspondence concerning this article should be addressed to Rachel Merchak, 10063 Fox 
Chase Drive, Loveland, OH 45140.  
E‐mail: merchakr@wittenberg.edu 
2be0ab87dc8f4005c37c523f712dd033c0685827RELAXED LOCAL TERNARY PATTERN FOR FACE RECOGNITION
BeingThere Centre
Institute of Media Innovation
Nanyang Technological University
50 Nanyang Drive, Singapore 637553.
School of Electrical & Electronics Engineering
Nanyang Technological University
50 Nanyang Avenue, Singapore 639798
('1690809', 'Jianfeng Ren', 'jianfeng ren')
('3307580', 'Xudong Jiang', 'xudong jiang')
('34316743', 'Junsong Yuan', 'junsong yuan')
2bb2ba7c96d40e269fc6a2d5384c739ff9fa16ebImage-based recommendations on styles and substitutes
Julian McAuley
UC San Diego
University of Adelaide
Qinfeng (‘Javen’) Shi
University of Adelaide
('2110208', 'Christopher Targett', 'christopher targett')jmcauley@ucsd.edu
christopher.targett@student.adelaide.edu.au
javen.shi@adelaide.edu.au
2b339ece73e3787f445c5b92078e8f82c9b1c522Human Re-identification in Crowd Videos Using
Personal, Social and Environmental Constraints
University of Central Florida, Orlando, USA
Center for Research in Computer Vision,
('2963501', 'Shayan Modiri Assari', 'shayan modiri assari')
('1803711', 'Haroon Idrees', 'haroon idrees')
('1745480', 'Mubarak Shah', 'mubarak shah')
{smodiri,haroon,shah}@cs.ucf.edu
2b4d092d70efc13790d0c737c916b89952d4d8c7JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 32, XXXX-XXXX (2016)
Robust Facial Expression Recognition using Local Haar
Mean Binary Pattern
1,2 Department of Computer Engineering
Charotar University of Science and Technology, Changa, India
Gujarat Technological University, V.V.Nagar, India
In this paper, we propose a hybrid statistical feature extractor, Local Haar Mean Binary
Pattern (LHMBP). It extracts level-1 Haar approximation coefficients and computes the Local
Mean Binary Pattern (LMBP) of them. The LMBP code of a pixel is obtained by weighting the
thresholded neighbor values of its 3 × 3 patch against the patch mean. LHMBP produces a highly
discriminative code compared with other state-of-the-art methods. To localize appearance
features, the approximation subband is divided into M × N regions. The LHMBP feature
descriptor is derived by concatenating the LMBP distribution of each region. We also propose
a novel template matching strategy called Histogram Normalized Absolute Difference (HNAD)
for histogram-based feature comparison. Experiments demonstrate the superiority of HNAD over
well-known template matching techniques such as the L2 norm and Chi-Square. We also
investigate LHMBP for expression recognition at low resolution. The performance of the
proposed approach is tested on the well-known CK, JAFFE, and SFEW facial expression datasets
in diverse situations.
Keywords: affective computing, appearance based feature, local binary pattern, Gabor filter,
support vector machine.
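The LMBP coding and HNAD matching described in the abstract can be sketched in a few lines. Note that the clockwise neighbor ordering, the power-of-two weighting, and the exact HNAD normalization below are illustrative assumptions, not definitions taken from the paper:

```python
import numpy as np

def lmbp_code(patch):
    """LMBP code of a 3x3 patch's center pixel: threshold the 8 neighbors
    against the patch mean, then weight the bits with powers of two.
    The clockwise-from-top-left ordering is one common LBP convention."""
    mean = patch.mean()
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, v in enumerate(neighbors) if v >= mean)

def lmbp_histogram(region, bins=256):
    """Histogram of LMBP codes over one region of the approximation subband."""
    h, w = region.shape
    codes = [lmbp_code(region[r - 1:r + 2, c - 1:c + 2])
             for r in range(1, h - 1) for c in range(1, w - 1)]
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist

def hnad(h1, h2):
    """Histogram Normalized Absolute Difference: here taken as the L1
    distance between unit-sum-normalized histograms (an assumption about
    the paper's definition)."""
    h1 = h1 / max(h1.sum(), 1)
    h2 = h2 / max(h2.sum(), 1)
    return np.abs(h1 - h2).sum()
```

The full descriptor would concatenate `lmbp_histogram` outputs over the M × N regions of the level-1 Haar approximation subband and compare descriptors with `hnad`.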
1. INTRODUCTION
Facial Expression Recognition (FER) is a classical problem in pattern recognition
and machine learning. It plays a vital role in social communication and in conveying
emotions [1]. In its early stages, the scope of FER was confined to psychological
studies, but it now covers a broad range of applications, including human-computer
interfaces (HCI), industrial automation, surveillance systems, and sentiment
identification. Precise recognition of facial expressions can become a driving force
for future automation interfaces such as car driving, robotics, and driver alert
systems.
According to the input, expression recognition systems can be classified as static or
dynamic. In static approaches, features are computed from a single still image, whereas
in dynamic approaches the temporal relationships between features over an image sequence
are extracted. Temporal relationships play a major role in expression recognition from
image sequences. In the last decade, many video-based methods have been studied [2].
Research has also focused on detecting micro-expressions [2], [3], [4], [5], recognizing
spontaneous expressions [6], analyzing multi-view or profile views [7], and fusing
geometric and appearance features [8], [9], [10]. Nowadays, deep
1249
('9318822', 'MAHESH GOYANI', 'mahesh goyani')
('11384332', 'NARENDRA PATEL', 'narendra patel')
E-mail: mgoyani@gmail.com, nmpatel@bvmengineerring.ac.in
2bb53e66aa9417b6560e588b6235e7b8ebbc294cSEMANTIC EMBEDDING SPACE FOR ZERO-SHOT ACTION RECOGNITION
School of EECS, Queen Mary University of London, London, UK
('47158489', 'Xun Xu', 'xun xu')
('2073354', 'Shaogang Gong', 'shaogang gong')
2b0ff4b82bac85c4f980c40b3dc4fde05d3cc23fAn Effective Approach for Facial Expression Recognition with Local Binary
Pattern and Support Vector Machine
('20656805', 'Thi Nhan', 'thi nhan')
('9872793', 'Il Choi', 'il choi')
*1School of Media, Soongsil University, ctnhen@yahoo.com
2School of Media, Soongsil University, an_tth@yahoo.com
3School of Media, Soongsil University, hic@ssu.ac.kr
2b3ceb40dced78a824cf67054959e250aeaa573b
2be8e06bc3a4662d0e4f5bcfea45631b8beca4d0Watch and Learn: Semi-Supervised Learning of Object Detectors From Videos
Robotics Institute, Carnegie Mellon University
The availability of large labeled image datasets [1, 2] has been one of the
key factors for advances in recognition. These datasets have not only helped
boost performance, but have also fostered the development of new tech-
niques. However, compared to images, videos seem like a more natural
source of training data because of the additional temporal continuity they
offer for both learning and labeling. The available video datasets lack the
richness and variety of annotations offered by benchmark image datasets.
It also seems unlikely that human per-image labeling will scale to the web-
scale video data without using temporal constraints. In this paper, we show
how to exploit the temporal information provided by videos to enable semi-
supervised learning.
We present a scalable framework that discovers and localizes multiple ob-
jects in video using semi-supervised learning (see Figure 1). It tackles this
challenging problem in long video (a million frames in our experiments)
starting from only a few labeled examples.
In addition, we present our
algorithm in a realistic setting of sparse labels [3], i.e., in the few initial
“labeled” frames, not all objects are annotated. This setting relaxes the as-
sumption that in a given frame all object instances have been exhaustively
annotated. It also implies that we do not know if any unannotated region
in the frame is an instance of the object category or the background, and
thus cannot use any region from our input as negative data. While much of
the past work has ignored this type of sparse labeling and lack of explicit
negatives, we show ways to overcome this handicap.
Contributions: Our semi-supervised learning (SSL) framework localizes
multiple unknown objects in videos. Starting from sparsely labeled objects,
it iteratively labels new training examples in the videos. Our key contribu-
tions are: 1) We tackle the SSL problem for discovering multiple objects in
sparsely labeled videos; 2) We present an approach to constrain SSL [6] by
combining multiple weak cues in videos and exploiting decorrelated errors
by modeling data in multiple feature spaces. We demonstrate its effective-
ness as compared to traditional tracking-by-detection approaches. 3) Given
the redundancy in video data, we need a method that can automatically de-
termine the relevance of training examples to the target detection task. We
present a way to include relevance and diversity of the training examples in
each iteration of the SSL, leading to scalable incremental learning.
Our algorithm starts with a few sparsely annotated video frames (L) and
iteratively discovers new instances in the large unlabeled set of videos (U ).
Simply put, we first train detectors on annotated objects, followed by de-
tection on input videos. We determine good detections (removing confident
false positives) which serve as starting points for short-term tracking. The
short-term tracking aims to label unseen examples reliably. Amongst these
newly labeled examples, we identify diverse examples which are used to
update the detector without re-training from scratch. We iteratively repeat
this process to label new examples. We now describe our algorithm.
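The iterate-label-retrain pattern underlying this framework can be illustrated with a deliberately minimal self-training loop. The nearest-centroid classifier and the distance-ratio confidence test below are stand-ins for the paper's detectors, tracking, and decorrelated-error checks, not its actual components:

```python
import numpy as np

def self_training(X_l, y_l, X_u, rounds=3, conf_thresh=0.8):
    """Toy semi-supervised loop: fit a model on the labeled set, label
    unlabeled examples it is confident about, add them to the labeled
    set, and repeat. A nearest-centroid classifier stands in for the
    object detector."""
    X_l, y_l, X_u = list(X_l), list(y_l), list(X_u)
    for _ in range(rounds):
        classes = sorted(set(y_l))
        centroids = {c: np.mean([x for x, y in zip(X_l, y_l) if y == c], axis=0)
                     for c in classes}
        newly, keep = [], []
        for x in X_u:
            d = {c: np.linalg.norm(np.asarray(x) - centroids[c]) for c in classes}
            order = sorted(d, key=d.get)
            best, runner_up = order[0], order[1]
            # Confidence proxy: best class is much closer than the runner-up.
            if d[best] < conf_thresh * d[runner_up]:
                newly.append((x, best))
            else:
                keep.append(x)
        if not newly:  # nothing confidently labeled; stop early
            break
        X_l += [x for x, _ in newly]
        y_l += [c for _, c in newly]
        X_u = keep
    return y_l, X_u
```

In the paper's setting the "confidently label" step is far richer (temporal consistency, short-term tracking, multiple decorrelated feature spaces, diversity-based example selection), but the control flow is the same.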
Sparse Annotations (lack of explicit negatives): We start with a few sparsely
annotated frames in a random subset of U . Sparse labeling implies that un-
like standard tracking-by-detection approaches, we cannot sample negatives
from the vicinity of labeled positives. We use random images from the in-
ternet as negative data for training object detectors on these sparse labels.
We use these detectors to detect objects on a subset of the video, e.g., every
30 frames. Training on a few positives without domain negatives results in
high confidence false positives. Removing such false positives is important
because if we track them, we will add many more bad training examples,
thus degrading the detector’s performance over iterations.
Temporally consistent detections: We first remove detections that are tem-
porally inconsistent using a smoothness prior on the motion of detections.
Decorrelated errors: To remove high confidence false positives, we rely
on the principle of decorrelated errors (similar to multi-view SSL [5]). The
intuition is that the detector makes mistakes that are related to its feature
('1806773', 'Ishan Misra', 'ishan misra')
('1781242', 'Abhinav Shrivastava', 'abhinav shrivastava')
('1709305', 'Martial Hebert', 'martial hebert')
2bcec23ac1486f4106a3aa588b6589e9299aba70An Uncertain Future: Forecasting from Static
Images using Variational Autoencoders
The Robotics Institute, Carnegie Mellon University
('14192361', 'Jacob Walker', 'jacob walker')
('2786693', 'Carl Doersch', 'carl doersch')
('1737809', 'Abhinav Gupta', 'abhinav gupta')
('1709305', 'Martial Hebert', 'martial hebert')
2b773fe8f0246536c9c40671dfa307e98bf365adHindawi Publishing Corporation
Computational and Mathematical Methods in Medicine
Volume 2013, Article ID 106867, 14 pages
http://dx.doi.org/10.1155/2013/106867
Research Article
Fast Discriminative Stochastic Neighbor Embedding Analysis
School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
Received 9 February 2013; Accepted 22 March 2013
Academic Editor: Carlo Cattani
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Feature extraction is important for many applications in biomedical signal analysis and
living system analysis. A fast discriminative stochastic neighbor embedding analysis (FDSNE)
method for feature extraction is proposed in this paper by improving the existing DSNE
method. The proposed algorithm adopts an alternative probability distribution model
constructed from each sample's K-nearest neighbors among the interclass and intraclass
samples. Furthermore, FDSNE is extended to nonlinear scenarios using the kernel trick,
yielding the kernel-based methods KFDSNE1 and KFDSNE2. FDSNE, KFDSNE1, and KFDSNE2 are
evaluated in three respects: visualization, recognition, and elapsed time. Experimental
results on several datasets show that, compared with DSNE and MSNP, the proposed algorithm
not only significantly enhances computational efficiency but also obtains higher
classification accuracy.
1. Introduction
In recent years, dimensionality reduction, which can mitigate the curse of dimensionality
[1] and remove irrelevant attributes in high-dimensional spaces, has played an increasingly
important role in many areas. It facilitates the classification, visualization, and
compression of high-dimensional data. In machine learning, dimensionality reduction maps
samples from a high-dimensional space to a low-dimensional space. The purposes of studying
it are several: first, to reduce storage requirements; second, to remove the influence of
noise; third, to make the data distribution easier to understand; and, last but not least,
to achieve good results in classification or clustering.
Many dimensionality reduction methods have been proposed, and they can be classified from
different perspectives. Based on the nature of the input data, they fall broadly into two
classes: linear subspace methods, which seek a linear subspace as the feature space that
preserves certain characteristics of the observed data, and nonlinear approaches such as
kernel-based and geometry-based techniques. From the perspective of class labels, they
divide into supervised and unsupervised learning; the former aims to maximize the
recognition rate between classes, while the latter aims to minimize information loss. In
addition, depending on whether a method uses local or global information about the samples,
it can be classified as a local or a global method.
We briefly introduce several existing dimensionality reduction techniques. Among the main
linear techniques, principal component analysis (PCA) [2] maximizes the variance of the
samples in the low-dimensional representation via a linear mapping matrix; it is global and
unsupervised. Unlike PCA, linear discriminant analysis (LDA) [3] learns a linear projection
with the assistance of class labels: it computes the linear transformation that maximizes
the interclass variance relative to the intraclass variance. Building on LDA, marginal
Fisher analysis (MFA) [4], local Fisher discriminant analysis (LFDA) [5], and max-min
distance analysis (MMDA) [6] have been proposed; all three are linear supervised
dimensionality reduction methods. MFA uses an intrinsic graph to characterize intraclass
compactness and a penalty graph to characterize interclass separability. LFDA introduces
locality into Fisher discriminant analysis and is particularly useful for classes
consisting of several separate clusters. MMDA maximizes the minimum pairwise distance
between classes.
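As a concrete reference point for the linear methods above, the global, unsupervised baseline (PCA) can be sketched via an eigendecomposition of the sample covariance. This is a generic textbook implementation, not code from the paper:

```python
import numpy as np

def pca(X, k):
    """Project centered samples onto the k directions of maximal variance.

    X : (n_samples, n_features) data matrix
    k : target dimensionality
    Returns the projected data (n_samples, k) and the projection matrix W.
    """
    Xc = X - X.mean(axis=0)                   # center the data
    cov = Xc.T @ Xc / (len(X) - 1)            # sample covariance
    vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]                  # top-k principal directions
    return Xc @ W, W
```

LDA differs precisely in that it would use the class labels to build between-class and within-class scatter matrices and solve a generalized eigenproblem instead of diagonalizing a single covariance.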
To deal with nonlinearly structured data, which is common in biomedical applications
[7–10], a number of nonlinear approaches have been developed for dimensionality reduction.
Among these, kernel-based techniques and geometry-based techniques are two active topics.
Kernel-based techniques
('1807755', 'Jianwei Zheng', 'jianwei zheng')
('1767635', 'Hong Qiu', 'hong qiu')
('2587047', 'Xinli Xu', 'xinli xu')
('7634945', 'Wanliang Wang', 'wanliang wang')
('1802128', 'Qiongfang Huang', 'qiongfang huang')
('1807755', 'Jianwei Zheng', 'jianwei zheng')
Correspondence should be addressed to Jianwei Zheng; zjw@zjut.edu.cn
2bab44d3a4c5ca79fb8f87abfef4456d326a0445Player Identification in Soccer Videos
Dipartimento di Sistemi e Informatica, University of Florence
Via S. Marta, 3 - 50139 Florence, Italy
('1801509', 'Marco Bertini', 'marco bertini')
('8196487', 'Alberto Del Bimbo', 'alberto del bimbo')
('2308851', 'Walter Nunziati', 'walter nunziati')
bertini@dsi.unifi.it, delbimbo@dsi.unifi.it, nunziati@dsi.unifi.it
2b0102d77d3d3f9bc55420d862075934f5c85becSlicing Convolutional Neural Network for Crowd Video Understanding
The Chinese University of Hong Kong
The Chinese University of Hong Kong
('2205438', 'Jing Shao', 'jing shao')jshao@ee.cuhk.edu.hk, ccloy@ie.cuhk.edu.hk, kkang@ee.cuhk.edu.hk, xgwang@ee.cuhk.edu.hk
2b435ee691718d0b55d057d9be4c3dbb8a81526eDREUW ET AL.: SURF-FACE RECOGNITION
SURF-Face: Face Recognition Under
Viewpoint Consistency Constraints
Human Language Technology and
Pattern Recognition
RWTH Aachen University
Aachen, Germany
('1967060', 'Philippe Dreuw', 'philippe dreuw')
('2044128', 'Pascal Steingrube', 'pascal steingrube')
('1804963', 'Harald Hanselmann', 'harald hanselmann')
('1685956', 'Hermann Ney', 'hermann ney')
dreuw@cs.rwth-aachen.de
steingrube@cs.rwth-aachen.de
hanselmann@cs.rwth-aachen.de
ney@cs.rwth-aachen.de
2b1327a51412646fcf96aa16329f6f74b42aba89Under review as a conference paper at ICLR 2016
IMPROVING PERFORMANCE OF RECURRENT NEURAL
NETWORK WITH RELU NONLINEARITY
Qualcomm Research
San Diego, CA 92121, USA
('2390504', 'Sachin S. Talathi', 'sachin s. talathi'){stalathi,avartak}@qti.qualcomm.com
2b5cb5466eecb131f06a8100dcaf0c7a0e30d391A Comparative Study of Active Appearance Model
Annotation Schemes for the Face
Face Aging Group
UNCW, USA
Face Aging Group
UNCW, USA
Face Aging Group
UNCW, USA
('2401418', 'Amrutha Sethuram', 'amrutha sethuram')
('1710348', 'Karl Ricanek', 'karl ricanek')
('37804931', 'Eric Patterson', 'eric patterson')
sethurama@uncw.edu
ricanekk@uncw.edu
pattersone@uncw.edu
2b64a8c1f584389b611198d47a750f5d74234426Deblurring Face Images with Exemplars
Dalian University of Technology, Dalian, China
University of California, Merced, USA
('1786024', 'Zhe Hu', 'zhe hu')
('4642456', 'Zhixun Su', 'zhixun su')
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
2b632f090c09435d089ff76220fd31fd314838aeEarly Adaptation of Deep Priors in Age Prediction from Face Images
Computer Vision Lab
D-ITET, ETH Zurich
Computer Vision Lab
D-ITET, ETH Zurich
CVL, D-ITET, ETH Zurich
Merantix GmbH
('35647143', 'Mahdi Hajibabaei', 'mahdi hajibabaei')
('5328844', 'Anna Volokitin', 'anna volokitin')
('1732855', 'Radu Timofte', 'radu timofte')
hmahdi@student.ethz.ch
voanna@vision.ee.ethz.ch
timofter@vision.ee.ethz.ch
2b10a07c35c453144f22e8c539bf9a23695e85fcStandardization of Face Image Sample Quality
University of Science and Technology of China
Hefei 230026, China
2Center for Biometrics and Security Research &
National Laboratory of Pattern Recognition
Institute of Automation, Chinese Academy of Sciences, Beijing 100080, China
http://www.cbsr.ia.ac.cn
('39609587', 'Xiufeng Gao', 'xiufeng gao')
('34679741', 'Stan Z. Li', 'stan z. li')
('3168566', 'Rong Liu', 'rong liu')
('2777824', 'Peiren Zhang', 'peiren zhang')
2b84630680e2c906f8d7ac528e2eb32c99ef203aWe are not All Equal: Personalizing Models for
Facial Expression Analysis
with Transductive Parameter Transfer
DISI, University of Trento, Italy
DIEI, University of Perugia, Italy
3 Fondazione Bruno Kessler (FBK), Italy
('1716310', 'Enver Sangineto', 'enver sangineto')
('2933565', 'Gloria Zen', 'gloria zen')
('40811261', 'Elisa Ricci', 'elisa ricci')
('1703601', 'Nicu Sebe', 'nicu sebe')
2b507f659b341ed0f23106446de8e4322f4a3f7eDeep Identity-aware Transfer of Facial Attributes
The Hong Kong Polytechnic University 2Harbin Institute of Technology
('1701799', 'Mu Li', 'mu li')
('1724520', 'Wangmeng Zuo', 'wangmeng zuo')
('1698371', 'David Zhang', 'david zhang')
csmuli@comp.polyu.edu.hk cswmzuo@gmail.com csdzhang@comp.polyu.edu.hk
2b7ef95822a4d577021df16607bf7b4a4514eb4bEmergence of Object-Selective Features in
Unsupervised Feature Learning
Computer Science Department
Stanford University
Stanford, CA 94305
('5574038', 'Adam Coates', 'adam coates')
('2354728', 'Andrej Karpathy', 'andrej karpathy')
('1701538', 'Andrew Y. Ng', 'andrew y. ng')
{acoates,karpathy,ang}@cs.stanford.edu
2b8dfbd7cae8f412c6c943ab48c795514d53c4a7
2014 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP)
RECOGNITION
1. INTRODUCTION
e-mail: firstname.lastname@technicolor.com
e-mail: firstname.lastname@univ-poitiers.fr
2b869d5551b10f13bf6fcdb8d13f0aa4d1f59fc4Ring loss: Convex Feature Normalization for Face Recognition
Department of Electrical and Computer Engineering
Carnegie Mellon University
('3049981', 'Yutong Zheng', 'yutong zheng')
('2628116', 'Dipan K. Pal', 'dipan k. pal')
('1794486', 'Marios Savvides', 'marios savvides')
{yutongzh, dipanp, marioss}@andrew.cmu.edu
2bae810500388dd595f4ebe992c36e1443b048d2International Journal of Bioelectromagnetism
Vol. 18, No. 1, pp. 13 - 18, 2016
www.ijbem.org
Analysis of Facial Expression Recognition
by Event-related Potentials
Department of Information and Computer Engineering,
National Institute of Technology, Toyota College, Japan
Toyota College, 2-1 Eisei, Toyota-shi, Aichi, 471-8525 Japan
('2179262', 'Taichi Hayasaka', 'taichi hayasaka')
E-mail: hayasaka@toyota-ct.ac.jp, phone +81 565 36 5861, fax +81 565 36 5926
2b42f83a720bd4156113ba5350add2df2673daf0Action Recognition and Detection by Combining
Motion and Appearance Features
The Chinese University of Hong Kong
Shenzhen Key Lab of CVPR, Shenzhen Institutes of Advanced Technology
Chinese Academy of Sciences, Shenzhen, China
('33345248', 'Limin Wang', 'limin wang')
('39843569', 'Yu Qiao', 'yu qiao')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
07wanglimin@gmail.com, yu.qiao@siat.ac.cn, xtang@ie.cuhk.edu.hk
2bbbbe1873ad2800954058c749a00f30fe61ab17
ISSN(Online): 2320-9801
ISSN (Print): 2320-9798
International Journal of Innovative Research in Computer and Communication Engineering
(An ISO 3297: 2007 Certified Organization)
Vol.2, Special Issue 1, March 2014
Proceedings of International Conference On Global Innovations In Computing Technology (ICGICT’14)
Organized by
Department of CSE, JayShriram Group of Institutions, Tirupur, Tamilnadu, India on 6th & 7th March 2014
Face Verification across Ages Using Self
Organizing Map
B.Mahalakshmi1, K.Duraiswamy2, P.Gnanasuganya3, P.Aruldhevi4, R.Sundarapandiyan5
K.S.Rangasamy College of Technology, Namakkal, TamilNadu, India
Dean, K.S.Rangasamy College of Technology, Namakkal, TamilNadu, India
B.E, K.S.Rangasamy College of Technology, Namakkal, TamilNadu, India
2baec98c19804bf19b480a9a0aa814078e28bb3d
47fdbd64edd7d348713253cf362a9c21f98e4296FACIAL POINT DETECTION BASED ON A CONVOLUTIONAL NEURAL NETWORK WITH
OPTIMAL MINI-BATCH PROCEDURE
Chubu University
1200, Matsumoto-cho, Kasugai, AICHI
('2488607', 'Masatoshi Kimura', 'masatoshi kimura')
('35008538', 'Yuji Yamauchi', 'yuji yamauchi')
47382cb7f501188a81bb2e10cfd7aed20285f376Articulated Pose Estimation Using Hierarchical Exemplar-Based Models
Columbia University in the City of New York
('2454675', 'Jiongxin Liu', 'jiongxin liu')
('3173493', 'Yinxiao Li', 'yinxiao li')
{liujx09, yli, allen, belhumeur}@cs.columbia.edu
473366f025c4a6e0783e6174ca914f9cb328fe70Review of
Action Recognition and Detection
Methods
Department of Electrical Engineering and Computer Science
York University
Toronto, Ontario
Canada
('1709096', 'Richard P. Wildes', 'richard p. wildes')
477236563c6a6c6db922045453b74d3f9535bfa1International Journal of Science and Research (IJSR)
ISSN (Online): 2319-7064
Index Copernicus Value (2013): 6.14 | Impact Factor (2014): 5.611
Attribute Based Image Search Re-Ranking
Snehal S Patil1, Ajay Dani2
Master of Computer Engg, Savitribai Phule Pune University, G. H. Raisoni Collage of Engg and Technology, Wagholi, Pune
G. H .Raisoni Collage of Engg and Technology, Wagholi, Pune
4793f11fbca4a7dba898b9fff68f70d868e2497cKinship Verification through Transfer Learning
Siyu Xia∗
CSE, SUNY at Buffalo, USA
and Southeast University, China
CSE
CSE
SUNY at Buffalo, USA
SUNY at Buffalo, USA
('2025056', 'Ming Shao', 'ming shao')
('1708679', 'Yun Fu', 'yun fu')
xsy@seu.edu.cn
mingshao@buffalo.edu
yunfu@buffalo.edu
470dbd3238b857f349ebf0efab0d2d6e9779073aUnsupervised Simultaneous Orthogonal Basis Clustering Feature Selection
School of Electrical Engineering, KAIST, South Korea
In this paper, we propose a novel unsupervised feature selection method: Si-
multaneous Orthogonal basis Clustering Feature Selection (SOCFS). To per-
form feature selection on unlabeled data effectively, a regularized regression-
based formulation with a new type of target matrix is designed. The target
matrix captures latent cluster centers of the projected data points by per-
forming the orthogonal basis clustering, and then guides the projection ma-
trix to select discriminative features. Unlike the recent unsupervised feature
selection methods, SOCFS does not explicitly use the pre-computed local
structure information for data points represented as additional terms of their
objective functions, but directly computes latent cluster information by the
target matrix conducting orthogonal basis clustering in a single unified term
of the proposed objective function.
Since the target matrix is put in a single unified term for regression of
the proposed objective function, feature selection and clustering are simul-
taneously performed. In this way, the projection matrix for feature selection
is more properly computed by the estimated latent cluster centers of the
projected data points. To the best of our knowledge, this is the first valid
formulation to consider feature selection and clustering together in a sin-
gle unified term of the objective function. The proposed objective function
has fewer parameters to tune and does not require complicated optimization tools; a simple optimization algorithm is sufficient. Extensive experiments are performed on several publicly available real-world datasets, which show that SOCFS outperforms various unsupervised feature selection methods and that the latent cluster information provided by the target matrix is effective for regularized regression-based feature selection.
Problem Formulation: Given training data, let X = [x_1, ..., x_n] ∈ R^{d×n} denote the data matrix with n instances of dimension d, and let T = [t_1, ..., t_n] ∈ R^{m×n} denote the corresponding target matrix of dimension m. We start from the regularized regression-based formulation for selecting at most r features: min_W ||W^T X − T||_F^2 s.t. ||W||_{2,0} ≤ r. To exploit
such formulation on unlabeled data more effectively, it is crucial for the tar-
get matrix T to have discriminative destinations for projected clusters. To
this end, a new type of target matrix T is proposed to conduct clustering directly on the projected data points W^T X. We allow extra degrees of freedom to T by decomposing it into two other matrices B ∈ R^{m×c} and E ∈ R^{n×c} as
T = BE^T, with additional constraints, as

    min_{W,B,E} ||W^T X − BE^T||_F^2 + λ||W||_{2,1}
    s.t.  B^T B = I,  E^T E = I,  E ≥ 0,        (1)
where λ > 0 is a weighting parameter for the relaxed regularizer ||W||_{2,1}, which induces row sparsity of the projection matrix W. The constraints B^T B = I, E^T E = I, E ≥ 0 have the following meanings: 1) the orthogonality constraint on B lets each column of B be independent; 2) the orthogonality and nonnegativity constraints on E force each row of E to have only one non-zero element [2]. From 1) and 2), we can interpret B as the basis matrix, which is orthogonal, and E as the encoding matrix, where the non-zero element of each column of E^T selects one column of B.
While optimizing problem (1), T = BE^T acts as a clustering of the projected data points W^T X with orthogonal basis B and encoder E, so T can estimate latent cluster centers of W^T X. Then, W successively projects X close to the corresponding latent cluster centers estimated by T. Note that the orthogonality constraint on B keeps the projected clusters in W^T X separated (independent of each other), which helps W become a better projection matrix for selecting more discriminative features. If the clustering were performed directly on X rather than on W^T X, the orthogonality constraint would severely restrict the degrees of freedom of B. However, since features are selected by W and the clustering is carried out on W^T X in our formulation, the orthogonality constraint on B is well justified. A schematic illustration of the proposed method is shown in Figure 1.
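To make the formulation concrete, the objective in Eq. (1) can be evaluated directly. The NumPy sketch below uses toy dimensions; the symbols X, W, B, E and lam mirror the text, while the particular constructions of B (QR factorization) and E (column-normalized one-hot rows) are only illustrations of the constraints, not the authors' optimization algorithm.

```python
import numpy as np

def socfs_objective(X, W, B, E, lam):
    """||W^T X - B E^T||_F^2 + lam * ||W||_{2,1} (row-sparsity regularizer)."""
    residual = W.T @ X - B @ E.T
    l21 = np.sum(np.linalg.norm(W, axis=1))  # sum of row-wise l2 norms of W
    return np.linalg.norm(residual, "fro") ** 2 + lam * l21

rng = np.random.default_rng(0)
d, n, m, c = 20, 50, 5, 4              # X in R^{d x n}, T = B E^T in R^{m x n}
X = rng.standard_normal((d, n))
W = rng.standard_normal((d, m))        # projection matrix; its rows score features

# B: orthogonal basis (B^T B = I) obtained via QR of a random m x c matrix.
B, _ = np.linalg.qr(rng.standard_normal((m, c)))

# E: nonnegative encoding with one nonzero per row; columns scaled so E^T E = I.
labels = np.arange(n) % c              # toy cluster assignment (illustrative only)
E = np.zeros((n, c))
E[np.arange(n), labels] = 1.0
E /= np.sqrt(E.sum(axis=0, keepdims=True))

obj = socfs_objective(X, W, B, E, lam=0.1)
feature_scores = np.linalg.norm(W, axis=1)   # rank features by row norms of W
```

Features would then be ranked by the row norms of W, with the top-r rows indicating the selected features.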
('2086576', 'Dongyoon Han', 'dongyoon han')
('1769295', 'Junmo Kim', 'junmo kim')
473031328c58b7461753e81251379331467f7a69Exploring Fisher Vector and Deep Networks for Action Spotting
Shenzhen key lab of Comp. Vis. and Pat. Rec., Shenzhen Institutes of Advanced Technology, CAS, China
The Chinese University of Hong Kong
('1915826', 'Zhe Wang', 'zhe wang')
('33345248', 'Limin Wang', 'limin wang')
('35031371', 'Wenbin Du', 'wenbin du')
('33427555', 'Yu Qiao', 'yu qiao')
buptwangzhe2012@gmail.com, 07wanglimin@gmail.com, wb.du@siat.ac.cn, yu.qiao@siat.ac.cn
47638197d83a8f8174cdddc44a2c7101fa8301b7Object-Centric Anomaly Detection by Attribute-Based Reasoning
Rutgers University
University of Washington
Ahmed Elgammal
Rutgers University
('3139794', 'Babak Saleh', 'babak saleh')
('2270286', 'Ali Farhadi', 'ali farhadi')
babaks@cs.rutgers.edu
ali@cs.uw.edu
elgammal@cs.rutgers.edu
47541d04ec24662c0be438531527323d983e958eAffective Information Processing
476f177b026830f7b31e94bdb23b7a415578f9a4INTRA-CLASS MULTI-OUTPUT REGRESSION BASED SUBSPACE ANALYSIS
University of California Santa Barbara
University of California Santa Barbara
('32919393', 'Swapna Joshi', 'swapna joshi')
∗{karthikeyan,swapna,manj}@ece.ucsb.edu
†{grafton}@psych.ucsb.edu
474b461cd12c6d1a2fbd67184362631681defa9e2014 IEEE International
Conference on Systems, Man
and Cybernetics
(SMC 2014)
San Diego, California, USA
5-8 October 2014
Pages 1-789
IEEE Catalog Number:
ISBN:
CFP14SMC-POD
978-1-4799-3841-4
1/5
472ba8dd4ec72b34e85e733bccebb115811fd726Cosine Similarity Metric Learning
for Face Verification
School of Computer Science, University of Nottingham
Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB, UK
http://www.nottingham.ac.uk/cs/
('2243665', 'Hieu V. Nguyen', 'hieu v. nguyen')
('1735386', 'Li Bai', 'li bai')
{vhn,bai}@cs.nott.ac.uk
47ca2df3d657d7938d7253bed673505a6a819661UNIVERSITY OF CALIFORNIA
Santa Barbara
Facial Expression Analysis on Manifolds
A Dissertation submitted in partial satisfaction of the
requirements for the degree Doctor of Philosophy
in Computer Science
by
Committee in charge:
Professor B.S. Manjunath
September 2006
('13303219', 'Ya Chang', 'ya chang')
('1752714', 'Matthew Turk', 'matthew turk')
('1706938', 'Yuan-Fang Wang', 'yuan-fang wang')
('2875421', 'Andy Beall', 'andy beall')
47d4838087a7ac2b995f3c5eba02ecdd2c28ba14JOURNAL OF IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, VOL. XX, NO. X, XXX 2017
Automatic Recognition of Facial Displays of
Unfelt Emotions
Escalera, Xavier Baró, Sylwia Hyniewska, Member, IEEE, Jüri Allik,
('38370357', 'Kaustubh Kulkarni', 'kaustubh kulkarni')
('22197083', 'Ciprian Adrian Corneanu', 'ciprian adrian corneanu')
('22211769', 'Ikechukwu Ofodile', 'ikechukwu ofodile')
('3087532', 'Gholamreza Anbarjafari', 'gholamreza anbarjafari')
47eba2f95679e106e463e8296c1f61f6ddfe815bDeep Co-occurrence Feature Learning for Visual Object Recognition
Research Center for Information Technology Innovation, Academia Sinica
National Taiwan University
Graduate Institute of Electronics Engineering, National Taiwan University
Smart Network System Institute, Institute for Information Industry
('28990604', 'Ya-Fang Shih', 'ya-fang shih')
('28982867', 'Yang-Ming Yeh', 'yang-ming yeh')
('1744044', 'Yen-Yu Lin', 'yen-yu lin')
('34779427', 'Ming-Fang Weng', 'ming-fang weng')
('2712675', 'Yi-Chang Lu', 'yi-chang lu')
('37761361', 'Yung-Yu Chuang', 'yung-yu chuang')
47a2727bd60e43f3253247b6d6f63faf2b67c54bSemi-supervised Vocabulary-informed Learning
Disney Research
('35782003', 'Yanwei Fu', 'yanwei fu')
('14517812', 'Leonid Sigal', 'leonid sigal')
y.fu@qmul.ac.uk, lsigal@disneyresearch.com
47d3b923730746bfaabaab29a35634c5f72c3f04ISSN : 2248-9622, Vol. 7, Issue 7, ( Part -3) July 2017, pp.30-38
RESEARCH ARTICLE
OPEN ACCESS
Real-Time Facial Expression Recognition App Development on
Smart Phones
Florida Institute Of Technology, Melbourne Fl
USA
('7155812', 'Humaid Alshamsi', 'humaid alshamsi')
47190d213caef85e8b9dd0d271dbadc29ed0a953The Devil of Face Recognition is in the Noise
1 SenseTime Research
University of California San Diego
Nanyang Technological University
('1682816', 'Fei Wang', 'fei wang')
('3203648', 'Liren Chen', 'liren chen')
('46651787', 'Cheng Li', 'cheng li')
('1937119', 'Shiyao Huang', 'shiyao huang')
('47557603', 'Yanjie Chen', 'yanjie chen')
('49215552', 'Chen Qian', 'chen qian')
('1717179', 'Chen Change Loy', 'chen change loy')
{wangfei, chengli, huangshiyao, chenyanjie, qianchen}@sensetime.com,
lic002@eng.ucsd.edu, ccloy@ieee.org
47e3029a3d4cf0a9b0e96252c3dc1f646e750b14International Conference on Computer Systems and Technologies - CompSysTech’07
Facial Expression Recognition in still pictures and videos using Active
Appearance Models. A comparison approach.
Dragoş Datcu
Léon Rothkrantz
475e16577be1bfc0dd1f74f67bb651abd6d63524DAiSEE: Towards User Engagement Recognition in the Wild
Microsoft
Vineeth N Balasubramanian
Indian Institution of Technology Hyderabad
('38330340', 'Abhay Gupta', 'abhay gupta')
abhgup@microsoft.com
vineethnb@iith.ac.in
471befc1b5167fcfbf5280aa7f908eff0489c72b
Class-Specific Kernel-Discriminant
Analysis for Face Verification
('2123731', 'Georgios Goudelis', 'georgios goudelis')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1737071', 'Anastasios Tefas', 'anastasios tefas')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
47f8b3b3f249830b6e17888df4810f3d189daac1
47e8db3d9adb79a87c8c02b88f432f911eb45dc5MAGMA: Multi-level accelerated gradient mirror descent algorithm for
large-scale convex composite minimization
July 15, 2016
('39984225', 'Vahan Hovhannisyan', 'vahan hovhannisyan')
('3112745', 'Panos Parpas', 'panos parpas')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
47aeb3b82f54b5ae8142b4bdda7b614433e69b9a
47dabb566f2bdd6b3e4fa7efc941824d8b923a13Probabilistic Temporal Head Pose Estimation
Using a Hierarchical Graphical Model
Centre for Intelligent Machines, McGill University, Montreal, Canada
('2515930', 'Meltem Demirkus', 'meltem demirkus')
('1724729', 'Doina Precup', 'doina precup')
('1713608', 'James J. Clark', 'james j. clark')
('1699104', 'Tal Arbel', 'tal arbel')
47f5f740e225281c02c8a2ae809be201458a854fSimultaneous Unsupervised Learning of Disparate Clusterings
University of Texas, Austin, TX 78712-1188, USA
Received 14 April 2008; accepted 05 May 2008
DOI:10.1002/sam.10007
Published online 3 November 2008 in Wiley InterScience (www.interscience.wiley.com).
('3164102', 'Prateek Jain', 'prateek jain')
('1751621', 'Raghu Meka', 'raghu meka')
('1783667', 'Inderjit S. Dhillon', 'inderjit s. dhillon')
47bf7a8779c68009ea56a7c20e455ccdf0e3a8faInternational Journal of Computer Applications (0975 – 8887)
Volume 83 – No 5, December 2013
Automatic Face Recognition System using Pattern
Recognition Techniques: A Survey
Department of Computer Science Department of Computer Science
Assam University, Silchar-788011 Assam University, Silchar
('37792796', 'Ningthoujam Sunita Devi', 'ningthoujam sunita devi')
47b508abdaa5661fe14c13e8eb21935b8940126b Volume 4, Issue 12, December 2014 ISSN: 2277 128X
International Journal of Advanced Research in
Computer Science and Software Engineering
Research Paper
Available online at: www.ijarcsse.com
An Efficient Method for Feature Extraction of Face
Recognition Using PCA
(M.Tech. Student)
Computer Science & Engineering
Iftm University, Moradabad-244001 U.P
('9247488', 'Tara Prasad Singh', 'tara prasad singh')
477811ff147f99b21e3c28309abff1304106dbbe
47e14fdc6685f0b3800f709c32e005068dfc8d47
473cbc5ec2609175041e1410bc6602b187d03b23Semantic Audio-Visual Data Fusion for Automatic Emotion Recognition
Man-Machine Interaction Group
Delft University of Technology
2628 CD, Delft,
The Netherlands
KEYWORDS
Data fusion, automatic emotion recognition, speech analysis,
face detection, facial feature extraction, facial characteristic
point extraction, Active Appearance Models, Support Vector
Machines.
('2866326', 'Dragos Datcu', 'dragos datcu')
E-mail: {D.Datcu ; L.J.M.Rothkrantz}@tudelft.nl
782188821963304fb78791e01665590f0cd869e8
78a4cabf0afc94da123e299df5b32550cd638939
78f08cc9f845dc112f892a67e279a8366663e26dTECHNISCHE UNIVERSITÄT MÜNCHEN
Lehrstuhl für Mensch-Maschine-Kommunikation
Semi-Autonomous Data Enrichment and
Optimisation for Intelligent Speech Analysis
Complete reprint of the dissertation approved by the Faculty of Electrical Engineering
and Information Technology of the Technische Universität München for the award of
the academic degree of Doktor-Ingenieur (Dr.-Ing.).
Chair:
Univ.-Prof. Dr.-Ing. habil. Dr. h.c. Alexander W. Koch
Examiners of the dissertation:
1.
Univ.-Prof. Dr.-Ing. habil. Björn W. Schuller,
Universität Passau
2. Univ.-Prof. Gordon Cheng, Ph.D.
The dissertation was submitted to the Technische Universität München on 30.09.2014
and accepted by the Faculty of Electrical Engineering and Information Technology
on 07.04.2015.
('1742291', 'Zixing Zhang', 'zixing zhang')
78d645d5b426247e9c8f359694080186681f57dbGender Classification by LUT Based Boosting
of Overlapping Block Patterns
Tampere University of Technology, Tampere, Finland
Idiap Research Institute, Martigny, Switzerland
('3350574', 'Rakesh Mehta', 'rakesh mehta')
rakesh.mehta@tut.fi
{manuel.guenther,marcel}@idiap.ch
7862d40da0d4e33cd6f5c71bbdb47377e4c6b95aDemography-based Facial Retouching Detection using Subclass Supervised
Sparse Autoencoder
University of Notre Dame, 2IIIT-Delhi
('5014060', 'Aparna Bharati', 'aparna bharati')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
('39129417', 'Richa Singh', 'richa singh')
('1799014', 'Kevin W. Bowyer', 'kevin w. bowyer')
('1743927', 'Xin Tong', 'xin tong')
1{abharati, kwb, xtong1}@nd.edu, 2{mayank, rsingh}@iiitd.ac.in
783f3fccde99931bb900dce91357a6268afecc52Hindawi Publishing Corporation
EURASIP Journal on Image and Video Processing
Volume 2009, Article ID 945717, 14 pages
doi:10.1155/2009/945717
Research Article
Adapted Active Appearance Models
1 SUPÉLEC/IETR, Avenue de la Boulaie, 35511 Cesson-Sévigné, France
2 Orange Labs—TECH/IRIS, 4 rue du clos courtel, 35512 Cesson-Sévigné, France
Received 5 January 2009; Revised 2 September 2009; Accepted 20 October 2009
Recommended by Kenneth M. Lam
Active Appearance Models (AAMs) are able to efficiently align known faces when face pose and illumination are controlled. We propose Adapted Active Appearance Models to align unknown faces in unknown poses and illuminations. Our proposal is based, on the one hand, on a specific transformation of the active model texture into an oriented map, which changes the AAM normalization process; on the other hand, on a search within a set of different precomputed models for the AAM most adapted to an unknown face. Tests on public and private databases show the value of our approach. It becomes possible to align unknown faces in real-time situations, in which light and pose are not controlled.
Copyright © 2009 Renaud Séguier et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
1. Introduction
All applications related to face analysis and synthesis (Man-
Machine Interaction, compression in video communication,
augmented reality) need to detect and then to align the user’s
face. This latter process consists of precisely localizing the gravity centers of the eyes, nose, and mouth. Face detection can
now be realized in real time and in a rather efficient manner
[1, 2]; the technical bottleneck lies now in the face alignment
when it is done in real conditions, which is precisely the
object of this paper.
Since Active Appearance Models (AAMs) such as those described in [3] exist, it is possible to align faces in real time. AAMs exploit a set of face examples in order to extract a statistical model. To align an unknown face in a new image, the model's parameters must be tuned to match the analyzed face's features as closely as possible. There is no difficulty in aligning a face featuring the same
characteristics (same morphology, illumination, and pose)
as those constituting the example data set. Unfortunately,
AAMs are less effective when illumination, pose, and face type change. We propose in this paper a robust Active Appearance Model allowing a real-time implementation. In the next section, we survey the different techniques that aim to increase AAM robustness. We will see that none of them addresses, at the same time, the three types of robustness we are interested in: pose, illumination, and identity. It must be pointed out that we do not consider robustness against occlusion, as [4] does, for example, when a person moves a hand in front of the face.
After a quick introduction of the Active Appearance
Models and their limitations (Section 3), we will present our
two main contributions in Section 4.1 in order to improve
AAM robustness in illumination, pose, and identity. Experiments are conducted and discussed in Section 5 before we draw conclusions and suggest new research directions in the last section.
2. State of the Art
We propose to classify the methods which lead to an increase
of the AAM robustness as follows. The specific types of
dedicated robustness are in italic.
(i) Preprocess
(1) Invariant features (illumination)
(2) Canonical representation (illumination)
(ii) Parameter space extension
(1) Light modeling (illumination)
(2) 3D modeling (pose)
('3353560', 'Sylvain Le Gallou', 'sylvain le gallou')
('40427923', 'Gaspard Breton', 'gaspard breton')
('34798028', 'Christophe Garcia', 'christophe garcia')
Correspondence should be addressed to Renaud Séguier, renaud.seguier@supelec.fr
7897c8a9361b427f7b07249d21eb9315db189496
7859667ed6c05a467dfc8a322ecd0f5e2337db56Web-Scale Transfer Learning for Unconstrained 1:N Face Identification
Facebook AI Research
Menlo Park, CA 94025, USA
Tel Aviv University
Tel Aviv, Israel
('2188620', 'Yaniv Taigman', 'yaniv taigman')
('32447229', 'Ming Yang', 'ming yang')
('1776343', 'Lior Wolf', 'lior wolf')
{yaniv, mingyang, ranzato}@fb.com
wolf@cs.tau.ac.il
78c1ad33772237bf138084220d1ffab800e1200dState Key Laboratory of Software Development Environment, Beihang University, P.R.China
University of Michigan, Ann Arbor
('48545182', 'Lei Huang', 'lei huang')
('8342699', 'Jia Deng', 'jia deng')
78436256ff8f2e448b28e854ebec5e8d8306cf21Measuring and Understanding Sensory Representations within
Deep Networks Using a Numerical Optimization Framework
Harvard University, Cambridge, MA
USA
Center for Brain Science, Harvard University, Cambridge, MA, USA
Harvard University, Cambridge, MA, USA
('1739108', 'Chuan-Yung Tsai', 'chuan-yung tsai')
('2042941', 'David D. Cox', 'david d. cox')
∗ E-mail: davidcox@fas.harvard.edu
78f438ed17f08bfe71dfb205ac447ce0561250c6
78f79c83b50ff94d3e922bed392737b47f93aa06The Computer Expression Recognition Toolbox (CERT)
Mark Frank3, Javier Movellan1, and Marian Bartlett1
Machine Perception Laboratory, University of California, San Diego
University of Arizona
University of Buffalo
('2724380', 'Gwen Littlewort', 'gwen littlewort')
('1775637', 'Jacob Whitehill', 'jacob whitehill')
('4072965', 'Tingfan Wu', 'tingfan wu')
{gwen, jake, ting, movellan}@mplab.ucsd.edu,
ianfasel@cs.arizona.edu, mfrank83@buffalo.edu, marni@salk.edu
78fede85d6595e7a0939095821121f8bfae05da6KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS VOL. 9, NO. 2, Feb. 2015 742
Copyright © 2015 KSII
Discriminant Metric Learning Approach for
Face Verification
1 Department of Computer Science and Information Engineering
National Kaohsiung University of Applied Sciences, Kaohsiung, Kaohsiung, Taiwan, ROC
2 Department of Computer Science and Information Engineering
National Cheng Kung University, Tainan, Taiwan, ROC
Received September 3, 2014; revised November 12, 2014; accepted December 13, 2014;
published February 28, 2015
('37284667', 'Ju-Chin Chen', 'ju-chin chen')
('36612683', 'Pei-Hsun Wu', 'pei-hsun wu')
('3461535', 'Jenn-Jier James Lien', 'jenn-jier james lien')
[e-mail: jc.chen@cc.kuas.edu.tw]
[e-mail: jjlien@csie.ncku.edu.tw]
78598e7005f7c96d64cc47ff47e6f13ae52245b8Hand2Face: Automatic Synthesis and Recognition of Hand Over Face Occlusions
Synthetic Reality Lab
Department of Computer Science
University of Central Florida
Orlando, Florida
Synthetic Reality Lab
Department of Computer Science
University of Central Florida
Orlando, Florida
Tadas Baltruˇsaitis
Language Technology Institute
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA
Language Technology Institute
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA
('2974242', 'Behnaz Nojavanasghari', 'behnaz nojavanasghari')
('32827434', 'Charles E. Hughes', 'charles e. hughes')
('1767184', 'Louis-Philippe Morency', 'louis-philippe morency')
Email: behnaz@eecs.ucf.edu
Email: ceh@cs.ucf.edu
Email: tbaltrus@cs.cmu.edu
Email: morency@cs.cmu.edu
7862f646d640cbf9f88e5ba94a7d642e2a552ec9Being John Malkovich
University of Washington
2 Adobe Systems
3 Google Inc.
('2419955', 'Ira Kemelmacher-Shlizerman', 'ira kemelmacher-shlizerman')
('40416141', 'Aditya Sankar', 'aditya sankar')
('2177801', 'Eli Shechtman', 'eli shechtman')
('1679223', 'Steven M. Seitz', 'steven m. seitz')
{kemelmi,aditya,seitz}@cs.washington.edu
elishe@adobe.com
78a11b7d2d7e1b19d92d2afd51bd3624eca86c3c
78a4eb59ec98994bebcf3a5edf9e1d34970c45f6Conveying Shape and Features with Image-Based Relighting
Stanford University
Stanford University
Stanford University
Stanford University
Microsoft Research
Stanford University
('36475465', 'David Akers', 'david akers')
('1967534', 'Frank Losasso', 'frank losasso')
('37133509', 'John Rick', 'john rick')
('2367620', 'Jeff Klingner', 'jeff klingner')
('1820412', 'Maneesh Agrawala', 'maneesh agrawala')
('1689128', 'Pat Hanrahan', 'pat hanrahan')
781c2553c4ed2a3147bbf78ad57ef9d0aeb6c7edInt J Comput Vis
DOI 10.1007/s11263-017-1023-9
Tubelets: Unsupervised Action Proposals from Spatiotemporal
Super-Voxels
Cees G. M. Snoek1
Received: 25 June 2016 / Accepted: 18 May 2017
© The Author(s) 2017. This article is an open access publication
('40027484', 'Mihir Jain', 'mihir jain')
('1681054', 'Hervé Jégou', 'hervé jégou')
78174c2be084e67f48f3e8ea5cb6c9968615a42cPeriocular Recognition Using CNN Features
Off-the-Shelf
School of Information Technology (ITE), Halmstad University, Box 823, 30118 Halmstad, Sweden
('51446244', 'Kevin Hernandez-Diaz', 'kevin hernandez-diaz')
('2847751', 'Fernando Alonso-Fernandez', 'fernando alonso-fernandez')
('5058247', 'Josef Bigun', 'josef bigun')
Email: kevin.hernandez-diaz@hh.se, feralo@hh.se, josef.bigun@hh.se
78df7d3fdd5c32f037fb5cc2a7c104ac1743d74eTEMPORAL PYRAMID POOLING CNN FOR ACTION RECOGNITION
Temporal Pyramid Pooling Based Convolutional
Neural Network for Action Recognition
('40378631', 'Peng Wang', 'peng wang')
('2572430', 'Yuanzhouhan Cao', 'yuanzhouhan cao')
('40529029', 'Chunhua Shen', 'chunhua shen')
('2161037', 'Lingqiao Liu', 'lingqiao liu')
('1724393', 'Heng Tao Shen', 'heng tao shen')
780557daaa39a445b24c41f637d5fc9b216a0621Large Video Event Ontology Browsing, Search and
Tagging (EventNet Demo)
Columbia University, New York, NY 10027, USA
('2368325', 'Hongliang Xu', 'hongliang xu')
('35984288', 'Guangnan Ye', 'guangnan ye')
('2664705', 'Yitong Li', 'yitong li')
('40313086', 'Dong Liu', 'dong liu')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
{hx2168, gy2179, yl3029, dl2713, sc250}@columbia.edu
78fdf2b98cf6380623b0e20b0005a452e736181e
788a7b59ea72e23ef4f86dc9abb4450efefeca41
787c1bb6d1f2341c5909a0d6d7314bced96f4681Face Detection and Verification in Unconstrained
Videos: Challenges, Detection, and Benchmark
Evaluation
IIIT-D-MTech-CS-GEN-13-106
July 16, 2015
Indraprastha Institute of Information Technology, Delhi
Thesis Advisors
Dr. Mayank Vatsa
Dr. Richa Singh
Submitted in partial fulfillment of the requirements
for the Degree of M.Tech. in Computer Science
c(cid:13) Shah, 2015
Keywords: face recognition, face detection, face verification
('25087736', 'Mahek Shah', 'mahek shah')
7808937b46acad36e43c30ae4e9f3fd57462853dDescribing People: A Poselet-Based Approach to Attribute Classification ∗
1EECS, U.C. Berkeley, Berkeley, CA 94720
Adobe Systems, Inc., 345 Park Ave, San Jose, CA
('35208858', 'Subhransu Maji', 'subhransu maji')
('1689212', 'Jitendra Malik', 'jitendra malik')
{lbourdev,smaji,malik}@eecs.berkeley.edu
8b2c090d9007e147b8c660f9282f357336358061Lake Forest College
Lake Forest College Publications
Senior Theses
4-23-2018
Student Publications
Emotion Classification based on Expressions and
Body Language using Convolutional Neural
Networks
Follow this and additional works at: https://publications.lakeforest.edu/seniortheses
Part of the Neuroscience and Neurobiology Commons
Recommended Citation
Tanveer, Aasimah S., "Emotion Classification based on Expressions and Body Language using Convolutional Neural Networks"
(2018). Senior Theses.
This Thesis is brought to you for free and open access by the Student Publications at Lake Forest College Publications. It has been accepted for
inclusion in Senior Theses by an authorized administrator of Lake Forest College Publications. For more information, please contact
Lake Forest College, tanveeras@lakeforest.edu
levinson@lakeforest.edu.
8ba67f45fbb1ce47a90df38f21834db37c840079People Search and Activity Mining in Large-Scale
Community-Contributed Photos
National Taiwan University, Taipei, Taiwan
Winston H. Hsu, Hong-Yuan Mark Liao
Advised by
('35081710', 'Yan-Ying Chen', 'yan-ying chen')
yanying@cmlab.csie.ntu.edu.tw
8b547b87fd95c8ff6a74f89a2b072b60ec0a3351Initial Perceptions of a Casual Game to Crowdsource
Facial Expressions in the Wild
Games Studio, Faculty of Engineering and IT, University of Technology, Sydney
('1733360', 'Chek Tien Tan', 'chek tien tan')
('2117735', 'Hemanta Sapkota', 'hemanta sapkota')
('2823535', 'Daniel Rosser', 'daniel rosser')
('3141633', 'Yusuf Pisan', 'yusuf pisan')
chek@gamesstudio.org
hemanta.sapkota@student.uts.edu.au
daniel.j.rosser@gmail.com
yusuf.pisan@gamesstudio.org
8bed7ff2f75d956652320270eaf331e1f73efb35Emotion Recognition in the Wild using
Deep Neural Networks and Bayesian Classifiers
Elena Battini Sönmez
University of Calabria - DeMACS
Via Pietro Bucci
Rende (CS), Italy
Plymouth University - CRNS
Portland Square PL4 8AA
Plymouth, United Kingdom
Istanbul Bilgi University - DCE
Eski Silahtarağa Elektrik Santralı, Kazım
Karabekir Cad. No: 2/13, 34060 Eyüp
Istanbul, Turkey
University of Calabria - DeMACS
Via Pietro Bucci
Rende (CS), Italy
Plymouth University - CRNS
Portland Square PL4 8AA
Plymouth, United Kingdom
('32751441', 'Luca Surace', 'luca surace')
('3366919', 'Massimiliano Patacchiola', 'massimiliano patacchiola')
('3205804', 'William Spataro', 'william spataro')
('1692929', 'Angelo Cangelosi', 'angelo cangelosi')
lucasurace11@gmail.com
massimiliano.patacchiola@plymouth.ac.uk
ebsonmez@bilgi.edu.tr
william.spataro@unical.it
angelo.cangelosi@plymouth.ac.uk
8b7191a2b8ab3ba97423b979da6ffc39cb53f46bSearch Pruning in Video Surveillance Systems: Efficiency-Reliability Tradeoff
EURECOM
Sophia Antipolis, France
('3299530', 'Antitza Dantcheva', 'antitza dantcheva')
('1688531', 'Petros Elia', 'petros elia')
('1709849', 'Jean-Luc Dugelay', 'jean-luc dugelay')
{Antitza.Dantcheva, Arun.Singh, Petros.Elia, Jean-Luc.Dugelay}@eurecom.fr
8bf57dc0dd45ed969ad9690033d44af24fd18e05Subject-Independent Emotion Recognition from Facial Expressions
using a Gabor Feature RBF Neural Classifier Trained with Virtual
Samples Generated by Concurrent Self-Organizing Maps
VICTOR-EMIL NEAGOE, ADRIAN-DUMITRU CIOTEC
Depart. Electronics, Telecommunications & Information Technology
Polytechnic University of Bucharest
Splaiul Independentei No. 313, Sector 6, Bucharest,
ROMANIA
victoremil@gmail.com, adryyandc@yahoo.com
8bf243817112ac0aa1348b40a065bb0b735cdb9cLEARNING A REPRESSION NETWORK FOR PRECISE VEHICLE SEARCH
Institute of Digital Media
School of Electrical Engineering and Computer Science, Peking University
No.5 Yiheyuan Road, 100871, Beijing, China
('17872416', 'Qiantong Xu', 'qiantong xu')
('13318784', 'Ke Yan', 'ke yan')
('1705972', 'Yonghong Tian', 'yonghong tian')
{xuqiantong, keyan, yhtian}@pku.edu.cn
8bfada57140aa1aa22a575e960c2a71140083293Can we match Ultraviolet Face Images against their Visible
Counterparts?
aMILab, LCSEE, West Virginia University, Morgantown, West Virginia, USA
('33240042', 'Neeru Narang', 'neeru narang')
('1731727', 'Thirimachos Bourlai', 'thirimachos bourlai')
('1678573', 'Lawrence A. Hornak', 'lawrence a. hornak')
('11898042', 'Paul D. Coverdell', 'paul d. coverdell')
8b8728edc536020bc4871dc66b26a191f6658f7c
8befcd91c24038e5c26df0238d26e2311b21719aA Joint Sequence Fusion Model for Video
Question Answering and Retrieval
Department of Computer Science and Engineering,
Seoul National University, Seoul, Korea
http://vision.snu.ac.kr/projects/jsfusion/
('7877122', 'Youngjae Yu', 'youngjae yu')
('2175130', 'Jongseok Kim', 'jongseok kim')
{yj.yu,js.kim}@vision.snu.ac.kr, gunhee@snu.ac.kr
8bbbdff11e88327816cad3c565f4ab1bb3ee20dbAutomatic Semantic Face Recognition
University of Southampton
Southampton, United Kingdom
('19249411', 'Nawaf Yousef Almudhahka', 'nawaf yousef almudhahka')
('1727698', 'Mark S. Nixon', 'mark s. nixon')
('31534955', 'Jonathon S. Hare', 'jonathon s. hare')
{nya1g14,msn,jsh2}@ecs.soton.ac.uk
8bdf6f03bde08c424c214188b35be8b2dec7cdeaExploiting Unintended Feature Leakage in Collaborative Learning∗
UCL
Cornell University
UCL and Alan Turing Institute
Cornell Tech
('2008164', 'Luca Melis', 'luca melis')
('3469125', 'Congzheng Song', 'congzheng song')
('1728207', 'Emiliano De Cristofaro', 'emiliano de cristofaro')
('1723945', 'Vitaly Shmatikov', 'vitaly shmatikov')
luca.melis.14@alumni.ucl.ac.uk
cs2296@cornell.edu
e.decristofaro@ucl.ac.uk
shmat@cs.cornell.edu
8b744786137cf6be766778344d9f13abf4ec0683
ICASSP 2016
8b10383ef569ea0029a2c4a60cc2d8c87391b4db
Age classification using Radon transform
and entropy based scaling SVM
Paul Miller1
The Institute of Electronics
Communications
and Information Technology
Queen's University Belfast
2 School of Computing
University of Dundee
United Kingdom
('2040772', 'Huiyu Zhou', 'huiyu zhou')
('1744844', 'Jianguo Zhang', 'jianguo zhang')
h.zhou@ecit.qub.ac.uk
p.miller@ecit.qub.ac.uk
jgzhang@computing.dundee.ac.uk
8b30259a8ab07394d4dac971f3d3bd633beac811Representing Sets of Instances for Visual Recognition
1 National Key Laboratory for Novel Software Technology
Nanjing University, China
2 Minieye, Youjia Innovation LLC, China
('1808816', 'Jianxin Wu', 'jianxin wu')
('2226422', 'Bin-Bin Gao', 'bin-bin gao')
('15527784', 'Guoqing Liu', 'guoqing liu')
∗ wujx2001@nju.edu.cn, gaobb@lamda.nju.edu.cn
guoqing@minieye.cc
8b61fdc47b5eeae6bc0a52523f519eaeaadbc8c8
Temporal Perceptive Network for
Skeleton-Based Action Recognition
Institute of Computer Science and
Technology
Peking University
Beijing, China
Sijie Song
('9956463', 'Yueyu Hu', 'yueyu hu')
('49046516', 'Chunhui Liu', 'chunhui liu')
('3128506', 'Yanghao Li', 'yanghao li')
('41127426', 'Jiaying Liu', 'jiaying liu')
huyy@pku.edu.cn
liuchunhui@pku.edu.cn
lyttonhao@pku.edu.cn
ssj940920@pku.edu.cn
liujiaying@pku.edu.cn
8b19efa16a9e73125ab973429eb769d0ad5a8208SCAR: Dynamic adaptation for person detection and
persistence analysis in unconstrained videos
Department of Computer Science
Stevens Institute of Technology
Hoboken, NJ 07030, USA
('2789357', 'George Kamberov', 'george kamberov')
('3219999', 'Matt Burlick', 'matt burlick')
('2283008', 'Lazaros Karydas', 'lazaros karydas')
('3228177', 'Olga Koteoglou', 'olga koteoglou')
gkambero,mburlick,lkarydas,okoteogl@stevens.edu
8b6fded4d08bf0b7c56966b60562ee096af1f0c4International Journal of Computer Applications (0975 – 8887)
Volume 59– No.3, December 2012
A Neural Network based Facial Expression Recognition
using Fisherface
Department of Mathematics
Semarang State University
Semarang, 50229, Indonesia
('39807349', 'Zaenal Abidin', 'zaenal abidin')
8bf647fed40bdc9e35560021636dfb892a46720eLearning to Hash-tag Videos with Tag2Vec
CVIT, KCIS, IIIT Hyderabad, India
P J Narayanan
http://cvit.iiit.ac.in/research/projects/tag2vec
Figure 1. Learning a direct mapping from videos to hash-tags : sample frames from short video clips with user-given hash-tags
(left); a sample frame from a query video and hash-tags suggested by our system for this query (right).
('2461059', 'Aditya Singh', 'aditya singh')
('3448416', 'Saurabh Saini', 'saurabh saini')
('1962817', 'Rajvi Shah', 'rajvi shah')
{(aditya.singh,saurabh.saini,rajvi.shah)@research.,pjn@}iiit.ac.in
8b2704a5218a6ef70e553eaf0a463bd55129b69dSensors 2013, 13, 7714-7734; doi:10.3390/s130607714
OPEN ACCESS
sensors
ISSN 1424-8220
www.mdpi.com/journal/sensors
Article
Geometric Feature-Based Facial Expression Recognition in
Image Sequences Using Multi-Class AdaBoost and Support
Vector Machines
Division of Computer Engineering, Chonbuk National University, Jeonju-si, Jeollabuk-do
Tel.: +82-63-270-2406; Fax: +82-63-270-2394.
Received: 3 May 2013; in revised form: 29 May 2013 / Accepted: 3 June 2013 /
Published: 14 June 2013
('32322842', 'Deepak Ghimire', 'deepak ghimire')
('2034182', 'Joonwhoan Lee', 'joonwhoan lee')
Korea; E-Mail: deep@jbnu.ac.kr
* Author to whom correspondence should be addressed; E-Mail: chlee@jbnu.ac.kr;
8bb21b1f8d6952d77cae95b4e0b8964c9e0201b0Methods
at 11/2013
Multimodal Interaction on a Social Robotic Platform
Summary: This paper presents the multimodal interaction capabilities of the robotic research platform ELIAS. An overview of the robotic platform as well as the developed processing components is presented; the classification of the components follows the concept of sensing and acting modalities. Finally, the interplay between those components within a multimodal gaming scenario is described.
Keywords: Human-robot interaction, multimodal, gestures, gaze
1 Introduction
Intuitive and natural operation of increasingly complex technology is becoming ever more important, since everyday life now features a multitude of technical devices with a growing range of functions. Various activities in the research community have long addressed verbal as well as nonverbal forms of communication (e.g., emotion and gesture recognition) in human-machine interaction. Particularly in recent years, several innovations in this research field (e.g., touchscreens, gesture control in televisions) have contributed to intuitive and natural interaction concepts finding more and more use in everyday life. Options for speech and gesture control of game consoles and mobile phones are also increasingly used for device operation. These more natural, multimodal user interfaces are quickly accessible to the user and allow a more intuitive interaction with complex technical devices.
Multimodal interaction also lends itself to robotic systems, in order to simplify their use and the access to their functionalities. Ideally, a person should have complete freedom in the choice of modalities in order to reach the desired communication goal. To this end, this paper examines, by way of example in a gaming application, the sensing and acting modalities of a research robot platform that is reduced purely to communication aspects.
1.1 Structure of the paper
This paper first gives a brief overview of multimodal interaction in general, considered in terms of sensing and acting modalities. The next section presents related work that also deals with multimodal robotic systems. The following section introduces the robot platform ELIAS with its sensing, processing, and acting
at – Automatisierungstechnik 61 (2013) 11 / DOI 10.1515/auto.2013.1062 © Oldenbourg Wissenschaftsverlag
('35116429', 'Jürgen Blume', 'jürgen blume')
('1682283', 'Tobias Rehrl', 'tobias rehrl')
('1705843', 'Gerhard Rigoll', 'gerhard rigoll')
Korrespondenzautor: blume@tum.de
8b1db0894a23c4d6535b5adf28692f795559be90Biometric and Surveillance Technology for Human and Activity Identification X, edited by Ioannis Kakadiaris,
Walter J. Scheirer, Laurence G. Hassebrook, Proc. of SPIE Vol. 8712, 87120Q · © 2013 SPIE
CCC code: 0277-786X/13/$18 · doi: 10.1117/12.2018974
Proc. of SPIE Vol. 8712 87120Q-1
8b2e3805b37c18618b74b243e7a6098018556559Workshop track - ICLR 2018
IMPROVING VARIATIONAL AUTOENCODER WITH DEEP
University of Nottingham, Nottingham, UK
Shenzhen University, Shenzhen, China
('3468964', 'Xianxu Hou', 'xianxu hou')
('1698461', 'Guoping Qiu', 'guoping qiu')
xianxu.hou@nottingham.edu.cn
guoping.qiu@nottingham.ac.uk
8b74252625c91375f55cbdd2e6415e752a281d10Using Convolutional 3D Neural Networks for
User-Independent Continuous Gesture Recognition
Necati Cihan Camgoz, Simon Hadfield
University of Surrey
Guildford, GU2 7XH, UK
Human Technology & Pattern Recognition
RWTH Aachen University, Germany
University of Surrey
Guildford, GU2 7XH, UK
('2309364', 'Oscar Koller', 'oscar koller')
('1695195', 'Richard Bowden', 'richard bowden')
{n.camgoz, s.hadfield}@surrey.ac.uk
koller@cs.rwth-aachen.de
r.bowden@surrey.ac.uk
8b38124ff02a9cf8ad00de5521a7f8a9fa4d7259Real-time 3D Face Fitting and Texture Fusion
on In-the-wild Videos
Centre for Vision, Speech and Signal Processing
Image Understanding and Interactive Robotics
University of Surrey
Guildford, GU2 7XH, United Kingdom
Contact: http://www.patrikhuber.ch
Reutlingen University
D-72762 Reutlingen, Germany
('39976184', 'Patrik Huber', 'patrik huber')
('49759031', 'William Christmas', 'william christmas')
('1748684', 'Josef Kittler', 'josef kittler')
('49330989', 'Philipp Kopp', 'philipp kopp')
134f1cee8408cca648d8b4ca44b38b0a7023af71Partially Shared Multi-Task Convolutional Neural Network with Local
Constraint for Face Attribute Learning
College of Information Science and Electronic Engineering
Zhejiang University, China
('41021477', 'Jiajiong Cao', 'jiajiong cao')
('2367491', 'Yingming Li', 'yingming li')
('1720488', 'Zhongfei Zhang', 'zhongfei zhang')
{jiajiong, yingming, zhongfei}@zju.edu.cn
13719bbb4bb8bbe0cbcdad009243a926d93be433Deep LDA-Pruned Nets for Efficient Facial Gender Classification
McGill University
University Street, Montral, QC H3A 0E9, Canada
('1992537', 'Qing Tian', 'qing tian')
('1699104', 'Tal Arbel', 'tal arbel')
('1713608', 'James J. Clark', 'james j. clark')
{qtian,arbel,clark}@cim.mcgill.ca
134db6ca13f808a848321d3998e4fe4cdc52fbc2IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 36, NO. 2, APRIL 2006
433
Dynamics of Facial Expression: Recognition of
Facial Actions and Their Temporal Segments
From Face Profile Image Sequences
('1694605', 'Maja Pantic', 'maja pantic')
('1744405', 'Ioannis Patras', 'ioannis patras')
133dd0f23e52c4e7bf254e8849ac6f8b17fcd22d
Active Clustering with Model-Based
Uncertainty Reduction
('2228109', 'Caiming Xiong', 'caiming xiong')
('34187462', 'David M. Johnson', 'david m. johnson')
('3587688', 'Jason J. Corso', 'jason j. corso')
1329206dbdb0a2b9e23102e1340c17bd2b2adcf5Part-based R-CNNs for
Fine-grained Category Detection
University of California, Berkeley
('40565777', 'Ning Zhang', 'ning zhang')
('1753210', 'Trevor Darrell', 'trevor darrell')
{nzhang,jdonahue,rbg,trevor}@eecs.berkeley.edu
1369e9f174760ea592a94177dbcab9ed29be1649Geometrical Facial Modeling for Emotion Recognition ('3250085', 'Giampaolo L. Libralon', 'giampaolo l. libralon')
133900a0e7450979c9491951a5f1c2a403a180f0
Social Grouping for Multi-target Tracking and
Head Pose Estimation in Video
('12561781', 'Zhen Qin', 'zhen qin')
('3564227', 'Christian R. Shelton', 'christian r. shelton')
13bda03fc8984d5943ed8d02e49a779d27c84114Efficient Object Detection Using Cascades of Nearest Convex Model Classifiers
Eskisehir Osmangazi University
Laboratoire Jean Kuntzmann
Meselik Kampusu, 26480, Eskisehir Turkey
B.P. 53, 38041 Grenoble Cedex 9, France
('2277308', 'Hakan Cevikalp', 'hakan cevikalp')
('1756114', 'Bill Triggs', 'bill triggs')
hakan.cevikalp@gmail.com
Bill.Triggs@imag.fr
13db9466d2ddf3c30b0fd66db8bfe6289e880802I.J. Image, Graphics and Signal Processing, 2017, 1, 27-32
Published Online January 2017 in MECS (http://www.mecs-press.org/)
DOI: 10.5815/ijigsp.2017.01.04
Transfer Subspace Learning Model for Face
Recognition at a Distance
MIT, Pune ,India
AISSM’S IOT,India
College of Engineering Pune, India
('3335915', 'Alwin Anuse', 'alwin anuse')
('32032353', 'Vibha Vyas', 'vibha vyas')
Email: alwin.anuse@mitpune.edu.in
Email: deshmukhnilima@gmail.com
Email: vsv.extc@coep.ac.in
13a994d489c15d440c1238fc1ac37dad06dd928cLearning Discriminant Face Descriptor for Face
Recognition
Center for Biometrics and Security Research & National Laboratory of Pattern
Recognition, Institute of Automation, Chinese Academy of Sciences
('1718623', 'Zhen Lei', 'zhen lei')
('34679741', 'Stan Z. Li', 'stan z. li')
fzlei,szlig@nlpr.ia.ac.cn
131178dad3c056458e0400bed7ee1a36de1b2918Visual Reranking through Weakly Supervised Multi-Graph Learning
Xidian University, Xi an, China
Xiamen University, Xiamen, China
IBM Watson Research Center, Armonk, NY, USA
University of Technology, Sydney, Australia
('1715156', 'Cheng Deng', 'cheng deng')
('1725599', 'Rongrong Ji', 'rongrong ji')
('39059457', 'Wei Liu', 'wei liu')
('1692693', 'Dacheng Tao', 'dacheng tao')
('10699750', 'Xinbo Gao', 'xinbo gao')
{chdeng.xd, jirongrong, wliu.cu, dacheng.tao, xbgao.xidian}@gmail.com
13141284f1a7e1fe255f5c2b22c09e32f0a4d465Object Tracking by
Oversampling Local Features
('2619131', 'Federico Pernici', 'federico pernici')
('8196487', 'Alberto Del Bimbo', 'alberto del bimbo')
132527383890565d18f1b7ad50d76dfad2f14972JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 22, 1033-1046 (2006)
Facial Expression Classification Using PCA and
Hierarchical Radial Basis Function Network*
Department of Computer Science and Information Engineering
National Taipei University
Sanshia, 237 Taiwan
Intelligent human-computer interaction (HCI) integrates versatile tools such as per-
ceptual recognition, machine learning, affective computing, and emotion cognition to
enhance the ways humans interact with computers. Facial expression analysis is one of
the essential media of behavior interpretation and emotion modeling. In this paper, we
modify and develop a reconstruction method utilizing Principal Component Analysis
(PCA) to perform facial expression recognition. A framework of hierarchical radial basis
function network (HRBFN) is further proposed to classify facial expressions based on
local features extraction by PCA technique from lips and eyes images. It decomposes the
acquired data into a small set of characteristic features. The objective of this research is
to develop a more efficient approach to discriminate between seven prototypic facial ex-
pressions, such as neutral, smile, anger, surprise, fear, disgust, and sadness. A construc-
tive procedure is detailed and the system performance is evaluated on a public database
“Japanese Females Facial Expression (JAFFE).” We conclude that local images of lips
and eyes can be treated as cues for facial expression. As anticipated, the experimental
results demonstrate the potential capabilities of the proposed approach.
Keywords: intelligent human-computer interaction, facial expression classification, hier-
archical radial basis function network, principal component analysis, local features
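As a rough illustration of the PCA step described in the abstract above — decomposing local lip and eye patches into a small set of characteristic features — here is a minimal sketch. The patch size, sample count, and number of components are arbitrary assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on rows of X (each row: one flattened lip/eye patch)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data; rows of Vt are the principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k]                 # keep the k leading components

def pca_project(X, mean, comps):
    """Project patches onto the k characteristic features."""
    return (X - mean) @ comps.T

# Toy data: 20 "patches" of 64 pixels each (stand-in for real face crops)
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 64))
mean, comps = pca_fit(X, k=5)
feats = pca_project(X, mean, comps)     # 20 samples x 5 features
```

The resulting low-dimensional feature vectors would then feed a classifier such as the hierarchical RBF network the paper proposes.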
1. INTRODUCTION
The intelligent human-computer interaction (HCI) technologies play important roles
in the development of advanced and ambient communication/computation. In contrast to
the conventional mechanisms of passive manipulation, intelligent HCI integrates versa-
tile tools such as perceptual recognition, machine learning, affective computing, and
emotion cognition to enhance the ways humans interact with computers. Migrating from
W4 (what, where, when, who) to W5+ (what, where, when, who, why, how), novel intel-
ligent interface design has placed emphasis on both apparent and internal behavior of
users [1]. Nonverbal information such as facial expression, posture, gesture, and eye gaze
is suitable for behavior interpretation. Facial data analysis is one of the essential media
of perceptual processing and emotion modeling.
Received August 16, 2005; accepted January 17, 2006.
Communicated by Jhing-Fa Wang, Pau-Choo Chung and Mark Billinghurst.
* This work was supported in part by the National Science Council of Taiwan, R.O.C., under grants No. NSC
88-2213-E216-010 and No. NSC 89-2213-E216-016.
* The preliminary content of this paper has been presented in “International Conference on Neural Information
Processing,” Perth, Australia, November 1999. Acknowledgement also due to Mr. Der-Chen Pan at the Na-
tional Taipei University for his help in performing simulations. The author would like to thank Mr. Ming
Shon Chen at Ulead System Inc., Taipei, Taiwan, for his early work and assistance in this research.
('39548632', 'Daw-Tung Lin', 'daw-tung lin')
13604bbdb6f04a71dea4bd093794e46730b0a488Robust Loss Functions under Label Noise for
Deep Neural Networks
Microsoft, Bangalore
Indian Institute of Science, Bangalore
Indian Institute of Science, Bangalore
('3201314', 'Aritra Ghosh', 'aritra ghosh')
('47602083', 'Himanshu Kumar', 'himanshu kumar')
('1711348', 'P. S. Sastry', 'p. s. sastry')
arghosh@microsoft.com
himanshukr@ee.iisc.ernet.in
sastry@ee.iisc.ernet.in
1394ca71fc52db972366602a6643dc3e65ee8726
EmoReact: A Multimodal Approach and Dataset
for Recognizing Emotional Responses in Children
Conference Paper · November 2016
DOI: 10.1145/2993148.2993168
Behnaz Nojavanasghari
University of Central Florida
Tadas Baltrusaitis
Carnegie Mellon University
Charles E. Hughes
University of Central Florida
137aa2f891d474fce1e7a1d1e9b3aefe21e22b34Is the Eye Region More Reliable Than the Face? A Preliminary Study of
Face-based Recognition on a Transgender Dataset
Institute of Interdisciplinary Studies in Identity Sciences (IISIS
University of North Carolina Wilmington
('1805620', 'Gayathri Mahalingam', 'gayathri mahalingam')
('3275890', 'Karl Ricanek', 'karl ricanek')
{mahalingamg, ricanekk}@uncw.edu
13b1b18b9cfa6c8c44addb9a81fe10b0e89db32aA Hierarchical Deep Temporal Model for
Group Activity Recognition
by
B. Tech., Indian Institute of Technology Jodhpur
Thesis Submitted in Partial Fulfillment of the
Requirements for the Degree of
Master of Science
in the
School of Computing Science
Faculty of Applied Sciences
SIMON FRASER UNIVERSITY
Spring 2016
All rights reserved.
However, in accordance with the Copyright Act of Canada, this work may be
reproduced without authorization under the conditions for “Fair Dealing.”
Therefore, limited reproduction of this work for the purposes of private study,
research, education, satire, parody, criticism, review and news reporting is likely
to be in accordance with the law, particularly if cited appropriately.
('2716937', 'Srikanth Muralidharan', 'srikanth muralidharan')
1329bcac5ebd0b08ce33ae1af384bd3e7a0deacaDataset Issues in Object Recognition
J. Ponce1,2, T.L. Berg3, M. Everingham4, D.A. Forsyth1, M. Hebert5,
S. Lazebnik1, M. Marszalek6, C. Schmid6, B.C. Russell7, A. Torralba7,
C.K.I. Williams8, J. Zhang6, and A. Zisserman4
University of Illinois at Urbana-Champaign, USA
2 Ecole Normale Sup´erieure, Paris, France
University of California at Berkeley, USA
Oxford University, UK
Carnegie Mellon University, Pittsburgh, USA
6 INRIA Rhˆone-Alpes, Grenoble, France
7 MIT, Cambridge, USA
University of Edinburgh, Edinburgh, UK
133da0d8c7719a219537f4a11c915bf74c320da7International Journal of Computer Applications (0975 – 8887)
Volume 123 – No.4, August 2015
A Novel Method for 3D Image Segmentation with Fusion
of Two Images using Color K-means Algorithm
Dept. of CSE
ITM Universe
Gwalior
Dept. of CSE
ITM Universe
Gwalior
13c250fb740cb5616aeb474869db6ab11560e2a6LEARNING LANGUAGE–VISION CORRESPONDENCES
by
A thesis submitted in conformity with the requirements
for the degree of Doctor of Philosophy
Graduate Department of Computer Science
University of Toronto
('38986168', 'Michael Jamieson', 'michael jamieson')
13940d0cc90dbf854a58f92d533ce7053aac024aBoston University
OpenBU
Theses & Dissertations
http://open.bu.edu
Boston University Theses and Dissertations
2015
Local learning by partitioning
http://hdl.handle.net/2144/15204
Boston University
('2870611', 'Wang', 'wang')
('17099457', 'Joseph', 'joseph')
133f01aec1534604d184d56de866a4bd531dac87Effective Unconstrained Face Recognition by
Combining Multiple Descriptors and Learned
Background Statistics
('1776343', 'Lior Wolf', 'lior wolf')
('1756099', 'Tal Hassner', 'tal hassner')
('2188620', 'Yaniv Taigman', 'yaniv taigman')
131bfa2ae6a04fd3b921ccb82b1c3f18a400a9c1Elastic Graph Matching versus Linear Subspace
Methods for Frontal Face Verification
Dept. of Informatics
Aristotle University of Thessaloniki, Box 451, 54124 Thessaloniki, Greece
Tel: +30-2310-996361, Fax: +30-2310-998453
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1737071', 'Anastasios Tefas', 'anastasios tefas')
('1698588', 'Ioannis Pitas', 'ioannis pitas')
E-mail: pitas@zeus.csd.auth.gr
13841d54c55bd74964d877b4b517fa94650d9b65Generalised Ambient Reflection Models for Lambertian and
Phong Surfaces
Author
Zhang, Paul, Gao, Yongsheng
Published
2009
Conference Title
Proceedings of the 2009 IEEE International Conference on Image Processing (ICIP 2009)
DOI
https://doi.org/10.1109/ICIP.2009.5413812
Copyright Statement
© 2009 IEEE. Personal use of this material is permitted. However, permission to reprint/
republish this material for advertising or promotional purposes or for creating new collective
works for resale or redistribution to servers or lists, or to reuse any copyrighted component of
this work in other works must be obtained from the IEEE.
Downloaded from
http://hdl.handle.net/10072/30001
Griffith Research Online
https://research-repository.griffith.edu.au
1389ba6c3ff34cdf452ede130c738f37dca7e8cbA Convolution Tree with Deconvolution Branches: Exploiting Geometric
Relationships for Single Shot Keypoint Detection
Department of Electrical and Computer Engineering, CFAR and UMIACS
University of Maryland-College Park, USA
('40080979', 'Amit Kumar', 'amit kumar')
('9215658', 'Rama Chellappa', 'rama chellappa')
akumar14@umiacs.umd.edu, rama@umiacs.umd.edu
131e395c94999c55c53afead65d81be61cd349a4
1384a83e557b96883a6bffdb8433517ec52d0bea
13fd0a4d06f30a665fc0f6938cea6572f3b496f7
132f88626f6760d769c95984212ed0915790b625UC Irvine
UC Irvine Electronic Theses and Dissertations
Title
Exploring Entity Resolution for Multimedia Person Identification
Permalink
https://escholarship.org/uc/item/9t59f756
Author
Zhang, Liyan
Publication Date
2014-01-01
Peer reviewed|Thesis/dissertation
eScholarship.org
Powered by the California Digital Library
University of California
13aef395f426ca8bd93640c9c3f848398b189874Image Preprocessing and Complete 2DPCA with Feature
Extraction for Gender Recognition
NSF REU 2017: Statistical Learning and Data Mining
University of North Carolina Wilmington
13f6ab2f245b4a871720b95045c41a4204626814RESEARCH ARTICLE
Cortex commands the performance of
skilled movement
Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United
States
('13837962', 'Jian-Zhong Guo', 'jian-zhong guo')
('35466277', 'Austin R Graves', 'austin r graves')
('31262308', 'Wendy W Guo', 'wendy w guo')
('12009815', 'Jihong Zheng', 'jihong zheng')
('3031589', 'Allen Lee', 'allen lee')
('38033405', 'Nuo Li', 'nuo li')
('40634144', 'John J Macklin', 'john j macklin')
('34447371', 'James W Phillips', 'james w phillips')
('1875164', 'Brett D Mensh', 'brett d mensh')
('2424812', 'Kristin Branson', 'kristin branson')
('5832202', 'Adam W Hantman', 'adam w hantman')
13be4f13dac6c9a93f969f823c4b8c88f607a8c4Families in the Wild (FIW): Large-Scale Kinship Image
Database and Benchmarks
Dept. of Electrical and Computer Engineering
Northeastern University
Boston, MA, USA
('14802538', 'Joseph P. Robinson', 'joseph p. robinson')
('2025056', 'Ming Shao', 'ming shao')
('1746738', 'Yue Wu', 'yue wu')
('1708679', 'Yun Fu', 'yun fu')
{jrobins1, mingshao, yuewu, yunfu}@ece.neu.edu
13afc4f8d08f766479577db2083f9632544c7ea6Multiple Kernel Learning for
Emotion Recognition in the Wild
Machine Perception Laboratory
UCSD
EmotiW Challenge, ICMI, 2013
1
('39707211', 'Karan Sikka', 'karan sikka')
('1963167', 'Karmen Dykstra', 'karmen dykstra')
('1924458', 'Suchitra Sathyanarayana', 'suchitra sathyanarayana')
('2724380', 'Gwen Littlewort', 'gwen littlewort')
13188a88bbf83a18dd4964e3f89d0bc0a4d3a0bdHOD, St. Joseph College of Information Technology, Songea, Tanzania
13d9da779138af990d761ef84556e3e5c1e0eb94Int J Comput Vis (2008) 77: 3–24
DOI 10.1007/s11263-007-0093-5
Learning to Locate Informative Features for Visual Identification
Received: 18 August 2005 / Accepted: 11 September 2007 / Published online: 9 November 2007
© Springer Science+Business Media, LLC 2007
('3236352', 'Andras Ferencz', 'andras ferencz')
('1689212', 'Jitendra Malik', 'jitendra malik')
1316296fae6485c1510f00b1b57fb171b9320ac2FaceID-GAN: Learning a Symmetry Three-Player GAN
for Identity-Preserving Face Synthesis
CUHK - SenseTime Joint Lab, The Chinese University of Hong Kong
2SenseTime Research
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
('8035201', 'Yujun Shen', 'yujun shen')
('47571885', 'Ping Luo', 'ping luo')
('1721677', 'Junjie Yan', 'junjie yan')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
{sy116, pluo, xtang}@ie.cuhk.edu.hk, yanjunjie@sensetime.com, xgwang@ee.cuhk.edu.hk
7f57e9939560562727344c1c987416285ef76cdaAccessorize to a Crime: Real and Stealthy Attacks on
State-of-the-Art Face Recognition
Carnegie Mellon University
Pittsburgh, PA, USA
Carnegie Mellon University
Pittsburgh, PA, USA
Carnegie Mellon University
Pittsburgh, PA, USA
University of North Carolina
Chapel Hill, NC, USA
('36301492', 'Mahmood Sharif', 'mahmood sharif')
('38572260', 'Lujo Bauer', 'lujo bauer')
('38181360', 'Sruti Bhagavatula', 'sruti bhagavatula')
('1746214', 'Michael K. Reiter', 'michael k. reiter')
mahmoods@cmu.edu
lbauer@cmu.edu
srutib@cmu.edu
reiter@cs.unc.edu
7fc5b6130e9d474dfb49d9612b6aa0297d481c8eDimensionality Reduction on Grassmannian via Riemannian
Optimization:
A Generalized Perspective
Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
University of Chinese Academy of Sciences, Beijing, 100049, China
3Key Laboratory of Optical-Electronics Information Processing
November 20, 2017
('1803285', 'Tianci Liu', 'tianci liu')
('2172914', 'Zelin Shi', 'zelin shi')
('2556853', 'Yunpeng Liu', 'yunpeng liu')
7f511a6a2b38a26f077a5aec4baf5dffc981d881LOW-LATENCY HUMAN ACTION RECOGNITION WITH WEIGHTED MULTI-REGION
CONVOLUTIONAL NEURAL NETWORK
University of Science and Technology of China, Hefei, Anhui, China
†HERE Technologies, Chicago, Illinois, USA
('49417387', 'Yunfeng Wang', 'yunfeng wang')
('38272296', 'Wengang Zhou', 'wengang zhou')
('46324995', 'Qilin Zhang', 'qilin zhang')
('49897466', 'Xiaotian Zhu', 'xiaotian zhu')
('7179232', 'Houqiang Li', 'houqiang li')
7f21a7441c6ded38008c1fd0b91bdd54425d3f80Real Time System for Facial Analysis
Tampere University of Technology, Finland
I. INTRODUCTION
Most signal or image processing algorithms should be
designed with real-time execution in mind. Most use cases
compute on an embedded platform while receiving streaming
data as a constant data flow. In machine learning, however, the
real time deployment and streaming data processing are less
often a design criterion. Instead, the bulk of machine learning is
executed offline on the cloud without any real time restrictions.
However, the real time use is rapidly becoming more important
as deep learning systems are appearing into, for example,
autonomous vehicles and working machines.
In this work, we describe the functionality of our demo
system integrating a number of common real time machine
learning systems together. The demo system consists of a
screen, webcam and a computer, and it estimates the age,
gender and facial expression of all faces seen by the webcam.
A picture of the system in use is shown in Figure 1. There is
also a Youtube video at https://youtu.be/Kfe5hKNwrCU and
the code is freely available at https://github.com/mahehu/TUT-
live-age-estimator.
Apart from serving as an illustrative example of modern
human level machine learning for the general public, the
system also highlights several aspects that are common in real
time machine learning systems. First, the subtasks needed to
achieve the three recognition results represent a wide variety of
machine learning problems: (1) object detection is used to find
the faces, (2) age estimation represents a regression problem
with a real-valued target output, (3) gender prediction is a
binary classification problem, and (4) facial expression
prediction is a multi-class classification problem. Moreover, all
these tasks should operate in unison, such that each task will
receive enough resources from a limited pool.
In the remainder of this paper, we first describe the system
level multithreaded architecture for real time processing in
Section II. This is followed by a detailed discussion of the individual
components of the system in Section III. Next, we report
experimental results on the accuracy of each individual
recognition component in Section IV, and finally, discuss the
benefits of demonstrating the potential of modern machine
learning to both the general public and experts in the field.
II. SYSTEM LEVEL FUNCTIONALITY
The challenge in real-time operation is that there are
numerous components in the system, and each uses different
amount of execution time. The system should be designed
such that the operation appears smooth, which means that the
most visible tasks should be fast and have the priority in
scheduling.
Figure 1. Demo system recognizes the age, gender and facial
expression in real time.
The system is running in threads, as illustrated in Figure 2.
The whole system is controlled by the upper level controller
and visualization thread, which owns and starts the sub-
threads dedicated for individual tasks. The main thread holds
all data and executes the visualization loop showing the
recognition results to the user at 25 frames per second.
The recognition process starts from the grabber thread,
which is connected to a webcam. The thread requests video
frames from the camera and feeds them into a FIFO buffer
located inside the controller thread. At grab time, each frame is
wrapped inside a class object, which holds the necessary meta-
data related to that frame. More specifically, each frame is
linked with a timestamp, a flag indicating whether face
detection has already been executed, and the locations
(bounding boxes) of all faces found in the scene.
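A per-frame wrapper of this kind can be sketched as a small data class; the field names here are illustrative assumptions, not the authors' actual class.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Frame:
    """Sketch of the per-frame metadata wrapper: raw pixels plus a
    timestamp, a detection flag, and the detected face boxes."""
    image: object                                    # raw webcam pixels
    timestamp: float = field(default_factory=time.time)
    detected: bool = False                           # face detection run yet?
    face_boxes: list = field(default_factory=list)   # (x, y, w, h) per face
```

Using `default_factory` gives every frame its own empty box list, so detections recorded on one frame never leak into another.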
The face processing itself consists of two parts: face
detection and face analysis. Detection is executed in the
detection thread, which operates asynchronously, requesting
new unprocessed frames from the controller thread. After
face detection, the locations of the found faces are sent to the
controller thread, which then matches each new face with the
face objects from previous frames using straightforward
centroid tracking. Tracking allows us to average the estimates
for each face over a number of recent frames.
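Straightforward centroid tracking of this kind can be sketched as a greedy nearest-centroid assignment; the matching rule and distance threshold below are assumptions, since the paper does not specify them.

```python
def centroid(box):
    # Center point of an (x, y, w, h) bounding box.
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def match_faces(new_boxes, tracked, max_dist=50.0):
    """Greedily match each newly detected box to the nearest tracked
    face (by centroid distance), consuming each track at most once.
    Returns {new_box_index: track_id}; far-away boxes stay unmatched."""
    matches = {}
    free = dict(tracked)                 # track_id -> last known box
    for i, box in enumerate(new_boxes):
        cx, cy = centroid(box)
        best, best_d = None, max_dist
        for fid, old in free.items():
            ox, oy = centroid(old)
            d = ((cx - ox) ** 2 + (cy - oy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = fid, d
        if best is not None:
            matches[i] = best
            del free[best]               # one new box per track
    return matches
```

Once a new detection is matched to a track, the per-face age, gender, and expression estimates can be averaged over that track's recent frames.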
The detection thread operates, on average, faster than the
frame rate, but delays sometimes occur due to high load on
the other threads. Therefore, the controller thread holds a
buffer of the most recent frames, which gives the pipeline
slack to absorb such delays.
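A buffer of the most recent frames can be sketched with a bounded deque, where old frames are silently dropped if the detector falls behind; this is an illustrative design, not the demo's actual implementation.

```python
import threading
from collections import deque

class FrameBuffer:
    """Bounded buffer of recent frames: when the detector lags, the
    oldest frames fall off instead of stalling the grabber."""

    def __init__(self, maxlen=16):
        self._frames = deque(maxlen=maxlen)  # oldest entries are evicted
        self._lock = threading.Lock()

    def push(self, frame):
        with self._lock:
            self._frames.append(frame)

    def latest_unprocessed(self):
        # Serve the newest frame that detection has not handled yet,
        # so results stay close to what the user currently sees.
        with self._lock:
            for frame in reversed(self._frames):
                if not frame.get("processed", False):
                    return frame
        return None
```

Serving the newest unprocessed frame (rather than the oldest) keeps the displayed results fresh even after a processing hiccup.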
The recognition thread is responsible for assessing the age,
gender, and facial expression of each face crop found in the
image. This thread also operates asynchronously,
requesting new unprocessed (but face-detected) frames from
the controller thread.
('51232696', 'Janne Tommola', 'janne tommola')
('51149972', 'Pedram Ghazi', 'pedram ghazi')
('51131997', 'Bishwo Adhikari', 'bishwo adhikari')
('1847889', 'Heikki Huttunen', 'heikki huttunen')
7fce5769a7d9c69248178989a99d1231daa4fce9(IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 7, No. 5, 2016
Towards Face Recognition Using Eigenface
Department of Computer Engineering
King Faisal University
Hofuf, Al-Ahsa 31982, Saudi Arabia
('39604645', 'Md. Al-Amin Bhuiyan', 'md. al-amin bhuiyan')
7fa2605676c589a7d1a90d759f8d7832940118b5A New Approach to Clothing Classification using Mid-Level Layers
Department of Electrical and Computer Engineering
Clemson University, Clemson, SC
('2181472', 'Bryan Willimon', 'bryan willimon'){rwillim,iwalker,stb}@clemson.edu
7ff42ee09c9b1a508080837a3dc2ea780a1a839bData Fusion for Real-time Multimodal Emotion Recognition through Webcams
and Microphones in E-Learning
Welten Institute, Research Centre for Learning, Teaching and Technology, Faculty of
Psychology and Educational Sciences, Open University of the Netherlands, Valkenburgerweg
177, 6419 AT Heerlen, The Netherlands
('2565070', 'Kiavash Bahreini', 'kiavash bahreini')
('1717772', 'Rob Nadolski', 'rob nadolski')
('3235367', 'Wim Westera', 'wim westera')
{kiavash.bahreini, rob.nadolski, wim.westera}@ou.nl
7fb5006b6522436ece5bedf509e79bdb7b79c9a7Multi-Task Convolutional Neural Network for Face Recognition
Department of Computer Science and Engineering
Michigan State University, East Lansing MI
('2399004', 'Xi Yin', 'xi yin')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
{yinxi1,liuxm}@msu.edu
7f533bd8f32525e2934a66a5b57d9143d7a89ee1Audio-Visual Identity Grounding for Enabling Cross Media Search
Paper ID 22
('1950685', 'Kevin Brady', 'kevin brady')
7f44f8a5fd48b2d70cc2f344b4d1e7095f4f1fe5Int J Comput Vis (2016) 119:60–75
DOI 10.1007/s11263-015-0839-4
Sparse Output Coding for Scalable Visual Recognition
Received: 15 May 2013 / Accepted: 16 June 2015 / Published online: 26 June 2015
© Springer Science+Business Media New York 2015
('1729034', 'Bin Zhao', 'bin zhao')
7f4bc8883c3b9872408cc391bcd294017848d0cf

Computer Sciences Department
The Multimodal Focused Attribute Model: A Nonparametric
Bayesian Approach to Simultaneous Object Classification and
Attribute Discovery
Technical Report #1697
January 2012
('6256616', 'Jake Rosin', 'jake rosin')
('1724754', 'Charles R. Dyer', 'charles r. dyer')
('1832364', 'Xiaojin Zhu', 'xiaojin zhu')
7f6061c83dc36633911e4d726a497cdc1f31e58aYouTube-8M: A Large-Scale Video Classification
Benchmark
Paul Natsev
Google Research
('2461984', 'Sami Abu-El-Haija', 'sami abu-el-haija')
('1805076', 'George Toderici', 'george toderici')
('32575647', 'Nisarg Kothari', 'nisarg kothari')
('2119006', 'Joonseok Lee', 'joonseok lee')
('2758088', 'Balakrishnan Varadarajan', 'balakrishnan varadarajan')
('2259154', 'Sudheendra Vijayanarasimhan', 'sudheendra vijayanarasimhan')
haija@google.com
gtoderici@google.com
ndk@google.com
joonseok@google.com
natsev@google.com
balakrishnanv@google.com
svnaras@google.com
7fa3d4be12e692a47b991c0b3d3eba3a31de4d05Efficient Online Spatio-Temporal Filtering
for Video Event Detection
1 Department of Computer Science and Engineering,
Shanghai Jiao Tong University, Shanghai 200240, China
2 School of Electrical and Electronic Engineering,
Nanyang Technological University, Singapore 639798, Singapore
3 Computer Science and Engineering Division,
University of Michigan
Ann Arbor, MI 48105, USA
('3084614', 'Xinchen Yan', 'xinchen yan')
('34316743', 'Junsong Yuan', 'junsong yuan')
('2574445', 'Hui Liang', 'hui liang')
skywalkeryxc@gmail.com
jsyuan@ntu.edu.sg, hliang1@e.ntu.edu.sg
7f445191fa0475ff0113577d95502a96dc702ef9Towards an Unequivocal Representation of Actions
University of Bristol
University of Bristol
University of Bristol
('2052236', 'Michael Wray', 'michael wray')
('3420479', 'Davide Moltisanti', 'davide moltisanti')
('1728459', 'Dima Damen', 'dima damen')
firstname.surname@bristol.ac.uk
7f82f8a416170e259b217186c9e38a9b05cb3eb4Multi-Attribute Robust Component Analysis for Facial UV Maps
Imperial College London, London, UK
Middlesex University London, London, UK
Goldsmiths, University of London, London, UK
('24278037', 'Stylianos Moschoglou', 'stylianos moschoglou')
('31243357', 'Evangelos Ververas', 'evangelos ververas')
('1780393', 'Yannis Panagakis', 'yannis panagakis')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
{s.moschoglou, e.ververas16, i.panagakis, s.zafeiriou}@imperial.ac.uk, m.nicolaou@gold.ac.uk
7f36dd9ead29649ed389306790faf3b390dc0aa2MOVEMENT DIFFERENCES BETWEEN DELIBERATE
AND SPONTANEOUS FACIAL EXPRESSIONS:
ZYGOMATICUS MAJOR ACTION IN SMILING
('2059653', 'Zara Ambadar', 'zara ambadar')
7f6cd03e3b7b63fca7170e317b3bb072ec9889e0A Face Recognition Signature Combining Patch-based
Features with Soft Facial Attributes
L. Zhang, P. Dou, I.A. Kakadiaris
Computational Biomedicine Lab, 4849 Calhoun Rd, Rm 373, Houston, TX 77204
7fab17ef7e25626643f1d55257a3e13348e435bdAge Progression/Regression by Conditional Adversarial Autoencoder
The University of Tennessee, Knoxville, TN, USA
('1786391', 'Zhifei Zhang', 'zhifei zhang')
('46970616', 'Yang Song', 'yang song')
('1698645', 'Hairong Qi', 'hairong qi')
{zzhang61, ysong18, hqi}@utk.edu
7f6599e674a33ed64549cd512ad75bdbd28c7f6cKernel Alignment Inspired
Linear Discriminant Analysis
Department of Computer Science and Engineering,
University of Texas at Arlington, TX, USA
('1747268', 'Shuai Zheng', 'shuai zheng')zhengs123@gmail.com, chqding@uta.edu
7f9260c00a86a0d53df14469f1fa10e318ee2a3cHOW IRIS RECOGNITION WORKS
University of Cambridge, The Computer Laboratory, Cambridge CB3 0FD, U.K
('1781325', 'John Daugman', 'john daugman')
7f97a36a5a634c30de5a8e8b2d1c812ca9f971aeIncremental Classifier Learning with Generative Adversarial Networks
Northeastern University 2Microsoft Research 3City University of New York
('1746738', 'Yue Wu', 'yue wu')
('1691128', 'Zicheng Liu', 'zicheng liu')
{yuewu,yunfu}@ece.neu.edu, yye@gradcenter.cuny.edu
{yiche,lijuanw,zliu,yandong.guo,zhang}@microsoft.com
7f2a4cd506fe84dee26c0fb41848cb219305173fInternational Journal of Hybrid Information Technology
Vol.8, No.2 (2015), pp.109-120
http://dx.doi.org/10.14257/ijhit.2015.8.2.10
Face Detection and Pose Estimation Based on Evaluating Facial
Feature Selection
School of Information Science and Engineering, Central South University, Changsha
410083, China
Huazhong University of
Science and Technology, Wuhan, China
Collage of Sciences, Baghdad University, Iraq
('2759156', 'Hiyam Hatem', 'hiyam hatem')
('2742321', 'Mohammed Lutf', 'mohammed lutf')
('2462860', 'Jumana Waleed', 'jumana waleed')
hiamhatim2005@yahoo.com, bjzou@vip.163.com, aed.m.muttasher@gmail.com,
jumana_waleed@yahoo.com, mohammed.lutf@gmail.com1
7fd700f4a010d765c506841de9884df394c1de1cCorrelational Spectral Clustering
Max Planck Institute for Biological Cybernetics
72076 T¨ubingen, Germany
('1758219', 'Matthew B. Blaschko', 'matthew b. blaschko')
('1787591', 'Christoph H. Lampert', 'christoph h. lampert')
{blaschko,chl}@tuebingen.mpg.de
7f59657c883f77dc26393c2f9ed3d19bdf51137bUniversity of Wollongong
Research Online
Faculty of Informatics - Papers (Archive)
Faculty of Engineering and Information Sciences
2006
Facial expression recognition for multiplayer online
games
Publication Details
Zhan, C., Li, W., Ogunbona, P. O. & Safaei, F. (2006). Facial expression recognition for multiplayer online games. Joint International
Conference on CyberGames and Interactive Entertainment (pp. 52-58). Western Australia: Murdoch university
Research Online is the open access institutional repository for the
University of Wollongong. For further information contact the UOW
('3283367', 'Ce Zhan', 'ce zhan')
('1685696', 'Wanqing Li', 'wanqing li')
('1719314', 'Philip O. Ogunbona', 'philip o. ogunbona')
('1803733', 'Farzad Safaei', 'farzad safaei')
University of Wollongong, czhan@uow.edu.au
University of Wollongong, wanqing@uow.edu.au
University of Wollongong, philipo@uow.edu.au
University of Wollongong, farzad@uow.edu.au
Library: research-pubs@uow.edu.au
7f23a4bb0c777dd72cca7665a5f370ac7980217eImproving Person Re-identification by Attribute and Identity Learning
University of Technology Sydney
('9919679', 'Yutian Lin', 'yutian lin')
('14904242', 'Liang Zheng', 'liang zheng')
('7435343', 'Zhedong Zheng', 'zhedong zheng')
('1887625', 'Yu Wu', 'yu wu')
('1698559', 'Yi Yang', 'yi yang')
yutianlin477,liangzheng06,zdzheng12,wu08yu,yee.i.yang@gmail.com
7f268f29d2c8f58cea4946536f5e2325777fa8faFacial Emotion Recognition in Curvelet Domain
Indian Institute of Informaiton Technology, Allahabad, India
Allahabad, India - 211012
('35077572', 'Gyanendra K Verma', 'gyanendra k verma')
('30102998', 'Bhupesh Kumar Singh', 'bhupesh kumar singh')
gyanendra@iiita.ac.in , rs65@iiita.ac.in
7fc3442c8b4c96300ad3e860ee0310edb086de94Similarity Scores based on Background Samples
The School of Computer Science, Tel-Aviv University, Israel
Computer Science Division, The Open University of Israel, Israel
3 face.com
('1776343', 'Lior Wolf', 'lior wolf')
('1756099', 'Tal Hassner', 'tal hassner')
('2188620', 'Yaniv Taigman', 'yaniv taigman')
7f3a73babe733520112c0199ff8d26ddfc7038a0
7f8d44e7fd2605d580683e47bb185de7f9ea9e28Predicting Personal Traits from Facial Images using Convolutional Neural
Networks Augmented with Facial Landmark Information
The Hebrew University of Jerusalem, Israel
2Microsoft Research, Cambridge, United Kingdom
Machine Intelligence Lab (MIL), Cambridge University
('2291654', 'Yoad Lewenberg', 'yoad lewenberg')
('1698412', 'Yoram Bachrach', 'yoram bachrach')
('1808862', 'Sukrit Shankar', 'sukrit shankar')
('1716777', 'Antonio Criminisi', 'antonio criminisi')
yoadlew@cs.huji.ac.il
yobach@microsoft.com
ss965@cam.ac.uk
antcrim@microsoft.com
7f1f3d7b1a4e7fc895b77cb23b1119a6f13e4d3aProc. of IEEE International
Symposium on Computational
Intelligence in Robotics and
Automation (CIRA), July.16-20,
2003, Kobe Japan, pp. 954-959
Multi-Subregion Based Probabilistic Approach Toward
Pose-Invariant Face Recognition
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA
2SANYO Electric Co., Ltd., Osaka, Japan 573-8534
('1733113', 'Takeo Kanade', 'takeo kanade')
('3151943', 'Akihiko Yamada', 'akihiko yamada')
E-mail: tk@ri.cmu.edu, aki-yamada@rd.sanyo.co.jp,
7fcfd72ba6bc14bbb90b31fe14c2c77a8b220ab2Robust FEC-CNN: A High Accuracy Facial Landmark Detection System
1 Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS),
Institute of Computing Technology, CAS, Beijing 100190, China
University of Chinese Academy of Sciences, Beijing 100049, China
3 CAS Center for Excellence in Brain Science and Intelligence Technology
('3469114', 'Zhenliang He', 'zhenliang he')
('1698586', 'Jie Zhang', 'jie zhang')
('1693589', 'Meina Kan', 'meina kan')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1710220', 'Xilin Chen', 'xilin chen')
{zhenliang.he,jie.zhang,meina.kan,shiguang.shan,xilin.chen}@vipl.ict.ac.cn
7f205b9fca7e66ac80758c4d6caabe148deb8581Page 1 of 47
Computing Surveys
A Survey on Mobile Social Signal Processing
Understanding human behaviour in an automatic but non-intrusive manner is an important area for various applications. This requires the
collaboration of information technology with human sciences to transfer existing knowledge of human behaviour into self-acting tools. These
tools will reduce human error that is introduced by current obtrusive methods such as questionnaires. To achieve unobtrusiveness, we focus on
exploiting the pervasive and ubiquitous character of mobile devices.
In this article, a survey of existing techniques for extracting social behaviour through mobile devices is provided. Initially we expose the
terminology used in the area and introduce a concrete architecture for social signal processing applications on mobile phones, constituted by
sensing, social interaction detection, behavioural cues extraction, social signal inference and social behaviour understanding. Furthermore, we
present state-of-the-art techniques applied to each stage of the process. Finally, potential applications are shown while arguing about the main
challenges of the area.
Categories and Subject Descriptors: General and reference [Document Types]: Surveys and Overviews; Human-centered computing [Collab-
orative and social computing, Ubiquitous and mobile computing]
General Terms: Design, Theory, Human Factors, Performance
Additional Key Words and Phrases: Social Signal Processing, mobile phones, social behaviour
ACM Reference Format:
Processing. ACM V, N, Article A (January YYYY), 35 pages.
DOI:http://dx.doi.org/10.1145/0000000.0000000
1. INTRODUCTION
Human behaviour understanding has received a great deal of interest since the beginning of the previous century.
People initially conducted research on the way animals behave when they are surrounded by creatures of the same
species. Acquiring basic underlying knowledge of animal relations led to extending this information to humans
in order to understand social behaviour, social relations etc. Initial experiments were conducted by empirically
observing people and retrieving feedback from them. These methods gave rise to well-established psychological
approaches for understanding human behaviour, such as surveys, questionnaires, camera recordings and human
observers. Nevertheless, these methods introduce several limitations including various sources of error. Complet-
ing surveys and questionnaires induces partiality, unconcern etc. [Groves 2004], human error [Reason 1990], and
additional restrictions in scalability of the experiments. Accumulating these research problems leads to a common
challenge, the lack of automation in an unobtrusive manner.
An area that has focussed on detecting social behaviour automatically and has received a great amount of at-
tention is Social Signal Processing (SSP). The main target of the field is to model, analyse and synthesise human
behaviour with limited user intervention. To achieve these targets, researchers presented three key terms which
('23537960', 'NIKLAS PALAGHIAS', 'niklas palaghias')
('3339833', 'SEYED AMIR HOSEINITABATABAEI', 'seyed amir hoseinitabatabaei')
('2082222', 'MICHELE NATI', 'michele nati')
('1929850', 'ALEXANDER GLUHAK', 'alexander gluhak')
('1693389', 'KLAUS MOESSNER', 'klaus moessner')
7fc76446d2b11fc0479df6e285723ceb4244d4efJRPIT 42.1.QXP:Layout 1 12/03/10 2:11 PM Page 3
Laplacian MinMax Discriminant Projection and its
Applications
Zhejiang Normal University, Jinhua, China
Jie Yang
Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China
A new algorithm, Laplacian MinMax Discriminant Projection (LMMDP), is proposed in this paper
for supervised dimensionality reduction. LMMDP aims at learning a discriminant linear
transformation. Specifically, we define the within-class scatter and the between-class scatter using
similarities which are based on pairwise distances in sample space. After the transformation, the
considered pairwise samples within the same class are as close as possible, while those between
classes are as far as possible. The structural information of classes is contained in the within-class
and the between-class Laplacian matrices. Therefore, the discriminant projection subspace can be
derived by controlling the structural evolution of Laplacian matrices. The performance on several
data sets demonstrates the competence of the proposed algorithm.
ACM Classification: I.5
Keywords: Manifold Learning; Dimensionality Reduction; Supervised Learning; Discriminant
Analysis
1. INTRODUCTION
Dimensionality reduction has attracted tremendous attention in the pattern recognition community
over the past few decades and many new algorithms have been developed. Among these algorithms,
linear dimensionality reduction is widely spread for its simplicity and effectiveness. Principal
component analysis (PCA), as a classic linear method for unsupervised dimensionality reduction,
aims at learning a kind of subspaces where the maximum covariance of all training samples are
preserved (Turk,1991). Locality Preserving Projections, as another typical approach for
unsupervised dimensionality reduction, seeks projections to preserve the local structure of the
sample space (He, 2005). However, unsupervised learning algorithms cannot properly model the
underlying structures and characteristics of different classes (Zhao, 2007). Discriminant features are
often obtained by supervised dimensionality reduction. Linear discriminant analysis (LDA) is one
of the most popular supervised techniques for classification (Fukunaga, 1990; Belhumeur, 1997).
LDA aims at learning discriminant subspace where the within-class scatter is minimized and the
between-class scatter of samples is maximized at the same time. Many improved LDAs up to date
have demonstrated competitive performance in object classification (Howland, 2004; Liu, 2007;
Martinez, 2006; Wang and Tang, 2004a; Yang, 2005).
Copyright© 2010, Australian Computer Society Inc. General permission to republish, but not for profit, all or part of this
material is granted, provided that the JRPIT copyright notice is given and that reference is made to the publication, to its
date of issue, and to the fact that reprinting privileges were granted by permission of the Australian Computer Society Inc.
Manuscript received: 15 April 2008
Communicating Editor: Tele Tan
('3185576', 'Zhonglong Zheng', 'zhonglong zheng')
('3140483', 'Xueping Chang', 'xueping chang')
Email: zhonglong@sjtu.org
7a9ef21a7f59a47ce53b1dff2dd49a8289bb5098
7af38f6dcfbe1cd89f2307776bcaa09c54c30a8bLearning in Computer Vision and Beyond:
Development
John J. Weng
Department of Computer Science
Michigan State University
East Lansing MI 48824
Abstract
This chapter introduces what is called the developmental approach to computer vision in
particular and artificial intelligence in general. It discusses the current basic paradigm of
developing a system and its fundamental limitations. The developmental approach is motivated
by human cognitive development from infancy to adulthood. A developmental learning algo-
rithm is determined before the "birth" of the system. After the "birth" it enables the system
to learn new tasks without a need of reprogramming. The major goal of the developmental
approach is to realize a trait of general purpose learning that enables machines to perform
developmental learning over a long period. Such learning is conducted in a mode similar to the
way animals and humans learn. The machines must learn directly from continuous sensory input
streams while interacting with the environment including human teachers. In this learning
mode developing intelligent programs for various tasks is realized through real-time interac-
weng@cse.msu.edu
7a81967598c2c0b3b3771c1af943efb1defd4482Do We Need More Training Data? ('32542103', 'Xiangxin Zhu', 'xiangxin zhu')
7ae0212d6bf8a067b468f2a78054c64ea6a577ceHuman Face Processing Techniques
With Application To
Large Scale Video Indexing
DOCTOR OF
PHILOSOPHY
Department of Informatics,
School of Multidisciplinary Sciences,
The Graduate University for Advanced Studies (SOKENDAI
2006 (School Year)
September 2006
7a9c317734acaf4b9bd8e07dd99221c457b94171Lorentzian Discriminant Projection and Its Applications
Dalian University of Technology, Dalian 116024, China
2 Microsoft Research Asia, Beijing 100080, China
('34469457', 'Risheng Liu', 'risheng liu')
('4642456', 'Zhixun Su', 'zhixun su')
('33383055', 'Zhouchen Lin', 'zhouchen lin')
('40290490', 'Xiaoyu Hou', 'xiaoyu hou')
zxsu@dlut.edu.cn
7a0fb972e524cb9115cae655e24f2ae0cfe448e0Facial Expression Classification Using RBF AND Back-Propagation Neural Networks
R.Q.Feitosa1,2,
M.M.B.Vellasco1,2,
D.T.Oliveira1,
D.V.Andrade1,
S.A.R.S.Maffra1
Catholic University of Rio de Janeiro, Brazil
Department of Electric Engineering
State University of Rio de Janeiro, Brazil
Department of Computer Engineering
e-mail: [raul, marley]@ele.puc -rio.br, tuler@inf.puc-rio.br, [diogo, sam]@tecgraf.puc-rio.br
7ad77b6e727795a12fdacd1f328f4f904471233fSupervised Local Descriptor Learning
for Human Action Recognition
('34798935', 'Xiantong Zhen', 'xiantong zhen')
('40255667', 'Feng Zheng', 'feng zheng')
('40799321', 'Ling Shao', 'ling shao')
('1720247', 'Xianbin Cao', 'xianbin cao')
('40147776', 'Dan Xu', 'dan xu')
7a3d46f32f680144fd2ba261681b43b86b702b85Multi-label Learning Based Deep Transfer Neural Network for Facial Attribute
Classification
School of Information Science and Engineering, Xiamen University, Xiamen 361005, China
bSchool of Computer and Information Engineering, Xiamen University of Technology, Xiamen 361024, China
aFujian Key Laboratory of Sensing and Computing for Smart City,
cSchool of Computer Science, The University of Adelaide, Adelaide, SA 5005, Australia
('41034942', 'Ni Zhuang', 'ni zhuang')
('40461734', 'Yan Yan', 'yan yan')
('47336404', 'Si Chen', 'si chen')
('37414077', 'Hanzi Wang', 'hanzi wang')
('1780381', 'Chunhua Shen', 'chunhua shen')
7a97de9460d679efa5a5b4c6f0b0a5ef68b56b3b
7a7f2403e3cc7207e76475e8f27a501c21320a44Emotion Recognition from Multi-Modal Information
Department of Computer Science and Information Engineering,
National Cheng Kung University, Tainan, Taiwan, R.O.C
('1681512', 'Chung-Hsien Wu', 'chung-hsien wu')
('1709777', 'Jen-Chun Lin', 'jen-chun lin')
('1691390', 'Wen-Li Wei', 'wen-li wei')
('2891156', 'Kuan-Chun Cheng', 'kuan-chun cheng')
E-mail: chunghsienwu@gmail.com, jenchunlin@gmail.com, lilijinjin@gmail.com, davidcheng817@gmail.com
7aafeb9aab48fb2c34bed4b86755ac71e3f00338Article
Real Time 3D Facial Movement Tracking Using a
Monocular Camera
School of Electronics and Information Engineering, Tongji University, Caoan Road 4800, Shanghai
Kumamoto University, 2-39-1 Kurokami, Kumamoto shi
Academic Editor: Vittorio M. N. Passaro
Received: 9 May 2016; Accepted: 20 July 2016; Published: 25 July 2016
('2576907', 'Yanchao Dong', 'yanchao dong')
('1715838', 'Yanming Wang', 'yanming wang')
('2721582', 'Jiguang Yue', 'jiguang yue')
('3256415', 'Zhencheng Hu', 'zhencheng hu')
China; 11wanggyanming@tongji.edu.cn (Y.W.); yuejiguang@tongji.edu.cn (J.Y.)
Japan; hu@cs.kumamoto-u.ac.jp
* Correspondence: dongyanchao@tongji.edu.cn; Tel.: +86-21-6958-3806
7a84368ebb1a20cc0882237a4947efc81c56c0c0Robust and Efficient Parametric Face Alignment
†Dept. of Computing,
Imperial College London
180 Queen’s Gate
London SW7 2AZ, U.K.
∗EEMCS
University of Twente
Drienerlolaan 5
7522 NB Enschede
The Netherlands ∗
('2610880', 'Georgios Tzimiropoulos', 'georgios tzimiropoulos')
('1776444', 'Stefanos Zafeiriou', 'stefanos zafeiriou')
('1694605', 'Maja Pantic', 'maja pantic')
{gt204,s.zafeiriou,m.pantic}@imperial.ac.uk
7aa4c16a8e1481629f16167dea313fe9256abb42978-1-5090-4117-6/17/$31.00 ©2017 IEEE
2981
ICASSP 2017
7a85b3ab0efb6b6fcb034ce13145156ee9d10598
7ab930146f4b5946ec59459f8473c700bcc89233
7a65fc9e78eff3ab6062707deaadde024d2fad40A Study on Apparent Age Estimation
West Virginia University, Morgantown WV 26506, USA
Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of
Computing Technology, CAS, Beijing, 100190, China
('1736182', 'Yu Zhu', 'yu zhu')
('1698571', 'Yan Li', 'yan li')
('2501850', 'Guowang Mu', 'guowang mu')
('1822413', 'Guodong Guo', 'guodong guo')
yzhu4@mix.wvu.edu, yan.li@vipl.ict.ac.cn, guowang.mu@mail.wvu.edu ,
Guodong.Guo@mail.wvu.edu (corresponding author)
7ad7897740e701eae455457ea74ac10f8b307bedRandom Subspace Two-dimensional LDA for Face Recognition* ('29980351', 'Garrett Bingham', 'garrett bingham')
7ac9aaafe4d74542832c273acf9d631cb8ea6193Deep Micro-Dictionary Learning and Coding Network
University of Trento, Trento, Italy
2Department of Electrical Engineering, Hong Kong Polytechnic Unversity, Hong Kong, China
3Lingxi Artificial Interlligence Co., Ltd, Shen Zhen, China
4Computer Vision Laboratory, ´Ecole Polytechnique F´ed´erale de Lausanne, Lausanne, Switzerland
University of Oxford, Oxford, UK
Texas State University, San Marcos, USA
('46666325', 'Hao Tang', 'hao tang')
('49567679', 'Heng Wei', 'heng wei')
('38505394', 'Wei Xiao', 'wei xiao')
('47824598', 'Wei Wang', 'wei wang')
('40147776', 'Dan Xu', 'dan xu')
('1703601', 'Nicu Sebe', 'nicu sebe')
{hao.tang, niculae.sebe}@unitn.it, 15102924d@connect.polyu.hk, xiaoweithu@163.com
wei.wang@epfl.ch, danxu@robots.ox.ac.uk, y y34@txstate.edu
7a1ce696e260899688cb705f243adf73c679f0d9Predicting Missing Demographic Information in
Biometric Records using Label Propagation
Techniques
Department of Computer Science and Engineering
Department of Computer Science and Engineering
Michigan State University
East Lansing, Michigan 48824
Michigan State University
East Lansing, Michigan 48824
('3153117', 'Thomas Swearingen', 'thomas swearingen')
('1698707', 'Arun Ross', 'arun ross')
Email: swearin3@msu.edu
Email: rossarun@msu.edu
7a7b1352d97913ba7b5d9318d4c3d0d53d6fb697Attend and Rectify: a Gated Attention
Mechanism for Fine-Grained Recovery
†Computer Vision Center and Universitat Aut`onoma de Barcelona (UAB),
Campus UAB, 08193 Bellaterra, Catalonia Spain
‡Visual Tagging Services, Parc de Recerca, Campus UAB
('1739551', 'Josep M. Gonfaus', 'josep m. gonfaus')
('7153363', 'Guillem Cucurull', 'guillem cucurull')
('1696387', 'F. Xavier Roca', 'f. xavier roca')
7aa062c6c90dba866273f5edd413075b90077b51I.J. Information Technology and Computer Science, 2017, 5, 40-51
Published Online May 2017 in MECS (http://www.mecs-press.org/)
DOI: 10.5815/ijitcs.2017.05.06
Minimizing Separability: A Comparative Analysis
of Illumination Compensation Techniques in Face
Recognition
Baze University, Abuja, Nigeria
('7392398', 'Chollette C. Olisah', 'chollette c. olisah')E-mail: chollette.olisah@bazeuniversity.edu.ng
7a131fafa7058fb75fdca32d0529bc7cb50429bdBeyond Face Rotation: Global and Local Perception GAN for Photorealistic and
Identity Preserving Frontal View Synthesis
1National Laboratory of Pattern Recognition, CASIA
2Center for Research on Intelligent Perception and Computing, CASIA
University of Chinese Academy of Sciences, Beijing, China
('48241673', 'Rui Huang', 'rui huang')
('50202300', 'Shu Zhang', 'shu zhang')
('50290162', 'Tianyu Li', 'tianyu li')
('1705643', 'Ran He', 'ran he')
huangrui@cmu.edu, tianyu.lizard@gmail.com, {shu.zhang, rhe}@nlpr.ia.ac.cn
1451e7b11e66c86104f9391b80d9fb422fb11c01IET Signal Processing
Research Article
Image privacy protection with secure JPEG
transmorphing
ISSN 1751-9675
Received on 30th December 2016
Revised 13th July 2017
Accepted on 11th August 2017
doi: 10.1049/iet-spr.2016.0756
www.ietdl.org
1Multimedia Signal Processing Group, Electrical Engineering Department, EPFL, Station 11, Lausanne, Switzerland
('1681498', 'Touradj Ebrahimi', 'touradj ebrahimi') E-mail: lin.yuan@epfl.ch
14761b89152aa1fc280a33ea4d77b723df4e3864
14b87359f6874ff9b8ee234b18b418e57e75b762H. GAO ET AL: FACE ALIGNMENT USING A RANKING MODEL BASED ON RT
Face Alignment Using a Ranking Model
based on Regression Trees
Hazım Kemal Ekenel1,2
Institute for Anthropomatics
Karlsruhe Institute of Technology
Karlsruhe, Germany
2 Faculty of Computer and Informatics
Istanbul Technical University
Istanbul, Turkey
('1697965', 'Hua Gao', 'hua gao')
('1742325', 'Rainer Stiefelhagen', 'rainer stiefelhagen')
gao@kit.edu
ekenel@kit.edu
rainer.stiefelhagen@kit.edu
14fdec563788af3202ce71c021dd8b300ae33051Social Influence Analysis based on Facial Emotions
Department of Computer Science and Engineering
Nagoya Institute of Technology, Gokiso, Showa-ku, Nagoya, 466-8555 Japan
('2159044', 'Pankaj Mishra', 'pankaj mishra')
('1679044', 'Takayuki Ito', 'takayuki ito')
{pankaj.mishra, rafik}@itolab.nitech.ac.jp,
ito.takayuki@nitech.ac.jp
142e5b4492bc83b36191be4445ef0b8b770bf4b0Discriminative Analysis of Brain Function
at Resting-State for Attention-Deficit/Hyperactivity
Disorder
Y.F. Wang2, and T. Z. Jiang1
National Laboratory of Pattern Recognition, Institute of Automation
Chinese Academy of Sciences, P.R. China
Institute of Mental Health, Peking University, P.R. China
('2339602', 'M. Liang', 'm. liang')czzhu@nlpr.ia.ac.cn
14b016c7a87d142f4b9a0e6dc470dcfc073af517Modest proposals for improving biometric recognition papers
NIST, Gaithersburg MD
San Jose State University, San Jose, CA
('2145366', 'James R. Matey', 'james r. matey')
('34958610', 'George W. Quinn', 'george w. quinn')
('2136478', 'Patrick Grother', 'patrick grother')
('2326261', 'Elham Tabassi', 'elham tabassi')
('1707135', 'James L. Wayman', 'james l. wayman')
POC: james.matey@NIST.gov
jlwayman@aol.com
14b66748d7c8f3752dca23991254fca81b6ee86cA. RICHARD, J. GALL: A BOW-EQUIVALENT NEURAL NETWORK
A BoW-equivalent Recurrent Neural Network
for Action Recognition
Institute of Computer Science III
University of Bonn
Bonn, Germany
('32774629', 'Alexander Richard', 'alexander richard')
('2946643', 'Juergen Gall', 'juergen gall')
richard@iai.uni-bonn.de
gall@iai.uni-bonn.de
14e8dbc0db89ef722c3c198ae19bde58138e88bfHapFACS: an Open Source API/Software to
Generate FACS-Based Expressions for ECAs
Animation and for Corpus Generation
Christine Lisetti
School of Computing and Information Sciences
School of Computing and Information Sciences
Florida International University
Miami, Florida, USA
Florida International University
Miami, Florida, USA
('1809087', 'Reza Amini', 'reza amini')Email: ramin001@fiu.edu
Email: lisetti@cis.fiu.edu
14fa27234fa2112014eda23da16af606db7f3637
1459d4d16088379c3748322ab0835f50300d9a38Cross-Domain Visual Matching via Generalized
Similarity Measure and Feature Learning
('40461403', 'Liang Lin', 'liang lin')
('2749191', 'Guangrun Wang', 'guangrun wang')
('1724520', 'Wangmeng Zuo', 'wangmeng zuo')
('2340559', 'Xiangchu Feng', 'xiangchu feng')
('40396552', 'Lei Zhang', 'lei zhang')
14e949f5754f9e5160e8bfa3f1364dd92c2bb8d6
146bbf00298ee1caecde3d74e59a2b8773d2c0fcUniversity of Groningen
4D Unconstrained Real-time Face Recognition Using a Commodity Depthh Camera
Schimbinschi, Florin; Wiering, Marco; Mohan, R.E.; Sheba, J.K.
Published in:
7th IEEE Conference on Industrial Electronics and Applications
IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to
cite from it. Please check the document version below.
Document Version
Final author's version (accepted by publisher, after peer review)
Publication date:
2012
Link to publication in University of Groningen/UMCG research database
Citation for published version (APA):
Schimbinschi, F., Wiering, M., Mohan, R. E., & Sheba, J. K. (2012). 4D Unconstrained Real-time Face
Recognition Using a Commodity Depthh Camera. In 7th IEEE Conference on Industrial Electronics and
Applications : ICIEA
14e9158daf17985ccbb15c9cd31cf457e5551990ConvNets with Smooth Adaptive Activation Functions for
Regression
Tahsin M. Kurc1,2
Stony Brook University
2Oak Ridge National Laboratory
Stony Brook University Hospital
('2321406', 'Le Hou', 'le hou')
('1686020', 'Dimitris Samaras', 'dimitris samaras')
('1735710', 'Joel H. Saltz', 'joel h. saltz')
('1755448', 'Yi Gao', 'yi gao')
14ce7635ff18318e7094417d0f92acbec6669f1cDeepFace: Closing the Gap to Human-Level Performance in Face Verification
Marc’Aurelio Ranzato
Facebook AI Group
Menlo Park, CA, USA
Tel Aviv University
Tel Aviv, Israel
('2188620', 'Yaniv Taigman', 'yaniv taigman')
('2909406', 'Ming Yang', 'ming yang')
('1776343', 'Lior Wolf', 'lior wolf')
{yaniv, mingyang, ranzato}@fb.com
wolf@cs.tau.ac.il
1450296fb936d666f2f11454cc8f0108e2306741Learning to Discover Cross-Domain Relations
with Generative Adversarial Networks
('2509132', 'Taeksoo Kim', 'taeksoo kim')
140438a77a771a8fb656b39a78ff488066eb6b50Localizing Parts of Faces Using a Consensus of Exemplars
Kriegman-Belhumeur Vision Technologies
University of Maryland, College Park
University of California, San Diego
Columbia University
('1767767', 'Peter N. Belhumeur', 'peter n. belhumeur')
('34734622', 'David W. Jacobs', 'david w. jacobs')
('1765887', 'David J. Kriegman', 'david j. kriegman')
('40631426', 'Neeraj Kumar', 'neeraj kumar')
143bee9120bcd7df29a0f2ad6f0f0abfb23977b8Shared Gaussian Process Latent Variable Model
for Multi-view Facial Expression Recognition
Imperial College London, UK
EEMCS, University of Twente, The Netherlands
('2308430', 'Stefanos Eleftheriadis', 'stefanos eleftheriadis')
('1729713', 'Ognjen Rudovic', 'ognjen rudovic')
('1694605', 'Maja Pantic', 'maja pantic')
14d72dc9f78d65534c68c3ed57305f14bd4b5753Exploiting Multi-Grain Ranking Constraints for Precisely Searching
Visually-similar Vehicles
1National Engineering Laboratory for Video Technology, School of EE&CS,
Peking University, Beijing, China
2Cooperative Medianet Innovation Center, China
Beijing Institute of Technology, China
('13318784', 'Ke Yan', 'ke yan')
('5765799', 'Yaowei Wang', 'yaowei wang')
('1687907', 'Wei Zeng', 'wei zeng')
('1705972', 'Yonghong Tian', 'yonghong tian')
('34097174', 'Tiejun Huang', 'tiejun huang')
{keyan, yhtian, weizeng, tjhuang}@pku.edu.cn;yaoweiwang@bit.edu.cn
14b162c2581aea1c0ffe84e7e9273ab075820f52Training Object Class Detectors from Eye Tracking Data
School of Informatics, University of Edinburgh, UK
('1749373', 'Dim P. Papadopoulos', 'dim p. papadopoulos')
('2505673', 'Frank Keller', 'frank keller')
('1749692', 'Vittorio Ferrari', 'vittorio ferrari')
14ff9c89f00dacc8e0c13c94f9fadcd90e4e604dCorrelation Filter Cascade for Facial Landmark Localization
Pattern Analysis and Computer Vision Department
School of Computing
Istituto Italiano di Tecnologia, Genova, Italy
National University of Singapore, Singapore
('2860592', 'Hamed Kiani Galoogahi', 'hamed kiani galoogahi')
('1715286', 'Terence Sim', 'terence sim')
hamed.kiani@iit.it
tsim@comp.nus.edu.sg
14fdce01c958043140e3af0a7f274517b235adf3
14b69626b64106bff20e17cf8681790254d1e81cHybrid Super Vector with Improved Dense Trajectories for Action Recognition
Shenzhen Key Lab of CVPR, Shenzhen Institutes of Advanced Technology, CAS, China
Southwest Jiaotong University, Chengdu, P.R. China
The Chinese University of Hong Kong, Hong Kong
('1766837', 'Xiaojiang Peng', 'xiaojiang peng')
('40795365', 'LiMin Wang', 'limin wang')
('2985266', 'Zhuowei Cai', 'zhuowei cai')
('40285012', 'Yu Qiao', 'yu qiao')
('39657084', 'Qiang Peng', 'qiang peng')
fxiaojiangp,07wanglimin,iamcaizhuoweig@gmail.com, yu.qiao@siat.ac.cn, qpeng@swjtu.edu.cn
14070478b8f0d84e5597c3e67c30af91b5c3a917Detecting Social Actions of Fruit Flies
California Institute of Technology, Pasadena, California, USA
Howard Hughes Medical Institute (HHMI
('2948199', 'Eyrun Eyjolfsdottir', 'eyrun eyjolfsdottir')
('3251767', 'Steve Branson', 'steve branson')
('2232848', 'Xavier P. Burgos-Artizzu', 'xavier p. burgos-artizzu')
('2954028', 'Eric D. Hoopfer', 'eric d. hoopfer')
('20299567', 'Jonathan Schor', 'jonathan schor')
('30334638', 'David J. Anderson', 'david j. anderson')
('1690922', 'Pietro Perona', 'pietro perona')
14fb3283d4e37760b7dc044a1e2906e3cbf4d23aWeak Attributes for Large-Scale Image Retrieval∗
Columbia University, New York, NY
('1815972', 'Felix X. Yu', 'felix x. yu')
('1725599', 'Rongrong Ji', 'rongrong ji')
('3138710', 'Ming-Hen Tsai', 'ming-hen tsai')
('35984288', 'Guangnan Ye', 'guangnan ye')
('9546964', 'Shih-Fu Chang', 'shih-fu chang')
y{yuxinnan, rrji, yegn, sfchang}@ee.columbia.edu
xminghen@cs.columbia.edu
14811696e75ce09fd84b75fdd0569c241ae02f12Margin-Based Discriminant Dimensionality Reduction for Visual Recognition
Eskisehir Osmangazi University
Laboratoire Jean Kuntzmann
Meselik Kampusu 26480 Eskisehir Turkey
B.P. 53, 38041 Grenoble Cedex 9, France
Fr´ed´eric Jurie
University of Caen
Universit´e de Caen - F-14032 Caen, France
Rowan University
201 Mullica Hill Road, Glassboro NJ USA
('2277308', 'Hakan Cevikalp', 'hakan cevikalp')
('1756114', 'Bill Triggs', 'bill triggs')
('1780024', 'Robi Polikar', 'robi polikar')
Hakan.Cevikalp@gmail.com
Bill.Triggs@imag.fr
Frederic.Jurie@unicaen.fr
polikar@rowan.edu
141eab5f7e164e4ef40dd7bc19df9c31bd200c5e
14e759cb019aaf812d6ac049fde54f40c4ed1468Subspace Methods
Synonyms
- Multiple similarity method
Related Concepts
- Principal component analysis (PCA)
- Subspace analysis
- Dimensionality reduction
Definition
Subspace analysis in computer vision is a generic name for a general
framework for the comparison and classification of subspaces. A typical approach in
subspace analysis is the subspace method (SM), which classifies an input pattern
vector into one of several classes based on the minimum distance or angle between the
input pattern vector and each class subspace, where a class subspace corresponds
to the distribution of the pattern vectors of that class in a high-dimensional vector
space.
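The classification rule above can be sketched in a few lines (a minimal illustration using per-class PCA without mean-centering, in the spirit of CLAFIC; the function names and data layout are ours, not from this entry):

```python
import numpy as np

def fit_class_subspaces(class_data, n_components):
    """class_data: dict label -> (n_samples, d) array of pattern vectors.
    Fits one subspace per class via SVD (KL expansion / PCA directions)."""
    subspaces = {}
    for label, X in class_data.items():
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        subspaces[label] = Vt[:n_components]  # orthonormal basis, shape (k, d)
    return subspaces

def classify(x, subspaces):
    """Assign x to the class whose subspace retains the largest squared
    projection of x (equivalently, the smallest angle to the subspace)."""
    xn = x / np.linalg.norm(x)
    best, best_score = None, -1.0
    for label, B in subspaces.items():
        score = float(np.sum((B @ xn) ** 2))  # squared projection length
        if score > best_score:
            best, best_score = label, score
    return best
```

Because each class gets its own subspace, adding a new class only requires fitting one more basis, which is what makes the SM convenient for multi-class problems.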
Background
Comparison and classification of subspaces has been one of the central prob-
lems in computer vision, where an image set of an object to be classified is
compactly represented by a subspace in a high-dimensional vector space.
The subspace method is one of the most effective classification methods in
subspace analysis. It was developed independently by two Japanese researchers,
Watanabe and Iijima, around 1970 [1, 2]; Watanabe and Iijima named their
methods CLAFIC [3] and the multiple similarity method [4], respectively.
The concept of the subspace method derives from the observation that pat-
terns belonging to a class form a compact cluster in a high-dimensional vector
space, where, for example, a w×h-pixel image pattern is usually represented as a
vector in w×h-dimensional vector space. The compact cluster can be represented
by a subspace, which is generated using the Karhunen-Loève (KL) expansion, also
known as principal component analysis (PCA). Note that a subspace is gen-
erated for each class, unlike the Eigenface method [5], in which only one subspace
(called the eigenspace) is generated.
The SM has been known as one of the most useful methods in the pattern recog-
nition field, since its algorithm is very simple and it can handle classification
into multiple classes. However, its classification performance was not sufficient for
many applications in practice, because class subspaces are generated indepen-
dently of each other [1]. There is no reason to assume a priori that each class
('1770128', 'Kazuhiro Fukui', 'kazuhiro fukui')
1442319de86d171ce9595b20866ec865003e66fcVision-Based Fall Detection with Convolutional
Neural Networks
DeustoTech - University of Deusto
Avenida de las Universidades, 24 - 48007, Bilbao, Spain
2 Dept. of Computer Science and Artificial Intelligence, Basque
Country University, San Sebastian, Spain
P. Manuel Lardizabal, 1 - 20018, San Sebastian, Spain
3 Ikerbasque, Basque Foundation for Science, Bilbao, Spain
Maria Diaz de Haro, 3 - 48013 Bilbao, Spain
4 Donostia International Physics Center (DIPC), San Sebastian, Spain
P. Manuel Lardizabal, 4 - 20018, San Sebastian, Spain
('2481918', 'Gorka Azkune', 'gorka azkune')
('3147227', 'Ignacio Arganda-Carreras', 'ignacio arganda-carreras')
{adrian.nunez@deusto.es, gorka.azkune@deusto.es, ignacio.arganda@ehu.es}
146a7ecc7e34b85276dd0275c337eff6ba6ef8c0This is a pre-print of the original paper submitted for review in FG 2017.
AFFACT - Alignment Free Facial Attribute Classification Technique
Vision and Security Technology (VAST) Lab,
University of Colorado Colorado Springs
∗ authors with equal contribution
('2974221', 'Andras Rozsa', 'andras rozsa')
('1760117', 'Terrance E. Boult', 'terrance e. boult')
{mgunther,arozsa,tboult}@vast.uccs.edu
148eb413bede35487198ce7851997bf8721ea2d6People Search in Surveillance Videos
Four Eyes Lab, UCSB
IBM Research
IBM Research
IBM Research
Four Eyes Lab, UCSB
1. INTRODUCTION
In traditional surveillance scenarios, users are required to
watch video footage corresponding to extended periods of
time in order to find events of interest. However, this pro-
cess is resource-consuming, and suffers from high costs of
employing security personnel. The field of intelligent vi-
sual surveillance [2] seeks to address these issues by applying
computer vision techniques to automatically detect specific
events in long video streams. The events can then be pre-
sented to the user or be indexed into a database to allow
queries such as “show me the red cars that entered a given
parking lot from 7pm to 9pm on Monday” or “show me the
faces of people who left the city’s train station last week.”
In this work, we are interested in analyzing people, by ex-
tracting information that can be used to search for them in
surveillance videos. Current research on this topic focuses
on approaches based on face recognition, where the goal is
to establish the identity of a person given an image of a
face. However, face recognition is still a very challenging
problem, especially in low resolution images with variations
in pose and lighting, which is often the case in surveillance
data. State-of-the-art face recognition systems [1] require
a fair amount of resolution in order to produce reliable re-
sults, but in many cases this level of detail is not available
in surveillance applications.
We approach the problem in an alternative way, by avoiding
face recognition and proposing a framework for finding peo-
ple based on parsing the human body and exploiting part
attributes. Those include visual attributes such as facial hair
type (beards, mustaches, absence of facial hair), type of eye-
wear (sunglasses, eyeglasses, absence of glasses), hair type
(baldness, hair, wearing a hat), and clothing color. While
face recognition is still a difficult problem, accurate and ef-
ficient face detectors1 based on learning approaches [6] are
available. Those have been demonstrated to work well on
challenging low-resolution images, with variations in pose
and lighting. In our method, we employ this technology to
design detectors for facial attributes from large sets of train-
ing data.
1The face detection problem consists of localizing faces in
images, while face recognition aims to establish the identity
of a person given an image of a face. Face detection is a
challenging problem, but it is arguably not as complex as
face recognition.
Our technique falls into the category of short term recogni-
tion methods, taking advantage of features present in brief
intervals in time, such as clothing color, hairstyle, and makeup,
which are generally considered an annoyance in face recogni-
tion methods. There are several applications that naturally
fit within a short term recognition framework. An example
is in criminal investigation, when the police are interested in
locating a suspect. In those cases, eyewitnesses typically fill
out a suspect description form, where they indicate personal
traits of the suspect as seen at the moment when the crime
was committed. Those include facial hair type, hair color,
clothing type, etc. Based on that description, the police
manually scan the entire video archive looking for a person
with similar characteristics. This process is tedious and time
consuming, and could be drastically accelerated by the use
of our technique. Another application is finding missing
people. Parents looking for their children in an amusement
park could provide a description including clothing and eye-
wear type, and videos from multiple cameras in the park
would then be automatically searched.
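The query step the examples above describe reduces to matching a description form against per-person attribute records. The sketch below is purely illustrative: the record fields and function names are our assumptions, not the paper's implementation, which builds the records from learned part-attribute detectors.

```python
def search_people(detections, query):
    """detections: list of dicts produced by upstream attribute detectors,
    e.g. {"track_id": 2, "facial_hair": "beard", "eyewear": "sunglasses"}.
    query: dict of attribute -> required value (a suspect description form).
    Returns the detections consistent with every attribute in the query."""
    return [d for d in detections
            if all(d.get(attr) == val for attr, val in query.items())]

detections = [
    {"track_id": 1, "facial_hair": "none", "eyewear": "eyeglasses"},
    {"track_id": 2, "facial_hair": "beard", "eyewear": "sunglasses"},
]
hits = search_people(detections, {"facial_hair": "beard"})  # matches track 2
```

In practice each attribute value would carry a detector confidence, and matching would rank by combined score rather than filter exactly, but the indexing structure is the same.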
('2000950', 'Daniel A. Vaquero', 'daniel a. vaquero')
('1723233', 'Rogerio S. Feris', 'rogerio s. feris')
('11081274', 'Lisa Brown', 'lisa brown')
('1690709', 'Arun Hampapur', 'arun hampapur')
('1752714', 'Matthew Turk', 'matthew turk')
daniel@cs.ucsb.edu
rsferis@us.ibm.com
lisabr@us.ibm.com
arunh@us.ibm.com
mturk@cs.ucsb.edu
1462bc73834e070201acd6e3eaddd23ce3c1a114International Journal of Science, Engineering and Technology Research (IJSETR), Volume 3, Issue 4, April 2014
FACE AUTHENTICATION /RECOGNITION
SYSTEM FOR FORENSIC APPLICATION
USING SKETCH BASED ON THE SIFT
FEATURES APPROACH
Department of Electronics Engineering KITS,
RTMNU Nagpur University, India
14014a1bdeb5d63563b68b52593e3ac1e3ce7312ALNAJAR et al.: EXPRESSION-INVARIANT AGE ESTIMATION
Expression-Invariant Age Estimation
Jose Alvarez2
ISLA Lab, Informatics Institute
University of Amsterdam
Amsterdam, The Netherlands
2 NICTA
Canberra ACT 2601
Australia
('1765602', 'Fares Alnajar', 'fares alnajar')
('39793067', 'Zhongyu Lou', 'zhongyu lou')
('1695527', 'Theo Gevers', 'theo gevers')
F.alnajar@uva.nl
z.lou@uva.nl
jose.alvarez@nicta.com.au
th.gevers@uva.nl
1473a233465ea664031d985e10e21de927314c94
140c95e53c619eac594d70f6369f518adfea12efPushing the Frontiers of Unconstrained Face Detection and Recognition: IARPA Janus Benchmark A
The development of accurate and scalable unconstrained face recogni-
tion algorithms is a long term goal of the biometrics and computer vision
communities. The term “unconstrained” implies a system can perform suc-
cessful identifications regardless of face image capture presentation (illumi-
nation, sensor, compression) or subject conditions (facial pose, expression,
occlusion). While automatic, as well as human, face identification in certain
scenarios may forever be elusive, such as when a face is heavily occluded or
captured at very low resolutions, there still remains a large gap between au-
tomated systems and human performance on familiar faces. In order to close
this gap, large annotated sets of imagery are needed that are representative
of the end goals of unconstrained face recognition. This will help continue
to push the frontiers of unconstrained face detection and recognition, which
are the primary goals of the IARPA Janus program.
The current state of the art in unconstrained face recognition is high
accuracy (roughly 99% true accept rate at a false accept rate of 1.0%) on
faces that can be detected with a commodity face detector, but unknown
accuracy on other faces. Despite the fact that face detection and recognition
research has generally advanced somewhat independently, the frontal face
detector filtering approach used to build key in-the-wild face recognition datasets
means that progress in face recognition is currently hampered by progress
in face detection. Hence, a major need exists for a face recognition dataset
that captures as wide of a range of variations as possible to offer challenges
to both face detection as well as face recognition.
In this paper we introduce the IARPA Janus Benchmark A (IJB-A),
which is publicly available for download. The IJB-A contains images and
videos from 500 subjects captured in “in the wild” environments. All
labelled subjects have been manually localized with bounding boxes for face
detection, as well as fiducial landmarks for the center of the two eyes (if
visible) and base of the nose. Manual bounding box annotations for all non-
labelled subjects (i.e., other persons captured in the imagery) have been cap-
tured as well. All imagery is Creative Commons licensed, which is a license
that allows open re-distribution provided proper attribution is made to the
data creator. The subjects have been intentionally sampled to contain wider
geographic distribution than previous datasets. Recognition and detection
protocols are provided which are motivated by operational deployments of
face recognition systems. An example of images and video from IJB-A can
be found in Figure 3.
The IJB-A dataset has the following claimed contributions: (i) The most
unconstrained database released to date; (ii) The first joint face detection and
face recognition benchmark dataset collected in the wild; (iii) Meta-data
providing subject gender and skin color, occlusion (eyes, mouth/nose,
and forehead), facial hair, and coarse pose information for each imagery
instance; (iv) Widest geographic distribution of any public face dataset; (v)
The first in the wild dataset to contain a mixture of images and videos; (vi)
Clear authority for re-distribution; (vii) Protocols for identification (search)
and verification (compare); (viii) Baseline accuracies from off the shelf de-
tectors and recognition algorithms; and (ix) Protocols for both template and
model-based face recognition.
Every subject in the dataset contains at least five images and one video.
IJB-A consists of a total of 5,712 images and 2,085 videos, with an average
of 11.4 images and 4.2 videos per subject.
('1885566', 'Emma Taborsky', 'emma taborsky')
('1917247', 'Austin Blanton', 'austin blanton')
('39403529', 'Jordan Cheney', 'jordan cheney')
('2040584', 'Kristen Allen', 'kristen allen')
('2136478', 'Patrick Grother', 'patrick grother')
('2578654', 'Alan Mah', 'alan mah')
('6680444', 'Anil K. Jain', 'anil k. jain')
14418ae9a6a8de2b428acb2c00064da129632f3eDiscovering the Spatial Extent of Relative Attributes
University of California Davis
Introduction
Visual attributes are human-nameable object properties that serve as an in-
termediate representation between low-level image features and high-level
objects or scenes [3, 4, 5]. They can offer a great gateway for human-
object interaction. For example, when we want to interact with an unfa-
miliar object, it is likely that we first infer its attributes from its appear-
ance (e.g., is it furry or slippery?) and then decide how to interact with
it. Thus, modelling visual attributes would be valuable for understanding
human-object interactions. Researchers have developed systems that model
binary attributes [3, 4, 5]—a property’s presence/absence (e.g., “is furry/not
furry”)—and relative attributes [6, 8]—a property’s relative strength (e.g.,
“furrier than”). In this work, we focus on relative attributes since they of-
ten describe object properties better than binary ones [6], especially if the
property exhibits large appearance variations (see Fig. 1).
While most existing work uses global image representations to model
attributes (e.g., [5, 6]), recent work demonstrates the effectiveness of using
localized part-based representations [1, 7, 9]. They show that attributes—be
it global (“is male”) or local (“smiling”)—can be more accurately learned
by first bringing the underlying object-parts into correspondence, and then
modeling the attributes conditioned on those object-parts. To compute such
correspondences, pre-trained part detectors are used (e.g., faces [7] and peo-
ple [1, 9]). However, because the part detectors are trained independently of
the attribute, the learned parts may not necessarily be useful for modeling
the desired attribute. Furthermore, some objects do not naturally have well-
defined parts, which means modeling the part-based detector itself becomes
a challenge. The approach of [2] addresses these issues by discovering useful
and localized attributes. However, it requires a human-in-the-loop, which
limits its scalability.
So, how can we develop robust visual representations for relative at-
tributes, without expensive and potentially uninformative pre-trained part
detectors or humans-in-the-loop? To do so, we will need to automatically
identify the visual patterns in each image whose appearance correlates with
attribute strength.
In this work, we propose a method that automatically
discovers the spatial extent of relative attributes in images across varying at-
tribute strengths. The main idea is to leverage the fact that the visual concept
underlying the attribute undergoes a gradual change in appearance across
the attribute spectrum. In this way, we propose to discover a set of local,
transitive connections (“visual chains”) that establish correspondences be-
tween the same object-part, even when its appearance changes drastically
over long ranges. Given the candidate set of visual chains, we then automat-
ically select those that together best model the changing appearance of the
attribute across the attribute spectrum. Importantly, by combining a subset
of the most-informative discovered visual chains, our approach aims to dis-
cover the full spatial extent of the attribute, whether it be concentrated on a
particular object-part or spread across a larger spatial area.
2 Approach
Given an image collection S={I1, . . . ,IN} with pairwise ordered and un-
ordered image-level relative comparisons of an attribute (i.e., in the form of
Ω(Ii)>Ω(Ij) and Ω(Ii)≈Ω(Ij), where i, j∈{1, . . . ,N} and Ω(Ii) is Ii’s at-
tribute strength), our goal is to discover the spatial extent of the attribute in
each image and learn a ranking function that predicts the attribute strength
for any new image.
There are three main steps to our approach: (1) initializing a candidate
set of visual chains; (2) iteratively growing each visual chain along the at-
tribute spectrum; and (3) ranking the chains according to their relevance to
the target attribute to create an ensemble image representation.
Initializing candidate visual chains: A visual attribute can potentially
exhibit large appearance variations across the attribute spectrum. Take the
Figure 1: (top) Given pairs of images, each ordered according to relative
attribute strength (e.g., “higher/lower-at-the-heel”), (bottom) our approach
automatically discovers the attribute’s spatial extent in each image,
and learns a ranking function that orders the image collection according to
predicted attribute strength.
high-at-the-heel attribute as an example: high-heeled shoes have strong
vertical gradients while flat-heeled shoes have strong horizontal gradients.
However, the attribute’s appearance will be quite similar in any local region
of the attribute spectrum. Therefore, we start with multiple short but visu-
ally homogeneous chains of image regions in a local region of the attribute
spectrum, and smoothly grow them out to cover the entire spectrum.
We start by first sorting the images in S in descending order of predicted
attribute strength—with ˜I1 as the strongest image and ˜IN as the weakest—
using a linear SVM-ranker trained with global image features. To initialize
a single chain, we take the top Ninit images and select a set of patches (one
from each image) whose appearance varies smoothly with its neighbors in
the chain, by minimizing the following objective function:
\min_{P} C(P) = \sum_{i=2}^{N_{init}} \|\phi(P_i) - \phi(P_{i-1})\|^2, \qquad (1)
where φ (Pi) is the appearance feature of patch Pi in ˜Ii, and P ={P1, . . . ,PNinit}
is the set of patches in a chain. Candidate patches for each image are densely
sampled at multiple scales. This objective enforces local smoothness: the
appearances of the patches in the images with neighboring indices should
vary smoothly within a chain. Given the objective’s chain structure, we can
efficiently find its global optimum using Dynamic Programming (DP).
In the backtracking stage of DP, we obtain a large number of K-best
solutions. We then perform a chain-level non-maximum-suppression (NMS)
to remove redundant chains to retain a set of Kinit diverse candidate chains.
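The chain-initialization DP above can be sketched as a Viterbi-style recursion (an illustrative reconstruction of Eqn. 1's optimization, not the authors' code; how candidate patches are sampled and featurized is assumed):

```python
import numpy as np

def best_chain(patch_feats):
    """patch_feats: list over images; patch_feats[i] is an (n_i, d) array of
    candidate patch descriptors phi(P) for image ~I_i (images sorted by
    predicted attribute strength). Returns one patch index per image
    minimizing sum_i ||phi(P_i) - phi(P_{i-1})||^2 by dynamic programming."""
    cost = np.zeros(len(patch_feats[0]))  # best cost of a chain ending at each candidate
    back = []
    for i in range(1, len(patch_feats)):
        prev, cur = patch_feats[i - 1], patch_feats[i]
        # pairwise squared distances between consecutive images' candidates
        d2 = ((cur[:, None, :] - prev[None, :, :]) ** 2).sum(-1)
        total = d2 + cost[None, :]
        back.append(total.argmin(1))  # best predecessor for each current candidate
        cost = total.min(1)
    # backtrack the globally optimal chain
    idx = [int(cost.argmin())]
    for bp in reversed(back):
        idx.append(int(bp[idx[-1]]))
    return idx[::-1]
```

The K-best solutions mentioned above fall out of the same table by keeping the K smallest terminal costs before backtracking; the detection term of Eqn. 2 would simply subtract λ·wᵗφ(P) from each candidate's column before the `min`.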
Iteratively growing each visual chain: The initial set of Kinit chains are
visually homogeneous but cover only a tiny fraction of the attribute spec-
trum. We next iteratively grow each chain to cover the entire attribute spec-
trum by training a model that adapts to the attribute’s smoothly changing
appearance. Specifically, for each chain, we train a detector in each iteration
and use it to grow the chain while simultaneously refining
it. To grow the chain, we again minimize Eqn. 1 but now with an additional
term:
\min_{P} C(P) = \sum_{i=2}^{t \cdot N_{iter}} \|\phi(P_i) - \phi(P_{i-1})\|^2 - \lambda \sum_{i=1}^{t \cdot N_{iter}} w_t^T \phi(P_i), \qquad (2)
where wt is a linear SVM detector learned from the patches in the chain
from the (t−1)-th iteration, P = {P1, . . . ,Pt∗Niter} is the set of patches in a
chain, and Niter is the number of images considered in each iteration. As
before, the first term enforces local smoothness. The second term is the
detection term: since the ordering of the images in the chain is only a rough
estimate and thus possibly noisy, wt prevents the inference from drifting in
the cases where local smoothness does not strictly hold. λ is a constant that
trades-off the two terms. We use the same DP inference procedure used to
optimize Eqn. 1.
Once P is found, we train a new detector with all of its patches as posi-
tive instances. The negative instances consist of randomly sampled patches
('2299381', 'Fanyi Xiao', 'fanyi xiao')
('1883898', 'Yong Jae Lee', 'yong jae lee')
14ba910c46d659871843b31d5be6cba59843a8b8Face Recognition in Movie Trailers via Mean Sequence Sparse
Representation-based Classification
Center for Research in Computer Vision, University of Central Florida, Orlando, FL
('16131262', 'Enrique G. Ortiz', 'enrique g. ortiz')
('2003981', 'Alan Wright', 'alan wright')
('1745480', 'Mubarak Shah', 'mubarak shah')
eortiz@cs.ucf.edu, alanwright@knights.ucf.edu, shah@crcv.ucf.edu
1467c4ab821c3b340abe05a1b13a19318ebbce98Multitask and Transfer Learning for
Multi-Aspect Data
Bernardino Romera Paredes
UCL
A dissertation submitted in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy of University College London
14318d2b5f2cf731134a6964d8193ad761d86942FaceDNA: Intelligent Face Recognition
System with Intel RealSense 3D Camera
National Taiwan University
('1678531', 'Dan Ye', 'dan ye')
('40063567', 'Shih-Wei Liao', 'shih-wei liao')
142dcfc3c62b1f30a13f1f49c608be3e62033042Adaptive Region Pooling for Object Detection
UC Merced
Qualcomm Research, San Diego
UC Merced
('2580349', 'Yi-Hsuan Tsai', 'yi-hsuan tsai')
('1872879', 'Onur C. Hamsici', 'onur c. hamsici')
('1715634', 'Ming-Hsuan Yang', 'ming-hsuan yang')
ytsai2@ucmerced.edu
ohamsici@qti.qualcomm.com
mhyang@ucmerced.edu
14c0f9dc9373bea1e27b11fa0594c86c9e632c8dAdaptive Exponential Smoothing for Online Filtering of Pixel Prediction Maps
School of Electrical and Electronic Engineering,
Nanyang Technological University, Singapore
('3064975', 'Kang Dang', 'kang dang')
('1691251', 'Jiong Yang', 'jiong yang')
('34316743', 'Junsong Yuan', 'junsong yuan')
{dang0025, yang0374}@e.ntu.edu.sg, jsyuan@ntu.edu.sg
1439bf9ba7ff97df9a2da6dae4784e68794da184LGE-KSVD: Flexible Dictionary Learning for Optimized Sparse
Representation Classification
Raymond Ptucha
Rochester Institute of Technology
Rochester, NY, USA
rwpeec@rit.edu
141768ab49a5a9f5adcf0cf7e43a23471a7e5d82Relative Facial Action Unit Detection
Department of Computing and Software
McMaster University
Hamilton, Canada
('1736464', 'Mahmoud Khademi', 'mahmoud khademi')
khademm@mcmaster.ca
14e428f2ff3dc5cf96e5742eedb156c1ea12ece1Facial Expression Recognition Using Neural Network Trained with Zernike
Moments
Dept. Génie-Electrique
Université M.C.M Souk-Ahras
Souk-Ahras, Algeria
('3112602', 'Mohammed Saaidia', 'mohammed saaidia')
mohamed.saaidia@univ-soukahras.dz
14bca107bb25c4dce89210049bf39ecd55f18568X. HUANG: EMOTION RECOGNITION FROM FACIAL IMAGES
Emotion recognition from facial images with
arbitrary views
Center for Machine Vision Research
Department of Computer Science and
Engineering
University of Oulu
Oulu, Finland
('18780812', 'Xiaohua Huang', 'xiaohua huang')
('1757287', 'Guoying Zhao', 'guoying zhao')
('1714724', 'Matti Pietikäinen', 'matti pietikäinen')
huang.xiaohua@ee.oulu.fi
gyzhao@ee.oulu.fi
mkp@ee.oulu.fi
14a5feadd4209d21fa308e7a942967ea7c13b7b6
ICASSP 2012
14fee990a372bcc4cb6dc024ab7fc4ecf09dba2bModeling Spatio-Temporal Human Track Structure for Action
Localization
('2926143', 'Anton Osokin', 'anton osokin')
14ee4948be56caeb30aa3b94968ce663e7496ce4Jang, Y; Gunes, H; Patras, I
© Copyright 2018 IEEE
For additional information about this publication click this link.
http://qmro.qmul.ac.uk/xmlui/handle/123456789/36405
8ec82da82416bb8da8cdf2140c740e1574eaf84fCHUNG AND ZISSERMAN: BMVC AUTHOR GUIDELINES
Lip Reading in Profile
http://www.robots.ox.ac.uk/~joon
http://www.robots.ox.ac.uk/~az
Visual Geometry Group
Department of Engineering Science
University of Oxford
Oxford, UK
('2863890', 'Joon Son Chung', 'joon son chung')
('1688869', 'Andrew Zisserman', 'andrew zisserman')
8ee62f7d59aa949b4a943453824e03f4ce19e500Robust Head-Pose Estimation Based on
Partially-Latent Mixture of Linear Regression
∗INRIA Grenoble Rhˆone-Alpes, Montbonnot Saint-Martin, France
†INRIA Rennes Bretagne Atlantique, Rennes, France
('2188660', 'Vincent Drouard', 'vincent drouard')
('1794229', 'Radu Horaud', 'radu horaud')
('3307172', 'Antoine Deleforge', 'antoine deleforge')
('1690536', 'Georgios Evangelidis', 'georgios evangelidis')
8e0ede53dc94a4bfcf1238869bf1113f2a37b667Joint Patch and Multi-label Learning for Facial Action Unit Detection
School of Comm. and Info. Engineering, Beijing University of Posts and Telecom., Beijing China
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA
University of Pittsburgh, Pittsburgh, PA
('2393320', 'Kaili Zhao', 'kaili zhao')
('1720776', 'Honggang Zhang', 'honggang zhang')
8e33183a0ed7141aa4fa9d87ef3be334727c76c0– COS429 Written Report, Fall 2017 –
Robustness of Face Recognition to Image Manipulations
1. Motivation
We can often recognize pictures of people we know even if the image has low resolution or obscures
part of the face, if the camera angle resulted in a distorted image of the subject’s face, or if the
subject has aged or put on makeup since we last saw them. Although this is a simple recognition task
for a human, when we think about how we accomplish this task, it seems non-trivial for computer
algorithms to recognize faces despite visual changes.
Computer facial recognition is relied upon for many applications where accuracy is important.
Facial recognition systems have applications ranging from airport security and suspect identification
to personal device authentication and face tagging [7]. In these real-world applications, the system
must continue to recognize images of a person who looks slightly different due to the passage of
time, a change in environment, or a difference in clothing.
Therefore, we are interested in investigating face recognition algorithms and their robustness to
image changes resulting from realistically plausible manipulations. Furthermore, we are curious
about whether the impact of image manipulations on computer algorithms’ face recognition ability
mirrors related insights from neuroscience about humans’ face recognition abilities.
2. Goal
In this project, we implement both face recognition algorithms and image manipulations. We then
analyze the impact of each image manipulation on the recognition accuracy of each algorithm, and
how these effects depend on the accuracy of each algorithm on non-manipulated images.
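The analysis described above can be sketched as a small evaluation harness: given a recognizer, a set of probe images with ground-truth labels, and a manipulation function, measure the accuracy before and after manipulation. This is an illustrative sketch, not the report's code; the names `recognizer` and `manipulate` are our own placeholders.

```python
def accuracy(recognizer, probes, labels):
    """Fraction of probe images the recognizer labels correctly."""
    correct = sum(1 for p, l in zip(probes, labels) if recognizer(p) == l)
    return correct / len(labels)

def robustness_drop(recognizer, probes, labels, manipulate):
    """Accuracy lost when every probe is passed through `manipulate`."""
    base = accuracy(recognizer, probes, labels)
    perturbed = accuracy(recognizer, [manipulate(p) for p in probes], labels)
    return base - perturbed
```

Running this for every (algorithm, manipulation) pair yields the grid of accuracy drops the project analyzes.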
3. Background and Related Work
Researchers have developed a wide variety of face recognition algorithms, ranging from traditional
statistical methods such as PCA, to more opaque methods such as deep neural networks, to proprietary
systems used by governments and corporations [1][13][14].
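As a concrete illustration of the PCA-style methods mentioned above, here is a minimal eigenfaces-like sketch (a hedged example of our own, not the report's implementation; it assumes face images are flattened grayscale vectors):

```python
import numpy as np

def fit_eigenfaces(images, n_components=4):
    """Fit a PCA (eigenfaces) basis. `images` has shape (n_samples, n_pixels)."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Right singular vectors of the centered data are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(image, mean, components):
    """Coordinates of `image` in the eigenface subspace."""
    return components @ (image - mean)

def recognize(probe, gallery, labels, mean, components):
    """Label of the gallery face closest to the probe in eigenface space."""
    p = project(probe, mean, components)
    dists = [np.linalg.norm(p - project(g, mean, components)) for g in gallery]
    return labels[int(np.argmin(dists))]
```

Nearest-neighbor matching in the low-dimensional subspace is the classic eigenfaces recipe; deep-network recognizers replace the linear projection with a learned embedding.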
Similarly, others have developed image manipulations using principles from linear algebra, such
as mimicking the distortions introduced by camera lenses, as well as using neural networks, such as
systems for transforming images according to specified characteristics [12][16].
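For instance, a lens-style radial distortion can be written as a per-pixel coordinate remapping. The sketch below is a hypothetical minimal example under our own assumptions (square grayscale image as a list of rows, a simple quadratic radial model with parameter `k`), not the model used in [12][16]:

```python
def barrel_distort(pixels, k=0.2):
    """Apply a simple radial (barrel) distortion to a square grayscale image.

    Each output pixel is inverse-mapped to a source pixel; samples that fall
    outside the image default to 0 (black).
    """
    n = len(pixels)
    c = (n - 1) / 2.0  # image center
    out = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            dx, dy = (x - c) / c, (y - c) / c   # normalized offsets from center
            r2 = dx * dx + dy * dy              # squared radius
            f = 1 + k * r2                      # radial scaling factor
            sx = int(round(c + dx * f * c))     # source column
            sy = int(round(c + dy * f * c))     # source row
            if 0 <= sx < n and 0 <= sy < n:
                out[y][x] = pixels[sy][sx]
    return out
```

With `k = 0` the mapping is the identity, and the center pixel is a fixed point for any `k`, which makes the manipulation easy to sanity-check before running it over a dataset.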
Furthermore, researchers in psychology have studied face recognition in humans. A study of
“super-recognizers” (people with extraordinarily high powers of face recognition) and “developmen-
tal prosopagnosics” (people with severely impaired face recognition abilities) found that inverting
images of faces impaired recognition ability more for people with stronger face recognition abilities
[11]. This could indicate that image manipulations tend to equalize face recognition abilities, and
we investigate whether this is the case with the manipulations and face recognition algorithms we
test.
('1897270', 'Cathy Chen', 'cathy chen')
8e3d0b401dec8818cd0245c540c6bc032f169a1dMcGan: Mean and Covariance Feature Matching GAN ('2211263', 'Youssef Mroueh', 'youssef mroueh')
8e3c97e420e0112c043929087d6456d8ab61e95cSAFDARNEJAD et al.: ROBUST GLOBAL MOTION COMPENSATION
Robust Global Motion Compensation in
Presence of Predominant Foreground
https://www.msu.edu/~safdarne/
http://www.cse.msu.edu/~liuxm/
http://www.egr.msu.edu/ndel/profile/lalita-udpa
Michigan State University
East Lansing
Michigan, USA
('2941187', 'Seyed Morteza Safdarnejad', 'seyed morteza safdarnejad')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
('1938832', 'Lalita Udpa', 'lalita udpa')
8e0ab1b08964393e4f9f42ca037220fe98aad7acUV-GAN: Adversarial Facial UV Map Completion for Pose-invariant Face
Recognition
Imperial College London
('3234063', 'Jiankang Deng', 'jiankang deng')
('1902288', 'Shiyang Cheng', 'shiyang cheng')
('4091869', 'Niannan Xue', 'niannan xue')
('47943220', 'Yuxiang Zhou', 'yuxiang zhou')
{j.deng16, shiyang.cheng11, n.xue15, yuxiang.zhou10, s.zafeiriou}@imperial.ac.uk
8e94ed0d7606408a0833e69c3185d6dcbe22bbbe© 2012 IEEE. Personal use of this material is permitted. Permission from IEEE
must be obtained for all other uses, in any current or future media, including
reprinting/republishing this material for advertising or promotional purposes,
creating new collective works, for resale or redistribution to servers or lists, or
reuse of any copyrighted component of this work in other works.
Pre-print of article that will appear at WACV 2012.
8e461978359b056d1b4770508e7a567dbed49776LOMo: Latent Ordinal Model for Facial Analysis in Videos
Marian Bartlett1,∗,‡
1UCSD, USA
2MPI for Informatics, Germany
3IIT Kanpur, India
('39707211', 'Karan Sikka', 'karan sikka')
('39396475', 'Gaurav Sharma', 'gaurav sharma')
8e4808e71c9b9f852dc9558d7ef41566639137f3Adversarial Generative Nets: Neural Network
Attacks on State-of-the-Art Face Recognition
Carnegie Mellon University
University of North Carolina at Chapel Hill
('36301492', 'Mahmood Sharif', 'mahmood sharif')
('38181360', 'Sruti Bhagavatula', 'sruti bhagavatula')
('38572260', 'Lujo Bauer', 'lujo bauer')
('1746214', 'Michael K. Reiter', 'michael k. reiter')
{mahmoods, srutib, lbauer}@cmu.edu
reiter@cs.unc.edu
8ea30ade85880b94b74b56a9bac013585cb4c34bFROM TURBO HIDDEN MARKOV MODELS TO TURBO STATE-SPACE MODELS
Institut Eurécom
Multimedia Communications Department
BP 193, 06904 Sophia Antipolis Cedex, France
('1723883', 'Florent Perronnin', 'florent perronnin')
('1709849', 'Jean-Luc Dugelay', 'jean-luc dugelay')
{florent.perronnin, jean-luc.dugelay}@eurecom.fr
8ed32c8fad924736ebc6d99c5c319312ba1fa80b
8e0ad1ccddc7ec73916eddd2b7bbc0019d8a7958Segment-based SVMs for
Time Series Analysis
CMU-RI-TR-12-1
Submitted in partial fulfillment of the
requirements for the degree of
Doctor of Philosophy in Robotics
The Robotics Institute
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213
Version: 20 Jan 2012
Thesis Committee:
Fernando De la Torre (chair)
('1698158', 'Minh Hoai Nguyen', 'minh hoai nguyen')
('1709305', 'Martial Hebert', 'martial hebert')
('1730156', 'Carlos Guestrin', 'carlos guestrin')
('2038264', 'Frank Dellaert', 'frank dellaert')
('1698158', 'Minh Hoai Nguyen', 'minh hoai nguyen')
8e8e3f2e66494b9b6782fb9e3f52aeb8e1b0d125© 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be
obtained for all other uses, in any current or future media, including
reprinting/republishing this material for advertising or promotional purposes, creating
new collective works, for resale or redistribution to servers or lists, or reuse of any
copyrighted component of this work in other works.
Pre-print of article that will appear at BTAS 2012.
8e378ef01171b33c59c17ff5798f30293fe30686Chair of Human-Machine Communication (Lehrstuhl für Mensch-Maschine-Kommunikation)
Technische Universität München
A System for Automatic Face Analysis
Based on
Statistical Shape and Texture Models
Ronald Müller
Complete reprint of the dissertation approved by the Faculty of
Electrical Engineering and Information Technology
of the Technische Universität München
for the award of the academic degree of
Doktor-Ingenieur
Chair: Prof. Dr. rer. nat. Bernhard Wolf
Examiners of the dissertation:
1. Prof. Dr.-Ing. habil. Gerhard Rigoll
2. Prof. Dr.-Ing. habil. Alexander W. Koch
The dissertation was submitted to the Technische Universität München on 28.02.2008
and was accepted by the Faculty of Electrical Engineering and Information Technology
on 18.09.2008.
8ed051be31309a71b75e584bc812b71a0344a019Class-based feature matching across unrestricted
transformations
('1938475', 'Evgeniy Bart', 'evgeniy bart')
('1743045', 'Shimon Ullman', 'shimon ullman')
8e36100cb144685c26e46ad034c524b830b8b2f2Modeling Facial Geometry using Compositional VAEs
1 École Polytechnique Fédérale de Lausanne
2Facebook Reality Labs, Pittsburgh
('33846296', 'Chenglei Wu', 'chenglei wu')
('14373499', 'Jason Saragih', 'jason saragih')
('1717736', 'Pascal Fua', 'pascal fua')
('1774867', 'Yaser Sheikh', 'yaser sheikh')
{firstname.lastname}@epfl.ch, {firstname.lastname}@fb.com
8ed33184fccde677ec8413ae06f28ea9f2ca70f3Multimodal Visual Concept Learning with Weakly Supervised Techniques
School of E.C.E., National Technical University of Athens, Greece
('7311172', 'Giorgos Bouritsas', 'giorgos bouritsas')
('2539459', 'Petros Koutras', 'petros koutras')
('2641229', 'Athanasia Zlatintsi', 'athanasia zlatintsi')
('1750686', 'Petros Maragos', 'petros maragos')
gbouritsas@gmail.com, {pkoutras, nzlat, maragos}@cs.ntua.gr
8ee5b1c9fb0bded3578113c738060290403ed472Extending Explicit Shape Regression with
Mixed Feature Channels and Pose Priors
Karlsruhe Institute of
Technology (KIT)
Karlsruhe, Germany
Hazım Kemal Ekenel
École Polytechnique Fédérale
de Lausanne (EPFL)
Lausanne, Switzerland
Istanbul Technical
University (ITU)
Istanbul, Turkey
('39610204', 'Matthias Richter', 'matthias richter')
('1697965', 'Hua Gao', 'hua gao')
matthias.richter@kit.edu
hua.gao@epfl.ch
ekenel@itu.edu.tr
8e0becfc5fe3ecdd2ac93fabe34634827b21ef2bInternational Journal of Computer Vision manuscript No.
(will be inserted by the editor)
Learning from Longitudinal Face Demonstration -
Where Tractable Deep Modeling Meets Inverse Reinforcement Learning
Savvides · Tien D. Bui
Received: date / Accepted: date
('1876581', 'Chi Nhan Duong', 'chi nhan duong')
8efda5708bbcf658d4f567e3866e3549fe045bbbPre-trained Deep Convolutional Neural Networks
for Face Recognition
Siebert Looije
S2209276
January 2018
MSc. Thesis
Artificial Intelligence
University of Groningen, The Netherlands
Supervisors
Dr. M.A. (Marco) Wiering
K. (Klaas) Dijkstra, MSc.
ALICE Institute
University of Groningen
Nijenborgh 9, 9747 AG, Groningen, The Netherlands
2227f978f084ebb18cb594c0cfaf124b0df6bf95Pillar Networks for action recognition
B Sengupta
Cortexica Vision Systems Limited
Imperial College London
London, UK
Y Qian
Cortexica Vision Systems Limited
30 Stamford Street SE1 9LQ
London, UK
b.sengupta@imperial.ac.uk
yu.qian@cortexica.com
225fb9181545f8750061c7693661b62d715dc542
22043cbd2b70cb8195d8d0500460ddc00ddb1a62Separability-Oriented Subclass Discriminant
Analysis
('2986129', 'Huan Wan', 'huan wan')
('27838939', 'Hui Wang', 'hui wang')
('35009947', 'Gongde Guo', 'gongde guo')
('10803956', 'Xin Wei', 'xin wei')
22137ce9c01a8fdebf92ef35407a5a5d18730dde
22e2066acfb795ac4db3f97d2ac176d6ca41836cCoarse-to-Fine Auto-Encoder Networks (CFAN)
for Real-Time Face Alignment
1 Key Lab of Intelligent Information Processing of Chinese Academy of Sciences
CAS), Institute of Computing Technology, CAS, Beijing 100190, China
University of Chinese Academy of Sciences, Beijing 100049, China
('1698586', 'Jie Zhang', 'jie zhang')
('1685914', 'Shiguang Shan', 'shiguang shan')
('1693589', 'Meina Kan', 'meina kan')
('1710220', 'Xilin Chen', 'xilin chen')
{jie.zhang,shiguang.shan,meina.kan,xilin.chen}@vipl.ict.ac.cn
22717ad3ad1dfcbb0fd2f866da63abbde9af0b09A Learning-based Control Architecture for Socially
Assistive Robots Providing Cognitive Interventions
by
A thesis submitted in conformity with the requirements
for the degree of Masters of Applied Science
Mechanical and Industrial Engineering
University of Toronto
('39999379', 'Jeanie Chan', 'jeanie chan')
('39999379', 'Jeanie Chan', 'jeanie chan')
2288696b6558b7397bdebe3aed77bedec7b9c0a9WU, WANG, YANG, JI: JOINT ATTENTION ON MULTI-LEVEL DEEP FEATURES 1
Action Recognition with Joint Attention
on Multi-Level Deep Features
Dept of Automation
Tsinghua University
Beijing, China
('35585536', 'Jialin Wu', 'jialin wu')
('29644358', 'Gu Wang', 'gu wang')
('3432961', 'Wukui Yang', 'wukui yang')
('7807689', 'Xiangyang Ji', 'xiangyang ji')
wujl13@mails.tsinghua.edu.cn
wangg12@mails.tsinghua.edu.cn
yang-wk15@mails.tsinghua.edu.cn
xyji@mail.tsinghua.edu.cn
22264e60f1dfbc7d0b52549d1de560993dd96e46UnitBox: An Advanced Object Detection Network
Thomas Huang1
University of Illinois at Urbana Champaign
2Megvii Inc
('3451838', 'Jiahui Yu', 'jiahui yu')
('1691963', 'Yuning Jiang', 'yuning jiang')
('2969311', 'Zhangyang Wang', 'zhangyang wang')
('2695115', 'Zhimin Cao', 'zhimin cao')
{jyu79, zwang119, t-huang1}@illinois.edu, {jyn, czm}@megvii.com
22dada4a7ba85625824489375184ba1c3f7f0c8f
221252be5d5be3b3e53b3bbbe7a9930d9d8cad69ZHU, VONDRICK, RAMANAN, AND FOWLKES: MORE DATA OR BETTER MODELS
Do We Need More Training Data or Better
Models for Object Detection?
1 Computer Science Department
University of California
Irvine, CA, USA
2 CSAIL
Massachusetts Institute of Technology
Cambridge, MA, USA
(Work performed while at UC Irvine)
('32542103', 'Xiangxin Zhu', 'xiangxin zhu')
('1856025', 'Carl Vondrick', 'carl vondrick')
('1770537', 'Deva Ramanan', 'deva ramanan')
('3157443', 'Charless C. Fowlkes', 'charless c. fowlkes')
xzhu@ics.uci.edu
vondrick@mit.edu
dramanan@ics.uci.edu
fowlkes@ics.uci.edu
223ec77652c268b98c298327d42aacea8f3ce23fTR-CS-11-02
Acted Facial Expressions In The Wild
Database
September 2011
ANU Computer Science Technical Report Series
('1735697', 'Abhinav Dhall', 'abhinav dhall')
('1717204', 'Roland Goecke', 'roland goecke')
('27011207', 'Tom Gedeon', 'tom gedeon')
22df6b6c87d26f51c0ccf3d4dddad07ce839deb0Fast Action Proposals for Human Action Detection and Search
School of Electrical and Electronic Engineering
Nanyang Technological University, Singapore
('2352391', 'Gang Yu', 'gang yu')
('34316743', 'Junsong Yuan', 'junsong yuan')
iskicy@gmail.com, jsyuan@ntu.edu.sg
228558a2a38a6937e3c7b1775144fea290d65d6cNonparametric Context Modeling of Local Appearance
for Pose- and Expression-Robust Facial Landmark Localization
University of Wisconsin Madison
Zhe Lin2
2Adobe Research
http://www.cs.wisc.edu/~lizhang/projects/face-landmark-localization/
('1893050', 'Brandon M. Smith', 'brandon m. smith')
('1721019', 'Jonathan Brandt', 'jonathan brandt')
('40396555', 'Li Zhang', 'li zhang')
22fdd8d65463f520f054bf4f6d2d216b54fc5677International Journal of Emerging Technology and Advanced Engineering
Website: www.ijetae.com (ISSN 2250-2459, ISO 9001:2008 Certified Journal, Volume 3, Issue 8, August 2013)
Efficient Small and Capital Handwritten Character
Recognition with Noise Reduction
IES College of Technology, Bhopal
('1926347', 'Shailendra Tiwari', 'shailendra tiwari')
('2152231', 'Sandeep Kumar', 'sandeep kumar')
2251a88fbccb0228d6d846b60ac3eeabe468e0f1Matrix-Based Kernel Subspace Methods
Integrated Data Systems Department
Siemens Corporate Research
College Road East, Princeton, NJ
('1682187', 'S. Kevin Zhou', 's. kevin zhou')Email: {kzhou}@scr.siemens.com
22e678d3e915218a7c09af0d1602e73080658bb7Adventures in Archiving and Using Three Years of Webcam Images
Department of Computer Science and Engineering
Washington University, St. Louis, MO, USA
('1990750', 'Nathan Jacobs', 'nathan jacobs')
('39795519', 'Walker Burgin', 'walker burgin')
('1761429', 'Robert Pless', 'robert pless')
{jacobsn,wsb1,rzs1,dyr1,pless}@cse.wustl.edu
2201f187a7483982c2e8e2585ad9907c5e66671dJoint Face Alignment and 3D Face Reconstruction
College of Computer Science, Sichuan University, Chengdu, China
2 Department of Computer Science and Engineering
Michigan State University, East Lansing, MI, U.S.A
('50207647', 'Feng Liu', 'feng liu')
('39422721', 'Dan Zeng', 'dan zeng')
('7345195', 'Qijun Zhao', 'qijun zhao')
('1759169', 'Xiaoming Liu', 'xiaoming liu')
227b18fab568472bf14f9665cedfb95ed33e5fceCompositional Dictionaries for Domain Adaptive
Face Recognition
('2077648', 'Qiang Qiu', 'qiang qiu')
('9215658', 'Rama Chellappa', 'rama chellappa')
227b1a09b942eaf130d1d84cdcabf98921780a22Yang et al. EURASIP Journal on Advances in Signal Processing (2018) 2018:51
https://doi.org/10.1186/s13634-018-0572-6
EURASIP Journal on Advances
in Signal Processing
R ES EAR CH
Multi-feature shape regression for face
alignment
Open Access
('3413708', 'Wei-jong Yang', 'wei-jong yang')
('49070426', 'Yi-Chen Chen', 'yi-chen chen')
('1789917', 'Pau-Choo Chung', 'pau-choo chung')
('1749263', 'Jar-Ferr Yang', 'jar-ferr yang')
2241eda10b76efd84f3c05bdd836619b4a3df97eOne-to-many face recognition with bilinear CNNs
Aruni RoyChowdhury
University of Massachusetts, Amherst
Erik Learned-Miller
('2144284', 'Tsung-Yu Lin', 'tsung-yu lin')
('35208858', 'Subhransu Maji', 'subhransu maji')
{arunirc,tsungyulin,smaji,elm}@cs.umass.edu
22646cf884cc7093b0db2c1731bd52f43682eaa8Human Action Adverb Recognition: ADHA Dataset and A Three-Stream
Hybrid Model
Shanghai Jiao Tong University, China
('1717692', 'Bo Pang', 'bo pang')
('15376265', 'Kaiwen Zha', 'kaiwen zha')
('1830034', 'Cewu Lu', 'cewu lu')
pangbo@sjtu.edu.cn,Kevin zha@sjtu.edu.cn,lucewu@cs.sjtu.edu.cn
22f94c43dd8b203f073f782d91e701108909690bMovieScope: Movie trailer classification using Deep Neural Networks
Dept of Computer Science
University of Virginia
{ks6cq, gs9ed}@virginia.edu
22dabd4f092e7f3bdaf352edd925ecc59821e168Deakin Research Online
This is the published version:
An, Senjian, Liu, Wanquan and Venkatesh, Svetha 2008, Exploiting side information in
locality preserving projection, in CVPR 2008 : Proceedings of the 26th IEEE Conference on
Computer Vision and Pattern Recognition, IEEE, Washington, D. C., pp. 1-8.
Available from Deakin Research Online:
http://hdl.handle.net/10536/DRO/DU:30044576
Reproduced with the kind permission of the copyright owner.
Personal use of this material is permitted. However, permission to reprint/republish this
material for advertising or promotional purposes or for creating new collective works for
resale or redistribution to servers or lists, or to reuse any copyrighted component of this work
in other works must be obtained from the IEEE.
Copyright : 2008, IEEE
22f656d0f8426c84a33a267977f511f127bfd7f3
22143664860c6356d3de3556ddebe3652f9c912aFacial Expression Recognition for Human-robot
Interaction – A Prototype
1 Department of Informatics, Technische Universität München, Germany
Electrical and Computer Engineering, University of Auckland, New Zealand
('32131501', 'Matthias Wimmer', 'matthias wimmer')
('1761487', 'Bruce A. MacDonald', 'bruce a. macdonald')
('3235721', 'Dinuka Jayamuni', 'dinuka jayamuni')
('2607879', 'Arpit Yadav', 'arpit yadav')
2271d554787fdad561fafc6e9f742eea94d35518TECHNISCHE UNIVERSITÄT MÜNCHEN
Chair of Human-Machine Communication (Lehrstuhl für Mensch-Maschine-Kommunikation)
Multimodal Human-Robot Interaction
for Ambient Assisted Living
(Multimodale Mensch-Roboter-Interaktion für Ambient Assisted Living)
Tobias F. Rehrl
Complete reprint of the dissertation approved by the Faculty of Electrical Engineering
and Information Technology of the Technische Universität München for the award of the
academic degree of
Doktor-Ingenieur (Dr.-Ing.)
Chair:
Examiners of the dissertation: 1. Univ.-Prof. Dr.-Ing. habil. Gerhard Rigoll
2. Univ.-Prof. Dr.-Ing. Horst-Michael Groß
Univ.-Prof. Dr.-Ing. Sandra Hirche
(Technische Universität Ilmenau)
The dissertation was submitted to the Technische Universität München on 17 April 2013
and was accepted by the Faculty of Electrical Engineering and Information Technology
on 8 October 2013.
22ec256400e53cee35f999244fb9ba6ba11c1d06
22a7f1aebdb57eecd64be2a1f03aef25f9b0e9a7
22e189a813529a8f43ad76b318207d9a4b6de71aWhat will Happen Next?
Forecasting Player Moves in Sports Videos
UC Berkeley, STATS
UC Berkeley
UC Berkeley
('2986395', 'Panna Felsen', 'panna felsen')
('33932184', 'Pulkit Agrawal', 'pulkit agrawal')
('1689212', 'Jitendra Malik', 'jitendra malik')
panna@berkeley.edu
pulkitag@berkeley.edu
malik@berkeley.edu
25ff865460c2b5481fa4161749d5da8501010aa0Seeing What Is Not There:
Learning Context to Determine Where Objects Are Missing
Department of Computer Science
University of Maryland
Figure 1: When curb ramps (green rectangle) are missing from a segment of sidewalks in an intersection (orange rectangle),
people with mobility impairments are unable to cross the street. We propose an approach to determine where objects are
missing by learning a context model so that it can be combined with object detection results.
('39516880', 'Jin Sun', 'jin sun')
('34734622', 'David W. Jacobs', 'david w. jacobs')
{jinsun,djacobs}@cs.umd.edu
25d514d26ecbc147becf4117512523412e1f060bAnnotated Crowd Video Face Database
IIIT-Delhi, India
('2952437', 'Tejas I. Dhamecha', 'tejas i. dhamecha')
('2578160', 'Priyanka Verma', 'priyanka verma')
('3239512', 'Mahek Shah', 'mahek shah')
('39129417', 'Richa Singh', 'richa singh')
('2338122', 'Mayank Vatsa', 'mayank vatsa')
{tejasd,priyanka13100,mahek13106,rsingh,mayank}@iiitd.ac.in
25c19d8c85462b3b0926820ee5a92fc55b81c35aNoname manuscript No.
(will be inserted by the editor)
Pose-Invariant Facial Expression Recognition
Using Variable-Intensity Templates
Received: date / Accepted: date
('3325574', 'Shiro Kumano', 'shiro kumano')
('38178548', 'Eisaku Maeda', 'eisaku maeda')
258a8c6710a9b0c2dc3818333ec035730062b1a5Benelearn 2005
Annual Machine Learning Conference of
Belgium and the Netherlands
CTIT PROCEEDINGS OF THE FOURTEENTH
ANNUAL MACHINE LEARNING CONFERENCE
OF BELGIUM AND THE NETHERLANDS
('2541098', 'Martijn van Otterlo', 'martijn van otterlo')
('1688157', 'Mannes Poel', 'mannes poel')
('1745198', 'Anton Nijholt', 'anton nijholt')
25695abfe51209798f3b68fb42cfad7a96356f1fAN INVESTIGATION INTO COMBINING
BOTH FACIAL DETECTION AND
LANDMARK LOCALISATION INTO A
UNIFIED PROCEDURE USING GPU
COMPUTING
MSc by Research
2016
('32464788', 'J M McDonagh', 'j m mcdonagh')
250ebcd1a8da31f0071d07954eea4426bb80644cDenseBox: Unifying Landmark Localization with
End to End Object Detection
Institute of Deep Learning
Baidu Research
('3168646', 'Lichao Huang', 'lichao huang')
('1698559', 'Yi Yang', 'yi yang')
('1987538', 'Yafeng Deng', 'yafeng deng')
('2278628', 'Yinan Yu', 'yinan yu')
2{huanglichao01,yangyi05,dengyafeng}@baidu.com
1alanhuang1990@gmail.com
3bebekifis@gmail.com
25337690fed69033ef1ce6944e5b78c4f06ffb81STRATEGIC ENGAGEMENT REGULATION:
AN INTEGRATION OF SELF-ENHANCEMENT AND ENGAGEMENT
by
A dissertation submitted to the Faculty of the University of Delaware in partial
fulfillment of the requirements for the degree of Doctor of Philosophy in Psychology
Spring 2014
All Rights Reserved
('2800616', 'Jordan B. Leitner', 'jordan b. leitner')
('2800616', 'Jordan B. Leitner', 'jordan b. leitner')
25c3cdbde7054fbc647d8be0d746373e7b64d150ForgetMeNot: Memory-Aware Forensic Facial Sketch Matching
Beijing University of Posts and Telecommunications
Queen Mary University of London, UK
('2961830', 'Shuxin Ouyang', 'shuxin ouyang')
('1697755', 'Timothy M. Hospedales', 'timothy m. hospedales')
('1705408', 'Yi-Zhe Song', 'yi-zhe song')
('7823169', 'Xueming Li', 'xueming li')
{s.ouyang, t.hospedales, yizhe.song}@qmul.ac.uk
lixm@bupt.edu.cn
25bf288b2d896f3c9dab7e7c3e9f9302e7d6806bNeural Networks with Smooth Adaptive Activation Functions
for Regression
Stony Brook University, NY, USA
Stony Brook University, NY, USA
3Oak Ridge National Laboratory, USA
4Department of Applied Mathematics and Statistics, NY, USA
5Department of Pathology, Stony Brook Hospital, NY, USA
6Cancer Center, Stony Brook Hospital, NY, USA
August 24, 2016
('2321406', 'Le Hou', 'le hou')
('1686020', 'Dimitris Samaras', 'dimitris samaras')
('1755448', 'Yi Gao', 'yi gao')
('1735710', 'Joel H. Saltz', 'joel h. saltz')
{lehhou,samaras}@cs.stonybrook.edu
{tahsin.kurc,joel.saltz}@stonybrook.edu
yi.gao@stonybrookmedicine.edu
25d3e122fec578a14226dc7c007fb1f05ddf97f7The First Facial Expression Recognition and Analysis Challenge ('1795528', 'Michel F. Valstar', 'michel f. valstar')
('39532631', 'Bihan Jiang', 'bihan jiang')
('1875347', 'Marc Mehu', 'marc mehu')
('1694605', 'Maja Pantic', 'maja pantic')
2597b0dccdf3d89eaffd32e202570b1fbbedd1d6Towards predicting the likeability of fashion images ('2569065', 'Jinghua Wang', 'jinghua wang')
('2613790', 'Abrar Abdul Nabi', 'abrar abdul nabi')
('22804340', 'Gang Wang', 'gang wang')
('2737180', 'Chengde Wan', 'chengde wan')
('2475944', 'Tian-Tsong Ng', 'tian-tsong ng')
2588acc7a730d864f84d4e1a050070ff873b03d5Article
Action Recognition by an Attention-Aware Temporal
Weighted Convolutional Neural Network
Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an 710049, China
Received: 27 April 2018; Accepted: 19 June 2018; Published: 21 June 2018
('40367806', 'Le Wang', 'le wang')
('14800230', 'Jinliang Zang', 'jinliang zang')
('46324995', 'Qilin Zhang', 'qilin zhang')
('1786361', 'Zhenxing Niu', 'zhenxing niu')
('1745420', 'Gang Hua', 'gang hua')
('1715389', 'Nanning Zheng', 'nanning zheng')
zjl19920904@stu.xjtu.edu.cn (J.Z.); nnzheng@xjtu.edu.cn (N.Z.)
2 HERE Technologies, Chicago, IL 60606, USA; qilin.zhang@here.com
3 Alibaba Group, Hangzhou 311121, China; zhenxing.nzx@alibaba-inc.com
4 Microsoft Research, Redmond, WA 98052, USA; ganghua@microsoft.com
* Correspondence: lewang@xjtu.edu.cn; Tel.: +86-29-8266-8672
25982e2bef817ebde7be5bb80b22a9864b979fb0
25c108a56e4cb757b62911639a40e9caf07f1b4fRecurrent Scale Approximation for Object Detection in CNN
Multimedia Laboratory at The Chinese University of Hong Kong
1SenseTime Group Limited
('1715752', 'Yu Liu', 'yu liu')
('1929886', 'Hongyang Li', 'hongyang li')
('1721677', 'Junjie Yan', 'junjie yan')
('22181490', 'Fangyin Wei', 'fangyin wei')
('31843833', 'Xiaogang Wang', 'xiaogang wang')
('1741901', 'Xiaoou Tang', 'xiaoou tang')
liuyuisanai@gmail.com,{yangli,xgwang}@ee.cuhk.edu.hk,
{yanjunjie,weifangyin}@sensetime.com, xtang@ie.cuhk.edu.hk
2594a77a3f0dd5073f79ba620e2f287804cec630TRANSFERRING FACE VERIFICATION NETS TO PAIN AND EXPRESSION REGRESSION
Dept. of {Computer Science1, Electrical & Computer Engineering2, Radiation Oncology3, Cognitive Science4}
Johns Hopkins University, 3400 N. Charles St, Baltimore, MD 21218, USA
5Dept. of EE, UESTC, 2006 Xiyuan Ave, Chengdu, Sichuan 611731, China
Tsinghua University, Beijing 100084, China
('1713335', 'Feng Wang', 'feng wang')
('40031188', 'Xiang Xiang', 'xiang xiang')
('1692867', 'Chang Liu', 'chang liu')
('1709073', 'Trac D. Tran', 'trac d. tran')
('3207112', 'Austin Reiter', 'austin reiter')
('1678633', 'Gregory D. Hager', 'gregory d. hager')
('2095823', 'Harry Quon', 'harry quon')
('1709439', 'Jian Cheng', 'jian cheng')
('1746141', 'Alan L. Yuille', 'alan l. yuille')
25e2d3122d4926edaab56a576925ae7a88d68a77ORIGINAL RESEARCH
published: 23 February 2016
doi: 10.3389/fpsyg.2016.00166
Communicative-Pragmatic
Treatment in Schizophrenia: A Pilot
Study
Center for Cognitive Science, University of Turin, Turin, Italy, 2 Neuroscience Institute of Turin
Turin, Italy, 3 Faculty of Humanities, Research Unit of Logopedics, Child Language Research Center, University of Oulu, Oulu
Finland, 4 AslTo2 Department of Mental Health, Turin, Italy, 5 Brain Imaging Group, Turin, Italy
This paper aims to verify the efficacy of Cognitive Pragmatic Treatment (CPT), a new
remediation training for the improvement of the communicative-pragmatic abilities, in
patients with schizophrenia. The CPT program is made up of 20 group sessions,
focused on a number of communication modalities, i.e., linguistic, extralinguistic and
paralinguistic, theory of mind (ToM) and other cognitive functions that play a role
in communicative performance, such as awareness and planning. A group of 17
patients with schizophrenia took part in the training program. They were evaluated
before and after training, through the equivalent forms of the Assessment Battery for
Communication (ABaCo), a tool for testing, both in comprehension and in production,
a wide range of pragmatic phenomena such as direct and indirect speech acts,
irony and deceit, and a series of neuropsychological and ToM tests. The results
showed a significant improvement in patients’ performance on both production and
comprehension tasks following the program, and in all the communication modalities
evaluated through the ABaCo, i.e., linguistic, extralinguistic, paralinguistic, and social
appropriateness. This improvement persisted 3 months after the end of the training
program, as shown by the follow-up tests. These preliminary findings provide evidence
of the efficacy of the CPT program in improving communicative-pragmatic abilities in
individuals with schizophrenia.
Keywords: rehabilitation, schizophrenia, pragmatic, communication, training
INTRODUCTION
People with schizophrenia experience symptoms such as delusions, hallucinations, disorganized
speech and behavior, that cause difficulty in social relationships (DSM 5; American Psychiatric
Association [APA], 2013). In the clinical pragmatic domain (Cummings, 2014), the area of study
of pragmatic impairment in patients with communicative disorders, several studies have reported
that communicative ability is impaired in patients with schizophrenia (Langdon et al., 2002; Bazin
et al., 2005; Linscott, 2005; Marini et al., 2008; Colle et al., 2013). For example, Bazin et al. (2005),
created a structured interview, the Schizophrenia Communication Disorder Scale, which they
administered to patients with schizophrenia. The authors observed that these patients performed
less well than those affected by mania or depression in managing a conversation on everyday
topics, such as family, job, hobbies, and so on. Likewise, non-compliance with conversational
rules, such as consistency with the agreed purpose of the interaction, giving the partner too little
Edited by:
Sayyed Mohsen Fatemi,
Harvard University, USA
Reviewed by:
Silvia Serino,
IRCCS Istituto Auxologico Italiano,
Italy
Michelle Dow Keawphalouk,
Harvard and Massachusetts Institute
of Technology, USA
*Correspondence:
Specialty section:
This article was submitted to
Psychology for Clinical Settings,
a section of the journal
Frontiers in Psychology
Received: 07 October 2015
Accepted: 28 January 2016
Published: 23 February 2016
Citation:
Bosco FM, Gabbatore I, Gastaldo L
and Sacco K (2016)
Communicative-Pragmatic Treatment
in Schizophrenia: A Pilot Study.
Front. Psychol. 7:166.
doi: 10.3389/fpsyg.2016.00166
Frontiers in Psychology | www.frontiersin.org
February 2016 | Volume 7 | Article 166
('2261858', 'Francesca M. Bosco', 'francesca m. bosco')
('3175646', 'Ilaria Gabbatore', 'ilaria gabbatore')
('39551201', 'Luigi Gastaldo', 'luigi gastaldo')
('2159033', 'Katiuscia Sacco', 'katiuscia sacco')
('3175646', 'Ilaria Gabbatore', 'ilaria gabbatore')
ilaria.gabbatore@oulu.fi;
ilariagabbatore@gmail.com
25e05a1ea19d5baf5e642c2a43cca19c5cbb60f8Label Distribution Learning ('1735299', 'Xin Geng', 'xin geng')
2559b15f8d4a57694a0a33bdc4ac95c479a3c79a570
Contextual Object Localization With Multiple
Kernel Nearest Neighbor
Gert Lanckriet, Member, IEEE
('3215419', 'Brian McFee', 'brian mcfee')
('1954793', 'Carolina Galleguillos', 'carolina galleguillos')
2574860616d7ffa653eb002bbaca53686bc71cdd
25f1f195c0efd84c221b62d1256a8625cb4b450cICME 2007
25885e9292957feb89dcb4a30e77218ffe7b9868JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2016
Analyzing the Affect of a Group of People Using
Multi-modal Framework
('18780812', 'Xiaohua Huang', 'xiaohua huang')
('1735697', 'Abhinav Dhall', 'abhinav dhall')
('40357816', 'Xin Liu', 'xin liu')
('1757287', 'Guoying Zhao', 'guoying zhao')
('2473859', 'Jingang Shi', 'jingang shi')
259706f1fd85e2e900e757d2656ca289363e74aaImproving People Search Using Query Expansions
How Friends Help To Find People
LEAR - INRIA Rhˆone Alpes - Grenoble, France
('1722052', 'Thomas Mensink', 'thomas mensink')
('34602236', 'Jakob Verbeek', 'jakob verbeek')
{thomas.mensink,jakob.verbeek}@inria.fr
25728e08b0ee482ee6ced79c74d4735bb5478e29
258a2dad71cb47c71f408fa0611a4864532f5ebaDiscriminative Optimization
of Local Features for Face Recognition
Hossein Azizpour
Master of Science Thesis
Stockholm, Sweden 2011
25127c2d9f14d36f03d200a65de8446f6a0e3bd6Journal of Theoretical and Applied Information Technology
20th May 2016. Vol.87. No.2
© 2005 - 2016 JATIT & LLS. All rights reserved.
ISSN: 1992-8645 www.jatit.org E-ISSN: 1817-3195
EVALUATING THE PERFORMANCE OF DEEP SUPERVISED
AUTO ENCODER IN SINGLE SAMPLE FACE RECOGNITION
PROBLEM USING KULLBACK-LEIBLER DIVERGENCE
SPARSITY REGULARIZER
Faculty of Computer of Computer Science, Universitas Indonesia, Kampus UI Depok, Indonesia
('9324684', 'ARIDA F. SYAFIANDINI', 'arida f. syafiandini')
E-mail: 1otniel.yosi@ui.ac.id, 2ito.wasito@cs.ui.ac.id, 2arida.ferti@ui.ac.id