PDF Report: Unknown Bigrams
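
The table below pairs each two-word sequence found in the extracted PDF text with its frequency; entries such as "classi cation", "identi cation", and "ef cient" appear to be artifacts of "fi"/"fl" ligatures dropped during text extraction. As a rough illustration only, here is a minimal sketch of how such a frequency table could be regenerated, assuming the corpus is a directory of plain-text files already extracted from the PDFs; the directory name "extracted_txt", the token pattern, and the output format are illustrative assumptions, not details taken from this report.

    # Minimal sketch (not the tool that produced this report): count word
    # bigrams across a directory of plain-text files extracted from PDFs
    # and print them sorted by descending frequency.
    import re
    from collections import Counter
    from pathlib import Path

    WORD = re.compile(r"[a-z]+")  # lowercase alphabetic tokens only

    def bigram_counts(corpus_dir: str) -> Counter:
        counts = Counter()
        for path in Path(corpus_dir).glob("*.txt"):
            tokens = WORD.findall(path.read_text(errors="ignore").lower())
            counts.update(zip(tokens, tokens[1:]))  # consecutive word pairs
        return counts

    if __name__ == "__main__":
        for (w1, w2), n in bigram_counts("extracted_txt").most_common(100):
            print(f"{w1} {w2}  {n}")

Counter.most_common keeps the pairs ordered by descending count, which matches the ordering of the table below.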

bigram  count
of the  4760
in the  2840
computer science  2687
of computer  2418
face recognition  1920
science and  1502
to the  1498
member ieee  1381
of technology  1248
for the  1201
on the  1161
and the  858
computer vision  817
of electrical  785
by the  782
facial expression  778
from the  773
in this  758
and technology  719
and computer  715
classi cation  668
has been  653
with the  630
identi cation  614
computer engineering  584
of science  582
of engineering  576
international journal  571
facial expressions  570
center for  568
electrical engineering  565
that the  561
for face  560
and engineering  544
of information  543
of this  530
open access  525
engineering and  524
of psychology  517
beijing china  513
this paper  505
of california  497
institute for  494
is the  479
the university  462
http www  451
this article  450
electrical and  438
have been  437
of facial  411
neural networks  402
the degree  395
doi org  394
of face  393
carnegie mellon  390
the face  385
mellon university  384
this work  384
re identi  380
expression recognition  380
the same  374
state university  372
at the  369
and information  365
senior member  356
as the  348
of sciences  347
ieee and  343
pattern recognition  341
real time  332
hong kong  326
recognition using  322
chinese academy  320
of computing  314
deep learning  314
information technology  313
pose estimation  299
with autism  297
in partial  296
the most  296
the requirements  292
object detection  292
new york  288
autism spectrum  285
fellow ieee  285
dx doi  284
neural network  283
of social  281
detection and  275
national university  272
machine learning  271
of philosophy  265
be addressed  261
student member  259
research article  256
large scale  255
of human  253
face detection  253
science university  252
in computer  248
convolutional neural  246
the proposed  245
key laboratory  244
vision and  243
arti cial  242
ca usa  242
learning for  240
as well  239
of these  237
individuals with  233
the author  231
of our  230
they are  227
of electronics  225
requirements for  224
engineering university  224
with asd  224
the netherlands  224
emotion recognition  224
associated with  223
feature extraction  223
information engineering  222
the art  220
for example  220
the original  220
we propose  218
signal processing  216
international conference  212
under the  211
recognition and  207
in face  206
volume article  206
that are  202
the image  201
research center  200
for facial  200
is not  200
psychology university  197
image processing  197
united kingdom  196
correspondence should  194
in social  194
face images  193
show that  192
for human  192
cial intelligence  191
action recognition  190
college london  190
information science  189
of electronic  188
there are  187
object recognition  187
in order  186
in autism  186
centre for  185
ef cient  184
using the  184
the human  184
electronics and  184
of automation  183
and communication  183
in which  181
published online  180
analysis and  178
image and  177
the rst  177
san diego  176
face processing  176
partial ful  175
the main  175
networks for  175
to this  175
for visual  174
in any  174
los angeles  174
ful llment  173
graduate school  171
of hong  170
computer and  169
rights reserved  169
research institute  168
human pose  168
the authors  168
recognition system  167
veri cation  167
for image  167
all rights  167
of faces  166
corresponding author  166
science department  165
max planck  164
volume issue  163
method for  163
and research  163
an open  162
microsoft research  162
children with  162
model for  161
united states  159
of advanced  158
eth zurich  158
the other  158
face and  157
electronic engineering  155
cite this  155
do not  155
are not  154
be used  154
the wild  153
of visual  152
we use  152
and social  151
network for  151
of images  151
creative commons  151
the data  150
technical university  150
used for  150
in asd  150
for person  149
in video  149
information and  149
of informatics  149
to cite  148
real world  147
for each  147
an image  147
planck institute  147
of emotion  147
of psychiatry  147
between the  146
features for  146
spectrum disorders  146
it has  145
the number  145
we present  144
and its  143
recognition with  143
approach for  142
in addition  142
the facial  142
massachusetts institute  142
spectrum disorder  141
ma usa  141
studies have  141
the amygdala  141
or not  140
pedestrian detection  140
of chinese  139
the chinese  139
ny usa  138
the performance  138
of medicine  138
intelligent systems  137
commons attribution  137
technology and  136
of pattern  136
to learn  135
stanford university  135
images and  134
of china  133
and applications  133
signi cant  133
for research  133
conference paper  133
the problem  132
to make  132
framework for  132
chinese university  132
national laboratory  132
based face  131
barcelona spain  131
information processing  131
research and  131
the use  130
we show  130
deep neural  130
learning and  130
to face  130
semantic segmentation  129
www frontiersin  129
frontiersin org  129
computer applications  128
of maryland  128
for object  128
in our  128
as conference  128
pa usa  127
the results  127
of mathematics  127
shanghai china  127
which are  126
over the  126
and recognition  126
at iclr  126
suggest that  124
is that  124
face image  124
for this  123
we are  123
object tracking  122
key words  122
eye tracking  122
for more  122
social cognition  122
the eyes  122
van gool  121
be inserted  121
does not  121
university china  121
face perception  121
for video  120
the following  120
the system  120
the creative  120
of image  120
of interest  120
the visual  120
models for  119
of texas  119
the first  119
accepted for  119
not the  119
system for  118
association for  118
human computer  118
that can  118
image retrieval  117
the ability  117
features and  116
the editor  116
national institute  116
faces and  116
is also  116
processing and  116
luc van  115
to recognize  115
university college  115
doi fpsyg  114
assistant professor  114
tsinghua university  114
access article  113
original research  113
for informatics  113
computing and  113
of research  113
the paper  112
in many  112
during the  112
this version  112
and their  112
provided the  111
for computational  111
in terms  111
and face  111
tokyo japan  110
eye gaze  110
sciences beijing  110
of autism  109
ieee transactions  109
the recognition  109
of data  109
de ned  109
the department  109
human face  109
support vector  109
college park  108
ku leuven  108
is used  108
engineering department  108
computational linguistics  108
of intelligent  108
teaching and  108
recognition based  108
an important  107
of southern  107
to improve  107
this study  107
of oxford  107
question answering  106
feature selection  106
classi ers  106
our method  106
article was  106
in psychology  106
science engineering  106
data and  106
in particular  106
available online  105
facial features  105
of each  105
of london  105
wang and  105
distribution and  105
https doi  105
where the  105
and video  104
whether they  104
the social  104
component analysis  104
accepted date  104
about the  104
we have  104
when the  104
the development  103
by using  103
southern california  103
to extract  103
indian institute  103
issn online  103
local binary  103
laboratory for  103
technical report  102
than the  102
found that  102
imperial college  102
correspondence tel  102
on image  102
robust face  102
zero shot  102
received date  102
al and  102
of new  101
of illinois  101
of all  101
the model  101
high dimensional  101
ai research  101
into the  101
high level  101
of sci  101
the work  100
the object  100
more information  100
come from  100
generative adversarial  100
in human  100
in real  100
and then  100
and image  100
discriminant analysis  100
people with  99
for multi  99
information about  99
we can  99
distributed under  99
original work  99
and tracking  99
or from  99
sparse representation  99
not only  99
the current  99
low resolution  99
published version  98
of singapore  98
for action  98
date accepted  98
we also  98
of amsterdam  98
spatio temporal  97
this material  97
mathematics and  97
university beijing  97
the scene  96
semi supervised  96
the training  96
any medium  96
however the  96
the last  96
the two  96
is multi  96
and facial  96
vision center  96
are the  96
on computer  96
north carolina  96
we will  95
urbana champaign  95
permits unrestricted  95
downloaded from  95
recognition from  95
the present  95
compared with  95
visual recognition  95
and cognitive  95
pose and  94
of emotional  94
and reproduction  94
may come  94
from public  94
machine vision  94
methods for  94
of applied  94
cognition and  94
california san  94
or private  93
and systems  93
for instance  93
low rank  93
which permits  93
the documents  93
in other  93
social interaction  93
academic editor  93
of washington  93
our approach  92
key lab  92
and electronic  92
medium provided  92
is properly  92
and dissemination  92
private research  92
research centers  92
representation for  92
california los  92
of michigan  92
psychology and  92
and human  92
metric learning  92
in both  91
multi disciplinary  91
disciplinary open  91
rchive for  91
the deposit  91
deposit and  91
research documents  91
documents whether  91
are pub  91
documents may  91
research institutions  91
in france  91
archive ouverte  91
ouverte pluridisciplinaire  91
pluridisciplinaire hal  91
hal est  91
la diffusion  91
de documents  91
de niveau  91
niveau recherche  91
recherche publi  91
ou non  91
recherche fran  91
des laboratoires  91
ou priv  91
and pattern  91
and machine  91
chapel hill  91
fine grained  90
uc berkeley  90
all the  90
training data  90
article distributed  90
facial feature  90
thesis submitted  90
within the  90
communication engineering  89
of people  89
that this  89
use distribution  89
the journal  89
please contact  89
to detect  89
rather than  89
image analysis  89
latex class  89
software engineering  88
of central  88
on face  88
robotics institute  88
in videos  88
expression analysis  88
image classi  88
facial emotion  88
emotional expressions  88
visual question  88
weakly supervised  88
head pose  88
class files  88
the eye  88
detection using  87
the second  87
unrestricted use  87
of different  87
the best  87
experimental results  87
been accepted  87
signi cantly  87
for all  87
of latex  87
artificial intelligence  86
to identify  86
michigan state  86
at http  86
of toronto  86
ef icient  86
of their  86
of cse  86
multi view  85
the role  85
zhang and  85
from video  85
queen mary  85
automation chinese  85
vol issue  85
eurasip journal  85
duke university  85
social and  85
suggests that  85
to faces  85
the past  85
and other  85
and that  84
tel fax  84
analysis for  84
use the  84
university usa  84
is one  84
was supported  84
of any  84
in children  84
face alignment  83
engineering the  83
pattern analysis  83
the task  83
files vol  83
multi task  82
state key  82
systems and  82
of machine  82
face veri  82
of objects  82
the neural  82
the input  82
natural language  81
to achieve  81
the feature  81
attribution license  81
georgia institute  81
to obtain  81
this research  81
but not  81
computer sciences  80
recognition systems  80
and are  80
the full  80
jiaotong university  80
is available  80
tracking and  80
al this  80
jiao tong  80
appearance based  80
learning with  80
the presence  80
in section  80
of brain  80
images are  80
of deep  79
domain adaptation  79
properly cited  79
shanghai jiao  79
and computing  79
to social  79
nd the  79
nanjing university  79
speech and  79
under review  79
de lausanne  79
computer interaction  79
supervised learning  78
chen and  78
was submitted  78
tong university  78
images with  78
shown that  78
sciences and  78
perception and  78
transfer learning  77
speci cally  77
proposed method  77
mary university  77
facial action  77
engineering science  77
in figure  77
brain and  77
de barcelona  76
engineering research  76
the effect  76
archives ouvertes  76
vision group  76
partial fulfillment  76
from single  76
information sciences  76
of surrey  76
and signal  76
of object  76
ieee international  76
for intelligent  76
technological university  76
the target  76
dimensionality reduction  76
received april  76
asd and  76
improve the  76
the brain  76
shuicheng yan  75
id pages  75
for robust  75
re identification  75
of others  75
noname manuscript  75
and control  75
these results  75
but also  75
human faces  75
the user  75
paris france  75
authors and  75
among the  74
the state  74
learning based  74
york university  74
dissertation submitted  74
model based  74
which the  74
issn print  74
technische universit  74
machine intelligence  74
to have  74
age estimation  74
to whom  74
to end  74
cation and  74
the right  74
deep convolutional  73
central florida  73
the images  73
to address  73
more than  73
may not  73
and pose  73
adversarial networks  73
dictionary learning  73
bernt schiele  73
this journal  73
to solve  73
which can  73
to image  73
if the  73
the effects  73
of ece  73
california berkeley  73
berlin germany  73
for semantic  72
multi modal  72
dataset for  72
cornell university  72
vision laboratory  72
the study  72
the mouth  72
features are  72
the accuracy  72
li and  72
springer science  71
of use  71
if you  71
the public  71
at urbana  71
is more  71
this problem  71
sagepub com  71
australian national  71
in facial  71
peking university  71
the context  71
principal component  71
demonstrate that  71
lausanne switzerland  71
it can  71
of wisconsin  71
magnetic resonance  71
seoul korea  71
science business  70
business media  70
multi scale  70
information systems  70
low level  70
xiaogang wang  70
in contrast  70
based methods  70
research group  70
no august  70
to facial  70
with high  70
in individuals  70
super resolution  70
received july  70
optical flow  70
for any  70
de cits  70
singapore singapore  70
for publication  70
or other  69
we found  69
vision lab  69
been proposed  69
of features  69
fran ais  69
autism research  69
of software  69
nanyang technological  69
liu and  69
gaze direction  69
whom correspondence  69
adults with  69
eye contact  69
resonance imaging  69
of north  69
learning from  69
to publication  68
single image  68
invariant face  68
activity recognition  68
stefanos zafeiriou  68
intelligent information  68
specialty section  68
or the  68
based image  68
to their  68
in image  68
taipei taiwan  68
target tracking  68
engineering college  68
not been  68
for computer  67
tx usa  67
data set  67
electronic and  67
disorder asd  67
facial landmark  67
adobe research  67
to its  67
typically developing  67
of pennsylvania  67
zurich switzerland  67
dr ing  67
high resolution  67
has not  67
maryland college  66
publishing corporation  66
of training  66
accepted june  66
of doctor  66
of eye  66
information from  66
automatic face  66
ecole polytechnique  66
the video  66
binary pattern  66
model and  66
in their  66
received may  66
been shown  66
social interactions  66
in revised  66
revised form  66
montr eal  65
algorithm for  65
is often  65
hindawi publishing  65
ground truth  65
of cognitive  65
shot learning  65
for large  65
recent years  65
double blind  65
with respect  65
expression and  65
have shown  65
karlsruhe germany  65
on their  65
the research  65
columbia university  65
associate professor  65
facial images  64
both the  64
dif cult  64
human action  64
technology cas  64
video surveillance  64
received december  64
the world  64
national taiwan  64
recognition under  64
intelligence and  64
video based  64
multi target  64
and applied  64
detection with  63
autism and  63
this document  63
believe that  63
human detection  63
and more  63
university shanghai  63
personal use  63
wa usa  63
cation using  63
and intelligent  63
ne grained  63
on pattern  63
applied sciences  63
while the  63
idiap research  63
extracted from  63
cation with  63
the dataset  63
received march  63
received june  62
multi object  62
de montr  62
of experimental  62
of multiple  62
sciences university  62
nearest neighbor  62
engineering national  62
taiwan university  62
the goal  62
we used  62
representations for  62
based approach  62
data driven  62
the computer  62
to reduce  62
vector machine  62
feature based  62
june accepted  61
at www  61
computer graphics  61
of tokyo  61
international joint  61
objects and  61
images for  61
large number  61
shiguang shan  61
shaogang gong  61
received october  61
an object  61
this thesis  61
are more  61
communication and  61
with deep  61
recognition has  61
the appearance  61
accepted march  61
of two  61
and emotion  61
human robot  61
as follows  61
california institute  61
of computational  61
that they  60
peer reviewed  60
words and  60
the shape  60
in each  60
th international  60
is still  60
using deep  60
and electrical  60
emotional facial  60
of its  60
showed that  60
ann arbor  60
these methods  60
are used  60
stony brook  60
supplementary material  60
illumination and  60
received january  60
such that  60
linear discriminant  60
subspace clustering  59
to determine  59
il usa  59
published october  59
entific research  59
manant des  59
des tablissements  59
tablissements enseignement  59
ou trangers  59
trangers des  59
material for  59
joint conference  59
wide range  59
nanjing china  59
normal university  59
for learning  59
for real  59
visual information  59
that our  59