| bigram | count |
| --- | --- |
| of the | 5868 |
| in the | 3372 |
| computer science | 3295 |
| of computer | 2974 |
| face recognition | 2484 |
| science and | 1911 |
| to the | 1836 |
| member ieee | 1608 |
| of technology | 1578 |
| for the | 1515 |
| on the | 1419 |
| computer vision | 1053 |
| and the | 1052 |
| of electrical | 1020 |
| by the | 945 |
| and computer | 928 |
| in this | 919 |
| and technology | 910 |
| from the | 894 |
| classi cation | 813 |
| has been | 798 |
| facial expression | 794 |
| with the | 787 |
| of science | 764 |
| computer engineering | 752 |
| of engineering | 741 |
| identi cation | 728 |
| electrical engineering | 721 |
| international journal | 706 |
| of information | 690 |
| that the | 687 |
| for face | 685 |
| beijing china | 678 |
| and engineering | 674 |
| open access | 666 |
| center for | 661 |
| engineering and | 658 |
| this paper | 653 |
| of this | 646 |
| is the | 603 |
| institute for | 597 |
| electrical and | 572 |
| facial expressions | 571 |
| the university | 565 |
| of california | 564 |
| neural networks | 559 |
| of psychology | 558 |
| http www | 553 |
| have been | 543 |
| the degree | 497 |
| of face | 491 |
| this article | 484 |
| state university | 479 |
| the face | 476 |
| carnegie mellon | 465 |
| doi org | 464 |
| the same | 463 |
| this work | 459 |
| mellon university | 459 |
| and information | 443 |
| real time | 439 |
| pattern recognition | 439 |
| of facial | 435 |
| senior member | 433 |
| at the | 428 |
| of sciences | 422 |
| re identi | 418 |
| as the | 415 |
| object detection | 412 |
| hong kong | 404 |
| recognition using | 403 |
| deep learning | 397 |
| expression recognition | 395 |
| chinese academy | 394 |
| ieee and | 393 |
| information technology | 392 |
| neural network | 377 |
| the requirements | 367 |
| new york | 365 |
| in partial | 364 |
| the most | 364 |
| detection and | 363 |
| of computing | 362 |
| fellow ieee | 343 |
| national university | 342 |
| convolutional neural | 336 |
| dx doi | 333 |
| face detection | 329 |
| machine learning | 327 |
| of philosophy | 322 |
| the proposed | 318 |
| pose estimation | 315 |
| student member | 311 |
| ca usa | 306 |
| of human | 306 |
| of electronics | 303 |
| large scale | 302 |
| learning for | 301 |
| vision and | 301 |
| be addressed | 300 |
| in computer | 300 |
| research article | 299 |
| of social | 298 |
| with autism | 298 |
| science university | 294 |
| they are | 291 |
| engineering university | 291 |
| as well | 290 |
| signal processing | 289 |
| autism spectrum | 287 |
| arti cial | 285 |
| of these | 285 |
| key laboratory | 285 |
| the netherlands | 283 |
| the author | 280 |
| information engineering | 277 |
| for example | 276 |
| the original | 275 |
| international conference | 275 |
| feature extraction | 275 |
| of our | 273 |
| the art | 269 |
| image processing | 269 |
| recognition and | 267 |
| the image | 265 |
| requirements for | 265 |
| in face | 259 |
| we propose | 258 |
| under the | 257 |
| is not | 254 |
| object recognition | 254 |
| face images | 253 |
| research center | 249 |
| associated with | 246 |
| show that | 244 |
| centre for | 243 |
| image and | 243 |
| in order | 240 |
| electronics and | 239 |
| emotion recognition | 239 |
| there are | 239 |
| volume article | 238 |
| and communication | 237 |
| that are | 235 |
| veri cation | 235 |
| individuals with | 235 |
| of electronic | 233 |
| information science | 229 |
| correspondence should | 229 |
| cial intelligence | 227 |
| of automation | 226 |
| using the | 225 |
| in any | 224 |
| with asd | 224 |
| pedestrian detection | 222 |
| the human | 222 |
| for human | 222 |
| corresponding author | 221 |
| ef cient | 221 |
| analysis and | 219 |
| partial ful | 219 |
| ful llment | 217 |
| united kingdom | 217 |
| for facial | 216 |
| the main | 214 |
| the authors | 213 |
| and research | 213 |
| computer and | 213 |
| los angeles | 213 |
| the rst | 212 |
| psychology university | 211 |
| to this | 211 |
| published online | 210 |
| for image | 210 |
| graduate school | 209 |
| science department | 209 |
| recognition system | 209 |
| in which | 207 |
| networks for | 206 |
| college london | 206 |
| all rights | 205 |
| be used | 205 |
| rights reserved | 204 |
| for visual | 202 |
| method for | 201 |
| microsoft research | 200 |
| of faces | 200 |
| san diego | 199 |
| based face | 198 |
| in social | 198 |
| of hong | 198 |
| volume issue | 198 |
| cite this | 197 |
| model for | 197 |
| an open | 196 |
| used for | 196 |
| research institute | 195 |
| information and | 195 |
| united states | 193 |
| action recognition | 193 |
| network for | 192 |
| it has | 192 |
| face processing | 192 |
| semantic segmentation | 190 |
| the other | 189 |
| for each | 188 |
| of visual | 188 |
| electronic engineering | 187 |
| to cite | 187 |
| an image | 187 |
| in autism | 186 |
| we present | 185 |
| creative commons | 185 |
| are not | 185 |
| in video | 184 |
| for object | 184 |
| do not | 184 |
| or not | 182 |
| of images | 182 |
| features for | 182 |
| approach for | 181 |
| framework for | 181 |
| real world | 179 |
| max planck | 179 |
| the wild | 178 |
| face and | 178 |
| we use | 178 |
| the performance | 178 |
| technical university | 177 |
| images and | 177 |
| the data | 176 |
| deep neural | 176 |
| the number | 175 |
| of advanced | 174 |
| and recognition | 174 |
| human pose | 174 |
| technology and | 174 |
| and its | 172 |
| recognition with | 172 |
| eth zurich | 172 |
| of informatics | 171 |
| in addition | 170 |
| massachusetts institute | 170 |
| stanford university | 169 |
| between the | 169 |
| signi cant | 168 |
| the chinese | 167 |
| the use | 166 |
| for person | 166 |
| of china | 165 |
| commons attribution | 164 |
| face image | 162 |
| planck institute | 162 |
| for more | 162 |
| the results | 162 |
| children with | 162 |
| tsinghua university | 161 |
| learning and | 160 |
| of pattern | 160 |
| not the | 160 |
| key words | 160 |
| university china | 160 |
| image retrieval | 159 |
| of chinese | 159 |
| computer applications | 159 |
| for this | 158 |
| and social | 158 |
| does not | 158 |
| of medicine | 158 |
| research and | 158 |
| information processing | 157 |
| the problem | 157 |
| chinese university | 157 |
| intelligent systems | 157 |
| studies have | 156 |
| object tracking | 156 |
| and applications | 155 |
| of psychiatry | 155 |
| to make | 155 |
| we show | 154 |
| to face | 154 |
| of image | 154 |
| component analysis | 154 |
| the paper | 154 |
| is that | 153 |
| engineering department | 153 |
| national laboratory | 152 |
| of emotion | 152 |
| ny usa | 152 |
| human face | 151 |
| the ability | 151 |
| which are | 151 |
| this version | 151 |
| of maryland | 151 |
| be inserted | 151 |
| in asd | 151 |
| features and | 150 |
| conference paper | 150 |
| that can | 150 |
| the system | 149 |
| over the | 149 |
| accepted for | 149 |
| in our | 149 |
| the creative | 148 |
| we are | 148 |
| of mathematics | 148 |
| assistant professor | 147 |
| the facial | 147 |
| spectrum disorders | 146 |
| teaching and | 146 |
| support vector | 146 |
| barcelona spain | 145 |
| ma usa | 145 |
| on image | 145 |
| processing and | 145 |
| the editor | 145 |
| the amygdala | 145 |
| national institute | 144 |
| as conference | 144 |
| at iclr | 143 |
| for research | 143 |
| the visual | 143 |
| access article | 142 |
| spectrum disorder | 142 |
| and video | 142 |
| to learn | 141 |
| in many | 141 |
| the following | 141 |
| models for | 141 |
| shanghai china | 140 |
| whether they | 140 |
| pa usa | 140 |
| the first | 139 |
| www frontiersin | 139 |
| frontiersin org | 139 |
| ieee transactions | 139 |
| classi ers | 139 |
| of research | 138 |
| de ned | 138 |
| system for | 138 |
| by using | 138 |
| correspondence tel | 138 |
| suggest that | 137 |
| of sci | 137 |
| provided the | 136 |
| faces and | 136 |
| and image | 136 |
| of texas | 136 |
| come from | 136 |
| is also | 136 |
| distribution and | 135 |
| van gool | 135 |
| to improve | 135 |
| or from | 135 |
| in terms | 135 |
| of interest | 135 |
| is used | 135 |
| to recognize | 135 |
| feature selection | 135 |
| is multi | 134 |
| for video | 134 |
| of each | 133 |
| and their | 133 |
| indian institute | 133 |
| ku leuven | 132 |
| from public | 132 |
| wang and | 132 |
| into the | 132 |
| face perception | 132 |
| of intelligent | 131 |
| science engineering | 131 |
| or private | 131 |
| the recognition | 131 |
| more information | 131 |
| of new | 131 |
| sciences beijing | 130 |
| human computer | 130 |
| local binary | 130 |
| may come | 130 |
| of applied | 130 |
| eye tracking | 130 |
| available online | 130 |
| the documents | 129 |
| communication engineering | 129 |
| where the | 129 |
| university college | 129 |
| ai research | 128 |
| facial features | 128 |
| and dissemination | 128 |
| documents may | 128 |
| private research | 128 |
| research centers | 128 |
| and face | 128 |
| university beijing | 128 |
| the two | 128 |
| luc van | 127 |
| recognition based | 127 |
| we have | 127 |
| when the | 127 |
| methods for | 127 |
| multi disciplinary | 127 |
| disciplinary open | 127 |
| rchive for | 127 |
| the deposit | 127 |
| deposit and | 127 |
| research documents | 127 |
| documents whether | 127 |
| are pub | 127 |
| research institutions | 127 |
| in france | 127 |
| archive ouverte | 127 |
| ouverte pluridisciplinaire | 127 |
| pluridisciplinaire hal | 127 |
| hal est | 127 |
| la diffusion | 127 |
| de documents | 127 |
| de niveau | 127 |
| niveau recherche | 127 |
| recherche publi | 127 |
| ou non | 127 |
| recherche fran | 127 |
| des laboratoires | 127 |
| ou priv | 127 |
| of data | 127 |
| this material | 127 |
| during the | 126 |
| https doi | 126 |
| college park | 126 |
| laboratory for | 126 |
| of oxford | 125 |
| the last | 125 |
| not only | 125 |
| accepted date | 125 |
| uc berkeley | 125 |
| the eyes | 124 |
| for informatics | 124 |
| association for | 124 |
| however the | 124 |
| are the | 124 |
| machine vision | 124 |
| please contact | 124 |
| social cognition | 124 |
| our method | 124 |
| tokyo japan | 124 |
| distributed under | 123 |
| original work | 123 |
| data and | 123 |
| in particular | 123 |
| the department | 123 |
| received date | 123 |
| of all | 123 |
| we can | 123 |
| an important | 123 |
| original research | 122 |
| for computational | 122 |
| and tracking | 122 |
| semi supervised | 122 |
| and then | 122 |
| any medium | 121 |
| the development | 121 |
| the training | 121 |
| for multi | 121 |
| and machine | 121 |
| zero shot | 121 |
| and reproduction | 120 |
| doi fpsyg | 120 |
| on computer | 120 |
| academic editor | 120 |
| eurasip journal | 120 |
| of any | 120 |
| high level | 119 |
| issn online | 119 |
| of london | 119 |
| of southern | 119 |
| in real | 119 |
| weakly supervised | 118 |
| about the | 118 |
| in human | 118 |
| in other | 118 |
| which permits | 117 |
| permits unrestricted | 117 |
| medium provided | 117 |
| ef icient | 117 |
| computing and | 117 |
| to extract | 117 |
| this study | 117 |
| than the | 117 |
| robust face | 117 |
| detection using | 117 |
| published version | 117 |
| we also | 116 |
| found that | 116 |
| article was | 116 |
| in psychology | 116 |
| computational linguistics | 116 |
| use distribution | 116 |
| downloaded from | 116 |
| the current | 116 |
| information about | 116 |
| date accepted | 116 |
| technical report | 116 |
| the object | 116 |
| thesis submitted | 116 |
| is properly | 115 |
| the model | 115 |
| mathematics and | 115 |
| high dimensional | 115 |
| facial feature | 115 |
| that this | 115 |
| and systems | 115 |
| of amsterdam | 115 |
| article distributed | 114 |
| of illinois | 114 |
| software engineering | 114 |
| robotics institute | 114 |
| the past | 113 |
| to detect | 113 |
| the work | 113 |
| artificial intelligence | 113 |
| in section | 113 |
| of different | 113 |
| for instance | 113 |
| california san | 113 |
| the best | 113 |
| the present | 112 |
| the social | 112 |
| imperial college | 112 |
| principal component | 112 |
| to identify | 112 |
| of michigan | 112 |
| sparse representation | 111 |
| signi cantly | 111 |
| of singapore | 111 |
| low resolution | 111 |
| been accepted | 111 |
| southern california | 111 |
| and pattern | 111 |
| eye gaze | 110 |
| our approach | 110 |
| al and | 110 |
| archives ouvertes | 110 |
| generative adversarial | 110 |
| recognition from | 110 |
| of washington | 110 |
| discriminant analysis | 110 |
| on face | 110 |
| unrestricted use | 109 |
| key lab | 109 |
| the second | 109 |
| at http | 109 |
| and cognitive | 109 |
| of autism | 109 |
| vol issue | 109 |
| we will | 108 |
| question answering | 108 |
| of their | 108 |
| tel fax | 108 |
| image classi | 108 |
| experimental results | 108 |
| pose and | 107 |
| visual recognition | 107 |
| georgia institute | 107 |
| speech and | 107 |
| spatio temporal | 107 |
| all the | 107 |
| image analysis | 107 |
| is one | 107 |
| and that | 106 |
| appearance based | 106 |
| of people | 106 |
| vision center | 106 |
| and electronic | 106 |
| in both | 106 |
| proposed method | 106 |
| for all | 106 |
| the scene | 105 |
| multi task | 105 |
| training data | 105 |
| improve the | 105 |
| fran ais | 105 |
| rather than | 105 |
| pattern analysis | 105 |
| of surrey | 105 |
| of toronto | 104 |
| jiaotong university | 104 |
| compared with | 104 |
| deep convolutional | 104 |
| tracking and | 104 |
| systems and | 104 |
| the state | 104 |
| zhang and | 103 |
| automation chinese | 103 |
| but also | 103 |
| of deep | 103 |
| metric learning | 103 |
| within the | 103 |
| recognition systems | 103 |
| and other | 103 |
| north carolina | 102 |
| and facial | 102 |
| multi view | 102 |
| optical flow | 102 |
| and human | 102 |
| urbana champaign | 102 |
| multi modal | 102 |
| california los | 102 |
| cognition and | 102 |
| latex class | 101 |
| cation and | 101 |
| of cse | 101 |
| the full | 101 |
| and signal | 101 |
| state key | 101 |
| university usa | 101 |
| class files | 100 |
| representation for | 100 |
| the images | 100 |
| computer sciences | 100 |
| and are | 100 |
| may not | 100 |
| to obtain | 99 |
| de lausanne | 99 |
| psychology and | 99 |
| of latex | 99 |
| multi scale | 99 |
| features are | 99 |
| the journal | 98 |
| australian national | 98 |
| for semantic | 98 |
| the task | 98 |
| supervised learning | 98 |
| learning with | 98 |
| low rank | 98 |
| use the | 97 |
| of central | 97 |
| partial fulfillment | 97 |
| and computing | 97 |
| to end | 97 |
| queen mary | 97 |
| id pages | 97 |
| from single | 96 |
| the feature | 96 |
| chapel hill | 96 |
| natural language | 96 |
| engineering the | 96 |
| images are | 96 |
| speci cally | 96 |
| noname manuscript | 96 |
| shown that | 96 |
| in videos | 96 |
| the neural | 96 |
| head pose | 96 |
| of ece | 96 |
| the research | 96 |
| for action | 96 |
| re identification | 96 |
| if the | 95 |
| york university | 95 |
| the eye | 95 |
| face alignment | 95 |
| perception and | 95 |
| of software | 95 |
| machine intelligence | 95 |
| authors and | 95 |
| paris france | 95 |
| sagepub com | 95 |
| face veri | 95 |
| duke university | 95 |
| properly cited | 94 |
| to achieve | 94 |
| of objects | 94 |
| human faces | 94 |
| images with | 94 |
| files vol | 94 |
| of brain | 94 |
| vision group | 94 |
| model based | 94 |
| fine grained | 94 |
| al this | 94 |
| attribution license | 93 |
| engineering science | 93 |
| of emotional | 93 |
| the input | 93 |
| transfer learning | 93 |
| facial emotion | 93 |
| information sciences | 93 |
| was supported | 93 |
| for intelligent | 93 |
| analysis for | 93 |
| suggests that | 93 |
| nd the | 93 |
| or other | 93 |
| social interaction | 93 |
| the presence | 92 |
| learning based | 92 |
| vision laboratory | 92 |
| the role | 91 |
| using deep | 91 |
| nanjing university | 91 |
| the effect | 91 |
| of object | 91 |
| dictionary learning | 91 |
| algorithm for | 90 |
| to solve | 90 |
| visual question | 90 |
| ieee international | 90 |
| to image | 90 |
| in image | 90 |
| technological university | 90 |
| chen and | 90 |
| binary pattern | 89 |
| entific research | 89 |
| manant des | 89 |
| des tablissements | 89 |
| tablissements enseignement | 89 |
| ou trangers | 89 |
| trangers des | 89 |
| for any | 89 |
| and control | 89 |
| jiao tong | 89 |
| among the | 89 |
| the world | 89 |
| expression analysis | 89 |
| if you | 89 |
| vision lab | 89 |
| based methods | 89 |
| recent years | 89 |
| seoul korea | 88 |
| the accuracy | 88 |
| target tracking | 88 |
| of two | 88 |
| mary university | 88 |
| shanghai jiao | 88 |
| dissertation submitted | 88 |
| under review | 88 |
| information systems | 88 |
| sciences and | 88 |
| of machine | 88 |
| computer interaction | 88 |
| the public | 88 |
| which can | 88 |
| the right | 88 |
| this research | 87 |
| the target | 87 |
| de barcelona | 87 |
| michigan state | 87 |
| liu and | 87 |
| for robust | 87 |
| tong university | 87 |
| it can | 87 |
| engineering research | 87 |
| people with | 87 |
| to social | 87 |
| issn print | 87 |
| peking university | 87 |
| the brain | 87 |
| is more | 86 |
| to whom | 86 |
| brain and | 86 |
| to address | 86 |
| been proposed | 86 |
| this thesis | 86 |
| to faces | 86 |
| demonstrate that | 86 |
| domain adaptation | 86 |
| nanyang technological | 86 |
| based image | 86 |
| feature based | 86 |
| this problem | 85 |
| was submitted | 85 |
| lausanne switzerland | 85 |
| of use | 85 |
| is based | 85 |
| social and | 85 |
| super resolution | 85 |
| dimensionality reduction | 85 |
| or the | 85 |
| publishing corporation | 85 |
| is available | 85 |
| the context | 85 |
| learning from | 85 |
| california berkeley | 84 |
| the user | 84 |
| ecole polytechnique | 84 |
| from video | 84 |
| on pattern | 84 |
| for computer | 84 |
| model and | 84 |
| received april | 84 |
| more than | 84 |
| hindawi publishing | 84 |
| dataset for | 84 |
| in contrast | 84 |
| the study | 84 |
| both the | 84 |
| video based | 84 |
| research portal | 84 |
| received may | 83 |
| of training | 83 |
| which the | 83 |
| to their | 83 |
| and pose | 83 |
| of wisconsin | 83 |
| adversarial networks | 83 |
| for publication | 83 |
| li and | 83 |
| in children | 83 |
| recognition has | 83 |
| video surveillance | 82 |
| in figure | 82 |
| but not | 82 |
| in images | 82 |
| emotional expressions | 82 |
| and applied | 82 |
| maryland college | 82 |
| high resolution | 82 |
| multi target | 82 |
| bernt schiele | 82 |
| facial action | 82 |
| technische universit | 82 |
| vector machine | 82 |
| these results | 82 |
| invariant face | 82 |
| detection with | 82 |
| to publication | 82 |
| low level | 81 |
| the computer | 81 |
| whom correspondence | 81 |
| received july | 81 |
| cornell university | 81 |
| electronic and | 81 |
| in facial | 81 |
| that our | 81 |
| received march | 81 |
| the effects | 81 |
| human detection | 81 |
| nearest neighbor | 81 |
| of features | 81 |
| of its | 81 |
| of pennsylvania | 81 |
| ground truth | 80 |
| automatic face | 80 |
| to have | 80 |
| vision based | 80 |
| google brain | 80 |
| single image | 80 |
| springer science | 80 |
| is still | 80 |
| no august | 80 |
| an object | 80 |
| magnetic resonance | 80 |
| resonance imaging | 80 |
| to its | 80 |
| intelligent information | 79 |
| central florida | 79 |
| received october | 79 |
| science business | 79 |
| business media | 79 |
| with deep | 79 |
| engineering college | 79 |
| adobe research | 79 |
| are used | 79 |
| to use | 79 |
| zhejiang university | 79 |
| shuicheng yan | 78 |
| images from | 78 |
| research group | 78 |
| the mouth | 78 |
| in fig | 78 |
| age estimation | 78 |
| for real | 78 |
| this and | 78 |
| received december | 78 |
| notre dame | 78 |
| berlin germany | 78 |
| audio visual | 78 |
| received june | 77 |
| this document | 77 |
| with high | 77 |
| in revised | 77 |
| revised form | 77 |
| applied sciences | 77 |
| at urbana | 77 |
| received january | 77 |
| facial images | 77 |
| are more | 77 |
| xiaogang wang | 77 |
| the goal | 77 |
| and electrical | 77 |
| this journal | 77 |
| dif cult | 76 |
| the high | 76 |
| city university | 76 |
| as follows | 76 |
| local features | 76 |
| computer graphics | 76 |
| access books | 76 |
| image based | 76 |
| eye contact | 76 |
| asd and | 76 |
| with respect | 76 |
| material for | 76 |
| singapore singapore | 76 |
| extraction and | 76 |
| in recent | 76 |
| of north | 75 |
| of doctor | 75 |
| stefanos zafeiriou | 75 |
| engineering national | 75 |
| la jolla | 75 |
| received september | 75 |
| have shown | 75 |
| follow this | 75 |
| intelligence and | 75 |
| at www | 75 |
| while the | 75 |
| for large | 75 |
| sophia antipolis | 75 |
| in their | 75 |
| columbia university | 75 |
| data set | 75 |
| in each | 75 |
| dr ing | 75 |
| personal use | 75 |
| facial landmark | 74 |
| https hal | 74 |
| june accepted | 74 |
| of computational | 74 |
| and additional | 74 |
| additional works | 74 |
| and open | 74 |
| electrical computer | 74 |
| accepted june | 74 |
| of others | 74 |
| extracted from | 74 |
| along with | 74 |
| large number | 74 |
| been shown | 74 |
| believe that | 74 |
| shot learning | 74 |
| shiguang shan | 73 |
| image segmentation | 73 |
| we found | 73 |
| wide range | 73 |
| and more | 73 |
| th international | 73 |
| international joint | 73 |
| taipei taiwan | 73 |
| and telecommunications | 73 |
| cordelia schmid | 73 |
| the content | 73 |
| facial image | 73 |
| we used | 73 |
| information from | 73 |
| for personal | 73 |
| the appearance | 73 |
| karlsruhe germany | 73 |
| research laboratory | 73 |
| of multiple | 73 |
| stony brook | 73 |
| and object | 73 |
| normal university | 73 |
| md usa | 73 |
| peer reviewed | 73 |
| supplementary material | 72 |
| idiap research | 72 |
| of cognitive | 72 |
| specialty section | 72 |
| joint conference | 72 |
| california institute | 72 |
| for learning | 72 |
| is often | 72 |
| accepted march | 72 |
| of education | 72 |
| of statistics | 72 |
| accepted july | 72 |
| for vision | 72 |
| activity recognition | 72 |
| the local | 72 |
| has not | 72 |
| johns hopkins | 72 |
| hopkins university | 72 |
| of korea | 72 |
| not been | 72 |
| an china | 72 |
| video processing | 72 |
| based approach | 71 |
| image captioning | 71 |
| university shanghai | 71 |
| detection for | 71 |
| recognition for | 71 |
| to facial | 71 |
| accepted may | 71 |
| wa usa | 71 |
| remote sensing | 71 |
| multi object | 71 |
| of neural | 71 |
| professor department | 71 |
| in visual | 71 |
| cation using | 71 |
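The rows above read as bigram frequency counts over a document corpus, listed in descending order. A table in this shape can be produced with a short script; the sketch below is an assumption about the pipeline, not the original one — the corpus string, lowercasing, and token pattern are all illustrative choices.

```python
# Minimal sketch: count adjacent word pairs (bigrams) in a text and
# render the most common ones as markdown-style table rows.
import re
from collections import Counter


def bigram_counts(text: str) -> Counter:
    """Lowercase the text, keep alphanumeric tokens, count adjacent pairs."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return Counter(zip(tokens, tokens[1:]))


def to_table(counts: Counter, top: int = 10) -> str:
    """Render the `top` most common bigrams as `| a b | n |` rows."""
    rows = [f"| {a} {b} | {n} |" for (a, b), n in counts.most_common(top)]
    return "\n".join(rows)


if __name__ == "__main__":
    # Stand-in corpus; in practice this would be the concatenated plain
    # text of the source documents.
    corpus = "the face of the model and the face of the data"
    print(to_table(bigram_counts(corpus)))
```

Note that ligature-damaged entries such as "classi cation" suggest the underlying text came from PDF extraction, where "fi"/"ff" ligatures are often dropped; a real pipeline would normalize those before counting.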