| bigram | count |
| --- | --- |
| of the | 5838 |
| in the | 3346 |
| computer science | 3021 |
| of computer | 2685 |
| face recognition | 2335 |
| to the | 1831 |
| science and | 1659 |
| for the | 1510 |
| of technology | 1434 |
| on the | 1406 |
| and the | 1048 |
| computer vision | 1028 |
| of electrical | 988 |
| by the | 940 |
| in this | 912 |
| and computer | 901 |
| from the | 892 |
| with the | 781 |
| has been | 775 |
| classi cation | 771 |
| member ieee | 770 |
| facial expression | 766 |
| and technology | 765 |
| computer engineering | 722 |
| electrical engineering | 699 |
| that the | 687 |
| identi cation | 685 |
| of science | 673 |
| open access | 659 |
| this paper | 650 |
| of engineering | 639 |
| center for | 630 |
| for face | 628 |
| of this | 626 |
| of information | 614 |
| and engineering | 602 |
| engineering and | 600 |
| is the | 598 |
| institute for | 583 |
| facial expressions | 565 |
| of california | 558 |
| electrical and | 556 |
| of psychology | 550 |
| the university | 546 |
| international journal | 545 |
| have been | 539 |
| neural networks | 535 |
| http www | 527 |
| the degree | 497 |
| beijing china | 489 |
| of face | 487 |
| the face | 476 |
| this article | 466 |
| the same | 463 |
| this work | 459 |
| state university | 458 |
| carnegie mellon | 458 |
| mellon university | 452 |
| doi org | 450 |
| real time | 429 |
| at the | 428 |
| of facial | 426 |
| pattern recognition | 417 |
| as the | 413 |
| and information | 402 |
| object detection | 384 |
| deep learning | 381 |
| re identi | 379 |
| expression recognition | 375 |
| the requirements | 367 |
| of sciences | 366 |
| in partial | 364 |
| recognition using | 363 |
| the most | 360 |
| hong kong | 359 |
| new york | 356 |
| neural network | 354 |
| detection and | 347 |
| of computing | 335 |
| senior member | 334 |
| chinese academy | 331 |
| information technology | 330 |
| of philosophy | 322 |
| machine learning | 320 |
| dx doi | 319 |
| convolutional neural | 319 |
| the proposed | 317 |
| face detection | 312 |
| national university | 309 |
| of human | 302 |
| be addressed | 300 |
| pose estimation | 299 |
| with autism | 298 |
| ca usa | 297 |
| of social | 297 |
| vision and | 293 |
| they are | 291 |
| as well | 289 |
| science university | 288 |
| autism spectrum | 287 |
| in computer | 286 |
| of these | 285 |
| large scale | 283 |
| the netherlands | 281 |
| arti cial | 279 |
| learning for | 277 |
| research article | 277 |
| for example | 276 |
| the original | 275 |
| the author | 275 |
| of our | 273 |
| signal processing | 273 |
| engineering university | 272 |
| the art | 268 |
| international conference | 266 |
| requirements for | 265 |
| the image | 261 |
| feature extraction | 261 |
| we propose | 258 |
| of electronics | 258 |
| recognition and | 258 |
| under the | 257 |
| in face | 257 |
| is not | 254 |
| object recognition | 249 |
| associated with | 246 |
| face images | 246 |
| show that | 243 |
| information engineering | 242 |
| image and | 240 |
| in order | 239 |
| there are | 239 |
| centre for | 238 |
| volume article | 238 |
| image processing | 235 |
| individuals with | 235 |
| that are | 233 |
| research center | 231 |
| key laboratory | 231 |
| correspondence should | 229 |
| emotion recognition | 228 |
| using the | 225 |
| fellow ieee | 224 |
| veri cation | 224 |
| with asd | 224 |
| in any | 223 |
| cial intelligence | 222 |
| the human | 221 |
| partial ful | 219 |
| corresponding author | 218 |
| ful llment | 217 |
| united kingdom | 215 |
| the authors | 213 |
| the main | 213 |
| los angeles | 213 |
| the rst | 212 |
| information science | 211 |
| for human | 211 |
| psychology university | 210 |
| ef cient | 210 |
| electronics and | 208 |
| and communication | 208 |
| to this | 207 |
| published online | 207 |
| for facial | 207 |
| in which | 207 |
| all rights | 205 |
| rights reserved | 204 |
| be used | 204 |
| of electronic | 203 |
| pedestrian detection | 203 |
| college london | 202 |
| computer and | 202 |
| for image | 199 |
| san diego | 198 |
| graduate school | 197 |
| in social | 197 |
| of faces | 197 |
| cite this | 197 |
| and research | 196 |
| used for | 196 |
| an open | 195 |
| analysis and | 195 |
| recognition system | 194 |
| method for | 194 |
| science department | 193 |
| face processing | 192 |
| it has | 190 |
| united states | 189 |
| networks for | 189 |
| action recognition | 189 |
| the other | 189 |
| for each | 187 |
| to cite | 187 |
| an image | 187 |
| based face | 186 |
| in autism | 186 |
| research institute | 186 |
| we present | 185 |
| creative commons | 185 |
| for visual | 184 |
| model for | 184 |
| are not | 184 |
| do not | 184 |
| ieee and | 183 |
| microsoft research | 183 |
| of visual | 183 |
| or not | 182 |
| of hong | 182 |
| student member | 181 |
| of images | 180 |
| of automation | 180 |
| real world | 178 |
| we use | 178 |
| semantic segmentation | 178 |
| the performance | 178 |
| framework for | 176 |
| the data | 176 |
| for object | 176 |
| face and | 175 |
| images and | 175 |
| information and | 175 |
| the number | 175 |
| in video | 174 |
| network for | 172 |
| technical university | 171 |
| max planck | 171 |
| in addition | 170 |
| of informatics | 170 |
| massachusetts institute | 170 |
| electronic engineering | 170 |
| human pose | 169 |
| stanford university | 168 |
| between the | 168 |
| signi cant | 168 |
| features for | 168 |
| eth zurich | 167 |
| and its | 165 |
| deep neural | 165 |
| commons attribution | 164 |
| approach for | 164 |
| the use | 164 |
| recognition with | 164 |
| technology and | 163 |
| the wild | 162 |
| for more | 162 |
| the results | 162 |
| children with | 162 |
| not the | 160 |
| key words | 160 |
| face image | 159 |
| and recognition | 159 |
| for this | 158 |
| and social | 158 |
| does not | 158 |
| the problem | 157 |
| learning and | 156 |
| studies have | 156 |
| of medicine | 156 |
| the chinese | 155 |
| to make | 155 |
| we show | 154 |
| planck institute | 154 |
| of psychiatry | 154 |
| volume issue | 153 |
| is that | 153 |
| research and | 153 |
| the paper | 153 |
| intelligent systems | 152 |
| of emotion | 152 |
| ny usa | 152 |
| the ability | 151 |
| which are | 151 |
| be inserted | 151 |
| in asd | 151 |
| of image | 150 |
| that can | 150 |
| the system | 149 |
| conference paper | 149 |
| in our | 149 |
| the creative | 148 |
| human face | 148 |
| for person | 148 |
| image retrieval | 147 |
| we are | 147 |
| over the | 147 |
| this version | 147 |
| of maryland | 147 |
| spectrum disorders | 146 |
| of pattern | 146 |
| to face | 146 |
| teaching and | 146 |
| and applications | 146 |
| object tracking | 146 |
| ma usa | 145 |
| chinese university | 145 |
| features and | 145 |
| the editor | 145 |
| component analysis | 145 |
| the amygdala | 145 |
| the facial | 144 |
| of china | 143 |
| barcelona spain | 143 |
| for research | 143 |
| the visual | 143 |
| as conference | 143 |
| engineering department | 143 |
| support vector | 143 |
| of advanced | 142 |
| access article | 142 |
| at iclr | 142 |
| spectrum disorder | 141 |
| the following | 141 |
| to learn | 140 |
| in many | 140 |
| whether they | 140 |
| and video | 140 |
| information processing | 139 |
| of chinese | 139 |
| the first | 139 |
| www frontiersin | 139 |
| frontiersin org | 139 |
| de ned | 138 |
| of mathematics | 138 |
| correspondence tel | 138 |
| suggest that | 137 |
| on image | 137 |
| classi ers | 137 |
| pa usa | 137 |
| provided the | 136 |
| of sci | 136 |
| come from | 136 |
| is also | 136 |
| distribution and | 135 |
| faces and | 135 |
| or from | 135 |
| of interest | 135 |
| is used | 135 |
| models for | 135 |
| to improve | 134 |
| national institute | 134 |
| national laboratory | 134 |
| is multi | 134 |
| to recognize | 134 |
| of each | 133 |
| system for | 133 |
| in terms | 133 |
| university china | 133 |
| feature selection | 133 |
| of research | 132 |
| and image | 132 |
| of texas | 132 |
| from public | 132 |
| into the | 132 |
| face perception | 132 |
| ku leuven | 131 |
| tsinghua university | 131 |
| or private | 131 |
| the recognition | 131 |
| and their | 131 |
| more information | 131 |
| accepted for | 131 |
| may come | 130 |
| eye tracking | 130 |
| of new | 130 |
| the documents | 129 |
| where the | 129 |
| by using | 129 |
| and dissemination | 128 |
| documents may | 128 |
| private research | 128 |
| research centers | 128 |
| wang and | 128 |
| the two | 128 |
| van gool | 127 |
| ai research | 127 |
| we have | 127 |
| when the | 127 |
| facial features | 127 |
| multi disciplinary | 127 |
| disciplinary open | 127 |
| rchive for | 127 |
| the deposit | 127 |
| deposit and | 127 |
| research documents | 127 |
| documents whether | 127 |
| are pub | 127 |
| research institutions | 127 |
| in france | 127 |
| archive ouverte | 127 |
| ouverte pluridisciplinaire | 127 |
| pluridisciplinaire hal | 127 |
| hal est | 127 |
| la diffusion | 127 |
| de documents | 127 |
| de niveau | 127 |
| niveau recherche | 127 |
| recherche publi | 127 |
| ou non | 127 |
| recherche fran | 127 |
| des laboratoires | 127 |
| ou priv | 127 |
| this material | 127 |
| during the | 126 |
| https doi | 126 |
| university college | 126 |
| human computer | 125 |
| the last | 125 |
| local binary | 125 |
| not only | 125 |
| accepted date | 125 |
| however the | 124 |
| for video | 124 |
| please contact | 124 |
| social cognition | 124 |
| our method | 124 |
| distributed under | 123 |
| original work | 123 |
| in particular | 123 |
| methods for | 123 |
| and face | 123 |
| received date | 123 |
| of all | 123 |
| are the | 123 |
| we can | 123 |
| original research | 122 |
| processing and | 122 |
| college park | 122 |
| computer applications | 122 |
| an important | 122 |
| and then | 122 |
| any medium | 121 |
| the department | 121 |
| of data | 121 |
| uc berkeley | 121 |
| and reproduction | 120 |
| the eyes | 120 |
| the training | 120 |
| doi fpsyg | 120 |
| of applied | 120 |
| academic editor | 120 |
| eurasip journal | 120 |
| of any | 120 |
| luc van | 119 |
| the development | 119 |
| association for | 119 |
| of southern | 119 |
| data and | 118 |
| about the | 118 |
| on computer | 118 |
| of london | 118 |
| in other | 118 |
| in real | 118 |
| which permits | 117 |
| permits unrestricted | 117 |
| medium provided | 117 |
| to extract | 117 |
| this study | 117 |
| than the | 117 |
| and tracking | 117 |
| published version | 117 |
| we also | 116 |
| found that | 116 |
| article was | 116 |
| in psychology | 116 |
| for computational | 116 |
| use distribution | 116 |
| downloaded from | 116 |
| the current | 116 |
| information about | 116 |
| date accepted | 116 |
| the object | 116 |
| thesis submitted | 116 |
| is properly | 115 |
| high level | 115 |
| for informatics | 115 |
| for multi | 115 |
| semi supervised | 115 |
| that this | 115 |
| article distributed | 114 |
| of intelligent | 114 |
| recognition based | 114 |
| the model | 114 |
| in human | 114 |
| zero shot | 114 |
| technical report | 114 |
| weakly supervised | 113 |
| the work | 113 |
| in section | 113 |
| for instance | 113 |
| robotics institute | 113 |
| the best | 113 |
| of amsterdam | 113 |
| sciences beijing | 112 |
| the past | 112 |
| of different | 112 |
| the social | 112 |
| california san | 112 |
| high dimensional | 112 |
| machine vision | 112 |
| of michigan | 112 |
| ef icient | 111 |
| of illinois | 111 |
| the present | 111 |
| signi cantly | 111 |
| computational linguistics | 111 |
| facial feature | 111 |
| to identify | 111 |
| southern california | 111 |
| tokyo japan | 111 |
| to detect | 110 |
| artificial intelligence | 110 |
| our approach | 110 |
| al and | 110 |
| archives ouvertes | 110 |
| science engineering | 110 |
| unrestricted use | 109 |
| of oxford | 109 |
| the second | 109 |
| imperial college | 109 |
| university beijing | 109 |
| detection using | 109 |
| laboratory for | 109 |
| of autism | 109 |
| we will | 108 |
| of their | 108 |
| mathematics and | 108 |
| tel fax | 108 |
| principal component | 108 |
| experimental results | 108 |
| georgia institute | 107 |
| and cognitive | 107 |
| is one | 107 |
| and that | 106 |
| computing and | 106 |
| generative adversarial | 106 |
| all the | 106 |
| in both | 106 |
| of washington | 106 |
| and systems | 106 |
| appearance based | 105 |
| the scene | 105 |
| training data | 105 |
| robust face | 105 |
| question answering | 105 |
| fran ais | 105 |
| low resolution | 105 |
| rather than | 105 |
| image analysis | 105 |
| proposed method | 105 |
| for all | 105 |
| on face | 105 |
| of toronto | 104 |
| pose and | 104 |
| improve the | 104 |
| compared with | 104 |
| available online | 104 |
| image classi | 104 |
| visual recognition | 103 |
| sparse representation | 103 |
| of people | 103 |
| ieee transactions | 103 |
| speech and | 103 |
| communication engineering | 103 |
| but also | 103 |
| of surrey | 103 |
| within the | 103 |
| the state | 103 |
| eye gaze | 102 |
| vision center | 102 |
| and machine | 102 |
| spatio temporal | 102 |
| at http | 102 |
| california los | 102 |
| recognition systems | 102 |
| and other | 102 |
| cognition and | 102 |
| discriminant analysis | 102 |
| north carolina | 101 |
| and electronic | 101 |
| of singapore | 101 |
| and facial | 101 |
| and human | 101 |
| the full | 101 |
| systems and | 101 |
| the images | 100 |
| and are | 100 |
| may not | 100 |
| de lausanne | 99 |
| deep convolutional | 99 |
| urbana champaign | 99 |
| recognition from | 99 |
| and pattern | 99 |
| to obtain | 98 |
| multi task | 98 |
| the journal | 98 |
| psychology and | 98 |
| the task | 98 |
| features are | 98 |
| shanghai china | 97 |
| use the | 97 |
| software engineering | 97 |
| partial fulfillment | 97 |
| tracking and | 97 |
| queen mary | 97 |
| computer sciences | 97 |
| multi modal | 97 |
| id pages | 97 |
| and signal | 97 |
| images are | 96 |
| of deep | 96 |
| speci cally | 96 |
| noname manuscript | 96 |
| optical flow | 96 |
| shown that | 96 |
| the research | 96 |
| the feature | 95 |
| chapel hill | 95 |
| perception and | 95 |
| multi view | 95 |
| to end | 95 |
| authors and | 95 |
| head pose | 95 |
| assistant professor | 95 |
| the neural | 95 |
| duke university | 95 |
| properly cited | 94 |
| key lab | 94 |
| to achieve | 94 |
| if the | 94 |
| of central | 94 |
| multi scale | 94 |
| paris france | 94 |
| university usa | 94 |
| al this | 94 |
| attribution license | 93 |
| york university | 93 |
| zhang and | 93 |
| of emotional | 93 |
| and computing | 93 |
| representation for | 93 |
| been accepted | 93 |
| for semantic | 93 |
| sagepub com | 93 |
| was supported | 93 |
| suggests that | 93 |
| or other | 93 |
| social interaction | 93 |
| from single | 92 |
| of objects | 92 |
| human faces | 92 |
| the input | 92 |
| natural language | 92 |
| of brain | 92 |
| facial emotion | 92 |
| cation and | 92 |
| for action | 92 |
| state key | 92 |
| the presence | 91 |
| jiaotong university | 91 |
| the role | 91 |
| images with | 91 |
| automation chinese | 91 |
| model based | 91 |
| in videos | 91 |
| fine grained | 91 |
| the effect | 91 |
| supervised learning | 91 |
| learning with | 91 |
| nd the | 91 |
| re identification | 91 |
| to solve | 90 |
| the eye | 90 |
| metric learning | 90 |
| of object | 90 |
| indian institute | 90 |
| transfer learning | 89 |
| entific research | 89 |
| manant des | 89 |
| des tablissements | 89 |
| tablissements enseignement | 89 |
| ou trangers | 89 |
| trangers des | 89 |
| for any | 89 |
| face veri | 89 |
| vision laboratory | 89 |
| for intelligent | 89 |
| low rank | 89 |
| if you | 89 |
| based methods | 89 |
| the accuracy | 88 |
| ieee international | 88 |
| of two | 88 |
| information sciences | 88 |
| mary university | 88 |
| dissertation submitted | 88 |
| among the | 88 |
| the world | 88 |
| the public | 88 |
| which can | 88 |
| the right | 88 |
| dictionary learning | 88 |
| recent years | 88 |
| seoul korea | 87 |
| this research | 87 |
| the target | 87 |
| de barcelona | 87 |
| binary pattern | 87 |
| michigan state | 87 |
| visual question | 87 |
| face alignment | 87 |
| engineering the | 87 |
| using deep | 87 |
| it can | 87 |
| under review | 87 |
| expression analysis | 87 |
| to social | 87 |
| in image | 87 |
| the brain | 87 |
| vision lab | 87 |
| is more | 86 |
| to whom | 86 |
| to address | 86 |
| been proposed | 86 |
| australian national | 86 |
| this thesis | 86 |
| to faces | 86 |
| learning based | 86 |
| people with | 86 |
| demonstrate that | 86 |
| this problem | 85 |
| was submitted | 85 |
| lausanne switzerland | 85 |
| of use | 85 |
| vision group | 85 |
| is based | 85 |
| or the | 85 |
| the context | 85 |
| of machine | 85 |
| computer interaction | 85 |
| engineering science | 84 |
| the user | 84 |
| ecole polytechnique | 84 |
| algorithm for | 84 |
| brain and | 84 |
| target tracking | 84 |
| social and | 84 |
| to image | 84 |
| received april | 84 |
| more than | 84 |
| hindawi publishing | 84 |
| publishing corporation | 84 |
| sciences and | 84 |
| in contrast | 84 |
| the study | 84 |
| both the | 84 |
| domain adaptation | 84 |
| feature based | 84 |
| research portal | 84 |
| received may | 83 |
| of training | 83 |
| which the | 83 |
| issn online | 83 |
| to their | 83 |
| of wisconsin | 83 |
| in children | 83 |
| recognition has | 83 |
| california berkeley | 82 |
| in figure | 82 |
| but not | 82 |
| emotional expressions | 82 |
| of software | 82 |
| high resolution | 82 |
| and pose | 82 |
| for computer | 82 |
| super resolution | 82 |
| dimensionality reduction | 82 |
| these results | 82 |
| analysis for | 82 |
| learning from | 82 |
| technological university | 82 |
| the computer | 81 |
| whom correspondence | 81 |
| received july | 81 |
| pattern analysis | 81 |
| cornell university | 81 |
| technische universit | 81 |
| that our | 81 |
| received march | 81 |
| is available | 81 |
| of features | 81 |
| of pennsylvania | 81 |
| chen and | 81 |
| to have | 80 |
| google brain | 80 |
| springer science | 80 |
| maryland college | 80 |
| in facial | 80 |
| and control | 80 |
| magnetic resonance | 80 |
| resonance imaging | 80 |
| nanyang technological | 80 |
| of its | 80 |
| based image | 80 |
| video based | 80 |
| ground truth | 79 |
| video surveillance | 79 |
| in images | 79 |
| received october | 79 |
| science business | 79 |
| business media | 79 |
| and applied | 79 |
| adversarial networks | 79 |
| vector machine | 79 |
| model and | 79 |
| an object | 79 |
| the effects | 79 |
| dataset for | 79 |
| are used | 79 |
| to use | 79 |
| nearest neighbor | 79 |
| low level | 78 |
| the mouth | 78 |
| in fig | 78 |
| multi target | 78 |
| for robust | 78 |
| this and | 78 |
| received december | 78 |
| notre dame | 78 |
| information systems | 78 |
| to its | 78 |
| to publication | 78 |
| images from | 77 |
| automatic face | 77 |
| received june | 77 |
| this document | 77 |
| single image | 77 |
| with high | 77 |
| in revised | 77 |
| revised form | 77 |
| for real | 77 |
| received january | 77 |
| are more | 77 |
| adobe research | 77 |
| the goal | 77 |
| central florida | 76 |
| the high | 76 |
| is still | 76 |
| as follows | 76 |
| local features | 76 |
| access books | 76 |
| image based | 76 |
| facial action | 76 |
| engineering research | 76 |
| li and | 76 |
| eye contact | 76 |
| nanjing university | 76 |
| asd and | 76 |
| with respect | 76 |
| material for | 76 |
| research group | 75 |
| of doctor | 75 |
| dif cult | 75 |
| with deep | 75 |
| la jolla | 75 |
| liu and | 75 |
| at urbana | 75 |
| jiao tong | 75 |
| received september | 75 |
| have shown | 75 |
| computer graphics | 75 |
| follow this | 75 |
| while the | 75 |
| electronic and | 75 |
| berlin germany | 75 |
| audio visual | 75 |
| sophia antipolis | 75 |
| in their | 75 |
| invariant face | 75 |
| columbia university | 75 |
| in each | 75 |
| dr ing | 75 |
| detection with | 75 |
| in recent | 75 |
| of north | 74 |
| https hal | 74 |
| from video | 74 |
| june accepted | 74 |
| shanghai jiao | 74 |
| bernt schiele | 74 |
| and additional | 74 |
| additional works | 74 |
| and open | 74 |
| accepted june | 74 |
| of others | 74 |
| extracted from | 74 |
| data set | 74 |
| been shown | 74 |
| believe that | 74 |
| personal use | 74 |
| vision based | 73 |
| we found | 73 |
| wide range | 73 |
| and more | 73 |
| the content | 73 |
| age estimation | 73 |
| machine intelligence | 73 |
| tong university | 73 |
| electrical computer | 73 |
| we used | 73 |
| information from | 73 |
| the appearance | 73 |
| human detection | 73 |
| karlsruhe germany | 73 |
| large number | 73 |
| stony brook | 73 |
| md usa | 73 |
| supplementary material | 72 |
| of cognitive | 72 |
| specialty section | 72 |
| taipei taiwan | 72 |
| city university | 72 |
| facial landmark | 72 |
| applied sciences | 72 |
| facial image | 72 |
| is often | 72 |
| accepted march | 72 |
| intelligence and | 72 |
| of statistics | 72 |
| accepted july | 72 |
| for personal | 72 |
| and electrical | 72 |
| the local | 72 |
| and object | 72 |
| image segmentation | 71 |
| idiap research | 71 |
| international joint | 71 |
| accepted may | 71 |
| of neural | 71 |
| along with | 71 |
| decision making | 71 |
| each other | 71 |
| the dataset | 71 |
| data driven | 71 |
| in individuals | 71 |
| johns hopkins | 71 |
| hopkins university | 71 |
| of multiple | 71 |
| shaogang gong | 71 |
| extraction and | 71 |
| peer reviewed | 71 |
| video processing | 71 |
| the video | 70 |
| th international | 70 |
| joint conference | 70 |
| cedex france | 70 |
| de cits | 70 |
| cordelia schmid | 70 |
| engineering national | 70 |
| california institute | 70 |
| faces are | 70 |
| well known | 70 |
| adults with | 70 |
| vol issue | 70 |
| information please | 70 |
| for vision | 70 |
| research laboratory | 70 |
| functional magnetic | 70 |
| peking university | 70 |
| of korea | 70 |
| based approach | 69 |
| shuicheng yan | 69 |
| intelligent information | 69 |
| in general | 69 |
| to determine | 69 |
| to reduce | 69 |
| on their | 69 |
| to facial | 69 |
| for learning | 69 |
| are based | 69 |
| free and | 69 |
| for publication | 69 |
| autism research | 69 |
| of eye | 69 |
| for large | 69 |
| words and | 69 |
| activity recognition | 69 |
| such that | 69 |
| other hand | 69 |
| social interactions | 69 |
| recognition for | 69 |
| these methods | 69 |
| gaze direction | 68 |
| our proposed | 68 |
| the shape | 68 |
| remote sensing | 68 |
| on pattern | 68 |
| wisconsin madison | 68 |
| we introduce | 68 |
| facial images | 68 |
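A table like the one above can be reproduced by counting adjacent word pairs over a tokenized corpus. The sketch below is a minimal illustration, not the pipeline that produced these numbers: it assumes lowercased alphanumeric tokenization, and the helper names (`bigram_counts`, `format_table`) are hypothetical.

```python
from collections import Counter
import re

def bigram_counts(text):
    """Count adjacent word pairs (bigrams) in lowercased text.

    Tokenization is a simple alphanumeric match; the original table
    was presumably built with its own tokenizer, so treat this as a
    sketch rather than an exact reconstruction.
    """
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return Counter(zip(tokens, tokens[1:]))

def format_table(counts, top=10):
    """Render the most frequent bigrams in the same pipe-delimited style."""
    rows = [f"| {w1} {w2} | {n} |" for (w1, w2), n in counts.most_common(top)]
    return "\n".join(rows)

# Tiny demonstration corpus.
sample = "face recognition and face detection for face recognition"
print(format_table(bigram_counts(sample), top=3))
```

Note that ties at equal counts are broken by first-occurrence order in `Counter.most_common`, so a full reproduction would also need whatever tie-breaking the original extraction used.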