| Bigram | Count |
| --- | --- |
| of the | 4760 |
| in the | 2840 |
| computer science | 2687 |
| of computer | 2418 |
| face recognition | 1920 |
| science and | 1502 |
| to the | 1498 |
| member ieee | 1381 |
| of technology | 1248 |
| for the | 1201 |
| on the | 1161 |
| and the | 858 |
| computer vision | 817 |
| of electrical | 785 |
| by the | 782 |
| facial expression | 778 |
| from the | 773 |
| in this | 758 |
| and technology | 719 |
| and computer | 715 |
| classification | 668 |
| has been | 653 |
| with the | 630 |
| identification | 614 |
| computer engineering | 584 |
| of science | 582 |
| of engineering | 576 |
| international journal | 571 |
| facial expressions | 570 |
| center for | 568 |
| electrical engineering | 565 |
| that the | 561 |
| for face | 560 |
| and engineering | 544 |
| of information | 543 |
| of this | 530 |
| open access | 525 |
| engineering and | 524 |
| of psychology | 517 |
| beijing china | 513 |
| this paper | 505 |
| of california | 497 |
| institute for | 494 |
| is the | 479 |
| the university | 462 |
| http www | 451 |
| this article | 450 |
| electrical and | 438 |
| have been | 437 |
| of facial | 411 |
| neural networks | 402 |
| the degree | 395 |
| doi org | 394 |
| of face | 393 |
| carnegie mellon | 390 |
| the face | 385 |
| mellon university | 384 |
| this work | 384 |
| re identification | 380 |
| expression recognition | 380 |
| the same | 374 |
| state university | 372 |
| at the | 369 |
| and information | 365 |
| senior member | 356 |
| as the | 348 |
| of sciences | 347 |
| ieee and | 343 |
| pattern recognition | 341 |
| real time | 332 |
| hong kong | 326 |
| recognition using | 322 |
| chinese academy | 320 |
| of computing | 314 |
| deep learning | 314 |
| information technology | 313 |
| pose estimation | 299 |
| with autism | 297 |
| in partial | 296 |
| the most | 296 |
| the requirements | 292 |
| object detection | 292 |
| new york | 288 |
| autism spectrum | 285 |
| fellow ieee | 285 |
| dx doi | 284 |
| neural network | 283 |
| of social | 281 |
| detection and | 275 |
| national university | 272 |
| machine learning | 271 |
| of philosophy | 265 |
| be addressed | 261 |
| student member | 259 |
| research article | 256 |
| large scale | 255 |
| of human | 253 |
| face detection | 253 |
| science university | 252 |
| in computer | 248 |
| convolutional neural | 246 |
| the proposed | 245 |
| key laboratory | 244 |
| vision and | 243 |
| artificial | 242 |
| ca usa | 242 |
| learning for | 240 |
| as well | 239 |
| of these | 237 |
| individuals with | 233 |
| the author | 231 |
| of our | 230 |
| they are | 227 |
| of electronics | 225 |
| requirements for | 224 |
| engineering university | 224 |
| with asd | 224 |
| the netherlands | 224 |
| emotion recognition | 224 |
| associated with | 223 |
| feature extraction | 223 |
| information engineering | 222 |
| the art | 220 |
| for example | 220 |
| the original | 220 |
| we propose | 218 |
| signal processing | 216 |
| international conference | 212 |
| under the | 211 |
| recognition and | 207 |
| in face | 206 |
| volume article | 206 |
| that are | 202 |
| the image | 201 |
| research center | 200 |
| for facial | 200 |
| is not | 200 |
| psychology university | 197 |
| image processing | 197 |
| united kingdom | 196 |
| correspondence should | 194 |
| in social | 194 |
| face images | 193 |
| show that | 192 |
| for human | 192 |
| artificial intelligence | 191 |
| action recognition | 190 |
| college london | 190 |
| information science | 189 |
| of electronic | 188 |
| there are | 187 |
| object recognition | 187 |
| in order | 186 |
| in autism | 186 |
| centre for | 185 |
| efficient | 184 |
| using the | 184 |
| the human | 184 |
| electronics and | 184 |
| of automation | 183 |
| and communication | 183 |
| in which | 181 |
| published online | 180 |
| analysis and | 178 |
| image and | 177 |
| the first | 177 |
| san diego | 176 |
| face processing | 176 |
| partial fulfillment | 175 |
| the main | 175 |
| networks for | 175 |
| to this | 175 |
| for visual | 174 |
| in any | 174 |
| los angeles | 174 |
| fulfillment | 173 |
| graduate school | 171 |
| of hong | 170 |
| computer and | 169 |
| rights reserved | 169 |
| research institute | 168 |
| human pose | 168 |
| the authors | 168 |
| recognition system | 167 |
| verification | 167 |
| for image | 167 |
| all rights | 167 |
| of faces | 166 |
| corresponding author | 166 |
| science department | 165 |
| max planck | 164 |
| volume issue | 163 |
| method for | 163 |
| and research | 163 |
| an open | 162 |
| microsoft research | 162 |
| children with | 162 |
| model for | 161 |
| united states | 159 |
| of advanced | 158 |
| eth zurich | 158 |
| the other | 158 |
| face and | 157 |
| electronic engineering | 155 |
| cite this | 155 |
| do not | 155 |
| are not | 154 |
| be used | 154 |
| the wild | 153 |
| of visual | 152 |
| we use | 152 |
| and social | 151 |
| network for | 151 |
| of images | 151 |
| creative commons | 151 |
| the data | 150 |
| technical university | 150 |
| used for | 150 |
| in asd | 150 |
| for person | 149 |
| in video | 149 |
| information and | 149 |
| of informatics | 149 |
| to cite | 148 |
| real world | 147 |
| for each | 147 |
| an image | 147 |
| planck institute | 147 |
| of emotion | 147 |
| of psychiatry | 147 |
| between the | 146 |
| features for | 146 |
| spectrum disorders | 146 |
| it has | 145 |
| the number | 145 |
| we present | 144 |
| and its | 143 |
| recognition with | 143 |
| approach for | 142 |
| in addition | 142 |
| the facial | 142 |
| massachusetts institute | 142 |
| spectrum disorder | 141 |
| ma usa | 141 |
| studies have | 141 |
| the amygdala | 141 |
| or not | 140 |
| pedestrian detection | 140 |
| of chinese | 139 |
| the chinese | 139 |
| ny usa | 138 |
| the performance | 138 |
| of medicine | 138 |
| intelligent systems | 137 |
| commons attribution | 137 |
| technology and | 136 |
| of pattern | 136 |
| to learn | 135 |
| stanford university | 135 |
| images and | 134 |
| of china | 133 |
| and applications | 133 |
| significant | 133 |
| for research | 133 |
| conference paper | 133 |
| the problem | 132 |
| to make | 132 |
| framework for | 132 |
| chinese university | 132 |
| national laboratory | 132 |
| based face | 131 |
| barcelona spain | 131 |
| information processing | 131 |
| research and | 131 |
| the use | 130 |
| we show | 130 |
| deep neural | 130 |
| learning and | 130 |
| to face | 130 |
| semantic segmentation | 129 |
| www frontiersin | 129 |
| frontiersin org | 129 |
| computer applications | 128 |
| of maryland | 128 |
| for object | 128 |
| in our | 128 |
| as conference | 128 |
| pa usa | 127 |
| the results | 127 |
| of mathematics | 127 |
| shanghai china | 127 |
| which are | 126 |
| over the | 126 |
| and recognition | 126 |
| at iclr | 126 |
| suggest that | 124 |
| is that | 124 |
| face image | 124 |
| for this | 123 |
| we are | 123 |
| object tracking | 122 |
| key words | 122 |
| eye tracking | 122 |
| for more | 122 |
| social cognition | 122 |
| the eyes | 122 |
| van gool | 121 |
| be inserted | 121 |
| does not | 121 |
| university china | 121 |
| face perception | 121 |
| for video | 120 |
| the following | 120 |
| the system | 120 |
| the creative | 120 |
| of image | 120 |
| of interest | 120 |
| the visual | 120 |
| models for | 119 |
| of texas | 119 |
| the first | 119 |
| accepted for | 119 |
| not the | 119 |
| system for | 118 |
| association for | 118 |
| human computer | 118 |
| that can | 118 |
| image retrieval | 117 |
| the ability | 117 |
| features and | 116 |
| the editor | 116 |
| national institute | 116 |
| faces and | 116 |
| is also | 116 |
| processing and | 116 |
| luc van | 115 |
| to recognize | 115 |
| university college | 115 |
| doi fpsyg | 114 |
| assistant professor | 114 |
| tsinghua university | 114 |
| access article | 113 |
| original research | 113 |
| for informatics | 113 |
| computing and | 113 |
| of research | 113 |
| the paper | 112 |
| in many | 112 |
| during the | 112 |
| this version | 112 |
| and their | 112 |
| provided the | 111 |
| for computational | 111 |
| in terms | 111 |
| and face | 111 |
| tokyo japan | 110 |
| eye gaze | 110 |
| sciences beijing | 110 |
| of autism | 109 |
| ieee transactions | 109 |
| the recognition | 109 |
| of data | 109 |
| defined | 109 |
| the department | 109 |
| human face | 109 |
| support vector | 109 |
| college park | 108 |
| ku leuven | 108 |
| is used | 108 |
| engineering department | 108 |
| computational linguistics | 108 |
| of intelligent | 108 |
| teaching and | 108 |
| recognition based | 108 |
| an important | 107 |
| of southern | 107 |
| to improve | 107 |
| this study | 107 |
| of oxford | 107 |
| question answering | 106 |
| feature selection | 106 |
| classifiers | 106 |
| our method | 106 |
| article was | 106 |
| in psychology | 106 |
| science engineering | 106 |
| data and | 106 |
| in particular | 106 |
| available online | 105 |
| facial features | 105 |
| of each | 105 |
| of london | 105 |
| wang and | 105 |
| distribution and | 105 |
| https doi | 105 |
| where the | 105 |
| and video | 104 |
| whether they | 104 |
| the social | 104 |
| component analysis | 104 |
| accepted date | 104 |
| about the | 104 |
| we have | 104 |
| when the | 104 |
| the development | 103 |
| by using | 103 |
| southern california | 103 |
| to extract | 103 |
| indian institute | 103 |
| issn online | 103 |
| local binary | 103 |
| laboratory for | 103 |
| technical report | 102 |
| than the | 102 |
| found that | 102 |
| imperial college | 102 |
| correspondence tel | 102 |
| on image | 102 |
| robust face | 102 |
| zero shot | 102 |
| received date | 102 |
| al and | 102 |
| of new | 101 |
| of illinois | 101 |
| of all | 101 |
| the model | 101 |
| high dimensional | 101 |
| ai research | 101 |
| into the | 101 |
| high level | 101 |
| of sci | 101 |
| the work | 100 |
| the object | 100 |
| more information | 100 |
| come from | 100 |
| generative adversarial | 100 |
| in human | 100 |
| in real | 100 |
| and then | 100 |
| and image | 100 |
| discriminant analysis | 100 |
| people with | 99 |
| for multi | 99 |
| information about | 99 |
| we can | 99 |
| distributed under | 99 |
| original work | 99 |
| and tracking | 99 |
| or from | 99 |
| sparse representation | 99 |
| not only | 99 |
| the current | 99 |
| low resolution | 99 |
| published version | 98 |
| of singapore | 98 |
| for action | 98 |
| date accepted | 98 |
| we also | 98 |
| of amsterdam | 98 |
| spatio temporal | 97 |
| this material | 97 |
| mathematics and | 97 |
| university beijing | 97 |
| the scene | 96 |
| semi supervised | 96 |
| the training | 96 |
| any medium | 96 |
| however the | 96 |
| the last | 96 |
| the two | 96 |
| is multi | 96 |
| and facial | 96 |
| vision center | 96 |
| are the | 96 |
| on computer | 96 |
| north carolina | 96 |
| we will | 95 |
| urbana champaign | 95 |
| permits unrestricted | 95 |
| downloaded from | 95 |
| recognition from | 95 |
| the present | 95 |
| compared with | 95 |
| visual recognition | 95 |
| and cognitive | 95 |
| pose and | 94 |
| of emotional | 94 |
| and reproduction | 94 |
| may come | 94 |
| from public | 94 |
| machine vision | 94 |
| methods for | 94 |
| of applied | 94 |
| cognition and | 94 |
| california san | 94 |
| or private | 93 |
| and systems | 93 |
| for instance | 93 |
| low rank | 93 |
| which permits | 93 |
| the documents | 93 |
| in other | 93 |
| social interaction | 93 |
| academic editor | 93 |
| of washington | 93 |
| our approach | 92 |
| key lab | 92 |
| and electronic | 92 |
| medium provided | 92 |
| is properly | 92 |
| and dissemination | 92 |
| private research | 92 |
| research centers | 92 |
| representation for | 92 |
| california los | 92 |
| of michigan | 92 |
| psychology and | 92 |
| and human | 92 |
| metric learning | 92 |
| in both | 91 |
| multi disciplinary | 91 |
| disciplinary open | 91 |
| archive for | 91 |
| the deposit | 91 |
| deposit and | 91 |
| research documents | 91 |
| documents whether | 91 |
| are published | 91 |
| documents may | 91 |
| research institutions | 91 |
| in france | 91 |
| archive ouverte | 91 |
| ouverte pluridisciplinaire | 91 |
| pluridisciplinaire hal | 91 |
| hal est | 91 |
| la diffusion | 91 |
| de documents | 91 |
| de niveau | 91 |
| niveau recherche | 91 |
| recherche publiés | 91 |
| ou non | 91 |
| recherche français | 91 |
| des laboratoires | 91 |
| ou privés | 91 |
| and pattern | 91 |
| and machine | 91 |
| chapel hill | 91 |
| fine grained | 90 |
| uc berkeley | 90 |
| all the | 90 |
| training data | 90 |
| article distributed | 90 |
| facial feature | 90 |
| thesis submitted | 90 |
| within the | 90 |
| communication engineering | 89 |
| of people | 89 |
| that this | 89 |
| use distribution | 89 |
| the journal | 89 |
| please contact | 89 |
| to detect | 89 |
| rather than | 89 |
| image analysis | 89 |
| latex class | 89 |
| software engineering | 88 |
| of central | 88 |
| on face | 88 |
| robotics institute | 88 |
| in videos | 88 |
| expression analysis | 88 |
| image classification | 88 |
| facial emotion | 88 |
| emotional expressions | 88 |
| visual question | 88 |
| weakly supervised | 88 |
| head pose | 88 |
| class files | 88 |
| the eye | 88 |
| detection using | 87 |
| the second | 87 |
| unrestricted use | 87 |
| of different | 87 |
| the best | 87 |
| experimental results | 87 |
| been accepted | 87 |
| significantly | 87 |
| for all | 87 |
| of latex | 87 |
| artificial intelligence | 86 |
| to identify | 86 |
| michigan state | 86 |
| at http | 86 |
| of toronto | 86 |
| efficient | 86 |
| of their | 86 |
| of cse | 86 |
| multi view | 85 |
| the role | 85 |
| zhang and | 85 |
| from video | 85 |
| queen mary | 85 |
| automation chinese | 85 |
| vol issue | 85 |
| eurasip journal | 85 |
| duke university | 85 |
| social and | 85 |
| suggests that | 85 |
| to faces | 85 |
| the past | 85 |
| and other | 85 |
| and that | 84 |
| tel fax | 84 |
| analysis for | 84 |
| use the | 84 |
| university usa | 84 |
| is one | 84 |
| was supported | 84 |
| of any | 84 |
| in children | 84 |
| face alignment | 83 |
| engineering the | 83 |
| pattern analysis | 83 |
| the task | 83 |
| files vol | 83 |
| multi task | 82 |
| state key | 82 |
| systems and | 82 |
| of machine | 82 |
| face verification | 82 |
| of objects | 82 |
| the neural | 82 |
| the input | 82 |
| natural language | 81 |
| to achieve | 81 |
| the feature | 81 |
| attribution license | 81 |
| georgia institute | 81 |
| to obtain | 81 |
| this research | 81 |
| but not | 81 |
| computer sciences | 80 |
| recognition systems | 80 |
| and are | 80 |
| the full | 80 |
| jiaotong university | 80 |
| is available | 80 |
| tracking and | 80 |
| al this | 80 |
| jiao tong | 80 |
| appearance based | 80 |
| learning with | 80 |
| the presence | 80 |
| in section | 80 |
| of brain | 80 |
| images are | 80 |
| of deep | 79 |
| domain adaptation | 79 |
| properly cited | 79 |
| shanghai jiao | 79 |
| and computing | 79 |
| to social | 79 |
| find the | 79 |
| nanjing university | 79 |
| speech and | 79 |
| under review | 79 |
| de lausanne | 79 |
| computer interaction | 79 |
| supervised learning | 78 |
| chen and | 78 |
| was submitted | 78 |
| tong university | 78 |
| images with | 78 |
| shown that | 78 |
| sciences and | 78 |
| perception and | 78 |
| transfer learning | 77 |
| specifically | 77 |
| proposed method | 77 |
| mary university | 77 |
| facial action | 77 |
| engineering science | 77 |
| in figure | 77 |
| brain and | 77 |
| de barcelona | 76 |
| engineering research | 76 |
| the effect | 76 |
| archives ouvertes | 76 |
| vision group | 76 |
| partial fulfillment | 76 |
| from single | 76 |
| information sciences | 76 |
| of surrey | 76 |
| and signal | 76 |
| of object | 76 |
| ieee international | 76 |
| for intelligent | 76 |
| technological university | 76 |
| the target | 76 |
| dimensionality reduction | 76 |
| received april | 76 |
| asd and | 76 |
| improve the | 76 |
| the brain | 76 |
| shuicheng yan | 75 |
| id pages | 75 |
| for robust | 75 |
| re identification | 75 |
| of others | 75 |
| noname manuscript | 75 |
| and control | 75 |
| these results | 75 |
| but also | 75 |
| human faces | 75 |
| the user | 75 |
| paris france | 75 |
| authors and | 75 |
| among the | 74 |
| the state | 74 |
| learning based | 74 |
| york university | 74 |
| dissertation submitted | 74 |
| model based | 74 |
| which the | 74 |
| issn print | 74 |
| technische universität | 74 |
| machine intelligence | 74 |
| to have | 74 |
| age estimation | 74 |
| to whom | 74 |
| to end | 74 |
| cation and | 74 |
| the right | 74 |
| deep convolutional | 73 |
| central florida | 73 |
| the images | 73 |
| to address | 73 |
| more than | 73 |
| may not | 73 |
| and pose | 73 |
| adversarial networks | 73 |
| dictionary learning | 73 |
| bernt schiele | 73 |
| this journal | 73 |
| to solve | 73 |
| which can | 73 |
| to image | 73 |
| if the | 73 |
| the effects | 73 |
| of ece | 73 |
| california berkeley | 73 |
| berlin germany | 73 |
| for semantic | 72 |
| multi modal | 72 |
| dataset for | 72 |
| cornell university | 72 |
| vision laboratory | 72 |
| the study | 72 |
| the mouth | 72 |
| features are | 72 |
| the accuracy | 72 |
| li and | 72 |
| springer science | 71 |
| of use | 71 |
| if you | 71 |
| the public | 71 |
| at urbana | 71 |
| is more | 71 |
| this problem | 71 |
| sagepub com | 71 |
| australian national | 71 |
| in facial | 71 |
| peking university | 71 |
| the context | 71 |
| principal component | 71 |
| demonstrate that | 71 |
| lausanne switzerland | 71 |
| it can | 71 |
| of wisconsin | 71 |
| magnetic resonance | 71 |
| seoul korea | 71 |
| science business | 70 |
| business media | 70 |
| multi scale | 70 |
| information systems | 70 |
| low level | 70 |
| xiaogang wang | 70 |
| in contrast | 70 |
| based methods | 70 |
| research group | 70 |
| no august | 70 |
| to facial | 70 |
| with high | 70 |
| in individuals | 70 |
| super resolution | 70 |
| received july | 70 |
| optical flow | 70 |
| for any | 70 |
| deficits | 70 |
| singapore singapore | 70 |
| for publication | 70 |
| or other | 69 |
| we found | 69 |
| vision lab | 69 |
| been proposed | 69 |
| of features | 69 |
| français | 69 |
| autism research | 69 |
| of software | 69 |
| nanyang technological | 69 |
| liu and | 69 |
| gaze direction | 69 |
| whom correspondence | 69 |
| adults with | 69 |
| eye contact | 69 |
| resonance imaging | 69 |
| of north | 69 |
| learning from | 69 |
| to publication | 68 |
| single image | 68 |
| invariant face | 68 |
| activity recognition | 68 |
| stefanos zafeiriou | 68 |
| intelligent information | 68 |
| specialty section | 68 |
| or the | 68 |
| based image | 68 |
| to their | 68 |
| in image | 68 |
| taipei taiwan | 68 |
| target tracking | 68 |
| engineering college | 68 |
| not been | 68 |
| for computer | 67 |
| tx usa | 67 |
| data set | 67 |
| electronic and | 67 |
| disorder asd | 67 |
| facial landmark | 67 |
| adobe research | 67 |
| to its | 67 |
| typically developing | 67 |
| of pennsylvania | 67 |
| zurich switzerland | 67 |
| dr ing | 67 |
| high resolution | 67 |
| has not | 67 |
| maryland college | 66 |
| publishing corporation | 66 |
| of training | 66 |
| accepted june | 66 |
| of doctor | 66 |
| of eye | 66 |
| information from | 66 |
| automatic face | 66 |
| ecole polytechnique | 66 |
| the video | 66 |
| binary pattern | 66 |
| model and | 66 |
| in their | 66 |
| received may | 66 |
| been shown | 66 |
| social interactions | 66 |
| in revised | 66 |
| revised form | 66 |
| montréal | 65 |
| algorithm for | 65 |
| is often | 65 |
| hindawi publishing | 65 |
| ground truth | 65 |
| of cognitive | 65 |
| shot learning | 65 |
| for large | 65 |
| recent years | 65 |
| double blind | 65 |
| with respect | 65 |
| expression and | 65 |
| have shown | 65 |
| karlsruhe germany | 65 |
| on their | 65 |
| the research | 65 |
| columbia university | 65 |
| associate professor | 65 |
| facial images | 64 |
| both the | 64 |
| difficult | 64 |
| human action | 64 |
| technology cas | 64 |
| video surveillance | 64 |
| received december | 64 |
| the world | 64 |
| national taiwan | 64 |
| recognition under | 64 |
| intelligence and | 64 |
| video based | 64 |
| multi target | 64 |
| and applied | 64 |
| detection with | 63 |
| autism and | 63 |
| this document | 63 |
| believe that | 63 |
| human detection | 63 |
| and more | 63 |
| university shanghai | 63 |
| personal use | 63 |
| wa usa | 63 |
| cation using | 63 |
| and intelligent | 63 |
| fine grained | 63 |
| on pattern | 63 |
| applied sciences | 63 |
| while the | 63 |
| idiap research | 63 |
| extracted from | 63 |
| cation with | 63 |
| the dataset | 63 |
| received march | 63 |
| received june | 62 |
| multi object | 62 |
| de montréal | 62 |
| of experimental | 62 |
| of multiple | 62 |
| sciences university | 62 |
| nearest neighbor | 62 |
| engineering national | 62 |
| taiwan university | 62 |
| the goal | 62 |
| we used | 62 |
| representations for | 62 |
| based approach | 62 |
| data driven | 62 |
| the computer | 62 |
| to reduce | 62 |
| vector machine | 62 |
| feature based | 62 |
| june accepted | 61 |
| at www | 61 |
| computer graphics | 61 |
| of tokyo | 61 |
| international joint | 61 |
| objects and | 61 |
| images for | 61 |
| large number | 61 |
| shiguang shan | 61 |
| shaogang gong | 61 |
| received october | 61 |
| an object | 61 |
| this thesis | 61 |
| are more | 61 |
| communication and | 61 |
| with deep | 61 |
| recognition has | 61 |
| the appearance | 61 |
| accepted march | 61 |
| of two | 61 |
| and emotion | 61 |
| human robot | 61 |
| as follows | 61 |
| california institute | 61 |
| of computational | 61 |
| that they | 60 |
| peer reviewed | 60 |
| words and | 60 |
| the shape | 60 |
| in each | 60 |
| th international | 60 |
| is still | 60 |
| using deep | 60 |
| and electrical | 60 |
| emotional facial | 60 |
| of its | 60 |
| showed that | 60 |
| ann arbor | 60 |
| these methods | 60 |
| are used | 60 |
| stony brook | 60 |
| supplementary material | 60 |
| illumination and | 60 |
| received january | 60 |
| such that | 60 |
| linear discriminant | 60 |
| subspace clustering | 59 |
| to determine | 59 |
| il usa | 59 |
| published october | 59 |
| scientific research | 59 |
| émanant des | 59 |
| des établissements | 59 |
| établissements enseignement | 59 |
| ou étrangers | 59 |
| étrangers des | 59 |
| material for | 59 |
| joint conference | 59 |
| wide range | 59 |
| nanjing china | 59 |
| normal university | 59 |
| for learning | 59 |
| for real | 59 |
| visual information | 59 |
| that our | 59 |
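The table is consistent with simple bigram counting over lowercased, PDF-extracted text. Below is a minimal sketch of how such counts could be reproduced; it is an assumption, not the pipeline that produced this table, and `corpus.txt` is a hypothetical file name. Tokenization here keeps only runs of ASCII letters, so punctuation-split fragments such as "http www" and "doi org" appear as ordinary bigrams; real extraction pipelines vary.

```python
from collections import Counter
import re


def bigram_counts(text: str) -> Counter:
    """Count adjacent word pairs in lowercased text."""
    # Keep runs of ASCII letters only; accents, digits, and
    # punctuation are dropped, as in the table above.
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(zip(tokens, tokens[1:]))


if __name__ == "__main__":
    # Hypothetical input: all corpus documents concatenated into one file.
    with open("corpus.txt", encoding="utf-8") as f:
        counts = bigram_counts(f.read())

    # Emit the 1000 most frequent bigrams as pipe-table rows.
    for (w1, w2), n in counts.most_common(1000):
        print(f"| {w1} {w2} | {n} |")
```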