| Bigram | Count |
| --- | --- |
| of the | 1433 |
| computer science | 1230 |
| of computer | 1126 |
| face recognition | 970 |
| in the | 830 |
| science and | 671 |
| facial expression | 541 |
| member ieee | 538 |
| of technology | 501 |
| for the | 391 |
| of electrical | 384 |
| to the | 377 |
| on the | 353 |
| for face | 344 |
| and technology | 323 |
| and computer | 317 |
| expression recognition | 303 |
| computer engineering | 299 |
| beijing china | 298 |
| computer vision | 291 |
| classi cation | 288 |
| of information | 277 |
| and engineering | 268 |
| facial expressions | 259 |
| of engineering | 257 |
| by the | 249 |
| in this | 248 |
| center for | 242 |
| carnegie mellon | 237 |
| and the | 237 |
| mellon university | 236 |
| international journal | 231 |
| from the | 230 |
| hong kong | 229 |
| electrical engineering | 228 |
| of science | 225 |
| electrical and | 223 |
| of sciences | 222 |
| engineering and | 215 |
| has been | 210 |
| chinese academy | 208 |
| of facial | 205 |
| of california | 204 |
| of computing | 182 |
| is the | 178 |
| this paper | 176 |
| with the | 173 |
| neural networks | 167 |
| state university | 165 |
| institute for | 164 |
| recognition using | 162 |
| action recognition | 160 |
| open access | 158 |
| for facial | 156 |
| http www | 156 |
| the university | 154 |
| pattern recognition | 154 |
| and information | 153 |
| of this | 153 |
| the face | 153 |
| face detection | 152 |
| national university | 147 |
| of face | 145 |
| the degree | 144 |
| information engineering | 139 |
| that the | 136 |
| real time | 135 |
| information technology | 135 |
| senior member | 134 |
| ieee and | 132 |
| of hong | 130 |
| have been | 128 |
| the wild | 125 |
| this work | 124 |
| fellow ieee | 124 |
| college london | 121 |
| deep learning | 118 |
| the chinese | 117 |
| the same | 116 |
| veri cation | 114 |
| face images | 114 |
| chinese university | 114 |
| convolutional neural | 113 |
| doi org | 111 |
| neural network | 109 |
| in partial | 109 |
| student member | 108 |
| of maryland | 106 |
| identi cation | 105 |
| the requirements | 104 |
| vision and | 104 |
| information science | 103 |
| emotion recognition | 103 |
| of advanced | 102 |
| recognition with | 102 |
| science university | 101 |
| this article | 101 |
| learning for | 101 |
| feature extraction | 101 |
| large scale | 100 |
| the netherlands | 99 |
| key laboratory | 99 |
| of automation | 99 |
| as the | 99 |
| of philosophy | 97 |
| signal processing | 96 |
| at the | 94 |
| in computer | 94 |
| face alignment | 94 |
| engineering university | 93 |
| san diego | 92 |
| of psychology | 92 |
| of chinese | 92 |
| college park | 91 |
| machine learning | 91 |
| imperial college | 90 |
| the art | 88 |
| research article | 88 |
| for action | 88 |
| the proposed | 86 |
| united kingdom | 86 |
| stanford university | 85 |
| the most | 85 |
| pa usa | 84 |
| detection and | 84 |
| of human | 84 |
| national laboratory | 84 |
| we propose | 82 |
| recognition and | 81 |
| engineering the | 81 |
| of electronics | 81 |
| ca usa | 80 |
| university china | 80 |
| age estimation | 80 |
| international conference | 80 |
| arti cial | 80 |
| requirements for | 79 |
| dx doi | 78 |
| based face | 75 |
| science department | 75 |
| the original | 75 |
| of informatics | 74 |
| tsinghua university | 74 |
| of pattern | 74 |
| centre for | 74 |
| new york | 74 |
| volume issue | 73 |
| partial ful | 73 |
| in video | 72 |
| expression analysis | 72 |
| of electronic | 72 |
| the facial | 71 |
| face and | 71 |
| research center | 71 |
| recognition system | 71 |
| facial landmark | 70 |
| facial action | 70 |
| ful llment | 70 |
| computer and | 69 |
| analysis and | 69 |
| key lab | 69 |
| face veri | 69 |
| for video | 68 |
| sciences beijing | 68 |
| electronic engineering | 68 |
| of our | 67 |
| face image | 67 |
| microsoft research | 67 |
| the image | 67 |
| electronics and | 67 |
| image processing | 67 |
| maryland college | 66 |
| networks for | 66 |
| model for | 66 |
| of texas | 65 |
| robust face | 65 |
| graduate school | 65 |
| method for | 64 |
| of intelligent | 64 |
| facial feature | 64 |
| cial intelligence | 64 |
| under the | 64 |
| spatio temporal | 64 |
| human computer | 64 |
| robotics institute | 63 |
| information processing | 63 |
| the problem | 62 |
| as well | 62 |
| network for | 61 |
| rama chellappa | 61 |
| in videos | 61 |
| using the | 61 |
| university beijing | 61 |
| image and | 61 |
| that are | 61 |
| of these | 61 |
| for research | 61 |
| in face | 60 |
| california san | 60 |
| technology and | 60 |
| in order | 60 |
| technology cas | 60 |
| real world | 60 |
| for example | 60 |
| xiaoou tang | 60 |
| stefanos zafeiriou | 59 |
| pose estimation | 59 |
| for human | 59 |
| be addressed | 59 |
| there are | 59 |
| be used | 59 |
| research institute | 59 |
| is not | 59 |
| of emotion | 59 |
| information and | 58 |
| of china | 58 |
| of singapore | 58 |
| ma usa | 57 |
| of oxford | 57 |
| component analysis | 57 |
| of illinois | 57 |
| the author | 57 |
| and communication | 57 |
| ny usa | 56 |
| used for | 56 |
| they are | 56 |
| creative commons | 56 |
| the main | 55 |
| recognition based | 55 |
| advanced technology | 55 |
| and recognition | 55 |
| of amsterdam | 55 |
| low rank | 55 |
| semi supervised | 54 |
| show that | 54 |
| recognition from | 54 |
| discriminant analysis | 54 |
| an open | 54 |
| in any | 54 |
| michigan state | 54 |
| local binary | 54 |
| science engineering | 53 |
| to face | 53 |
| intelligent systems | 53 |
| the data | 53 |
| head pose | 53 |
| features for | 53 |
| ef cient | 52 |
| and applications | 52 |
| we present | 52 |
| massachusetts institute | 52 |
| approach for | 52 |
| the number | 52 |
| support vector | 52 |
| peking university | 52 |
| the performance | 51 |
| object detection | 51 |
| assistant professor | 51 |
| invariant face | 51 |
| engineering department | 51 |
| dictionary learning | 51 |
| eth zurich | 51 |
| shiguang shan | 51 |
| for each | 50 |
| correspondence should | 50 |
| psychology university | 50 |
| machine vision | 50 |
| the other | 50 |
| facial images | 49 |
| columbia university | 49 |
| facial features | 49 |
| urbana champaign | 49 |
| action unit | 49 |
| of mathematics | 49 |
| volume article | 49 |
| automation chinese | 49 |
| computing and | 49 |
| the rst | 49 |
| metric learning | 49 |
| max planck | 48 |
| shenzhen institutes | 48 |
| commons attribution | 48 |
| of each | 48 |
| computer interaction | 48 |
| technical university | 48 |
| of washington | 48 |
| feature selection | 47 |
| sparse representation | 47 |
| models for | 47 |
| and face | 47 |
| action units | 47 |
| of images | 47 |
| of surrey | 47 |
| framework for | 46 |
| it has | 46 |
| for image | 46 |
| ieee transactions | 46 |
| barcelona spain | 46 |
| in which | 46 |
| published online | 46 |
| human face | 46 |
| the training | 46 |
| tel aviv | 46 |
| available online | 45 |
| university pittsburgh | 45 |
| technical report | 45 |
| dimensionality reduction | 45 |
| mathematics and | 45 |
| the human | 45 |
| laboratory for | 45 |
| wang and | 45 |
| automatic facial | 45 |
| the system | 45 |
| that can | 45 |
| to this | 45 |
| and its | 44 |
| of faces | 44 |
| maja pantic | 44 |
| learning and | 44 |
| planck institute | 44 |
| for informatics | 44 |
| deep neural | 44 |
| and video | 44 |
| artificial intelligence | 44 |
| which are | 44 |
| vol issue | 44 |
| institute carnegie | 44 |
| national institute | 44 |
| high dimensional | 44 |
| between the | 44 |
| to improve | 44 |
| cas beijing | 44 |
| human action | 44 |
| technological university | 44 |
| computer applications | 44 |
| in addition | 43 |
| and research | 43 |
| be inserted | 43 |
| received date | 43 |
| accepted date | 43 |
| zhang and | 43 |
| images and | 43 |
| in real | 43 |
| automatic face | 43 |
| corresponding author | 43 |
| over the | 43 |
| and machine | 43 |
| the state | 42 |
| are not | 42 |
| intelligent information | 42 |
| accepted for | 42 |
| the editor | 42 |
| date accepted | 42 |
| shuicheng yan | 42 |
| is used | 42 |
| of pittsburgh | 42 |
| for visual | 42 |
| nanyang technological | 42 |
| visual recognition | 42 |
| engineering science | 41 |
| fine grained | 41 |
| at urbana | 41 |
| representation for | 41 |
| on image | 41 |
| weakly supervised | 41 |
| the recognition | 41 |
| image analysis | 41 |
| tokyo japan | 41 |
| and facial | 41 |
| and security | 41 |
| speech and | 41 |
| and signal | 41 |
| low resolution | 41 |
| andrew zisserman | 40 |
| jeffrey cohn | 40 |
| the following | 40 |
| of southern | 40 |
| southern california | 40 |
| this material | 40 |
| issn online | 40 |
| anil jain | 40 |
| recognition systems | 40 |
| the creative | 40 |
| do not | 40 |
| our method | 40 |
| features and | 40 |
| associated with | 40 |
| computing technology | 40 |
| singapore singapore | 40 |
| indian institute | 39 |
| rights reserved | 39 |
| an image | 39 |
| of london | 39 |
| de ned | 39 |
| university college | 39 |
| nanjing university | 38 |
| of new | 38 |
| and computing | 38 |
| eurasip journal | 38 |
| active appearance | 38 |
| rutgers university | 38 |
| to cite | 38 |
| provided the | 38 |
| pose and | 38 |
| the visual | 38 |
| to recognize | 38 |
| in many | 38 |
| system for | 38 |
| data and | 38 |
| super resolution | 38 |
| engineering national | 38 |
| of central | 38 |
| activity recognition | 38 |
| electrical computer | 37 |
| multi task | 37 |
| the first | 37 |
| shanghai china | 37 |
| image retrieval | 37 |
| van gool | 37 |
| cite this | 37 |
| experimental results | 37 |
| of image | 37 |
| analysis for | 37 |
| information sciences | 37 |
| all rights | 37 |
| xilin chen | 37 |
| recognition under | 37 |
| the authors | 37 |
| north carolina | 37 |
| video based | 37 |
| and electronic | 37 |
| we are | 37 |
| cornell university | 36 |
| australian national | 36 |
| of thessaloniki | 36 |
| is that | 36 |
| luc van | 36 |
| deep convolutional | 36 |
| object recognition | 36 |
| binary pattern | 36 |
| to extract | 36 |
| united states | 36 |
| domain adaptation | 36 |
| to learn | 36 |
| university usa | 36 |
| from video | 36 |
| software engineering | 35 |
| supervised learning | 35 |
| and pattern | 35 |
| cation using | 35 |
| by using | 35 |
| facial emotion | 35 |
| or not | 35 |
| is one | 35 |
| the use | 35 |
| unconstrained face | 35 |
| the user | 35 |
| is also | 35 |
| all the | 35 |
| based methods | 35 |
| from face | 35 |
| when the | 35 |
| for robust | 35 |
| processing and | 35 |
| improve the | 35 |
| of cse | 35 |
| is available | 35 |
| queen mary | 35 |
| li and | 35 |
| of applied | 35 |
| systems and | 35 |
| central florida | 35 |
| state key | 35 |
| we use | 35 |
| to make | 34 |
| the model | 34 |
| published version | 34 |
| landmark localization | 34 |
| access article | 34 |
| distributed under | 34 |
| distribution and | 34 |
| original work | 34 |
| kristen grauman | 34 |
| for this | 34 |
| of data | 34 |
| the last | 34 |
| of social | 34 |
| we also | 34 |
| on computer | 34 |
| on facial | 34 |
| thesis submitted | 34 |
| of all | 33 |
| polytechnic institute | 33 |
| using deep | 33 |
| the work | 33 |
| pose invariant | 33 |
| and systems | 33 |
| algorithm for | 33 |
| md usa | 33 |
| model based | 33 |
| to achieve | 33 |
| to solve | 33 |
| affective computing | 33 |
| training data | 33 |
| and image | 33 |
| research and | 33 |
| on face | 33 |
| at austin | 33 |
| stony brook | 33 |
| california berkeley | 33 |
| where the | 33 |
| of tokyo | 33 |
| communication engineering | 33 |
| taipei taiwan | 33 |
| linear discriminant | 33 |
| tehran iran | 33 |
| aviv university | 33 |
| cation and | 33 |
| subspace clustering | 32 |
| for automation | 32 |
| principal component | 32 |
| la torre | 32 |
| at http | 32 |
| the department | 32 |
| https doi | 32 |
| to facial | 32 |
| appearance models | 32 |
| the video | 32 |
| the best | 32 |
| permits unrestricted | 32 |
| is properly | 32 |
| of features | 32 |
| the feature | 32 |
| the results | 32 |
| polytechnic university | 32 |
| and then | 32 |
| technische universit | 32 |
| facial image | 32 |
| an important | 32 |
| liu and | 32 |
| technology sydney | 32 |
| at unchen | 32 |
| of north | 32 |
| gesture recognition | 32 |
| th international | 32 |
| video classi | 31 |
| automation research | 31 |
| people with | 31 |
| the shape | 31 |
| the paper | 31 |
| aristotle university | 31 |
| thessaloniki greece | 31 |
| this version | 31 |
| deep face | 31 |
| detection using | 31 |
| of michigan | 31 |
| the images | 31 |
| and intelligent | 31 |
| the accuracy | 31 |
| dr ing | 31 |
| for vision | 31 |
| mary university | 31 |
| in our | 31 |
| fraser university | 31 |
| in figure | 31 |
| in social | 31 |
| machine intelligence | 31 |
| of posts | 31 |
| posts and | 31 |
| been accepted | 31 |
| we will | 31 |
| facial landmarks | 31 |
| of toronto | 30 |
| computer sciences | 30 |
| vision lab | 30 |
| in facial | 30 |
| zhen lei | 30 |
| not the | 30 |
| nanjing china | 30 |
| article distributed | 30 |
| unrestricted use | 30 |
| any medium | 30 |
| medium provided | 30 |
| jiaotong university | 30 |
| engineering research | 30 |
| illumination and | 30 |
| of oulu | 30 |
| into the | 30 |
| of research | 30 |
| during the | 30 |
| vector machine | 30 |
| simon fraser | 30 |
| the development | 30 |
| chen and | 30 |
| as follows | 30 |
| google research | 30 |
| learning based | 30 |
| signi cant | 30 |
| at www | 29 |
| information systems | 29 |
| in unconstrained | 29 |
| from facial | 29 |
| images with | 29 |
| come from | 29 |
| which permits | 29 |
| use distribution | 29 |
| and reproduction | 29 |
| classi ers | 29 |
| recent years | 29 |
| the computer | 29 |
| and their | 29 |
| biometrics and | 29 |
| academic editor | 29 |
| learning with | 29 |
| proposed method | 29 |
| in human | 29 |
| amsterdam the | 29 |
| methods for | 29 |
| the scene | 29 |
| human robot | 29 |
| national taiwan | 29 |
| shanghai jiao | 29 |
| jiao tong | 29 |
| of nottingham | 28 |
| information about | 28 |
| we show | 28 |
| of twente | 28 |
| thomas huang | 28 |
| landmark detection | 28 |
| zero shot | 28 |
| vision center | 28 |
| representations for | 28 |
| of massachusetts | 28 |
| for learning | 28 |
| to detect | 28 |
| shenzhen key | 28 |
| of visual | 28 |
| attribution license | 28 |
| transfer learning | 28 |
| robust facial | 28 |
| xiaogang wang | 28 |
| technology chinese | 28 |
| engineering college | 28 |
| electronic and | 28 |
| ku leuven | 28 |
| for large | 28 |
| cordelia schmid | 28 |
| use the | 28 |
| taiwan university | 28 |
| east lansing | 28 |
| tong university | 28 |
| and telecommunications | 28 |
| dissertation submitted | 28 |
| and peter | 28 |
| tx usa | 27 |
| please contact | 27 |
| material for | 27 |
| downloaded from | 27 |
| noname manuscript | 27 |
| georgia institute | 27 |
| images for | 27 |
| more information | 27 |
| learning from | 27 |
| and david | 27 |
| face analysis | 27 |
| id pages | 27 |
| of software | 27 |
| properly cited | 27 |
| of different | 27 |
| been proposed | 27 |
| ef icient | 27 |
| of pennsylvania | 27 |
| gender classi | 27 |
| and pose | 27 |
| in particular | 27 |
| west virginia | 27 |
| seoul korea | 27 |
| of deep | 27 |
| this chapter | 27 |
| on intelligent | 27 |
| of ece | 27 |
| normal university | 27 |
| for all | 27 |
| generative adversarial | 27 |
| intelligence and | 27 |
| we can | 27 |
| and software | 26 |
| yu qiao | 26 |
| to identify | 26 |
| the open | 26 |
| in other | 26 |
| nd the | 26 |
| signi cantly | 26 |
| and expression | 26 |
| whether they | 26 |
| teaching and | 26 |
| limin wang | 26 |
| of doctor | 26 |
| features are | 26 |
| research group | 26 |
| in image | 26 |
| we have | 26 |
| recognition has | 26 |
| the local | 26 |
| face representation | 26 |
| to end | 26 |
| which can | 26 |
| multi view | 26 |
| xiaoming liu | 26 |
| this study | 26 |
| human faces | 26 |
| issn print | 26 |
| expression and | 26 |
| of cambridge | 26 |
| shape and | 26 |
| however the | 26 |
| sun yat | 26 |
| yat sen | 26 |
| zurich switzerland | 26 |
| does not | 26 |
| optical flow | 26 |
| china university | 26 |
| pattern analysis | 26 |
| the past | 26 |
| massachusetts amherst | 26 |
| istanbul turkey | 26 |
| robotics and | 26 |
| the object | 26 |
| if the | 26 |
| this problem | 25 |
| rensselaer polytechnic | 25 |
| appearance based | 25 |
| of emotional | 25 |
| peer reviewed | 25 |
| personal use | 25 |
| no august | 25 |
| de lausanne | 25 |
| video processing | 25 |
| for more | 25 |
| is multi | 25 |
| ann arbor | 25 |
| are the | 25 |
| nicu sebe | 25 |
| our approach | 25 |
| to build | 25 |
| to obtain | 25 |
| oulu finland | 25 |
| for biometrics | 25 |
| latex class | 25 |
| class files | 25 |
| extracted from | 25 |
| it can | 25 |
| than the | 25 |
| la jolla | 25 |
| recognition via | 25 |
| local features | 25 |
| of objects | 25 |
| engineering technology | 25 |
| images are | 25 |
| chen change | 25 |
| change loy | 25 |
| research portal | 25 |
| lei zhang | 25 |
| the hong | 25 |
| kong polytechnic | 25 |
| university singapore | 25 |
| lior wolf | 25 |
| zhejiang university | 25 |
| algorithms for | 24 |
| science the | 24 |
| of people | 24 |
| the full | 24 |
| and rama | 24 |
| video and | 24 |
| and tracking | 24 |
| to publication | 24 |
| this document | 24 |
| from public | 24 |
| dataset for | 24 |
| ecole polytechnique | 24 |
| and dissemination | 24 |
| the documents | 24 |
| may come | 24 |
| or from | 24 |
| of interest | 24 |
| system and | 24 |
| the robotics | 24 |
| hindawi publishing | 24 |
| publishing corporation | 24 |
| the two | 24 |
| conference paper | 24 |
| visual attributes | 24 |
| security research | 24 |
| face reconstruction | 24 |
| applied sciences | 24 |
| of automatic | 24 |
| of training | 24 |
| the journal | 24 |
| on automatic | 24 |
| sen university | 24 |
| recognition for | 24 |
| of video | 24 |
| correspondence tel | 24 |
| the task | 24 |
| the identity | 24 |
| the input | 24 |
| of any | 24 |
| in section | 24 |
| michigan ann | 24 |
| of computational | 24 |
| of latex | 24 |
| to have | 24 |
| of statistics | 24 |
| and cognitive | 24 |
| this journal | 24 |
| for computer | 24 |
| and low | 23 |
| and are | 23 |
| open university | 23 |
| and gender | 23 |
| images using | 23 |
| based facial | 23 |
| and ioannis | 23 |
| the published | 23 |
| if you | 23 |
| dimitris metaxas | 23 |
| ieee international | 23 |
| multi disciplinary | 23 |
| disciplinary open | 23 |
| rchive for | 23 |
| the deposit | 23 |
| deposit and | 23 |
| of sci | 23 |
| research documents | 23 |
| documents whether | 23 |
| are pub | 23 |
| documents may | 23 |
| research institutions | 23 |
| in france | 23 |
| or private | 23 |
| private research | 23 |
| research centers | 23 |
| archive ouverte | 23 |
| ouverte pluridisciplinaire | 23 |
| pluridisciplinaire hal | 23 |
| hal est | 23 |
| la diffusion | 23 |
| de documents | 23 |
| de niveau | 23 |
| niveau recherche | 23 |
| recherche publi | 23 |
| ou non | 23 |
| recherche fran | 23 |
| des laboratoires | 23 |
| ou priv | 23 |
| recognition algorithms | 23 |
| in terms | 23 |
| tel fax | 23 |
| through the | 23 |
| al this | 23 |
| the high | 23 |
| for age | 23 |
| delhi india | 23 |
| is more | 23 |
| single image | 23 |
| key words | 23 |
| files vol | 23 |
| unsupervised learning | 23 |
| the current | 23 |
| the ability | 23 |
| sciences cas | 23 |
| of them | 23 |
| vision speech | 23 |
| feature based | 23 |
| large number | 23 |
| ground truth | 23 |
| beijing university | 23 |
| each other | 23 |
| virginia university | 23 |
| adobe research | 23 |
| of machine | 23 |
| machine perception | 23 |
| face hallucination | 23 |
| not been | 23 |
| these methods | 23 |
| differences between | 23 |
| and robotics | 23 |
| decision making | 23 |
| josef kittler | 23 |
| speci cally | 23 |
| arizona state | 23 |
| for publication | 23 |
| for object | 22 |
| vishal patel | 22 |
| tal hassner | 22 |
| research online | 22 |
| for pose | 22 |
| was submitted | 22 |
| action localization | 22 |
| electronics engineering | 22 |
| on artificial | 22 |
| extraction and | 22 |
| research asia | 22 |
| as conference | 22 |
| at iclr | 22 |
| illumination invariant | 22 |
| under varying | 22 |
| for real | 22 |
| vision group | 22 |
| research national | 22 |
| regression for | 22 |
| ai research | 22 |
| article was | 22 |
| www frontiersin | 22 |
| frontiersin org | 22 |
| the research | 22 |
| and illumination | 22 |
| expressions and | 22 |
| for recognition | 22 |
| is very | 22 |
| of two | 22 |
| international joint | 22 |
| guangzhou china | 22 |
| and social | 22 |
| for information | 22 |
| brook university | 22 |
| received march | 22 |
| for machine | 22 |
| tracking and | 22 |
| methods have | 22 |
| devi parikh | 22 |
| detection with | 22 |
| south china | 22 |
| cation with | 22 |
| and electronics | 22 |
| on machine | 22 |
| los angeles | 22 |
| notre dame | 22 |
| still images | 22 |
| california institute | 22 |
| natural language | 22 |
| such that | 22 |
| brain science | 22 |
| in recent | 22 |
| and vision | 22 |
| to determine | 22 |
| for instance | 21 |
| matrix factorization | 21 |
| of defense | 21 |
| works for | 21 |
| adversarial networks | 21 |
| harbin institute | 21 |
| learned miller | 21 |
| the second | 21 |
| cas china | 21 |
| kong china | 21 |
| institute university | 21 |
| ibm watson | 21 |
| recognition rate | 21 |
| binary patterns | 21 |
| and local | 21 |
| while the | 21 |
| and analysis | 21 |
| original research | 21 |
| of emotions | 21 |
| data for | 21 |
| idiap research | 21 |
| expressions are | 21 |
| studies have | 21 |
| and other | 21 |
| an efficient | 21 |
| low dimensional | 21 |
| for intelligent | 21 |
| dacheng tao | 21 |
| about the | 21 |
| technology beijing | 21 |
| and electrical | 21 |
| and anil | 21 |
| reduce the | 21 |
| is still | 21 |
| georgios tzimiropoulos | 21 |
| using local | 21 |
| with deep | 21 |
| of trento | 21 |
| on pattern | 21 |
| the dataset | 21 |
| paris france | 21 |
| computational intelligence | 21 |
| and human | 21 |
| perception and | 21 |
| visual geometry | 21 |
| geometry group | 21 |
| event detection | 21 |
| data points | 21 |
| article has | 21 |
| northeastern university | 21 |
| the method | 21 |
| university nanjing | 20 |
| new jersey | 20 |
| to address | 20 |
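A frequency table like the one above can be reproduced with a straightforward bigram counter over the tokenized corpus. The following is a minimal sketch, assuming the corpus is available as plain lowercase text; the function name and the sample string are illustrative, not from the original data pipeline. Note that it deliberately treats each run of letters as one token, which is why ligature-dropped words in the source corpus (e.g. "classi cation" for "classification") appear in the table as two-token bigrams.

```python
from collections import Counter
import re


def bigram_counts(text):
    """Count adjacent word pairs (bigrams) in lowercased text.

    Tokens are maximal runs of ASCII letters, so punctuation and
    digits act as separators, mirroring the tokenization implied
    by the table above.
    """
    tokens = re.findall(r"[a-z]+", text.lower())
    # zip pairs each token with its successor to form the bigrams
    return Counter(zip(tokens, tokens[1:]))


if __name__ == "__main__":
    sample = "face recognition and face detection for face analysis"
    counts = bigram_counts(sample)
    # counts.most_common() yields bigrams sorted by descending frequency,
    # the same ordering used in the table.
    for (w1, w2), n in counts.most_common(3):
        print(f"| {w1} {w2} | {n} |")
```

Sorting `Counter.most_common()` output and formatting each pair as a pipe-delimited row recovers the table layout shown above.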