| bigram | count |
| --- | --- |
| of the | 1403 |
| computer science | 959 |
| of computer | 889 |
| face recognition | 876 |
| in the | 796 |
| science and | 536 |
| member ieee | 531 |
| facial expression | 507 |
| of technology | 393 |
| for the | 374 |
| to the | 373 |
| on the | 345 |
| of electrical | 299 |
| for face | 296 |
| expression recognition | 277 |
| computer vision | 259 |
| and technology | 259 |
| facial expressions | 251 |
| in this | 248 |
| by the | 247 |
| and computer | 241 |
| classification | 239 |
| computer engineering | 234 |
| and the | 229 |
| from the | 229 |
| of engineering | 221 |
| international journal | 215 |
| has been | 209 |
| and engineering | 206 |
| of information | 202 |
| beijing china | 197 |
| of facial | 195 |
| engineering and | 193 |
| electrical engineering | 192 |
| of science | 187 |
| center for | 177 |
| is the | 177 |
| this paper | 176 |
| with the | 171 |
| electrical and | 167 |
| carnegie mellon | 163 |
| mellon university | 162 |
| open access | 157 |
| http www | 153 |
| of this | 152 |
| the face | 150 |
| of california | 146 |
| recognition using | 143 |
| of face | 139 |
| neural networks | 139 |
| the degree | 138 |
| of sciences | 136 |
| that the | 136 |
| of computing | 135 |
| senior member | 134 |
| and information | 134 |
| hong kong | 132 |
| face detection | 131 |
| ieee and | 130 |
| for facial | 129 |
| have been | 128 |
| institute for | 125 |
| this work | 123 |
| real time | 122 |
| fellow ieee | 122 |
| pattern recognition | 121 |
| chinese academy | 121 |
| action recognition | 121 |
| the university | 120 |
| information technology | 120 |
| state university | 118 |
| the same | 116 |
| doi org | 109 |
| student member | 107 |
| the wild | 106 |
| in partial | 103 |
| face images | 103 |
| this article | 100 |
| the requirements | 100 |
| deep learning | 100 |
| as the | 99 |
| verification | 98 |
| emotion recognition | 97 |
| feature extraction | 95 |
| at the | 92 |
| vision and | 92 |
| identification | 91 |
| of philosophy | 91 |
| national university | 89 |
| information engineering | 89 |
| large scale | 87 |
| in computer | 87 |
| neural network | 86 |
| recognition with | 86 |
| the proposed | 86 |
| the art | 85 |
| the most | 84 |
| machine learning | 83 |
| the netherlands | 82 |
| research article | 82 |
| signal processing | 82 |
| we propose | 82 |
| detection and | 81 |
| of human | 81 |
| college london | 80 |
| information science | 79 |
| learning for | 78 |
| convolutional neural | 77 |
| dx doi | 76 |
| artificial | 76 |
| requirements for | 74 |
| recognition and | 72 |
| the facial | 71 |
| face and | 71 |
| the original | 71 |
| of psychology | 71 |
| engineering university | 71 |
| of advanced | 70 |
| pa usa | 70 |
| of maryland | 70 |
| of informatics | 70 |
| of electronics | 70 |
| science university | 70 |
| recognition system | 70 |
| united kingdom | 70 |
| volume issue | 69 |
| partial ful | 69 |
| expression analysis | 68 |
| key laboratory | 68 |
| face alignment | 68 |
| for action | 67 |
| of our | 67 |
| the image | 67 |
| analysis and | 66 |
| fulfillment | 66 |
| in video | 65 |
| international conference | 65 |
| age estimation | 65 |
| under the | 64 |
| face image | 64 |
| of hong | 64 |
| of chinese | 63 |
| based face | 62 |
| san diego | 62 |
| ca usa | 62 |
| image processing | 62 |
| human computer | 62 |
| the problem | 62 |
| as well | 62 |
| face veri | 61 |
| cial intelligence | 61 |
| that are | 61 |
| new york | 61 |
| of these | 61 |
| college park | 60 |
| facial feature | 60 |
| in order | 60 |
| for example | 60 |
| science department | 59 |
| be addressed | 59 |
| there are | 59 |
| be used | 59 |
| is not | 59 |
| method for | 58 |
| using the | 58 |
| robust face | 58 |
| centre for | 58 |
| of emotion | 58 |
| computer and | 57 |
| in face | 57 |
| image and | 57 |
| electronics and | 57 |
| used for | 56 |
| research center | 56 |
| facial landmark | 56 |
| model for | 56 |
| creative commons | 56 |
| real world | 56 |
| the author | 55 |
| they are | 55 |
| the main | 55 |
| research institute | 55 |
| show that | 54 |
| networks for | 54 |
| an open | 54 |
| in any | 54 |
| of automation | 54 |
| the chinese | 54 |
| imperial college | 53 |
| for video | 53 |
| graduate school | 53 |
| technology and | 53 |
| for human | 53 |
| the data | 53 |
| facial action | 52 |
| we present | 52 |
| recognition based | 52 |
| the number | 52 |
| spatio temporal | 52 |
| chinese university | 52 |
| in videos | 51 |
| assistant professor | 51 |
| component analysis | 51 |
| recognition from | 51 |
| pose estimation | 51 |
| and communication | 51 |
| and recognition | 51 |
| for each | 50 |
| information processing | 50 |
| and applications | 50 |
| correspondence should | 50 |
| local binary | 50 |
| the other | 50 |
| for research | 50 |
| the performance | 49 |
| information and | 49 |
| volume article | 49 |
| support vector | 49 |
| object detection | 48 |
| of intelligent | 48 |
| science engineering | 48 |
| commons attribution | 48 |
| head pose | 48 |
| approach for | 48 |
| the first | 48 |
| of each | 48 |
| low rank | 48 |
| facial images | 47 |
| facial features | 47 |
| national laboratory | 47 |
| computing and | 47 |
| it has | 46 |
| sparse representation | 46 |
| ieee transactions | 46 |
| eth zurich | 46 |
| of electronic | 46 |
| in which | 46 |
| of images | 46 |
| published online | 46 |
| computer interaction | 46 |
| ny usa | 46 |
| discriminant analysis | 46 |
| feature selection | 45 |
| stefanos zafeiriou | 45 |
| to face | 45 |
| of pattern | 45 |
| dictionary learning | 45 |
| the human | 45 |
| and face | 45 |
| automatic facial | 45 |
| the system | 45 |
| sciences beijing | 45 |
| the training | 45 |
| that can | 45 |
| to this | 45 |
| electronic engineering | 45 |
| technical university | 45 |
| technical report | 44 |
| invariant face | 44 |
| which are | 44 |
| massachusetts institute | 44 |
| human face | 44 |
| semi supervised | 44 |
| in addition | 43 |
| key lab | 43 |
| be inserted | 43 |
| received date | 43 |
| accepted date | 43 |
| ma usa | 43 |
| features for | 43 |
| between the | 43 |
| over the | 43 |
| rama chellappa | 42 |
| efficient | 42 |
| the editor | 42 |
| date accepted | 42 |
| barcelona spain | 42 |
| and video | 42 |
| of china | 42 |
| intelligent systems | 42 |
| is used | 42 |
| images and | 42 |
| action units | 42 |
| in real | 42 |
| to improve | 42 |
| shiguang shan | 42 |
| corresponding author | 42 |
| university china | 42 |
| computer applications | 42 |
| available online | 41 |
| network for | 41 |
| are not | 41 |
| framework for | 41 |
| accepted for | 41 |
| and research | 41 |
| engineering department | 41 |
| national institute | 41 |
| and facial | 41 |
| high dimensional | 41 |
| low resolution | 41 |
| maryland college | 40 |
| of faces | 40 |
| the following | 40 |
| this material | 40 |
| the creative | 40 |
| do not | 40 |
| the recognition | 40 |
| vol issue | 40 |
| our method | 40 |
| and machine | 40 |
| issn online | 39 |
| on image | 39 |
| rights reserved | 39 |
| machine vision | 39 |
| dimensionality reduction | 39 |
| associated with | 39 |
| of surrey | 39 |
| of amsterdam | 39 |
| image analysis | 39 |
| tsinghua university | 39 |
| defined | 39 |
| robotics institute | 38 |
| of mathematics | 38 |
| eurasip journal | 38 |
| models for | 38 |
| to cite | 38 |
| recognition systems | 38 |
| artificial intelligence | 38 |
| provided the | 38 |
| microsoft research | 38 |
| michigan state | 38 |
| to recognize | 38 |
| in many | 38 |
| features and | 38 |
| an image | 38 |
| super resolution | 38 |
| metric learning | 38 |
| of texas | 37 |
| deep neural | 37 |
| of illinois | 37 |
| cite this | 37 |
| experimental results | 37 |
| technology cas | 37 |
| and its | 37 |
| system for | 37 |
| all rights | 37 |
| human action | 37 |
| recognition under | 37 |
| we are | 37 |
| the first | 36 |
| is that | 36 |
| mathematics and | 36 |
| pose and | 36 |
| psychology university | 36 |
| the visual | 36 |
| for image | 36 |
| to extract | 36 |
| the authors | 36 |
| to learn | 36 |
| the state | 35 |
| maja pantic | 35 |
| representation for | 35 |
| action unit | 35 |
| by using | 35 |
| is one | 35 |
| the user | 35 |
| weakly supervised | 35 |
| is also | 35 |
| all the | 35 |
| for visual | 35 |
| of oxford | 35 |
| of image | 35 |
| based methods | 35 |
| data and | 35 |
| of cse | 35 |
| learning and | 35 |
| engineering the | 35 |
| we use | 35 |
| activity recognition | 35 |
| to make | 34 |
| the model | 34 |
| of thessaloniki | 34 |
| published version | 34 |
| max planck | 34 |
| facial emotion | 34 |
| or not | 34 |
| the use | 34 |
| access article | 34 |
| distributed under | 34 |
| distribution and | 34 |
| original work | 34 |
| for this | 34 |
| binary pattern | 34 |
| analysis for | 34 |
| when the | 34 |
| the last | 34 |
| improve the | 34 |
| of social | 34 |
| we also | 34 |
| is available | 34 |
| california san | 34 |
| wang and | 34 |
| university beijing | 34 |
| university college | 34 |
| from video | 34 |
| of all | 33 |
| fine grained | 33 |
| of southern | 33 |
| southern california | 33 |
| the work | 33 |
| urbana champaign | 33 |
| anil jain | 33 |
| to achieve | 33 |
| for informatics | 33 |
| affective computing | 33 |
| speech and | 33 |
| cas beijing | 33 |
| of applied | 33 |
| where the | 33 |
| supervised learning | 33 |
| visual recognition | 33 |
| at http | 32 |
| https doi | 32 |
| and computing | 32 |
| van gool | 32 |
| shuicheng yan | 32 |
| active appearance | 32 |
| the best | 32 |
| permits unrestricted | 32 |
| is properly | 32 |
| the feature | 32 |
| stanford university | 32 |
| the results | 32 |
| to solve | 32 |
| and then | 32 |
| automatic face | 32 |
| an important | 32 |
| video based | 32 |
| xiaoou tang | 32 |
| on computer | 32 |
| thesis submitted | 32 |
| people with | 31 |
| intelligent information | 31 |
| shanghai china | 31 |
| indian institute | 31 |
| to facial | 31 |
| luc van | 31 |
| the images | 31 |
| the video | 31 |
| of features | 31 |
| planck institute | 31 |
| of singapore | 31 |
| object recognition | 31 |
| zhang and | 31 |
| tokyo japan | 31 |
| facial image | 31 |
| the accuracy | 31 |
| training data | 31 |
| and image | 31 |
| dr ing | 31 |
| processing and | 31 |
| research and | 31 |
| li and | 31 |
| in our | 31 |
| engineering national | 31 |
| model based | 31 |
| in figure | 31 |
| and electronic | 31 |
| of central | 31 |
| taipei taiwan | 31 |
| in social | 31 |
| tehran iran | 31 |
| on facial | 31 |
| been accepted | 31 |
| we will | 31 |
| the department | 30 |
| this version | 30 |
| pose invariant | 30 |
| not the | 30 |
| article distributed | 30 |
| unrestricted use | 30 |
| any medium | 30 |
| medium provided | 30 |
| illumination and | 30 |
| into the | 30 |
| during the | 30 |
| xilin chen | 30 |
| computing technology | 30 |
| on face | 30 |
| and signal | 30 |
| the development | 30 |
| as follows | 30 |
| domain adaptation | 30 |
| significant | 30 |
| tel aviv | 30 |
| of washington | 30 |
| cation and | 30 |
| subspace clustering | 29 |
| jeffrey cohn | 29 |
| the shape | 29 |
| using deep | 29 |
| aristotle university | 29 |
| thessaloniki greece | 29 |
| landmark localization | 29 |
| come from | 29 |
| which permits | 29 |
| use distribution | 29 |
| and reproduction | 29 |
| recent years | 29 |
| of data | 29 |
| information sciences | 29 |
| from face | 29 |
| of research | 29 |
| academic editor | 29 |
| proposed method | 29 |
| vector machine | 29 |
| of london | 29 |
| united states | 29 |
| methods for | 29 |
| the scene | 29 |
| linear discriminant | 29 |
| facial landmarks | 29 |
| software engineering | 28 |
| computer sciences | 28 |
| information about | 28 |
| we show | 28 |
| and pattern | 28 |
| to detect | 28 |
| of visual | 28 |
| attribution license | 28 |
| image retrieval | 28 |
| engineering research | 28 |
| and their | 28 |
| technische universit | 28 |
| technological university | 28 |
| at unchen | 28 |
| columbia university | 28 |
| cordelia schmid | 28 |
| systems and | 28 |
| use the | 28 |
| central florida | 28 |
| human robot | 28 |
| please contact | 27 |
| material for | 27 |
| downloaded from | 27 |
| principal component | 27 |
| in facial | 27 |
| noname manuscript | 27 |
| appearance models | 27 |
| advanced technology | 27 |
| id pages | 27 |
| properly cited | 27 |
| of different | 27 |
| automation chinese | 27 |
| been proposed | 27 |
| the computer | 27 |
| for robust | 27 |
| queen mary | 27 |
| liu and | 27 |
| engineering college | 27 |
| deep convolutional | 27 |
| in particular | 27 |
| this chapter | 27 |
| peking university | 27 |
| laboratory for | 27 |
| for all | 27 |
| machine intelligence | 27 |
| we can | 27 |
| unconstrained face | 27 |
| in human | 27 |
| multi task | 26 |
| of new | 26 |
| to identify | 26 |
| in unconstrained | 26 |
| the paper | 26 |
| in other | 26 |
| at urbana | 26 |
| find the | 26 |
| from facial | 26 |
| cation using | 26 |
| detection using | 26 |
| images for | 26 |
| more information | 26 |
| whether they | 26 |
| teaching and | 26 |
| of massachusetts | 26 |
| features are | 26 |
| research group | 26 |
| we have | 26 |
| recognition has | 26 |
| the local | 26 |
| engineering science | 26 |
| which can | 26 |
| of pennsylvania | 26 |
| this study | 26 |
| human faces | 26 |
| expression and | 26 |
| however the | 26 |
| ku leuven | 26 |
| nanyang technological | 26 |
| seoul korea | 26 |
| of deep | 26 |
| md usa | 26 |
| does not | 26 |
| communication engineering | 26 |
| national taiwan | 26 |
| algorithm for | 26 |
| learning based | 26 |
| the past | 26 |
| intelligence and | 26 |
| dissertation submitted | 26 |
| the object | 26 |
| if the | 26 |
| for automation | 25 |
| this problem | 25 |
| information systems | 25 |
| vision lab | 25 |
| of emotional | 25 |
| personal use | 25 |
| and systems | 25 |
| de lausanne | 25 |
| video processing | 25 |
| for more | 25 |
| is multi | 25 |
| are the | 25 |
| classifiers | 25 |
| face analysis | 25 |
| of pittsburgh | 25 |
| our approach | 25 |
| to build | 25 |
| to obtain | 25 |
| latex class | 25 |
| class files | 25 |
| extracted from | 25 |
| it can | 25 |
| than the | 25 |
| significantly | 25 |
| robust facial | 25 |
| shape and | 25 |
| technology sydney | 25 |
| of tokyo | 25 |
| of objects | 25 |
| optical flow | 25 |
| images are | 25 |
| research portal | 25 |
| taiwan university | 25 |
| at www | 24 |
| electrical computer | 24 |
| automation research | 24 |
| the full | 24 |
| to publication | 24 |
| this document | 24 |
| from public | 24 |
| thomas huang | 24 |
| vision center | 24 |
| images with | 24 |
| ecole polytechnique | 24 |
| and dissemination | 24 |
| the documents | 24 |
| may come | 24 |
| or from | 24 |
| hindawi publishing | 24 |
| publishing corporation | 24 |
| the two | 24 |
| kristen grauman | 24 |
| and security | 24 |
| of training | 24 |
| the journal | 24 |
| transfer learning | 24 |
| issn print | 24 |
| la jolla | 24 |
| and pose | 24 |
| correspondence tel | 24 |
| california berkeley | 24 |
| the task | 24 |
| the identity | 24 |
| the input | 24 |
| local features | 24 |
| normal university | 24 |
| pattern analysis | 24 |
| of any | 24 |
| massachusetts amherst | 24 |
| in section | 24 |
| learning from | 24 |
| of latex | 24 |
| to have | 24 |
| this journal | 24 |
| google research | 24 |
| algorithms for | 23 |
| and are | 23 |
| video and | 23 |
| peer reviewed | 23 |
| the published | 23 |
| if you | 23 |
| no august | 23 |
| ieee international | 23 |
| multi disciplinary | 23 |
| disciplinary open | 23 |
| archive for | 23 |
| the deposit | 23 |
| deposit and | 23 |
| of sci | 23 |
| research documents | 23 |
| documents whether | 23 |
| are pub | 23 |
| documents may | 23 |
| research institutions | 23 |
| in france | 23 |
| or private | 23 |
| private research | 23 |
| research centers | 23 |
| archive ouverte | 23 |
| ouverte pluridisciplinaire | 23 |
| pluridisciplinaire hal | 23 |
| hal est | 23 |
| la diffusion | 23 |
| de documents | 23 |
| de niveau | 23 |
| niveau recherche | 23 |
| recherche publi | 23 |
| ou non | 23 |
| recherche fran | 23 |
| des laboratoires | 23 |
| ou priv | 23 |
| representations for | 23 |
| for learning | 23 |
| of interest | 23 |
| in terms | 23 |
| appearance based | 23 |
| and intelligent | 23 |
| tel fax | 23 |
| al this | 23 |
| the high | 23 |
| is more | 23 |
| face representation | 23 |
| efficient | 23 |
| key words | 23 |
| files vol | 23 |
| of automatic | 23 |
| the current | 23 |
| the ability | 23 |
| of them | 23 |
| for vision | 23 |
| mary university | 23 |
| large number | 23 |
| ground truth | 23 |
| recognition for | 23 |
| of video | 23 |
| each other | 23 |
| singapore singapore | 23 |
| amsterdam the | 23 |
| north carolina | 23 |
| state key | 23 |
| east lansing | 23 |
| these methods | 23 |
| generative adversarial | 23 |
| of doctor | 23 |
| andrew zisserman | 23 |
| specifically | 23 |
| istanbul turkey | 23 |
| of people | 22 |
| based facial | 22 |
| video classi | 22 |
| and tracking | 22 |
| research online | 22 |
| was submitted | 22 |
| extraction and | 22 |
| nanjing university | 22 |
| in image | 22 |
| under varying | 22 |
| polytechnic university | 22 |
| to end | 22 |
| applied sciences | 22 |
| article was | 22 |
| www frontiersin | 22 |
| frontiersin org | 22 |
| the research | 22 |
| and illumination | 22 |
| is very | 22 |
| feature based | 22 |
| of two | 22 |
| of toronto | 22 |
| stony brook | 22 |
| received march | 22 |
| methods have | 22 |
| for large | 22 |
| chen and | 22 |
| still images | 22 |
| differences between | 22 |
| such that | 22 |
| in recent | 22 |
| decision making | 22 |
| to determine | 22 |
| for publication | 22 |
| cornell university | 21 |
| for instance | 21 |
| and low | 21 |
| and gender | 21 |
| of twente | 21 |
| and ioannis | 21 |
| works for | 21 |
| for pose | 21 |
| deep face | 21 |
| the second | 21 |
| system and | 21 |
| jiaotong university | 21 |
| conference paper | 21 |
| institute carnegie | 21 |
| illumination invariant | 21 |
| recognition rate | 21 |
| binary patterns | 21 |
| while the | 21 |
| learning with | 21 |
| original research | 21 |
| of emotions | 21 |
| expressions are | 21 |
| studies have | 21 |
| through the | 21 |
| gender classi | 21 |
| and other | 21 |
| and expression | 21 |
| expressions and | 21 |
| low dimensional | 21 |
| international joint | 21 |
| electronic and | 21 |
| recognition via | 21 |
| about the | 21 |
| tracking and | 21 |
| reduce the | 21 |
| is still | 21 |
| engineering technology | 21 |
| using local | 21 |
| gesture recognition | 21 |
| on pattern | 21 |
| face hallucination | 21 |
| polytechnic institute | 21 |
| not been | 21 |
| the dataset | 21 |
| of computational | 21 |
| computational intelligence | 21 |
| of statistics | 21 |
| event detection | 21 |
| data points | 21 |
| article has | 21 |
| the method | 21 |
| tx usa | 20 |
| to address | 20 |
| matrix factorization | 20 |
| for inclusion | 20 |
| of use | 20 |
| is permitted | 20 |
| obtained from | 20 |
| landmark detection | 20 |
| systems for | 20 |
| nanjing china | 20 |
| or the | 20 |
| recognition algorithms | 20 |
| our system | 20 |
| and david | 20 |
| shenzhen institutes | 20 |
| electronics engineering | 20 |
| nicu sebe | 20 |
| visual attributes | 20 |
| springer science | 20 |
| science business | 20 |
| business media | 20 |
| illumination variations | 20 |
| are used | 20 |
| for real | 20 |
| to overcome | 20 |
| vision group | 20 |
| single image | 20 |
| and local | 20 |
| and analysis | 20 |
| the target | 20 |
| to human | 20 |
| sciences cas | 20 |
| not only | 20 |
| the person | 20 |
| wide range | 20 |
| for recognition | 20 |
| dacheng tao | 20 |
| video sequence | 20 |
| at austin | 20 |
| of machine | 20 |
| in part | 20 |
| this research | 20 |
| zurich switzerland | 20 |
| final publication | 20 |
| based approach | 20 |
| shanghai jiao | 20 |
| jiao tong | 20 |
| natural language | 20 |
| received july | 20 |
| to its | 20 |
| and human | 20 |
| robotics and | 20 |
| associate professor | 20 |
| and peter | 20 |
| in future | 20 |
| future issue | 20 |
| human activity | 20 |
| doi fpsyg | 20 |
| in psychology | 20 |
| university pittsburgh | 19 |
| the role | 19 |
| images using | 19 |
| australian national | 19 |
| of korea | 19 |
| non negative | 19 |
| ioannis pitas | 19 |
| of defense | 19 |
| follow this | 19 |
| and open | 19 |
| are available | 19 |
| and texture | 19 |
| new collective | 19 |
| to servers | 19 |
| or lists | 19 |
| be obtained | 19 |
| the ieee | 19 |
| zero shot | 19 |
| dataset for | 19 |
| dimitris metaxas | 19 |
| rutgers university | 19 |
| archives ouvertes | 19 |
| learned miller | 19 |
| of michigan | 19 |
| that our | 19 |
| springer verlag | 19 |
| received december | 19 |
| the set | 19 |
| spontaneous facial | 19 |
| lausanne switzerland | 19 |
| as conference | 19 |
| at iclr | 19 |
| cedex france | 19 |
| results show | 19 |
| which the | 19 |
| face reconstruction | 19 |
| is based | 19 |
| shown that | 19 |
| ai research | 19 |
| received may | 19 |
| idiap research | 19 |
| learning algorithms | 19 |
| the twenty | 19 |
| joint conference | 19 |
| of its | 19 |
| of biometric | 19 |
| in cvpr | 19 |
| and social | 19 |
| adobe research | 19 |
| gender and | 19 |
| note that | 19 |
| access control | 19 |
| visual information | 19 |
| received april | 19 |
| on machine | 19 |
| cation with | 19 |
| los angeles | 19 |
| notre dame | 19 |
| tong university | 19 |
| paris france | 19 |
| the robotics | 19 |
| of posts | 19 |
| posts and | 19 |
| and telecommunications | 19 |
| of their | 19 |
| non verbal | 19 |
| optimization problem | 19 |
| professor department | 19 |
| and cognitive | 19 |
| we introduce | 19 |
| video sequences | 19 |
| th international | 19 |
| the entire | 19 |
| has not | 19 |
| and software | 18 |
| objects and | 18 |
| more than | 18 |
| la torre | 18 |
| this and | 18 |
| and additional | 18 |
| additional works | 18 |
| free and | 18 |
| in accordance | 18 |
| for advertising | 18 |
| other works | 18 |
| volume number | 18 |
| the goal | 18 |
| been made | 18 |
| propose novel | 18 |
| hyderabad india | 18 |
| of software | 18 |
| recognition accuracy | 18 |
| de barcelona | 18 |
| then the | 18 |
| the literature | 18 |
| the key | 18 |
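For context, a table of this shape can be produced by a simple bigram counter over a tokenized corpus. The sketch below is an illustration only: the tokenization (lowercase runs of a-z) and the toy corpus are assumptions, not the pipeline that generated these counts.

```python
import re
from collections import Counter

def bigram_counts(texts):
    """Count adjacent word pairs across a list of documents.

    Tokenization (lowercase alphabetic runs) is an assumption; the
    original table was presumably built from PDF-extracted paper text,
    which would also explain ligature artifacts such as fragments of
    "classification" appearing as split tokens.
    """
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        # Pair each word with its successor to form bigrams.
        counts.update(zip(words, words[1:]))
    return counts

# Hypothetical two-title corpus for demonstration.
corpus = [
    "Face recognition in the wild with deep learning",
    "Robust face recognition using deep neural networks",
]
for (w1, w2), n in bigram_counts(corpus).most_common(3):
    print(f"| {w1} {w2} | {n} |")  # top row: | face recognition | 2 |
```

Raw counts like these keep stopword pairs ("of the", "in the") at the top; downstream analyses often filter stopwords or rank by a measure such as pointwise mutual information instead.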