id,country,dataset_name,key,lat,lng,loc,loc_type,paper_id,paper_type,paper_url,title,year
0,,VOC,voc,0.0,0.0,,,,main,,The Pascal Visual Object Classes (VOC) Challenge,2009
1,China,VOC,voc,28.2290209,112.99483204,"National University of Defense Technology, China",mil,ca4e0a2cd761f52e6c0bc06ef8ac79e3c7649083,citation,https://arxiv.org/pdf/1804.04606.pdf,Loss Rank Mining: A General Hard Example Mining Method for Real-time Detectors,2018
2,United States,VOC,voc,39.0298587,-76.9638027,"U.S. Army Research Laboratory, Adelphi, MD, USA",mil,e7895feb2de9007ea1e47b0ea5952afd5af08b3d,citation,https://arxiv.org/pdf/1704.01069.pdf,ME R-CNN: Multi-Expert R-CNN for Object Detection,2017
3,United States,VOC,voc,37.8718992,-122.2585399,"University of California, Berkeley",edu,0547c44cb896e1cc38130ae8cc6b04dc21179045,citation,http://courses.cs.washington.edu/courses/cse590v/13au/FastMatch_cvpr_2013.pdf,Fast-Match: Fast Affine Template Matching,2013
4,Israel,VOC,voc,32.1119889,34.80459702,Tel Aviv University,edu,0547c44cb896e1cc38130ae8cc6b04dc21179045,citation,http://courses.cs.washington.edu/courses/cse590v/13au/FastMatch_cvpr_2013.pdf,Fast-Match: Fast Affine Template Matching,2013
5,Israel,VOC,voc,31.904187,34.807378,"Weizmann Institute, Rehovot, Israel",edu,0547c44cb896e1cc38130ae8cc6b04dc21179045,citation,http://courses.cs.washington.edu/courses/cse590v/13au/FastMatch_cvpr_2013.pdf,Fast-Match: Fast Affine Template Matching,2013
6,Israel,VOC,voc,32.7940463,34.989571,"Yahoo Research Labs, Haifa, Israel",company,0547c44cb896e1cc38130ae8cc6b04dc21179045,citation,http://courses.cs.washington.edu/courses/cse590v/13au/FastMatch_cvpr_2013.pdf,Fast-Match: Fast Affine Template Matching,2013
7,Netherlands,VOC,voc,52.3553655,4.9501644,University of Amsterdam,edu,19a3e5495b420c1f5da283bf39708a6e833a6cc5,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/1A_020.pdf,Attributes and categories for generic instance search from one example,2015
8,United States,VOC,voc,40.8419836,-73.94368971,Columbia University,edu,19a3e5495b420c1f5da283bf39708a6e833a6cc5,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/1A_020.pdf,Attributes and categories for generic instance search from one example,2015
9,China,VOC,voc,39.103355,117.164927,NanKai University,edu,55968c9906e13eff2a7fb03d7c416a6d0f9f53e0,citation,http://cg.cs.tsinghua.edu.cn/papers/ECCV-2016-Hfs.pdf,HFS: Hierarchical Feature Selection for Efficient Image Segmentation,2016
10,United Kingdom,VOC,voc,51.7520849,-1.2516646,Oxford University,edu,55968c9906e13eff2a7fb03d7c416a6d0f9f53e0,citation,http://cg.cs.tsinghua.edu.cn/papers/ECCV-2016-Hfs.pdf,HFS: Hierarchical Feature Selection for Efficient Image Segmentation,2016
11,China,VOC,voc,40.00229045,116.32098908,Tsinghua University,edu,55968c9906e13eff2a7fb03d7c416a6d0f9f53e0,citation,http://cg.cs.tsinghua.edu.cn/papers/ECCV-2016-Hfs.pdf,HFS: Hierarchical Feature Selection for Efficient Image Segmentation,2016
12,United States,VOC,voc,32.87935255,-117.23110049,"University of California, San Diego",edu,55968c9906e13eff2a7fb03d7c416a6d0f9f53e0,citation,http://cg.cs.tsinghua.edu.cn/papers/ECCV-2016-Hfs.pdf,HFS: Hierarchical Feature Selection for Efficient Image Segmentation,2016
13,United States,VOC,voc,40.4441619,-79.94272826,Carnegie Mellon University,edu,46c82cfadd9f885f5480b2d7155f0985daf949fc,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Fouhey_3D_Shape_Attributes_CVPR_2016_paper.pdf,3D Shape Attributes,2016
14,United Kingdom,VOC,voc,51.7534538,-1.25400997,University of Oxford,edu,46c82cfadd9f885f5480b2d7155f0985daf949fc,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Fouhey_3D_Shape_Attributes_CVPR_2016_paper.pdf,3D Shape Attributes,2016
15,United States,VOC,voc,47.6423318,-122.1369302,Microsoft,company,57642aa16d29bbd9f89f95e3f3dcb8291552db60,citation,http://www.cs.toronto.edu/~pekhimenko/Papers/iiswc18-tbd.pdf,Benchmarking and Analyzing Deep Neural Network Training,2018
16,Canada,VOC,voc,49.25839375,-123.24658161,University of British Columbia,edu,57642aa16d29bbd9f89f95e3f3dcb8291552db60,citation,http://www.cs.toronto.edu/~pekhimenko/Papers/iiswc18-tbd.pdf,Benchmarking and Analyzing Deep Neural Network Training,2018
17,Canada,VOC,voc,43.66333345,-79.39769975,University of Toronto,edu,57642aa16d29bbd9f89f95e3f3dcb8291552db60,citation,http://www.cs.toronto.edu/~pekhimenko/Papers/iiswc18-tbd.pdf,Benchmarking and Analyzing Deep Neural Network Training,2018
18,China,VOC,voc,39.9808333,116.34101249,Beihang University,edu,df0e280cae018cebd5b16ad701ad101265c369fa,citation,https://arxiv.org/pdf/1509.02470.pdf,Deep Attributes from Context-Aware Regional Neural Codes,2015
19,China,VOC,voc,39.966244,116.3270039,Intel Labs China,company,df0e280cae018cebd5b16ad701ad101265c369fa,citation,https://arxiv.org/pdf/1509.02470.pdf,Deep Attributes from Context-Aware Regional Neural Codes,2015
20,United States,VOC,voc,40.8419836,-73.94368971,Columbia University,edu,df0e280cae018cebd5b16ad701ad101265c369fa,citation,https://arxiv.org/pdf/1509.02470.pdf,Deep Attributes from Context-Aware Regional Neural Codes,2015
21,United Kingdom,VOC,voc,51.7534538,-1.25400997,University of Oxford,edu,a63104ad235f98bc5ee0b44fefbcdb49e32c205a,citation,http://groups.inf.ed.ac.uk/calvin/Publications/Jammalamadaka12eccv.pdf,Has my algorithm succeeded? an evaluator for human pose estimators,2012
22,Switzerland,VOC,voc,47.376313,8.5476699,ETH Zurich,edu,a63104ad235f98bc5ee0b44fefbcdb49e32c205a,citation,http://groups.inf.ed.ac.uk/calvin/Publications/Jammalamadaka12eccv.pdf,Has my algorithm succeeded? an evaluator for human pose estimators,2012
23,United Kingdom,VOC,voc,55.94951105,-3.19534913,University of Edinburgh,edu,a63104ad235f98bc5ee0b44fefbcdb49e32c205a,citation,http://groups.inf.ed.ac.uk/calvin/Publications/Jammalamadaka12eccv.pdf,Has my algorithm succeeded? an evaluator for human pose estimators,2012
24,China,VOC,voc,36.3693473,120.673818,Shandong University,edu,ddde8f2c0209f11c2579dfaa13ac4053dedbf2fe,citation,https://arxiv.org/pdf/1811.02804.pdf,Image smoothing via unsupervised learning,2018
25,United States,VOC,voc,42.3614256,-71.0812092,Microsoft Research Asia,company,ddde8f2c0209f11c2579dfaa13ac4053dedbf2fe,citation,https://arxiv.org/pdf/1811.02804.pdf,Image smoothing via unsupervised learning,2018
26,China,VOC,voc,39.9922379,116.30393816,Peking University,edu,ddde8f2c0209f11c2579dfaa13ac4053dedbf2fe,citation,https://arxiv.org/pdf/1811.02804.pdf,Image smoothing via unsupervised learning,2018
27,United States,VOC,voc,32.87935255,-117.23110049,"University of California, San Diego",edu,16161051ee13dd3d836a39a280df822bf6442c84,citation,https://pdfs.semanticscholar.org/4bd3/f187f3e09483b1f0f92150a4a77409691b0f.pdf,Learning Efficient Object Detection Models with Knowledge Distillation,2017
28,United States,VOC,voc,38.926761,-92.29193783,University of Missouri,edu,16161051ee13dd3d836a39a280df822bf6442c84,citation,https://pdfs.semanticscholar.org/4bd3/f187f3e09483b1f0f92150a4a77409691b0f.pdf,Learning Efficient Object Detection Models with Knowledge Distillation,2017
29,United States,VOC,voc,37.3239177,-122.0129693,"NEC Labs, Cupertino, CA",company,16161051ee13dd3d836a39a280df822bf6442c84,citation,https://pdfs.semanticscholar.org/4bd3/f187f3e09483b1f0f92150a4a77409691b0f.pdf,Learning Efficient Object Detection Models with Knowledge Distillation,2017
30,China,VOC,voc,39.966244,116.3270039,Intel Labs China,company,19d4855f064f0d53cb851e9342025bd8503922e2,citation,http://vigir.missouri.edu/~gdesouza/Research/Conference_CDs/IEEE_CVPR2013/data/Papers/4989d468.pdf,Learning SURF Cascade for Fast and Accurate Object Detection,2013
31,China,VOC,voc,23.09461185,113.28788994,Sun Yat-Sen University,edu,ee098ed493af3abe873ce89354599e1f6bdf65be,citation,https://arxiv.org/pdf/1702.05839.pdf,Progressively Diffused Networks for Semantic Image Segmentation,2017
32,China,VOC,voc,22.4162632,114.2109318,Chinese University of Hong Kong,edu,ee098ed493af3abe873ce89354599e1f6bdf65be,citation,https://arxiv.org/pdf/1702.05839.pdf,Progressively Diffused Networks for Semantic Image Segmentation,2017
33,China,VOC,voc,39.993008,116.329882,SenseTime,company,ee098ed493af3abe873ce89354599e1f6bdf65be,citation,https://arxiv.org/pdf/1702.05839.pdf,Progressively Diffused Networks for Semantic Image Segmentation,2017
34,United States,VOC,voc,37.4092265,-122.0236615,Baidu,company,99f95595c45bd7a4fe2cffff07850754955e5e2a,citation,https://nicsefc.ee.tsinghua.edu.cn/media/publications/2015/IEEE%20TCAD_170.pdf,RRAM-Based Analog Approximate Computing,2015
35,United States,VOC,voc,40.44415295,-79.96243993,University of Pittsburgh,edu,99f95595c45bd7a4fe2cffff07850754955e5e2a,citation,https://nicsefc.ee.tsinghua.edu.cn/media/publications/2015/IEEE%20TCAD_170.pdf,RRAM-Based Analog Approximate Computing,2015
36,China,VOC,voc,40.00229045,116.32098908,Tsinghua University,edu,99f95595c45bd7a4fe2cffff07850754955e5e2a,citation,https://nicsefc.ee.tsinghua.edu.cn/media/publications/2015/IEEE%20TCAD_170.pdf,RRAM-Based Analog Approximate Computing,2015
37,United States,VOC,voc,33.7756178,-84.396285,Georgia Tech,edu,5a0209515ab62e008efeca31f80fa0a97031cd9d,citation,http://www.cv-foundation.org/openaccess/content_cvpr_2015/app/3B_046.pdf,Dataset fingerprints: Exploring image collections through data mining,2015
38,United States,VOC,voc,40.4441619,-79.94272826,Carnegie Mellon University,edu,2c953b06c1c312e36f1fdb9919567b42c9322384,citation,http://people.csail.mit.edu/tomasz/papers/malisiewicz_iccv11.pdf,Ensemble of exemplar-SVMs for object detection and beyond,2011
39,China,VOC,voc,40.0044795,116.370238,Chinese Academy of Sciences,edu,5907ca4b91c8e8d846871e045bce9a4ca851053a,citation,http://eiger.ddns.comp.nus.edu.sg/pubs/fusionofmultichannelstructures-tip2014.pdf,Fusion of Multichannel Local and Global Structural Cues for Photo Aesthetics Evaluation,2014
40,United States,VOC,voc,29.58333105,-98.61944505,University of Texas at San Antonio,edu,5907ca4b91c8e8d846871e045bce9a4ca851053a,citation,http://eiger.ddns.comp.nus.edu.sg/pubs/fusionofmultichannelstructures-tip2014.pdf,Fusion of Multichannel Local and Global Structural Cues for Photo Aesthetics Evaluation,2014
41,Singapore,VOC,voc,1.2962018,103.77689944,National University of Singapore,edu,5907ca4b91c8e8d846871e045bce9a4ca851053a,citation,http://eiger.ddns.comp.nus.edu.sg/pubs/fusionofmultichannelstructures-tip2014.pdf,Fusion of Multichannel Local and Global Structural Cues for Photo Aesthetics Evaluation,2014
42,China,VOC,voc,40.00229045,116.32098908,Tsinghua University,edu,5907ca4b91c8e8d846871e045bce9a4ca851053a,citation,http://eiger.ddns.comp.nus.edu.sg/pubs/fusionofmultichannelstructures-tip2014.pdf,Fusion of Multichannel Local and Global Structural Cues for Photo Aesthetics Evaluation,2014
43,China,VOC,voc,22.4162632,114.2109318,Chinese University of Hong Kong,edu,931282732f0be57f7fb895238e94bdda00a52cad,citation,https://pdfs.semanticscholar.org/9312/82732f0be57f7fb895238e94bdda00a52cad.pdf,Gated Bi-directional CNN for Object Detection,2016
44,China,VOC,voc,39.993008,116.329882,SenseTime,company,931282732f0be57f7fb895238e94bdda00a52cad,citation,https://pdfs.semanticscholar.org/9312/82732f0be57f7fb895238e94bdda00a52cad.pdf,Gated Bi-directional CNN for Object Detection,2016
45,Germany,VOC,voc,48.7468939,9.0805141,Max Planck Institute for Intelligent Systems,edu,cfa48bc1015b88809e362b4da19fe4459acb1d89,citation,https://pdfs.semanticscholar.org/cfa4/8bc1015b88809e362b4da19fe4459acb1d89.pdf,Learning to Filter Object Detections,2017
46,United States,VOC,voc,47.6423318,-122.1369302,Microsoft,company,cfa48bc1015b88809e362b4da19fe4459acb1d89,citation,https://pdfs.semanticscholar.org/cfa4/8bc1015b88809e362b4da19fe4459acb1d89.pdf,Learning to Filter Object Detections,2017
47,United States,VOC,voc,40.34829285,-74.66308325,Princeton University,edu,420c46d7cafcb841309f02ad04cf51cb1f190a48,citation,https://arxiv.org/pdf/1511.07122.pdf,Multi-Scale Context Aggregation by Dilated Convolutions,2015
48,United States,VOC,voc,40.4439789,-79.9464634,Intel Labs,company,420c46d7cafcb841309f02ad04cf51cb1f190a48,citation,https://arxiv.org/pdf/1511.07122.pdf,Multi-Scale Context Aggregation by Dilated Convolutions,2015
49,France,VOC,voc,48.708759,2.164006,"Center for Visual Computing, École Centrale Paris, France",edu,2603a85b305d041bf749934fe538315ecbc300c2,citation,http://www.ee.oulu.fi/~jkannala/publications/scia2013a.pdf,Non Maximal Suppression in Cascaded Ranking Models,2013
50,France,VOC,voc,48.840579,2.586968,"LIGM (UMR CNRS), École des Ponts ParisTech, Université Paris-Est, France",edu,2603a85b305d041bf749934fe538315ecbc300c2,citation,http://www.ee.oulu.fi/~jkannala/publications/scia2013a.pdf,Non Maximal Suppression in Cascaded Ranking Models,2013
51,Finland,VOC,voc,65.0592157,25.46632601,University of Oulu,edu,2603a85b305d041bf749934fe538315ecbc300c2,citation,http://www.ee.oulu.fi/~jkannala/publications/scia2013a.pdf,Non Maximal Suppression in Cascaded Ranking Models,2013
52,France,VOC,voc,48.7146403,2.2056539,"Équipe Galen, INRIA Saclay, Île-de-France, France",edu,2603a85b305d041bf749934fe538315ecbc300c2,citation,http://www.ee.oulu.fi/~jkannala/publications/scia2013a.pdf,Non Maximal Suppression in Cascaded Ranking Models,2013
53,United States,VOC,voc,42.3354481,-71.16813864,Boston College,edu,18ccd8bd64b50c1b6a83a71792fd808da7076bc9,citation,http://ttic.uchicago.edu/~mmaire/papers/pdf/seg_obj_iccv2011.pdf,Object detection and segmentation from joint embedding of parts and pixels,2011
54,United States,VOC,voc,34.13710185,-118.12527487,California Institute of Technology,edu,18ccd8bd64b50c1b6a83a71792fd808da7076bc9,citation,http://ttic.uchicago.edu/~mmaire/papers/pdf/seg_obj_iccv2011.pdf,Object detection and segmentation from joint embedding of parts and pixels,2011
55,Japan,VOC,voc,34.7275714,135.2371,Kobe University,edu,75d0a8e80a75312571951144aaa2d5dd5ae30e43,citation,http://eprints.whiterose.ac.uk/132227/1/TMM_camera_ready.pdf,Polar Transformation on Image Features for Orientation-Invariant Representations,2019
56,China,VOC,voc,26.0252776,119.2117845,Fujian Normal University,edu,75d0a8e80a75312571951144aaa2d5dd5ae30e43,citation,http://eprints.whiterose.ac.uk/132227/1/TMM_camera_ready.pdf,Polar Transformation on Image Features for Orientation-Invariant Representations,2019
57,United Kingdom,VOC,voc,53.94540365,-1.03138878,University of York,edu,75d0a8e80a75312571951144aaa2d5dd5ae30e43,citation,http://eprints.whiterose.ac.uk/132227/1/TMM_camera_ready.pdf,Polar Transformation on Image Features for Orientation-Invariant Representations,2019
58,China,VOC,voc,24.4399419,118.09301781,Xiamen University,edu,75d0a8e80a75312571951144aaa2d5dd5ae30e43,citation,http://eprints.whiterose.ac.uk/132227/1/TMM_camera_ready.pdf,Polar Transformation on Image Features for Orientation-Invariant Representations,2019
59,United Kingdom,VOC,voc,51.5247272,-0.03931035,Queen Mary University of London,edu,b1045a2de35d0adf784353f90972118bc1162f8d,citation,http://eecs.qmul.ac.uk/~jason/Research/PreprintVersion/Quantifying%20and%20Transferring%20Contextual%20Information%20in%20Object%20Detection.pdf,Quantifying and Transferring Contextual Information in Object Detection,2012
60,China,VOC,voc,23.09461185,113.28788994,Sun Yat-Sen University,edu,b1045a2de35d0adf784353f90972118bc1162f8d,citation,http://eecs.qmul.ac.uk/~jason/Research/PreprintVersion/Quantifying%20and%20Transferring%20Contextual%20Information%20in%20Object%20Detection.pdf,Quantifying and Transferring Contextual Information in Object Detection,2012
61,China,VOC,voc,23.09461185,113.28788994,Sun Yat-Sen University,edu,ab781f035720d991e244adb35f1d04e671af1999,citation,https://arxiv.org/pdf/1712.07465.pdf,Recurrent Attentional Reinforcement Learning for Multi-Label Image Recognition,2018
62,China,VOC,voc,39.993008,116.329882,SenseTime,company,ab781f035720d991e244adb35f1d04e671af1999,citation,https://arxiv.org/pdf/1712.07465.pdf,Recurrent Attentional Reinforcement Learning for Multi-Label Image Recognition,2018
63,Canada,VOC,voc,43.66333345,-79.39769975,University of Toronto,edu,1bb0dd8d349cdb1bbc065f1f0e111a8334072257,citation,http://jmlr.csail.mit.edu/proceedings/papers/v22/tarlow12a/tarlow12a.pdf,Structured Output Learning with High Order Loss Functions,2012
64,United States,VOC,voc,41.7846982,-87.5925848,Toyota Technological Institute at Chicago,company,3a4c70ca0bbd461fe2e4de3448a01f06c0217459,citation,https://arxiv.org/pdf/1510.09171.pdf,Accurate Vision-based Vehicle Localization using Satellite Imagery,2015
65,Netherlands,VOC,voc,52.3553655,4.9501644,University of Amsterdam,edu,26c58e24687ccbe9737e41837aab74e4a499d259,citation,http://www.cv-foundation.org/openaccess/content_iccv_2013/papers/Li_Codemaps_-_Segment_2013_ICCV_paper.pdf,"Codemaps - Segment, Classify and Search Objects Locally",2013
66,Netherlands,VOC,voc,52.356678,4.95187,"Centrum Wiskunde & Informatica, Amsterdam, The Netherlands",edu,26c58e24687ccbe9737e41837aab74e4a499d259,citation,http://www.cv-foundation.org/openaccess/content_iccv_2013/papers/Li_Codemaps_-_Segment_2013_ICCV_paper.pdf,"Codemaps - Segment, Classify and Search Objects Locally",2013
67,United States,VOC,voc,47.6423318,-122.1369302,Microsoft,company,c9abf6cb2d916262425033db12cf0181d40be7cb,citation,https://pdfs.semanticscholar.org/c9ab/f6cb2d916262425033db12cf0181d40be7cb.pdf,Entropy-based Latent Structured Output Prediction-Supplementary materials,2015
68,China,VOC,voc,31.83907195,117.26420748,University of Science and Technology of China,edu,ce43209fc68e51ef05fa06cc0fe6210cbd021e3f,citation,http://min.sjtu.edu.cn/files%5Cpapers%5C2016%5CJournal%5C2016-TIP-CV-ZHANGXIAOPENG%5C2016-TIP-CV-02.pdf,Fused One-vs-All Features With Semantic Alignments for Fine-Grained Visual Categorization,2016
69,United States,VOC,voc,29.58333105,-98.61944505,University of Texas at San Antonio,edu,ce43209fc68e51ef05fa06cc0fe6210cbd021e3f,citation,http://min.sjtu.edu.cn/files%5Cpapers%5C2016%5CJournal%5C2016-TIP-CV-ZHANGXIAOPENG%5C2016-TIP-CV-02.pdf,Fused One-vs-All Features With Semantic Alignments for Fine-Grained Visual Categorization,2016
70,China,VOC,voc,31.20081505,121.42840681,Shanghai Jiao Tong University,edu,ce43209fc68e51ef05fa06cc0fe6210cbd021e3f,citation,http://min.sjtu.edu.cn/files%5Cpapers%5C2016%5CJournal%5C2016-TIP-CV-ZHANGXIAOPENG%5C2016-TIP-CV-02.pdf,Fused One-vs-All Features With Semantic Alignments for Fine-Grained Visual Categorization,2016
71,United Kingdom,VOC,voc,51.7555205,-1.2261597,Oxford Brookes University,edu,70d71c2f8c865438c0158bed9f7d64e57e245535,citation,http://cms.brookes.ac.uk/research/visiongroup/publications/2013/intr_obj_vrt_nips13.pdf,"Higher Order Priors for Joint Intrinsic Image, Objects, and Attributes Estimation",2013
72,United Kingdom,VOC,voc,51.7534538,-1.25400997,University of Oxford,edu,70d71c2f8c865438c0158bed9f7d64e57e245535,citation,http://cms.brookes.ac.uk/research/visiongroup/publications/2013/intr_obj_vrt_nips13.pdf,"Higher Order Priors for Joint Intrinsic Image, Objects, and Attributes Estimation",2013
73,China,VOC,voc,34.2469152,108.91061982,Northwestern Polytechnical University,edu,50953b9a15aca6ef3351e613e7215abdcae1435e,citation,http://sunw.csail.mit.edu/papers/63_Cheng_SUNw.pdf,Learning coarse-to-fine sparselets for efficient object detection and scene classification,2015
74,Thailand,VOC,voc,13.65450525,100.49423171,Robotics Institute,edu,d6d7dcdcf66fe83e49d175cd9d8ac0b507d0e9d8,citation,http://dhoiem.cs.illinois.edu/publications/ijcv2010_occlusion.pdf,Recovering Occlusion Boundaries from an Image,2010
75,United States,VOC,voc,40.4441619,-79.94272826,Carnegie Mellon University,edu,d6d7dcdcf66fe83e49d175cd9d8ac0b507d0e9d8,citation,http://dhoiem.cs.illinois.edu/publications/ijcv2010_occlusion.pdf,Recovering Occlusion Boundaries from an Image,2010
76,United States,VOC,voc,40.11116745,-88.22587665,"University of Illinois, Urbana-Champaign",edu,d6d7dcdcf66fe83e49d175cd9d8ac0b507d0e9d8,citation,http://dhoiem.cs.illinois.edu/publications/ijcv2010_occlusion.pdf,Recovering Occlusion Boundaries from an Image,2010
77,China,VOC,voc,28.727339,115.816633,Jiangxi University of Finance and Economics,edu,1642358cd9410abe9ee512d34ba68296b308770e,citation,https://arxiv.org/pdf/1807.04562.pdf,Robustness Analysis of Pedestrian Detectors for Surveillance,2018
78,Singapore,VOC,voc,1.3484104,103.68297965,Nanyang Technological University,edu,1642358cd9410abe9ee512d34ba68296b308770e,citation,https://arxiv.org/pdf/1807.04562.pdf,Robustness Analysis of Pedestrian Detectors for Surveillance,2018
79,China,VOC,voc,34.250803,108.983693,Xi’an Jiaotong University,edu,1642358cd9410abe9ee512d34ba68296b308770e,citation,https://arxiv.org/pdf/1807.04562.pdf,Robustness Analysis of Pedestrian Detectors for Surveillance,2018
80,Netherlands,VOC,voc,52.3553655,4.9501644,University of Amsterdam,edu,25d7da85858a4d89b7de84fd94f0c0a51a9fc67a,citation,http://graphics.cs.cmu.edu/courses/16-824/2016_spring/slides/seg_3.pdf,Selective Search for Object Recognition,2013
81,Italy,VOC,voc,46.0658836,11.1159894,University of Trento,edu,25d7da85858a4d89b7de84fd94f0c0a51a9fc67a,citation,http://graphics.cs.cmu.edu/courses/16-824/2016_spring/slides/seg_3.pdf,Selective Search for Object Recognition,2013
82,United States,VOC,voc,37.4219999,-122.0840575,Google,company,0690ba31424310a90028533218d0afd25a829c8d,citation,https://arxiv.org/pdf/1412.7062.pdf,Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs,2015
83,Germany,VOC,voc,53.8338371,10.7035939,Institute of Systems and Robotics,edu,7fb8d9c36c23f274f2dd84945dd32ec2cc143de1,citation,http://home.isr.uc.pt/~joaoluis/papers/eccv2012.pdf,Semantic segmentation with second-order pooling,2012
84,Germany,VOC,voc,50.7338124,7.1022465,University of Bonn,edu,7fb8d9c36c23f274f2dd84945dd32ec2cc143de1,citation,http://home.isr.uc.pt/~joaoluis/papers/eccv2012.pdf,Semantic segmentation with second-order pooling,2012
85,United Kingdom,VOC,voc,51.7534538,-1.25400997,University of Oxford,edu,4682fee7dc045aea7177d7f3bfe344aabf153bd5,citation,http://cs.brown.edu/~ls/teaching_CMU_16-824/slides_tz-1.pdf,Tabula rasa: Model transfer for object category detection,2011
86,United States,VOC,voc,42.3614256,-71.0812092,Microsoft Research Asia,company,35f345ebe3831e4741dcdc1931da59043acf4b83,citation,https://pdfs.semanticscholar.org/35f3/45ebe3831e4741dcdc1931da59043acf4b83.pdf,Towards High Performance Video Object Detection for Mobiles,2018
87,Canada,VOC,voc,49.8091536,-97.13304179,University of Manitoba,edu,488fff23542ff397cdb1ced64db2c96320afc560,citation,http://www.cs.umanitoba.ca/~ywang/papers/cvpr15.pdf,Weakly supervised localization of novel objects using appearance transfer,2015
88,United States,VOC,voc,37.43131385,-122.16936535,Stanford University,edu,032bde9da87439c781a6c81ba7933985ed95d88e,citation,https://arxiv.org/pdf/1506.02106.pdf,What's the point: Semantic segmentation with point supervision,2016
89,United States,VOC,voc,40.4441619,-79.94272826,Carnegie Mellon University,edu,032bde9da87439c781a6c81ba7933985ed95d88e,citation,https://arxiv.org/pdf/1506.02106.pdf,What's the point: Semantic segmentation with point supervision,2016
90,United Kingdom,VOC,voc,55.94951105,-3.19534913,University of Edinburgh,edu,032bde9da87439c781a6c81ba7933985ed95d88e,citation,https://arxiv.org/pdf/1506.02106.pdf,What's the point: Semantic segmentation with point supervision,2016
91,Australia,VOC,voc,-42.902631,147.3273381,University of Tasmania,edu,c2a2093b4163616b83398e503ae9ed948f4f6a2b,citation,http://mima.sdu.edu.cn/(X(1)S(ar3myg55nqom1l55ttix5kjj))/Images/publication/Dual-CNN-ML.pdf,A Dual-CNN Model for Multi-label Classification by Leveraging Co-occurrence Dependencies Between Labels,2017
92,China,VOC,voc,36.3693473,120.673818,Shandong University,edu,c2a2093b4163616b83398e503ae9ed948f4f6a2b,citation,http://mima.sdu.edu.cn/(X(1)S(ar3myg55nqom1l55ttix5kjj))/Images/publication/Dual-CNN-ML.pdf,A Dual-CNN Model for Multi-label Classification by Leveraging Co-occurrence Dependencies Between Labels,2017
93,United States,VOC,voc,34.068921,-118.4451811,UCLA,edu,c4fc07072d7ebfbca471d2394b20199d8107e517,citation,https://pdfs.semanticscholar.org/c4fc/07072d7ebfbca471d2394b20199d8107e517.pdf,Active Mask Hierarchies for Object Detection,2010
94,United States,VOC,voc,42.3583961,-71.09567788,MIT,edu,c4fc07072d7ebfbca471d2394b20199d8107e517,citation,https://pdfs.semanticscholar.org/c4fc/07072d7ebfbca471d2394b20199d8107e517.pdf,Active Mask Hierarchies for Object Detection,2010
95,China,VOC,voc,38.88140235,121.52281098,Dalian University of Technology,edu,39afeceb57a7fde266ddd842aa23d2eea7ad5665,citation,https://arxiv.org/pdf/1802.06960.pdf,Agile Amulet: Real-Time Salient Object Detection with Contextual Attention,2018
96,Australia,VOC,voc,-34.9189226,138.60423668,University of Adelaide,edu,39afeceb57a7fde266ddd842aa23d2eea7ad5665,citation,https://arxiv.org/pdf/1802.06960.pdf,Agile Amulet: Real-Time Salient Object Detection with Contextual Attention,2018
97,United States,VOC,voc,42.3583961,-71.09567788,MIT,edu,732e4016225280b485c557a119ec50cffb8fee98,citation,https://arxiv.org/pdf/1311.6510.pdf,Are all training examples equally valuable?,2013
98,Spain,VOC,voc,41.40657415,2.1945341,Universitat Oberta de Catalunya,edu,732e4016225280b485c557a119ec50cffb8fee98,citation,https://arxiv.org/pdf/1311.6510.pdf,Are all training examples equally valuable?,2013
99,United States,VOC,voc,39.2899685,-76.62196103,University of Maryland,edu,38b4ac4a0802fdb63dea6769dd1aee075cc3f87d,citation,https://arxiv.org/pdf/1712.08675.pdf,Boundary-sensitive Network for Portrait Segmentation,2017
100,United States,VOC,voc,37.4019735,-122.0477876,Samsung Research America,edu,38b4ac4a0802fdb63dea6769dd1aee075cc3f87d,citation,https://arxiv.org/pdf/1712.08675.pdf,Boundary-sensitive Network for Portrait Segmentation,2017
101,Switzerland,VOC,voc,47.3764534,8.54770931,ETH Zürich,edu,10f13579084670291019c6e8ef55f5cd35c926b6,citation,https://pdfs.semanticscholar.org/7088/0e0ba2478c7250918ee9b7accc6993d13ba4.pdf,Closed-Form Approximate CRF Training for Scalable Image Segmentation,2014
102,United Kingdom,VOC,voc,55.94951105,-3.19534913,University of Edinburgh,edu,10f13579084670291019c6e8ef55f5cd35c926b6,citation,https://pdfs.semanticscholar.org/7088/0e0ba2478c7250918ee9b7accc6993d13ba4.pdf,Closed-Form Approximate CRF Training for Scalable Image Segmentation,2014
103,Singapore,VOC,voc,1.2962018,103.77689944,National University of Singapore,edu,5250f319cae32437489bb97b2ed9a1dc962d4d39,citation,https://arxiv.org/pdf/1411.2861.pdf,Computational Baby Learning.,2014
104,China,VOC,voc,39.94976005,116.33629046,Beijing Jiaotong University,edu,5250f319cae32437489bb97b2ed9a1dc962d4d39,citation,https://arxiv.org/pdf/1411.2861.pdf,Computational Baby Learning.,2014
105,Switzerland,VOC,voc,46.5190557,6.5667576,"EPFL, Lausanne (Switzerland)",edu,7b8ace072475a9a42d6ceb293c8b4a8c9b573284,citation,http://www.vision.ee.ethz.ch/en/publications/papers/proceedings/eth_biwi_00855.pdf,Conditional Random Fields for multi-camera object detection,2011
106,Switzerland,VOC,voc,47.376313,8.5476699,"ETHZ, Zurich (Switzerland)",edu,7b8ace072475a9a42d6ceb293c8b4a8c9b573284,citation,http://www.vision.ee.ethz.ch/en/publications/papers/proceedings/eth_biwi_00855.pdf,Conditional Random Fields for multi-camera object detection,2011
107,United States,VOC,voc,37.2283843,-80.4234167,Virginia Tech,edu,3d0660e18c17db305b9764bb86b21a429241309e,citation,https://arxiv.org/pdf/1604.03505.pdf,Counting Everyday Objects in Everyday Scenes,2017
108,United States,VOC,voc,33.776033,-84.39884086,Georgia Institute of Technology,edu,3d0660e18c17db305b9764bb86b21a429241309e,citation,https://arxiv.org/pdf/1604.03505.pdf,Counting Everyday Objects in Everyday Scenes,2017
109,United States,VOC,voc,37.3239177,-122.0129693,"NEC Labs, Cupertino, CA",company,8f76401847d3e3f0331bab24b17f76953be66220,citation,http://machinelearning.wustl.edu/mlpapers/paper_files/NIPS2010_1077.pdf,Deep Coding Network,2010
110,United States,VOC,voc,40.47913175,-74.43168868,Rutgers University,edu,8f76401847d3e3f0331bab24b17f76953be66220,citation,http://machinelearning.wustl.edu/mlpapers/paper_files/NIPS2010_1077.pdf,Deep Coding Network,2010
111,China,VOC,voc,40.00229045,116.32098908,Tsinghua University,edu,fe7ae13bf5fc80cf0837bacbe44905bd8749f03f,citation,http://ivg.au.tsinghua.edu.cn/paper/2017_Deep%20coupled%20metric%20learning%20for%20cross-modal%20matching.pdf,Deep Coupled Metric Learning for Cross-Modal Matching,2017
112,Singapore,VOC,voc,1.3484104,103.68297965,Nanyang Technological University,edu,fe7ae13bf5fc80cf0837bacbe44905bd8749f03f,citation,http://ivg.au.tsinghua.edu.cn/paper/2017_Deep%20coupled%20metric%20learning%20for%20cross-modal%20matching.pdf,Deep Coupled Metric Learning for Cross-Modal Matching,2017
113,Canada,VOC,voc,43.7743911,-79.50481085,York University,edu,cdeee5eed68e7c8eb06185f7fcb1a072af784886,citation,https://arxiv.org/pdf/1505.01173.pdf,Deep Learning for Object Saliency Detection and Image Segmentation,2015
114,United States,VOC,voc,37.43131385,-122.16936535,Stanford University,edu,cdeee5eed68e7c8eb06185f7fcb1a072af784886,citation,https://arxiv.org/pdf/1505.01173.pdf,Deep Learning for Object Saliency Detection and Image Segmentation,2015
115,Canada,VOC,voc,49.8091536,-97.13304179,University of Manitoba,edu,64b9675e924974fdec78a7272b27c7e7ec63a608,citation,http://www.cs.umanitoba.ca/~ywang/papers/icip17.pdf,Depth-aware object instance segmentation,2017
116,China,VOC,voc,31.32235655,121.38400941,Shanghai University,edu,64b9675e924974fdec78a7272b27c7e7ec63a608,citation,http://www.cs.umanitoba.ca/~ywang/papers/icip17.pdf,Depth-aware object instance segmentation,2017
117,Thailand,VOC,voc,13.65450525,100.49423171,Robotics Institute,edu,7d520f474f2fc59422d910b980f8485716ce0a3e,citation,https://pdfs.semanticscholar.org/2128/4a9310a4b4c836b8dfb6af39c682b7348128.pdf,Designing Convolutional Neural Networks for Urban Scene Understanding,2017
118,United States,VOC,voc,40.4441619,-79.94272826,Carnegie Mellon University,edu,7d520f474f2fc59422d910b980f8485716ce0a3e,citation,https://pdfs.semanticscholar.org/2128/4a9310a4b4c836b8dfb6af39c682b7348128.pdf,Designing Convolutional Neural Networks for Urban Scene Understanding,2017
119,India,VOC,voc,17.4450981,78.3497678,IIIT Hyderabad,edu,f23114073e0e513b1c1c55e8777bda503721718c,citation,https://arxiv.org/pdf/1811.10016.pdf,Dissimilarity Coefficient based Weakly Supervised Object Detection,2018
120,United Kingdom,VOC,voc,51.7534538,-1.25400997,University of Oxford,edu,f23114073e0e513b1c1c55e8777bda503721718c,citation,https://arxiv.org/pdf/1811.10016.pdf,Dissimilarity Coefficient based Weakly Supervised Object Detection,2018
121,United States,VOC,voc,37.43131385,-122.16936535,Stanford University,edu,280d632ef3234c5ab06018c6eaccead75bc173b3,citation,http://ai.stanford.edu/~ajoulin/article/eccv14-vidcoloc.pdf,Efficient Image and Video Co-localization with Frank-Wolfe Algorithm,2014
122,United States,VOC,voc,37.3239177,-122.0129693,NEC,company,44a3ee0429a6d1b79d431b4d396962175c28ace6,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Yang_Exploit_All_the_CVPR_2016_paper.pdf,Exploit All the Layers: Fast and Accurate CNN Object Detector with Scale Dependent Pooling and Cascaded Rejection Classifiers,2016
123,United States,VOC,voc,38.99203005,-76.9461029,University of Maryland College Park,edu,44a3ee0429a6d1b79d431b4d396962175c28ace6,citation,http://openaccess.thecvf.com/content_cvpr_2016/papers/Yang_Exploit_All_the_CVPR_2016_paper.pdf,Exploit All the Layers: Fast and Accurate CNN Object Detector with Scale Dependent Pooling and Cascaded Rejection Classifiers,2016
124,United States,VOC,voc,34.13710185,-118.12527487,California Institute of Technology,edu,1a54a8b0c7b3fc5a21c6d33656690585c46ca08b,citation,http://authors.library.caltech.edu/49239/7/DollarPAMI14pyramids_0.pdf,Fast Feature Pyramids for Object Detection,2014
125,United States,VOC,voc,42.4505507,-76.4783513,Cornell University,edu,1a54a8b0c7b3fc5a21c6d33656690585c46ca08b,citation,http://authors.library.caltech.edu/49239/7/DollarPAMI14pyramids_0.pdf,Fast Feature Pyramids for Object Detection,2014
126,United States,VOC,voc,47.6418392,-122.1407465,"Microsoft Research Redmond, Redmond, USA",company,1a54a8b0c7b3fc5a21c6d33656690585c46ca08b,citation,http://authors.library.caltech.edu/49239/7/DollarPAMI14pyramids_0.pdf,Fast Feature Pyramids for Object Detection,2014
127,Singapore,VOC,voc,1.29500195,103.84909214,Singapore Management University,edu,742d5b4590284b632ca043a16507fb5a459dceb2,citation,https://arxiv.org/pdf/1712.00721.pdf,Feature Agglomeration Networks for Single Stage Face Detection,2017
128,China,VOC,voc,30.19331415,120.11930822,Zhejiang University,edu,742d5b4590284b632ca043a16507fb5a459dceb2,citation,https://arxiv.org/pdf/1712.00721.pdf,Feature Agglomeration Networks for Single Stage Face Detection,2017
129,United States,VOC,voc,42.2745754,-71.8062724,Worcester Polytechnic Institute,edu,bd433d471af50b571d7284afb5ee435654ace99f,citation,https://pdfs.semanticscholar.org/bd43/3d471af50b571d7284afb5ee435654ace99f.pdf,Going Deeper with Convolutional Neural Network for Intelligent Transportation,2016
130,United States,VOC,voc,33.5866784,-101.87539204,Electrical and Computer Engineering,edu,bd433d471af50b571d7284afb5ee435654ace99f,citation,https://pdfs.semanticscholar.org/bd43/3d471af50b571d7284afb5ee435654ace99f.pdf,Going Deeper with Convolutional Neural Network for Intelligent Transportation,2016
131,Israel,VOC,voc,32.76162915,35.01986304,University of Haifa,edu,fe683e48f373fa14c07851966474d15588b8c28b,citation,https://pdfs.semanticscholar.org/fe68/3e48f373fa14c07851966474d15588b8c28b.pdf,Hinge-Minimax Learner for the Ensemble of Hyperplanes,2018
132,Israel,VOC,voc,32.7767783,35.0231271,Technion - Israel Institute of Technology,edu,fe683e48f373fa14c07851966474d15588b8c28b,citation,https://pdfs.semanticscholar.org/fe68/3e48f373fa14c07851966474d15588b8c28b.pdf,Hinge-Minimax Learner for the Ensemble of Hyperplanes,2018
133,United States,VOC,voc,40.11116745,-88.22587665,"University of Illinois, Urbana-Champaign",edu,4e65c9f0a64b6a4333b12e2adc3861ad75aca83b,citation,https://pdfs.semanticscholar.org/4e65/c9f0a64b6a4333b12e2adc3861ad75aca83b.pdf,Image Classification Using Super-Vector Coding of Local Image Descriptors,2010
134,United States,VOC,voc,40.47913175,-74.43168868,Rutgers University,edu,4e65c9f0a64b6a4333b12e2adc3861ad75aca83b,citation,https://pdfs.semanticscholar.org/4e65/c9f0a64b6a4333b12e2adc3861ad75aca83b.pdf,Image Classification Using Super-Vector Coding of Local Image Descriptors,2010
135,United States,VOC,voc,41.7847112,-87.59260567,"Toyota Technological Institute, Chicago",edu,a1f33473ea3b8e98fee37e32ecbecabc379e07a0,citation,http://cs.brown.edu/people/ren/publications/cvpr2013/cascade_final.pdf,Image Segmentation by Cascaded Region Agglomeration,2013
136,China,VOC,voc,30.19331415,120.11930822,Zhejiang University,edu,a1f33473ea3b8e98fee37e32ecbecabc379e07a0,citation,http://cs.brown.edu/people/ren/publications/cvpr2013/cascade_final.pdf,Image Segmentation by Cascaded Region Agglomeration,2013
137,Canada,VOC,voc,49.8091536,-97.13304179,University of Manitoba,edu,3b60af814574ebe389856e9f7008bb83b0539abc,citation,https://arxiv.org/pdf/1703.00551.pdf,Label Refinement Network for Coarse-to-Fine Semantic Segmentation.,2017
138,United States,VOC,voc,39.86948105,-84.87956905,Indiana University,edu,3b60af814574ebe389856e9f7008bb83b0539abc,citation,https://arxiv.org/pdf/1703.00551.pdf,Label Refinement Network for Coarse-to-Fine Semantic Segmentation.,2017
139,United States,VOC,voc,47.6543238,-122.30800894,University of Washington,edu,214f552070a7eb5ef5efe0d6ffeaaa594a3c3535,citation,http://allenai.org/content/publications/objectNgrams_cvpr14.pdf,Learning Everything about Anything: Webly-Supervised Visual Concept Learning,2014
140,Germany,VOC,voc,48.14955455,11.56775314,Technical University Munich,edu,472541ccd941b9b4c52e1f088cc1152de9b3430f,citation,https://arxiv.org/pdf/1612.00197.pdf,Learning in an Uncertain World: Representing Ambiguity Through Multiple Hypotheses,2017
141,United States,VOC,voc,39.3299013,-76.6205177,Johns Hopkins University,edu,472541ccd941b9b4c52e1f088cc1152de9b3430f,citation,https://arxiv.org/pdf/1612.00197.pdf,Learning in an Uncertain World: Representing Ambiguity Through Multiple Hypotheses,2017
142,United States,VOC,voc,40.11571585,-88.22750772,Beckman Institute,edu,0bbb40e5b9e546a3f4e7340b2980059065c99203,citation,https://arxiv.org/pdf/1712.00886.pdf,Learning Object Detectors from Scratch with Gated Recurrent Feature Pyramids,2017
143,China,VOC,voc,31.30104395,121.50045497,Fudan University,edu,0bbb40e5b9e546a3f4e7340b2980059065c99203,citation,https://arxiv.org/pdf/1712.00886.pdf,Learning Object Detectors from Scratch with Gated Recurrent Feature Pyramids,2017
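
A minimal sketch of how one might load and summarize the table above, assuming it is saved to a file such as voc_citations.csv (hypothetical filename); it uses only the Python standard library and the column names from the header row.

    import csv
    from collections import Counter

    # Hypothetical path; point this at wherever the CSV above is saved.
    CSV_PATH = "voc_citations.csv"

    with open(CSV_PATH, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    # Each "citation" row records one (institution, paper) pair citing the VOC
    # dataset, with the institution geocoded via the lat/lng columns; the single
    # "main" row is the VOC paper itself and carries no location.
    citations = [r for r in rows if r["paper_type"] == "citation"]

    # Count citing rows per country and per institution type
    # (loc_type is "edu", "company", or "mil" in the rows above).
    by_country = Counter(r["country"] for r in citations if r["country"])
    by_type = Counter(r["loc_type"] for r in citations if r["loc_type"])

    print("Citing rows per country:", by_country.most_common(5))
    print("Citing rows per institution type:", by_type)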