| author | jules@lens <julescarbon@gmail.com> | 2019-04-18 16:55:14 +0200 |
|---|---|---|
| committer | jules@lens <julescarbon@gmail.com> | 2019-04-18 16:55:14 +0200 |
| commit | 2e4daed06264f3dc3bbabd8fa4fc0d8ceed4c5af (patch) | |
| tree | 1a17bb4459776ac91f7006a2a407ca12edd3471e /site/public/datasets/duke_mtmc/index.html | |
| parent | 3d32e5b4ddbfbfe5d4abeda57ff200adf1532f4c (diff) | |
| parent | f8012f88641b0bb378ba79393f277c8918ebe452 (diff) | |
Merge branch 'master' of asdf.us:megapixels_dev
Diffstat (limited to 'site/public/datasets/duke_mtmc/index.html')
| -rw-r--r-- | site/public/datasets/duke_mtmc/index.html | 451 |
1 files changed, 334 insertions, 117 deletions
diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html index 37de48ad..3c0bc0c2 100644 --- a/site/public/datasets/duke_mtmc/index.html +++ b/site/public/datasets/duke_mtmc/index.html @@ -17,7 +17,7 @@ <a class='slogan' href="/"> <div class='logo'></div> <div class='site_name'>MegaPixels</div> - <div class='splash'>Duke MTMC</div> + <div class='splash'>Duke MTMC Dataset</div> </a> <div class='links'> <a href="/datasets/">Datasets</a> @@ -26,8 +26,9 @@ </header> <div class="content content-dataset"> - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Duke MTMC</span> is a dataset of surveillance camera footage of students on Duke University campus</span></div><div class='hero_subdesc'><span class='bgpad'>Duke MTMC contains over 2 million video frames and 2,000 unique identities collected from 8 HD cameras at Duke University campus in March 2014 -</span></div></div></section><section><div class='left-sidebar'><div class='meta'> + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Duke MTMC</span> is a dataset of surveillance camera footage of students on Duke University campus</span></div><div class='hero_subdesc'><span class='bgpad'>Duke MTMC contains over 2 million video frames and 2,700 unique identities collected from 8 HD cameras at Duke University campus in March 2014 +</span></div></div></section><section><h2>Duke MTMC</h2> +</section><section><div class='right-sidebar'><div class='meta'> <div class='gray'>Published</div> <div>2016</div> </div><div class='meta'> @@ -35,45 +36,210 @@ <div>2,000,000 </div> </div><div class='meta'> <div 
class='gray'>Identities</div> - <div>1,812 </div> + <div>2,700 </div> </div><div class='meta'> <div class='gray'>Purpose</div> - <div>Person re-identification and multi-camera tracking</div> + <div>Person re-identification, multi-camera tracking</div> </div><div class='meta'> <div class='gray'>Created by</div> <div>Computer Science Department, Duke University, Durham, US</div> </div><div class='meta'> <div class='gray'>Website</div> <div><a href='http://vision.cs.duke.edu/DukeMTMC/' target='_blank' rel='nofollow noopener'>duke.edu</a></div> - </div><div class='meta'><div><div class='gray'>Created</div><div>2014</div></div><div><div class='gray'>Identities</div><div>Over 2,700</div></div><div><div class='gray'>Used for</div><div>Face recognition, person re-identification</div></div><div><div class='gray'>Created by</div><div>Computer Science Department, Duke University, Durham, US</div></div><div><div class='gray'>Website</div><div><a href="http://vision.cs.duke.edu/DukeMTMC/">duke.edu</a></div></div></div></div><h2>Duke Multi-Target, Multi-Camera Tracking Dataset (Duke MTMC)</h2> -<p>[ PAGE UNDER DEVELOPMENT ]</p> -<p>Duke MTMC is a dataset of video recorded on Duke University campus during for the purpose of training, evaluating, and improving <em>multi-target multi-camera tracking</em>. The videos were recorded during February and March 2014 and cinclude</p> -<p>Includes a total of 888.8 minutes of video (ind. verified)</p> -<p>"We make available a new data set that has more than 2 million frames and more than 2,700 identities. 
It consists of 8×85 minutes of 1080p video recorded at 60 frames per second from 8 static cameras deployed on the Duke University campus during periods between lectures, when pedestrian traffic is heavy."</p> -<p>The dataset includes approximately 2,000 annotated identities appearing in 85 hours of video from 8 cameras located throughout Duke University's campus.</p> -</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_cam5_average_comp.jpg' alt=' Duke MTMC pixel-averaged image of camera #5 is shown with the bounding boxes for each student drawn in white. (c) Adam Harvey'><div class='caption'> Duke MTMC pixel-averaged image of camera #5 is shown with the bounding boxes for each student drawn in white. (c) Adam Harvey</div></div></section><section><p>According to the dataset authors,</p> -</section><section> + </div></div><p>Duke MTMC (Multi-Target, Multi-Camera) is a dataset of surveillance video footage taken on Duke University's campus in 2014 and is used for research and development of video tracking systems, person re-identification, and low-resolution facial recognition. The dataset contains over 14 hours of synchronized surveillance video from 8 cameras at 1080p and 60 FPS with over 2 million frames of 2,000 students walking to and from classes. The 8 surveillance cameras deployed on campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy"<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 1">1</a>.</p> +<p>In this investigation into the Duke MTMC dataset we tracked down over 100 publicly available research papers that explicitly acknowledged using Duke MTMC. Our analysis shows that the dataset has spread far beyond its origins and intentions in academic research projects at Duke University.
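The frame total follows directly from the capture parameters quoted above (8 cameras, 85 minutes each, 60 frames per second, per the original paper); a quick arithmetic check of the headline figure:

```python
# Sanity-check the dataset's "more than 2 million frames" claim using the
# paper's nominal figures quoted above (8 cameras x 85 minutes x 60 fps).
cameras = 8
minutes_per_camera = 85
fps = 60

total_frames = cameras * minutes_per_camera * 60 * fps  # minutes -> seconds -> frames
print(f"{total_frames:,}")  # 2,448,000 -> "more than 2 million frames"
```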
Since its publication in 2016, more than twice as many research citations originated in China as in the United States. Among these citations were papers with explicit and direct links to the Chinese military and several of the companies known to provide Chinese authorities with the oppressive surveillance technology used to monitor millions of Uighur Muslims.</p>
+<p>In one 2018 <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Attention-Aware_Compositional_Network_CVPR_2018_paper.pdf">paper</a> jointly published by researchers from SenseNets and SenseTime (and funded by SenseTime Group Limited) entitled <a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">Attention-Aware Compositional Network for Person Re-identification</a>, the Duke MTMC dataset was used for "extensive experiments" on improving person re-identification across multiple surveillance cameras with important applications in "finding missing elderly and children, and suspect tracking, etc." Both SenseNets and SenseTime have been directly linked to providing the surveillance technology used to monitor Uighur Muslims in China. <a class="footnote_shim" name="[^sensetime_qz]_1"> </a><a href="#[^sensetime_qz]" class="footnote" title="Footnote 2">2</a><a class="footnote_shim" name="[^sensenets_uyghurs]_1"> </a><a href="#[^sensenets_uyghurs]" class="footnote" title="Footnote 3">3</a><a class="footnote_shim" name="[^xinjiang_nyt]_1"> </a><a href="#[^xinjiang_nyt]" class="footnote" title="Footnote 4">4</a></p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_reid_montage.jpg' alt=' A collection of 1,600 out of the approximately 2,000 students and pedestrians in the Duke MTMC dataset.
These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification, and eventually the QMUL SurvFace face recognition dataset. Open Data Commons Attribution License.'><div class='caption'> A collection of 1,600 out of the approximately 2,000 students and pedestrians in the Duke MTMC dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification, and eventually the QMUL SurvFace face recognition dataset. Open Data Commons Attribution License.</div></div></section><section><p>Despite <a href="https://www.hrw.org/news/2017/11/19/china-police-big-data-systems-violate-privacy-target-dissent">repeated</a> <a href="https://www.hrw.org/news/2018/02/26/china-big-data-fuels-crackdown-minority-region">warnings</a> by Human Rights Watch that the authoritarian surveillance used in China represents a violation of human rights, researchers at Duke University continued to provide open access to their dataset for anyone to use for any project. As the surveillance crisis in China grew, so did the number of citations with links to organizations complicit in the crisis. In 2018 alone there were over 70 research projects in China that publicly acknowledged benefiting from the Duke MTMC dataset.
Amongst these were projects from SenseNets, SenseTime, CloudWalk, Megvii, Beihang University, and the PLA's National University of Defense Technology.</p> +<table> +<thead><tr> +<th>Organization</th> +<th>Paper</th> +<th>Link</th> +<th>Year</th> +<th>Used Duke MTMC</th> +</tr> +</thead> +<tbody> +<tr> +<td>Beihang University</td> +<td>Orientation-Guided Similarity Learning for Person Re-identification</td> +<td><a href="https://ieeexplore.ieee.org/document/8545620">ieee.org</a></td> +<td>2018</td> +<td>✔</td> +</tr> +<tr> +<td>Beihang University</td> +<td>Online Inter-Camera Trajectory Association Exploiting Person Re-Identification and Camera Topology</td> +<td><a href="https://dl.acm.org/citation.cfm?id=3240663">acm.org</a></td> +<td>2018</td> +<td>✔</td> +</tr> +<tr> +<td>CloudWalk</td> +<td>CloudWalk re-identification technology extends facial biometric tracking with improved accuracy</td> +<td><a href="https://www.biometricupdate.com/201903/cloudwalk-re-identification-technology-extends-facial-biometric-tracking-with-improved-accuracy">BiometricUpdate.com</a></td> +<td>2019</td> +<td>✔</td> +</tr> +<tr> +<td>CloudWalk</td> +<td>Horizontal Pyramid Matching for Person Re-identification</td> +<td><a href="https://arxiv.org/pdf/1804.05275.pdf">arxiv.org</a></td> +<td>2018</td> +<td>✔</td> +</tr> +<tr> +<td>Megvii</td> +<td>Person Re-Identification (slides)</td> +<td><a href="https://zsc.github.io/megvii-pku-dl-course/slides/Lecture%2011,%20Human%20Understanding_%20ReID%20and%20Pose%20and%20Attributes%20and%20Activity%20.pdf">github.io</a></td> +<td>2017</td> +<td>✔</td> +</tr> +<tr> +<td>Megvii</td> +<td>Multi-Target, Multi-Camera Tracking by Hierarchical Clustering: Recent Progress on DukeMTMC Project</td> +<td><a href="https://www.semanticscholar.org/paper/Multi-Target%2C-Multi-Camera-Tracking-by-Hierarchical-Zhang-Wu/10c20cf47d61063032dce4af73a4b8e350bf1128">SemanticScholar</a></td> +<td>2018</td> +<td>✔</td> +</tr> +<tr> +<td>Megvii</td> +<td>SCPNet: 
Spatial-Channel Parallelism Network for Joint Holistic and Partial Person Re-Identification</td>
+<td><a href="https://arxiv.org/abs/1810.06996">arxiv.org</a></td>
+<td>2018</td>
+<td>✔</td>
+</tr>
+<tr>
+<td>National University of Defense Technology</td>
+<td>Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers</td>
+<td><a href="https://www.semanticscholar.org/paper/Tracking-by-Animation%3A-Unsupervised-Learning-of-He-Liu/e90816e1a0e14ea1e7039e0b2782260999aef786">SemanticScholar.org</a></td>
+<td>2018</td>
+<td>✔</td>
+</tr>
+<tr>
+<td>National University of Defense Technology</td>
+<td>Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks</td>
+<td><a href="https://www.semanticscholar.org/paper/Unsupervised-Multi-Object-Detection-for-Video-Using-He-He/59f357015054bab43fb8cbfd3f3dbf17b1d1f881">SemanticScholar.org</a></td>
+<td>2018</td>
+<td>✔</td>
+</tr>
+<tr>
+<td>SenseNets, SenseTime</td>
+<td>Attention-Aware Compositional Network for Person Re-identification</td>
+<td><a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">SemanticScholar</a></td>
+<td>2018</td>
+<td>✔</td>
+</tr>
+<tr>
+<td>SenseTime</td>
+<td>End-to-End Deep Kronecker-Product Matching for Person Re-identification</td>
+<td><a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_End-to-End_Deep_Kronecker-Product_CVPR_2018_paper.pdf">thecvf.com</a></td>
+<td>2018</td>
+<td>✔</td>
+</tr>
+</tbody>
+</table>
+<p>The reasons that companies in China use the Duke MTMC dataset for research are technically no different than the reasons it is used in the United States and Europe.
In fact, the original creators of the dataset published a follow-up report in 2017 titled <a href="https://www.semanticscholar.org/paper/Tracking-Social-Groups-Within-and-Across-Cameras-Solera-Calderara/9e644b1e33dd9367be167eb9d832174004840400">Tracking Social Groups Within and Across Cameras</a> with specific applications to "automated analysis of crowds and social gatherings for surveillance and security applications". Their work, as well as the creation of the original dataset in 2014, was supported in part by the United States Army Research Laboratory.</p>
+<p>Citations from the United States and Europe show a similar trend to that in China, including publicly acknowledged and verified usage of the Duke MTMC dataset supported or carried out by the United States Department of Homeland Security, IARPA, IBM, Microsoft (who provides surveillance to ICE), and Vision Semantics (who works with the UK Ministry of Defence). One <a href="https://pdfs.semanticscholar.org/59f3/57015054bab43fb8cbfd3f3dbf17b1d1f881.pdf">paper</a> is even jointly published by researchers affiliated with both University College London and the National University of Defense Technology in China.</p>
+<table>
+<thead><tr>
+<th>Organization</th>
+<th>Paper</th>
+<th>Link</th>
+<th>Year</th>
+<th>Used Duke MTMC</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>IARPA, IBM</td>
+<td>Horizontal Pyramid Matching for Person Re-identification</td>
+<td><a href="https://arxiv.org/abs/1804.05275">arxiv.org</a></td>
+<td>2018</td>
+<td>✔</td>
+</tr>
+<tr>
+<td>Microsoft</td>
+<td>ReXCam: Resource-Efficient, Cross-Camera Video Analytics at Enterprise Scale</td>
+<td><a href="https://arxiv.org/abs/1811.01268">arxiv.org</a></td>
+<td>2018</td>
+<td>✔</td>
+</tr>
+<tr>
+<td>Microsoft</td>
+<td>Scaling Video Analytics Systems to Large Camera Deployments</td>
+<td><a href="https://arxiv.org/pdf/1809.02318.pdf">arxiv.org</a></td>
+<td>2018</td>
+<td>✔</td>
+</tr>
+<tr>
+<td>University College London</td>
+<td>Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks</td>
+<td><a href="https://pdfs.semanticscholar.org/59f3/57015054bab43fb8cbfd3f3dbf17b1d1f881.pdf">SemanticScholar.org</a></td>
+<td>2018</td>
+<td>✔</td>
+</tr>
+<tr>
+<td>US Dept. of Homeland Security</td>
+<td>Re-Identification with Consistent Attentive Siamese Networks</td>
+<td><a href="https://arxiv.org/abs/1811.07487/">arxiv.org</a></td>
+<td>2019</td>
+<td>✔</td>
+</tr>
+<tr>
+<td>Vision Semantics Ltd.</td>
+<td>Unsupervised Person Re-identification by Deep Learning Tracklet Association</td>
+<td><a href="https://arxiv.org/abs/1809.02874">arxiv.org</a></td>
+<td>2018</td>
+<td>✔</td>
+</tr>
+</tbody>
+</table>
+<p>By some metrics the dataset is considered a huge success. It is regarded as highly influential research and has contributed to hundreds, if not thousands, of projects to advance artificial intelligence for person tracking and monitoring. All the above citations, regardless of which country is using it, align perfectly with the original <a href="http://vision.cs.duke.edu/DukeMTMC/">intent</a> of the Duke MTMC dataset: "to accelerate advances in multi-target multi-camera tracking".</p>
+<p>The same logic applies to all the new extensions of the Duke MTMC dataset including <a href="https://github.com/layumi/DukeMTMC-reID_evaluation">Duke MTMC Re-ID</a>, <a href="https://github.com/Yu-Wu/DukeMTMC-VideoReID">Duke MTMC Video Re-ID</a>, Duke MTMC Groups, and <a href="https://github.com/vana77/DukeMTMC-attribute">Duke MTMC Attribute</a>. And it also applies to all the new specialized datasets created from Duke MTMC, such as the low-resolution face recognition dataset called <a href="https://qmul-survface.github.io/">QMUL-SurvFace</a>, which was funded in part by <a href="https://seequestor.com">SeeQuestor</a>, a computer vision provider to law enforcement agencies including Scotland Yard and Queensland Police. From the perspective of academic researchers, security contractors, and defense agencies using these datasets to advance their organization's work, Duke MTMC provides significant value regardless of who else is using it, so long as it advances their own interests in artificial intelligence.</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliencies.jpg' alt=' Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus © megapixels.cc'><div class='caption'> Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus © megapixels.cc</div></div></section><section><p>But this perspective comes at significant cost to civil rights, human rights, and privacy. The creation and distribution of the Duke MTMC dataset illustrates an egregious prioritization of surveillance technologies over individual rights, where the simple act of going to class could implicate your biometric data in a surveillance training dataset, perhaps even used by foreign defense agencies against your own ethics, against your own political interests, or against universal human rights.</p>
+<p>For the approximately 2,000 students in the Duke MTMC dataset, there is unfortunately no escape. It would be impossible to remove oneself from all copies of the dataset downloaded around the world. Instead, over 2,000 students and visitors who happened to be walking to class in March 2014 will forever remain in all downloaded copies of the Duke MTMC dataset and all its extensions, contributing to a global supply chain of data that powers governmental and commercial expansion of biometric surveillance technologies.</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_cameras.jpg' alt=' Duke MTMC camera views for 8 cameras deployed on campus © megapixels.cc'><div class='caption'> Duke MTMC camera views for 8 cameras deployed on campus © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_camera_map.jpg' alt=' Duke MTMC camera locations on Duke University campus. Open Data Commons Attribution License.'><div class='caption'> Duke MTMC camera locations on Duke University campus. Open Data Commons Attribution License.</div></div></section><section>
+ <h3>Who used Duke MTMC Dataset?</h3>
+
+ <p>
+ This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
+ </p>
+
+ </section>
+
+<section class="applet_container">
+ <div class="applet" data-payload="{"command": "chart"}"></div>
+</section>
+
+<section class="applet_container">
+ <div class="applet" data-payload="{"command": "piechart"}"></div>
+</section>
+
+<section>
 <h3>Biometric Trade Routes</h3>
-<!--
- <div class="map-sidebar right-sidebar">
- <h3>Legend</h3>
- <ul>
- <li><span style="color: #f2f293">■</span> Industry</li>
- <li><span style="color: #f30000">■</span> Academic</li>
- <li><span style="color: #3264f6">■</span> Government</li>
- </ul>
- </div>
- -->
+ <p>
- To help understand how Duke MTMC Dataset has been used around the world for commercial, military and academic research; publicly available research citing Duke Multi-Target, Multi-Camera Tracking Project is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location.
+ To help understand how the Duke MTMC Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the Duke Multi-Target, Multi-Camera Tracking Project was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
</p> </section> <section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> - </section> <div class="caption"> @@ -81,30 +247,19 @@ <li class="edu">Academic</li> <li class="com">Commercial</li> <li class="gov">Military / Government</li> - <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> </ul> + <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>, then dataset usage is verified and geolocated.</div> </div> -<!-- <section> - <p class='subp'> - [section under development] Duke MTMC Dataset ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. - </p> -</section> - --><section> - <h3>Who used Duke MTMC Dataset?</h3> +<section class="applet_container"> + + <h3>Dataset Citations</h3> <p> - This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
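The geocoding step described above can be sketched in a few lines; the gazetteer names and coordinates below are illustrative placeholders, not the site's actual data or tooling:

```python
# Hypothetical sketch of the citation-geocoding step: institution names found
# in a paper's front matter are matched against a hand-maintained gazetteer.
# The entries and coordinates here are illustrative placeholders only.
GAZETTEER = {
    "Duke University": (36.0014, -78.9382),
    "Beihang University": (39.9807, 116.3434),
}

def geocode_citation(front_matter: str):
    """Return (institution, (lat, lon)) for each gazetteer name in the text."""
    return [(name, coords) for name, coords in GAZETTEER.items()
            if name in front_matter]

print(geocode_citation("Orientation-Guided Similarity Learning, Beihang University, Beijing"))
```

Manual verification (reading the paper to confirm the dataset was actually used) still happens outside any such script; only the name-to-location lookup is mechanical.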
</p>
-
- </section>
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{"command": "chart"}"></div>
-</section><section class="applet_container">
- <div class="applet" data-payload="{"command": "piechart"}"></div>
+ <div class="applet" data-payload="{"command": "citations"}"></div>
</section><section>
 <div class="hr-wave-holder">
@@ -112,93 +267,155 @@
 <div class="hr-wave-line hr-wave-line2"></div>
 </div>
- <h3>Supplementary Information</h3>
+ <h2>Supplementary Information</h2>
-</section><section class="applet_container">
+</section><section><h4>Video Timestamps</h4>
+<p>The video timestamps contain the likely, but not yet confirmed, dates and times the video was recorded. Because the video timestamps align with the start and stop <a href="http://vision.cs.duke.edu/DukeMTMC/details.html#time-sync">time sync data</a> provided by the researchers, they at least confirm the relative timing. The <a href="https://www.wunderground.com/history/daily/KIGX/date/2014-3-19?req_city=Durham&req_state=NC&req_statename=North%20Carolina&reqdb.zip=27708&reqdb.magic=1&reqdb.wmo=99999">rainy weather</a> on March 14, 2014 in Durham, North Carolina supports, but does not confirm, that this was the day of capture.</p>
+</section><section><div class='columns columns-2'><div class='column'><table>
+<thead><tr>
+<th>Camera</th>
+<th>Date</th>
+<th>Start</th>
+<th>End</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>Camera 1</td>
+<td>March 14, 2014</td>
+<td>4:14PM</td>
+<td>5:43PM</td>
+</tr>
+<tr>
+<td>Camera 2</td>
+<td>March 14, 2014</td>
+<td>4:13PM</td>
+<td>4:43PM</td>
+</tr>
+<tr>
+<td>Camera 3</td>
+<td>March 14, 2014</td>
+<td>4:20PM</td>
+<td>5:48PM</td>
+</tr>
+<tr>
+<td>Camera 4</td>
+<td>March 14, 2014</td>
+<td>4:21PM</td>
+<td>5:54PM</td>
+</tr>
+</tbody>
+</table>
+</div><div class='column'><table>
+<thead><tr>
+<th>Camera</th>
+<th>Date</th>
+<th>Start</th>
+<th>End</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>Camera 5</td>
+<td>March 14, 2014</td>
+<td>4:12PM</td>
+<td>5:43PM</td>
+</tr>
+<tr>
+<td>Camera 6</td>
+<td>March 14, 2014</td>
+<td>4:18PM</td>
+<td>5:43PM</td>
+</tr>
+<tr>
+<td>Camera 7</td>
+<td>March 14, 2014</td>
+<td>4:16PM</td>
+<td>5:40PM</td>
+</tr>
+<tr>
+<td>Camera 8</td>
+<td>March 14, 2014</td>
+<td>4:25PM</td>
+<td>5:42PM</td>
+</tr>
+</tbody>
+</table>
+</div></div></section><section><h4>Notes</h4>
+<p>The original Duke MTMC dataset paper mentions 2,700 identities, but their ground truth file only lists annotations for 1,812, and their own research typically mentions 2,000. For this write-up we used 2,000 to describe the approximate number of students.</p>
+<h4>Citing Duke MTMC</h4>
+<p>If you use any data from the Duke MTMC, please follow their <a href="http://vision.cs.duke.edu/DukeMTMC/#how-to-cite">license</a> and cite their work as:</p>
+<pre>
+@inproceedings{ristani2016MTMC,
+  title = {Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking},
+  author = {Ristani, Ergys and Solera, Francesco and Zou, Roger and Cucchiara, Rita and Tomasi, Carlo},
+  booktitle = {European Conference on Computer Vision workshop on Benchmarking Multi-Target Tracking},
+  year = {2016}
+}
+</pre>
+<h4>Ethics</h4>
+<p>Please direct any questions about the ethics of the dataset to Duke University's <a href="https://hr.duke.edu/policies/expectations/compliance/">Institutional Ethics & Compliance Office</a> using the number at the bottom of the page.</p>
+</section><section>
- <h3>Dataset Citations</h3>
+ <h4>Cite Our Work</h4>
 <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
- </p>
+
+ If you use our data, research, or graphics, please cite our work:
-<pre id="cite-bibtex">
+<pre id="cite-bibtex">
+@online{megapixels,
+ author = {Harvey, Adam.
LaPlace, Jules.},
  title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets},
  year = 2019,
  url = {https://megapixels.cc/},
  urldate = {2019-04-18}
}</pre>

  </p>
</section><section><p>If you use any data from the Duke MTMC, please follow their <a href="http://vision.cs.duke.edu/DukeMTMC/#how-to-cite">license</a> and cite their work as:</p>
<pre>
@inproceedings{ristani2016MTMC,
  title = {Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking},
  author = {Ristani, Ergys and Solera, Francesco and Zou, Roger and Cucchiara, Rita and Tomasi, Carlo},
  booktitle = {European Conference on Computer Vision workshop on Benchmarking Multi-Target Tracking},
  year = {2016}
}
</pre></section><section><h3>References</h3><section><ul class="footnotes"><li>1 <a name="[^duke_mtmc_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^duke_mtmc_orig]_1">a</a></span>Ristani, Ergys, Solera, Francesco, Zou, Roger, Cucchiara, Rita, and Tomasi, Carlo. "Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking". 2016. <a href="https://www.semanticscholar.org/paper/Performance-Measures-and-a-Data-Set-for-Tracking-Ristani-Solera/27a2fad58dd8727e280f97036e0d2bc55ef5424c">SemanticScholar</a>
</li><li>2 <a name="[^sensetime_qz]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime_qz]_1">a</a></span><a href="https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/">https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/</a>
</li><li>3 <a name="[^sensenets_uyghurs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_uyghurs]_1">a</a></span><a href="https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/">https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/</a>
</li><li>4 <a name="[^xinjiang_nyt]" class="footnote_shim"></a><span class="backlinks"><a href="#[^xinjiang_nyt]_1">a</a></span>Mozur, Paul. "One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority". <a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html">https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html</a>. April 14, 2019.
</li></ul></section></section>
    </div>
    <footer>
      <div>
        <a href="/">MegaPixels.cc</a>
        <a href="/datasets/">Datasets</a>
        <a href="/about/">About</a>
        <a href="/about/press/">Press</a>
        <a href="/about/legal/">Legal and Privacy</a>
      </div>
      <div>
        MegaPixels ©2017-19 Adam R. Harvey / |
