Diffstat (limited to 'site/public/datasets/duke_mtmc')
| -rw-r--r-- | site/public/datasets/duke_mtmc/index.html | 32 |
1 file changed, 18 insertions, 14 deletions
diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html
index 06a9ed1b..8ff4ef43 100644
--- a/site/public/datasets/duke_mtmc/index.html
+++ b/site/public/datasets/duke_mtmc/index.html
@@ -47,9 +47,11 @@
 <div><a href='http://vision.cs.duke.edu/DukeMTMC/' target='_blank' rel='nofollow noopener'>duke.edu</a></div> </div></div><h2>Duke MTMC</h2> <p>[ page under development ]</p>
-<p>The Duke Multi-Target, Multi-Camera Tracking Dataset (MTMC) is a dataset of video recorded on Duke University campus during for the purpose of training, evaluating, and improving <em>multi-target multi-camera tracking</em> for surveillance. The dataset includes over 14 hours of 1080p video from 8 cameras positioned around Duke's campus during February and March 2014. Over 2,700 unique people are included in the dataset, which has become of the most widely used person re-identification image datasets.</p>
-<p>The 8 cameras deployed on Duke's campus were specifically setup to capture students "during periods between lectures, when pedestrian traffic is heavy".</p>
-</section><section>
+<p>The Duke Multi-Target, Multi-Camera Tracking Dataset (MTMC) is a dataset of video recorded on Duke University campus for research and development of networked camera surveillance systems. MTMC tracking is used for citywide dragnet surveillance systems such as those used throughout China by SenseTime<a class="footnote_shim" name="[^sensetime_qz]_1"> </a><a href="#[^sensetime_qz]" class="footnote" title="Footnote 1">1</a> and the oppressive monitoring of 2.5 million Uyghurs in Xinjiang by SenseNets<a class="footnote_shim" name="[^sensenets_uyghurs]_1"> </a><a href="#[^sensenets_uyghurs]" class="footnote" title="Footnote 2">2</a>. In fact, researchers from both SenseTime<a class="footnote_shim" name="[^sensetime1]_1"> </a><a href="#[^sensetime1]" class="footnote" title="Footnote 4">4</a> <a class="footnote_shim" name="[^sensetime2]_1"> </a><a href="#[^sensetime2]" class="footnote" title="Footnote 5">5</a> and SenseNets<a class="footnote_shim" name="[^sensenets_sensetime]_1"> </a><a href="#[^sensenets_sensetime]" class="footnote" title="Footnote 3">3</a> used the Duke MTMC dataset for their research.</p>
+<p>The Duke MTMC dataset is unique because it is the largest publicly available MTMC and person re-identification dataset and has the longest duration of annotated video. In total, the Duke MTMC dataset provides over 14 hours of 1080p video from 8 synchronized surveillance cameras.<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a> It is among the most widely used person re-identification datasets in the world. The approximately 2,700 unique people in the Duke MTMC videos, most of whom are students, are used for research and development of surveillance technologies by commercial, academic, and even defense organizations.</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_reid_montage.jpg' alt=' A collection of 1,600 out of the 2,700 students captured into the Duke MTMC surveillance research dataset. These students were also included in the Duke MTMC Re-ID dataset extension. © megapixels.cc'><div class='caption'> A collection of 1,600 out of the 2,700 students captured into the Duke MTMC surveillance research dataset. These students were also included in the Duke MTMC Re-ID dataset extension. © megapixels.cc</div></div></section><section><p>The creation and publication of the Duke MTMC dataset in 2016 was originally funded by the U.S. Army Research Laboratory and the National Science Foundation<a class="footnote_shim" name="[^duke_mtmc_orig]_2"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a>. Since 2016, use of the Duke MTMC dataset images has been publicly acknowledged in research funded by or on behalf of the Chinese National University of Defense Technology<a class="footnote_shim" name="[^cn_defense1]_1"> </a><a href="#[^cn_defense1]" class="footnote" title="Footnote 7">7</a><a class="footnote_shim" name="[^cn_defense2]_1"> </a><a href="#[^cn_defense2]" class="footnote" title="Footnote 8">8</a>, IARPA and IBM<a class="footnote_shim" name="[^iarpa_ibm]_1"> </a><a href="#[^iarpa_ibm]" class="footnote" title="Footnote 9">9</a>, and U.S. Department of Homeland Security<a class="footnote_shim" name="[^us_dhs]_1"> </a><a href="#[^us_dhs]" class="footnote" title="Footnote 10">10</a>.</p>
+<p>The 8 cameras deployed on Duke's campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy".<a class="footnote_shim" name="[^duke_mtmc_orig]_3"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a> Cameras 7 and 2 capture large groups of prospective students and children. Camera 5 was positioned to capture students as they enter and exit Duke University's main chapel. Each camera's location is documented below.</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_camera_map.jpg' alt=' Duke MTMC camera locations on Duke University campus © megapixels.cc'><div class='caption'> Duke MTMC camera locations on Duke University campus © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_cameras.jpg' alt=' Duke MTMC camera views for 8 cameras deployed on campus © megapixels.cc'><div class='caption'> Duke MTMC camera views for 8 cameras deployed on campus © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliencies.jpg' alt=' Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus © megapixels.cc'><div class='caption'> Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus © megapixels.cc</div></div></section><section>
 <h3>Who used Duke MTMC Dataset?</h3> <p>
@@ -109,17 +111,19 @@
 <h2>Supplementary Information</h2>
-</section><section><h4>Data Visualizations</h4>
-</section><section><div class='columns columns-2'><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam1.jpg' alt=' Camera 1 © megapixels.cc'><div class='caption'> Camera 1 © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam2.jpg' alt=' Camera 2 © megapixels.cc'><div class='caption'> Camera 2 © megapixels.cc</div></div></section></div></section><section><div class='columns columns-2'><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam3.jpg' alt=' Camera 3 © megapixels.cc'><div class='caption'> Camera 3 © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam4.jpg' alt=' Camera 4 © megapixels.cc'><div class='caption'> Camera 4 © megapixels.cc</div></div></section></div></section><section><div class='columns columns-2'><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam5.jpg' alt=' Camera 5 © megapixels.cc'><div class='caption'> Camera 5 © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam6.jpg' alt=' Camera 6 © megapixels.cc'><div class='caption'> Camera 6 © megapixels.cc</div></div></section></div></section><section><div class='columns columns-2'><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam7.jpg' alt=' Camera 7 © megapixels.cc'><div class='caption'> Camera 7 © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam8.jpg' alt=' Camera 8 © megapixels.cc'><div class='caption'> Camera 8 © megapixels.cc</div></div></section></div></section><section><h3>Alternate Layout</h3>
-</section><section><div class='columns columns-4'><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam1.jpg' alt=' Camera 1 © megapixels.cc'><div class='caption'> Camera 1 © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam2.jpg' alt=' Camera 2 © megapixels.cc'><div class='caption'> Camera 2 © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam3.jpg' alt=' Camera 3 © megapixels.cc'><div class='caption'> Camera 3 © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam4.jpg' alt=' Camera 4 © megapixels.cc'><div class='caption'> Camera 4 © megapixels.cc</div></div></section></div></section><section><div class='columns columns-4'><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam5.jpg' alt=' Camera 5 © megapixels.cc'><div class='caption'> Camera 5 © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam6.jpg' alt=' Camera 6 © megapixels.cc'><div class='caption'> Camera 6 © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam7.jpg' alt=' Camera 7 © megapixels.cc'><div class='caption'> Camera 7 © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliency_cam8.jpg' alt=' Camera 8 © megapixels.cc'><div class='caption'> Camera 8 © megapixels.cc</div></div></section></div></section><section><h3>TODO</h3>
-<ul>
-<li>expand story</li>
-<li>add google street view images of each camera location?</li>
-<li>add actual head detections to header image with faces blurred</li>
-<li>add 4 diverse example images with faces blurred</li>
-<li>add links to google map locations of each camera</li>
-</ul>
-</section>
+</section><section><h3>Notes</h3>
+<p>The Duke MTMC dataset paper mentions 2,700 identities, but its ground truth file only lists annotations for 1,812.</p>
+</section><section><ul class="footnotes"><li><a name="[^sensetime_qz]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime_qz]_1">a</a></span><p><a href="https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/">https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/</a></p>
+</li><li><a name="[^sensenets_uyghurs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_uyghurs]_1">a</a></span><p><a href="https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/">https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/</a></p>
+</li><li><a name="[^sensenets_sensetime]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_sensetime]_1">a</a></span><p>"Attention-Aware Compositional Network for Person Re-identification". 2018. <a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">Source</a></p>
+</li><li><a name="[^sensetime1]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime1]_1">a</a></span><p>"End-to-End Deep Kronecker-Product Matching for Person Re-identification". 2018. <a href="https://www.semanticscholar.org/paper/End-to-End-Deep-Kronecker-Product-Matching-for-Shen-Xiao/947954cafdefd471b75da8c3bb4c21b9e6d57838">Source</a></p>
+</li><li><a name="[^sensetime2]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime2]_1">a</a></span><p>"Person Re-identification with Deep Similarity-Guided Graph Neural Network". 2018. <a href="https://www.semanticscholar.org/paper/Person-Re-identification-with-Deep-Graph-Neural-Shen-Li/08d2a558ea2deb117dd8066e864612bf2899905b">Source</a></p>
+</li><li><a name="[^duke_mtmc_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^duke_mtmc_orig]_1">a</a><a href="#[^duke_mtmc_orig]_2">b</a><a href="#[^duke_mtmc_orig]_3">c</a></span><p>"Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking". 2016.</p>
+</li><li><a name="[^cn_defense1]" class="footnote_shim"></a><span class="backlinks"><a href="#[^cn_defense1]_1">a</a></span><p>"Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers". 2018. <a href="https://www.semanticscholar.org/paper/Tracking-by-Animation%3A-Unsupervised-Learning-of-He-Liu/e90816e1a0e14ea1e7039e0b2782260999aef786">Source</a></p>
+</li><li><a name="[^cn_defense2]" class="footnote_shim"></a><span class="backlinks"><a href="#[^cn_defense2]_1">a</a></span><p>"Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks". 2018. <a href="https://www.semanticscholar.org/paper/Unsupervised-Multi-Object-Detection-for-Video-Using-He-He/59f357015054bab43fb8cbfd3f3dbf17b1d1f881">Source</a></p>
+</li><li><a name="[^iarpa_ibm]" class="footnote_shim"></a><span class="backlinks"><a href="#[^iarpa_ibm]_1">a</a></span><p>"Horizontal Pyramid Matching for Person Re-identification". 2019. <a href="https://www.semanticscholar.org/paper/Horizontal-Pyramid-Matching-for-Person-Fu-Wei/c2a5f27d97744bc1f96d7e1074395749e3c59bc8">Source</a></p>
+</li><li><a name="[^us_dhs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^us_dhs]_1">a</a></span><p>"Re-Identification with Consistent Attentive Siamese Networks". 2018. <a href="https://www.semanticscholar.org/paper/Re-Identification-with-Consistent-Attentive-Siamese-Zheng-Karanam/24d6d3adf2176516ef0de2e943ce2084e27c4f94">Source</a></p>
+</li></ul></section>
 </div> <footer>
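A side note on the identity-count discrepancy flagged in the Notes section of the new page text (2,700 identities in the paper vs. 1,812 in the ground truth): a count like that can be reproduced with a short script. The sketch below is only illustrative and rests on assumptions not stated on this page, namely that the ground truth ships as a MATLAB file named trainval.mat containing a trainData matrix with one annotation per row and the person ID in the second column; adjust the file name and column index to match the actual release.

    # Hypothetical check of the annotated-identity count mentioned in the Notes section.
    # Assumptions (not confirmed by this page): ground truth file "trainval.mat",
    # variable "trainData", rows like [camera, ID, frame, left, top, width, height, ...].
    import numpy as np
    from scipy.io import loadmat

    gt = loadmat("trainval.mat")               # path to the ground truth file (assumed name)
    train_data = np.asarray(gt["trainData"])   # one row per annotation (assumed layout)
    unique_ids = np.unique(train_data[:, 1].astype(int))  # person ID assumed in column 2
    print("annotated identities:", len(unique_ids))       # page reports 1,812 vs. 2,700 in the paper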
