diff --git a/site/content/pages/datasets/duke_mtmc/index.md b/site/content/pages/datasets/duke_mtmc/index.md
index 8308eee7..28c586f9 100644
--- a/site/content/pages/datasets/duke_mtmc/index.md
+++ b/site/content/pages/datasets/duke_mtmc/index.md
@@ -18,17 +18,19 @@ authors: Adam Harvey
## Duke MTMC
-The Duke Multi-Target, Multi-Camera Tracking Dataset (MTMC) is a dataset of video recorded on Duke University campus for research and development of networked camera surveillance systems. MTMC tracking is used for citywide dragnet surveillance systems such as those used throughout China by SenseTime[^sensetime_qz] and the oppressive monitoring of 2.5 million Uyghurs in Xinjiang by SenseNets[^sensenets_uyghurs]. In fact researchers from both SenseTime[^sensetime1] [^sensetime2] and SenseNets[^sensenets_sensetime] used the Duke MTMC dataset for their research.
+Duke MTMC (Multi-Target, Multi-Camera Tracking) is a dataset of video recorded on Duke University campus for research and development of networked camera surveillance systems. MTMC tracking algorithms power citywide dragnet surveillance systems such as those deployed throughout China by SenseTime[^sensetime_qz] and the oppressive monitoring of 2.5 million Uyghurs in Xinjiang by SenseNets[^sensenets_uyghurs]. In fact, researchers from both SenseTime[^sensetime1] [^sensetime2] and SenseNets[^sensenets_sensetime] used the Duke MTMC dataset for their research.
-The Duke MTMC dataset is unique because it is the largest publicly available MTMC and person re-identification dataset and has the longest duration of annotated video. In total, the Duke MTMC dataset provides over 14 hours of 1080p video from 8 synchronized surveillance cameras.[^duke_mtmc_orig] It is among the most widely used person re-identification datasets in the world. The approximately 2,700 unique people in the Duke MTMC videos, most of whom are students, are used for research and development of surveillance technologies by commercial, academic, and even defense organizations.
+In this investigation into the Duke MTMC dataset, we found that researchers at Duke University in Durham, North Carolina captured over 2,000 students, faculty members, and passersby into one of the most prolific public surveillance research datasets, one that is used around the world by commercial and defense surveillance organizations.
-![caption: A collection of 1,600 out of the 2,700 students and passersby captured into the Duke MTMC surveillance research dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. © megapixels.cc](assets/duke_mtmc_reid_montage.jpg)
+Since its publication in 2016, the Duke MTMC dataset has been used in over 100 studies at organizations around the world including SenseTime[^sensetime1] [^sensetime2], SenseNets[^sensenets_sensetime], IARPA and IBM[^iarpa_ibm], the Chinese National University of Defense[^cn_defense1] [^cn_defense2], the US Department of Homeland Security[^us_dhs], Tencent, Microsoft, Microsoft Asia, Fraunhofer, Senstar Corp., Alibaba, Naver Labs, Google, and Hewlett-Packard Labs, to name only a few.
-The creation and publication of the Duke MTMC dataset in 2016 was originally funded by the U.S. Army Research Laboratory and the National Science Foundation[^duke_mtmc_orig]. Since 2016 use of the Duke MTMC dataset images have been publicly acknowledged in research funded by or on behalf of the Chinese National University of Defense[^cn_defense1][^cn_defense2], IARPA and IBM[^iarpa_ibm], and U.S. Department of Homeland Security[^us_dhs].
+The Duke MTMC dataset, recorded in 2014 and published in 2016, was originally funded by the U.S. Army Research Laboratory and the National Science Foundation[^duke_mtmc_orig]. However, our analysis of the geographic locations of the publicly available research shows over twice as many citations by researchers from China as from the United States (44% China, 20% United States). In 2018 alone, there were 70 research project citations from China.
-The 8 cameras deployed on Duke's campus were specifically setup to capture students "during periods between lectures, when pedestrian traffic is heavy".[^duke_mtmc_orig] Camera 7 and 2 capture large groups of prospective students and children. Camera 5 was positioned to capture students as they enter and exit Duke University's main chapel. Each camera's location is documented below.
+![caption: A collection of 1,600 out of the 2,700 students and passersby captured into the Duke MTMC surveillance research and development dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. Open Data Commons Attribution License.](assets/duke_mtmc_reid_montage.jpg)
-![caption: Duke MTMC camera locations on Duke University campus © megapixels.cc](assets/duke_mtmc_camera_map.jpg)
+The 8 cameras deployed on Duke's campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy".[^duke_mtmc_orig] Camera 5 was positioned to capture students as they entered and exited the university's main chapel. Each camera's location and approximate field of view are documented below. The heat map visualization shows the locations where pedestrians were most frequently annotated in each video from the Duke MTMC dataset.
+
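The heat map described above can be sketched with a simple 2D histogram of annotation positions. This is a minimal illustration, not the authors' actual code; the box format `(left, top, width, height)` and the 1080p frame size are assumptions based on the dataset description.

```python
import numpy as np

def annotation_heatmap(boxes, frame_size=(1920, 1080), bins=(96, 54)):
    """Accumulate pedestrian bounding-box centers into a 2D heat map.

    boxes: iterable of (left, top, width, height) annotations (assumed layout).
    frame_size: (width, height) of the video frame; Duke MTMC video is 1080p.
    bins: histogram resolution as (x_bins, y_bins).
    """
    boxes = np.asarray(boxes, dtype=float)
    # Use the center of each box as the pedestrian's location.
    cx = boxes[:, 0] + boxes[:, 2] / 2.0
    cy = boxes[:, 1] + boxes[:, 3] / 2.0
    heat, _, _ = np.histogram2d(
        cx, cy,
        bins=bins,
        range=[[0, frame_size[0]], [0, frame_size[1]]],
    )
    # Normalize to [0, 1] so the map can be overlaid on a camera still.
    if heat.max() > 0:
        heat /= heat.max()
    return heat.T  # transpose so rows correspond to image rows
```

The normalized array can then be colorized and alpha-blended over a still frame from each camera.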
+![caption: Duke MTMC camera locations on Duke University campus. Open Data Commons Attribution License.](assets/duke_mtmc_camera_map.jpg)
![caption: Duke MTMC camera views for 8 cameras deployed on campus © megapixels.cc](assets/duke_mtmc_cameras.jpg)
@@ -39,23 +41,67 @@ The 8 cameras deployed on Duke's campus were specifically setup to capture stude
{% include 'supplementary_header.html' %}
+#### Funding
+
+Original funding for the Duke MTMC dataset was provided by the Army Research Office under Grant No. W911NF-10-1-0387 and by the National Science Foundation under Grants IIS-10-17017 and IIS-14-20894.
+
+#### Video Timestamps
+
+The video timestamps contain the likely, but not yet confirmed, dates and times of capture. Because the video timestamps align with the start and stop [time sync data](http://vision.cs.duke.edu/DukeMTMC/details.html#time-sync) provided by the researchers, they at least establish the relative timing. The [rainy weather](https://www.wunderground.com/history/daily/KIGX/date/2014-3-19?req_city=Durham&req_state=NC&req_statename=North%20Carolina&reqdb.zip=27708&reqdb.magic=1&reqdb.wmo=99999) on that day also contributes to the likelihood of March 14, 2014.
+
+=== columns 2
+
+| Camera | Date | Start | End |
+| --- | --- | --- | --- |
+| Camera 1 | March 14, 2014 | 4:14PM | 5:43PM |
+| Camera 2 | March 14, 2014 | 4:13PM | 4:43PM |
+| Camera 3 | March 14, 2014 | 4:20PM | 5:48PM |
+| Camera 4 | March 14, 2014 | 4:21PM | 5:54PM |
+
+===========
-### Notes
+| Camera | Date | Start | End |
+| --- | --- | --- | --- |
+| Camera 5 | March 14, 2014 | 4:12PM | 5:43PM |
+| Camera 6 | March 14, 2014 | 4:18PM | 5:43PM |
+| Camera 7 | March 14, 2014 | 4:16PM | 5:40PM |
+| Camera 8 | March 14, 2014 | 4:25PM | 5:42PM |
+
+=== end columns
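Given the start times in the tables above, a frame number can be mapped back to an approximate wall-clock time. This is a hypothetical sketch: the 60 fps frame rate is an assumption based on the dataset's video description, and the times remain unconfirmed estimates.

```python
from datetime import datetime, timedelta

# Start times from the tables above (unconfirmed estimates).
CAMERA_STARTS = {
    1: datetime(2014, 3, 14, 16, 14),
    2: datetime(2014, 3, 14, 16, 13),
    3: datetime(2014, 3, 14, 16, 20),
    4: datetime(2014, 3, 14, 16, 21),
    5: datetime(2014, 3, 14, 16, 12),
    6: datetime(2014, 3, 14, 16, 18),
    7: datetime(2014, 3, 14, 16, 16),
    8: datetime(2014, 3, 14, 16, 25),
}

def frame_to_wall_clock(camera, frame, fps=60.0):
    """Estimate the wall-clock time of a given frame.

    fps=60 is an assumption; adjust if the actual frame rate differs.
    """
    return CAMERA_STARTS[camera] + timedelta(seconds=frame / fps)
```

For example, frame 3600 from Camera 1 would fall roughly one minute after that camera's 4:14PM start.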
+
+
+### Opting Out
+
+If you attended Duke University and were captured by any of the 8 surveillance cameras positioned on campus in 2014, there is unfortunately no way to be removed. The dataset files have been distributed throughout the world and it would not be possible to contact all the owners for removal. Nor do the authors provide any option for students to opt out; they did not even inform students that they would be used as test subjects for surveillance research and development in a project funded, in part, by the United States Army Research Office.
+
+#### Notes
+
+- The Duke MTMC dataset paper mentions 2,700 identities, but its ground truth file only lists annotations for 1,812.
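The identity discrepancy above can be checked by counting distinct person IDs in the ground truth. This is a hedged sketch assuming a plain-text export with one annotation per row and the person ID in the second column; the actual Duke MTMC ground truth ships as a MATLAB `.mat` file, so the loading step would need to be adapted.

```python
import csv

def count_identities(path, id_column=1):
    """Count distinct person IDs in a ground-truth annotation file.

    Assumes a CSV export with one annotation per row and the person ID
    in column `id_column` (hypothetical layout, not the official format).
    """
    ids = set()
    with open(path, newline="") as f:
        for row in csv.reader(f):
            ids.add(row[id_column].strip())
    return len(ids)
```

Running such a count over the published annotations is how a reader could reproduce the 1,812 figure.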
-The Duke MTMC dataset paper mentions 2,700 identities, but their ground truth file only lists annotations for 1,812
{% include 'cite_our_work.html' %}
+If you use any data from the Duke MTMC dataset, please follow their [license](http://vision.cs.duke.edu/DukeMTMC/#how-to-cite) and cite their work as:
+
+<pre>
+@inproceedings{ristani2016MTMC,
+ title = {Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking},
+ author = {Ristani, Ergys and Solera, Francesco and Zou, Roger and Cucchiara, Rita and Tomasi, Carlo},
+ booktitle = {European Conference on Computer Vision workshop on Benchmarking Multi-Target Tracking},
+ year = {2016}
+}
+</pre>
### Footnotes
[^sensetime_qz]: <https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/>
[^sensenets_uyghurs]: <https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/>
-[^sensenets_sensetime]: "Attention-Aware Compositional Network for Person Re-identification". 2018. [Source](https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e)
-[^sensetime1]: "End-to-End Deep Kronecker-Product Matching for Person Re-identification". 2018. [source](https://www.semanticscholar.org/paper/End-to-End-Deep-Kronecker-Product-Matching-for-Shen-Xiao/947954cafdefd471b75da8c3bb4c21b9e6d57838)
-[^sensetime2]: "Person Re-identification with Deep Similarity-Guided Graph Neural Network". 2018. [Source](https://www.semanticscholar.org/paper/Person-Re-identification-with-Deep-Graph-Neural-Shen-Li/08d2a558ea2deb117dd8066e864612bf2899905b)
-[^duke_mtmc_orig]: "Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking". 2016. [Source](https://www.semanticscholar.org/paper/Performance-Measures-and-a-Data-Set-for-Tracking-Ristani-Solera/27a2fad58dd8727e280f97036e0d2bc55ef5424c)
-[^cn_defense1]: "Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers". 2018. [Source](https://www.semanticscholar.org/paper/Tracking-by-Animation%3A-Unsupervised-Learning-of-He-Liu/e90816e1a0e14ea1e7039e0b2782260999aef786)
-[^cn_defense2]: "Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks". 2018. [Source](https://www.semanticscholar.org/paper/Unsupervised-Multi-Object-Detection-for-Video-Using-He-He/59f357015054bab43fb8cbfd3f3dbf17b1d1f881)
-[^iarpa_ibm]: "Horizontal Pyramid Matching for Person Re-identification". 2019. [Source](https://www.semanticscholar.org/paper/Horizontal-Pyramid-Matching-for-Person-Fu-Wei/c2a5f27d97744bc1f96d7e1074395749e3c59bc8)
-[^us_dhs]: "Re-Identification with Consistent Attentive Siamese Networks". 2018. [Source](https://www.semanticscholar.org/paper/Re-Identification-with-Consistent-Attentive-Siamese-Zheng-Karanam/24d6d3adf2176516ef0de2e943ce2084e27c4f94) \ No newline at end of file
+[^sensenets_sensetime]: "Attention-Aware Compositional Network for Person Re-identification". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e), [PDF](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Attention-Aware_Compositional_Network_CVPR_2018_paper.pdf)
+[^sensetime1]: "End-to-End Deep Kronecker-Product Matching for Person Re-identification". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/End-to-End-Deep-Kronecker-Product-Matching-for-Shen-Xiao/947954cafdefd471b75da8c3bb4c21b9e6d57838), [PDF](http://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_End-to-End_Deep_Kronecker-Product_CVPR_2018_paper.pdf)
+[^sensetime2]: "Person Re-identification with Deep Similarity-Guided Graph Neural Network". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/Person-Re-identification-with-Deep-Graph-Neural-Shen-Li/08d2a558ea2deb117dd8066e864612bf2899905b)
+[^duke_mtmc_orig]: "Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking". 2016. [SemanticScholar](https://www.semanticscholar.org/paper/Performance-Measures-and-a-Data-Set-for-Tracking-Ristani-Solera/27a2fad58dd8727e280f97036e0d2bc55ef5424c)
+[^cn_defense1]: "Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/Tracking-by-Animation%3A-Unsupervised-Learning-of-He-Liu/e90816e1a0e14ea1e7039e0b2782260999aef786)
+[^cn_defense2]: "Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/Unsupervised-Multi-Object-Detection-for-Video-Using-He-He/59f357015054bab43fb8cbfd3f3dbf17b1d1f881)
+[^iarpa_ibm]: "Horizontal Pyramid Matching for Person Re-identification". 2019. [SemanticScholar](https://www.semanticscholar.org/paper/Horizontal-Pyramid-Matching-for-Person-Fu-Wei/c2a5f27d97744bc1f96d7e1074395749e3c59bc8)
+[^us_dhs]: "Re-Identification with Consistent Attentive Siamese Networks". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/Re-Identification-with-Consistent-Attentive-Siamese-Zheng-Karanam/24d6d3adf2176516ef0de2e943ce2084e27c4f94) \ No newline at end of file