------------

status: published
title: Duke MTMC
desc: <span class="dataset-name">Duke MTMC</span> is a dataset of surveillance camera footage of students on Duke University campus
subdesc: Duke MTMC contains over 2 million video frames and 2,700 unique identities collected from 8 HD cameras at Duke University campus in March 2014
slug: duke_mtmc
cssclass: dataset
image: assets/background.jpg
published: 2019-2-23
updated: 2019-2-23
authors: Adam Harvey

------------

### sidebar
### end sidebar

## Duke MTMC

[ page under development ]

Duke MTMC (Multi-Target, Multi-Camera Tracking) is a dataset of video recorded on Duke University campus for research and development of networked camera surveillance systems. MTMC tracking algorithms are used for citywide dragnet surveillance systems such as those used throughout China by SenseTime[^sensetime_qz] and the oppressive monitoring of 2.5 million Uyghurs in Xinjiang by SenseNets[^sensenets_uyghurs]. In fact, researchers from both SenseTime[^sensetime1] [^sensetime2] and SenseNets[^sensenets_sensetime] used the Duke MTMC dataset for their research.

In this investigation into the Duke MTMC dataset, we found that researchers at Duke University in Durham, North Carolina captured over 2,000 students, faculty members, and passersby into one of the most prolific public surveillance research datasets, used around the world by commercial and defense surveillance organizations.

Since its publication in 2016, the Duke MTMC dataset has been used in over 100 studies at organizations around the world including SenseTime[^sensetime1] [^sensetime2], SenseNets[^sensenets_sensetime], IARPA and IBM[^iarpa_ibm], China's National University of Defense Technology[^cn_defense1][^cn_defense2], US Department of Homeland Security[^us_dhs], Tencent, Microsoft, Microsoft Asia, Fraunhofer, Senstar Corp., Alibaba, Naver Labs, Google, and Hewlett-Packard Labs, to name only a few.

The creation of the Duke MTMC dataset in 2014 and its publication in 2016 were originally funded by the U.S. Army Research Laboratory and the National Science Foundation[^duke_mtmc_orig]. Our analysis of the geographic locations of the publicly available research, however, shows over twice as many citations by researchers from China as from the United States (44% China, 20% United States). In 2018 alone, there were 70 research project citations from China.
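The country breakdown above can be reproduced with a simple tally. The sketch below is illustrative only: the country labels are hypothetical placeholders, not the actual citation data behind the percentages.

```python
from collections import Counter

# Hypothetical per-paper country labels; the real analysis was compiled
# from the affiliations of papers citing Duke MTMC.
countries = ["China", "China", "United States", "China", "Germany"]

counts = Counter(countries)
# Percentage share of citations per country, rounded to whole percent
shares = {c: round(100 * n / len(countries)) for c, n in counts.items()}
print(shares)  # e.g. {'China': 60, 'United States': 20, 'Germany': 20}
```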

![caption: A collection of 1,600 out of the 2,700 students and passersby captured into the Duke MTMC surveillance research and development dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. Open Data Commons Attribution License.](assets/duke_mtmc_reid_montage.jpg)

The 8 cameras deployed on Duke's campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy".[^duke_mtmc_orig] Camera 5 was positioned to capture students entering and exiting the university's main chapel. The maps below show each camera's location and approximate field of view, and the heat map visualization shows the locations where pedestrians were most frequently annotated in each video from the Duke MTMC dataset.

![caption: Duke MTMC camera locations on Duke University campus. Open Data Commons Attribution License.](assets/duke_mtmc_camera_map.jpg)

![caption: Duke MTMC camera views for 8 cameras deployed on campus &copy; megapixels.cc](assets/duke_mtmc_cameras.jpg)

![caption: Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus &copy; megapixels.cc](assets/duke_mtmc_saliencies.jpg)
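Saliency maps like the ones above can be built by accumulating pedestrian bounding-box centers into a coarse grid. The sketch below assumes annotations as simple `(x, y, w, h)` boxes in 1920×1080 frames; the grid cell size and box format are assumptions, not the dataset's exact schema.

```python
import numpy as np

def annotation_heatmap(boxes, width=1920, height=1080, cell=40):
    """Accumulate bounding-box centers into a coarse grid, normalized to [0, 1]."""
    grid = np.zeros((height // cell, width // cell))
    for x, y, w, h in boxes:
        cx, cy = x + w / 2, y + h / 2  # box center in pixels
        grid[int(cy) // cell, int(cx) // cell] += 1
    return grid / grid.max()

# Toy annotations: two overlapping detections in one spot, one elsewhere
boxes = [(100, 200, 50, 120), (110, 210, 50, 120), (900, 500, 60, 140)]
heat = annotation_heatmap(boxes)
```

The normalized grid can then be rendered as a heat map overlay on a still frame from each camera.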


{% include 'dashboard.html' %}

{% include 'supplementary_header.html' %}

#### Funding 

Original funding for the Duke MTMC dataset was provided by the Army Research Office under Grant No. W911NF-10-1-0387 and by the National Science Foundation
under Grants IIS-10-17017 and IIS-14-20894.

#### Video Timestamps

The video timestamps contain the likely, but not yet confirmed, dates and times of capture. Because the video timestamps align with the start and stop [time sync data](http://vision.cs.duke.edu/DukeMTMC/details.html#time-sync) provided by the researchers, the relative timing is at least consistent. The [rainy weather](https://www.wunderground.com/history/daily/KIGX/date/2014-3-19?req_city=Durham&req_state=NC&req_statename=North%20Carolina&reqdb.zip=27708&reqdb.magic=1&reqdb.wmo=99999) on that day also supports the likelihood of March 14, 2014.

=== columns 2

| Camera | Date  | Start | End |
| --- | --- | --- | --- |
| Camera 1 | March 14, 2014 | 4:14PM | 5:43PM |
| Camera 2 | March 14, 2014 | 4:13PM | 4:43PM |
| Camera 3 | March 14, 2014 | 4:20PM | 5:48PM |
| Camera 4 | March 14, 2014 | 4:21PM | 5:54PM |

===========

| Camera | Date  | Start | End |
| --- | --- | --- | --- |
| Camera 5 | March 14, 2014 | 4:12PM | 5:43PM |
| Camera 6 | March 14, 2014 | 4:18PM | 5:43PM |
| Camera 7 | March 14, 2014 | 4:16PM | 5:40PM |
| Camera 8 | March 14, 2014 | 4:25PM | 5:42PM |

=== end columns
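Given the start times in the tables above, a frame index can be mapped to an approximate wall-clock time. The sketch below assumes a 60 fps frame rate; both the frame rate and the minute-resolution start times are assumptions, so the result is approximate.

```python
from datetime import datetime, timedelta

# Start times taken from the tables above (minute resolution)
CAMERA_START = {1: "2014-03-14 16:14", 5: "2014-03-14 16:12"}
FPS = 60  # assumed frame rate of the recordings

def frame_to_time(camera, frame):
    """Approximate wall-clock time of a frame, given camera start time and fps."""
    start = datetime.strptime(CAMERA_START[camera], "%Y-%m-%d %H:%M")
    return start + timedelta(seconds=frame / FPS)

print(frame_to_time(1, 3600))  # one minute into camera 1 -> 2014-03-14 16:15:00
```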


### Opting Out

If you attended Duke University and were captured by any of the 8 surveillance cameras positioned on campus in 2014, there is unfortunately no way to have your data removed. The dataset files have been distributed throughout the world and it would not be possible to contact all of the owners for removal. Nor do the authors provide any option for students to opt out, nor did they even inform students that they would be used as test subjects for surveillance research and development in a project funded, in part, by the United States Army Research Office.

#### Notes

- The Duke MTMC dataset paper mentions 2,700 identities, but their ground truth file only lists annotations for 1,812 identities
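The discrepancy above can be checked by tallying unique identity labels in the ground truth. The sketch below assumes annotation rows of the form `(camera, identity, frame, ...)`; the column layout is a hypothetical simplification, not the dataset's exact format.

```python
def count_identities(rows):
    """Count distinct identity labels, assuming the ID is the second column."""
    return len({row[1] for row in rows})

# Toy rows: identity 7 seen on three cameras, identity 9 on one
rows = [(1, 7, 100), (1, 7, 101), (2, 7, 50), (3, 9, 10)]
print(count_identities(rows))  # -> 2
```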

{% include 'cite_our_work.html' %}

If you use any data from the Duke MTMC please follow their [license](http://vision.cs.duke.edu/DukeMTMC/#how-to-cite) and cite their work as:

<pre>
@inproceedings{ristani2016MTMC,
  title =        {Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking},
  author =       {Ristani, Ergys and Solera, Francesco and Zou, Roger and Cucchiara, Rita and Tomasi, Carlo},
  booktitle =    {European Conference on Computer Vision workshop on Benchmarking Multi-Target Tracking},
  year =         {2016}
}
</pre>

#### ToDo

- clean up citations, formatting

### Footnotes

[^sensetime_qz]: <https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/>
[^sensenets_uyghurs]: <https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/>
[^sensenets_sensetime]: "Attention-Aware Compositional Network for Person Re-identification". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e), [PDF](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Attention-Aware_Compositional_Network_CVPR_2018_paper.pdf)
[^sensetime1]: "End-to-End Deep Kronecker-Product Matching for Person Re-identification". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/End-to-End-Deep-Kronecker-Product-Matching-for-Shen-Xiao/947954cafdefd471b75da8c3bb4c21b9e6d57838), [PDF](http://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_End-to-End_Deep_Kronecker-Product_CVPR_2018_paper.pdf)
[^sensetime2]: "Person Re-identification with Deep Similarity-Guided Graph Neural Network". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/Person-Re-identification-with-Deep-Graph-Neural-Shen-Li/08d2a558ea2deb117dd8066e864612bf2899905b)
[^duke_mtmc_orig]: "Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking". 2016. [SemanticScholar](https://www.semanticscholar.org/paper/Performance-Measures-and-a-Data-Set-for-Tracking-Ristani-Solera/27a2fad58dd8727e280f97036e0d2bc55ef5424c)
[^cn_defense1]: "Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/Tracking-by-Animation%3A-Unsupervised-Learning-of-He-Liu/e90816e1a0e14ea1e7039e0b2782260999aef786)
[^cn_defense2]: "Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/Unsupervised-Multi-Object-Detection-for-Video-Using-He-He/59f357015054bab43fb8cbfd3f3dbf17b1d1f881)
[^iarpa_ibm]: "Horizontal Pyramid Matching for Person Re-identification". 2019. [SemanticScholar](https://www.semanticscholar.org/paper/Horizontal-Pyramid-Matching-for-Person-Fu-Wei/c2a5f27d97744bc1f96d7e1074395749e3c59bc8)
[^us_dhs]: "Re-Identification with Consistent Attentive Siamese Networks". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/Re-Identification-with-Consistent-Attentive-Siamese-Zheng-Karanam/24d6d3adf2176516ef0de2e943ce2084e27c4f94)