<!doctype html>
<html>
<head>
<title>MegaPixels</title>
<meta charset="utf-8" />
<meta name="author" content="Adam Harvey" />
<meta name="description" content="Duke MTMC is a dataset of surveillance camera footage of students on Duke University campus" />
<meta name="referrer" content="no-referrer" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
<link rel='stylesheet' href='/assets/css/fonts.css' />
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
<div class='splash'>Duke MTMC Dataset</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
<a href="/about/">About</a>
</div>
</header>
<div class="content content-dataset">
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Duke MTMC</span> is a dataset of surveillance camera footage of students on Duke University campus</span></div><div class='hero_subdesc'><span class='bgpad'>Duke MTMC contains over 2 million video frames and 2,700 unique identities collected from 8 HD cameras at Duke University campus in March 2014
</span></div></div></section><section><div class='left-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2016</div>
</div><div class='meta'>
<div class='gray'>Images</div>
<div>2,000,000 </div>
</div><div class='meta'>
<div class='gray'>Identities</div>
<div>2,700 </div>
</div><div class='meta'>
<div class='gray'>Purpose</div>
<div>Person re-identification, multi-camera tracking</div>
</div><div class='meta'>
<div class='gray'>Created by</div>
<div>Computer Science Department, Duke University, Durham, US</div>
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://vision.cs.duke.edu/DukeMTMC/' target='_blank' rel='nofollow noopener'>duke.edu</a></div>
</div></div><h2>Duke MTMC</h2>
<p>The Duke Multi-Target, Multi-Camera Tracking (MTMC) dataset consists of video recorded on Duke University's campus for the research and development of networked camera surveillance systems. MTMC tracking is used in citywide dragnet surveillance systems such as those deployed throughout China by SenseTime<a class="footnote_shim" name="[^sensetime_qz]_1"> </a><a href="#[^sensetime_qz]" class="footnote" title="Footnote 1">1</a> and in the oppressive monitoring of 2.5 million Uyghurs in Xinjiang by SenseNets<a class="footnote_shim" name="[^sensenets_uyghurs]_1"> </a><a href="#[^sensenets_uyghurs]" class="footnote" title="Footnote 2">2</a>. In fact, researchers from both SenseTime<a class="footnote_shim" name="[^sensetime1]_1"> </a><a href="#[^sensetime1]" class="footnote" title="Footnote 4">4</a> <a class="footnote_shim" name="[^sensetime2]_1"> </a><a href="#[^sensetime2]" class="footnote" title="Footnote 5">5</a> and SenseNets<a class="footnote_shim" name="[^sensenets_sensetime]_1"> </a><a href="#[^sensenets_sensetime]" class="footnote" title="Footnote 3">3</a> used the Duke MTMC dataset in their research.</p>
<p>The Duke MTMC dataset is unique because it is the largest publicly available MTMC and person re-identification dataset and has the longest duration of annotated video. In total, the Duke MTMC dataset provides over 14 hours of 1080p video from 8 synchronized surveillance cameras.<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a> It is among the most widely used person re-identification datasets in the world. The approximately 2,700 unique people in the Duke MTMC videos, most of whom are students, are used for research and development of surveillance technologies by commercial, academic, and even defense organizations.</p>
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_reid_montage.jpg' alt='A collection of 1,600 of the 2,700 students and passersby captured in the Duke MTMC surveillance research dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. © megapixels.cc'><div class='caption'>A collection of 1,600 of the 2,700 students and passersby captured in the Duke MTMC surveillance research dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. © megapixels.cc</div></div></section><section><p>The creation and publication of the Duke MTMC dataset in 2016 were originally funded by the U.S. Army Research Laboratory and the National Science Foundation<a class="footnote_shim" name="[^duke_mtmc_orig]_2"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a>. Since 2016, use of the Duke MTMC dataset's images has been publicly acknowledged in research funded by or conducted on behalf of China's National University of Defense Technology<a class="footnote_shim" name="[^cn_defense1]_1"> </a><a href="#[^cn_defense1]" class="footnote" title="Footnote 7">7</a><a class="footnote_shim" name="[^cn_defense2]_1"> </a><a href="#[^cn_defense2]" class="footnote" title="Footnote 8">8</a>, IARPA and IBM<a class="footnote_shim" name="[^iarpa_ibm]_1"> </a><a href="#[^iarpa_ibm]" class="footnote" title="Footnote 9">9</a>, and the U.S. Department of Homeland Security<a class="footnote_shim" name="[^us_dhs]_1"> </a><a href="#[^us_dhs]" class="footnote" title="Footnote 10">10</a>.</p>
<p>The 8 cameras deployed on Duke's campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy".<a class="footnote_shim" name="[^duke_mtmc_orig]_3"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a> Cameras 2 and 7 capture large groups of prospective students and children. Camera 5 was positioned to capture students as they enter and exit Duke University's main chapel. Each camera's location is documented below.</p>
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_camera_map.jpg' alt=' Duke MTMC camera locations on Duke University campus © megapixels.cc'><div class='caption'> Duke MTMC camera locations on Duke University campus © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_cameras.jpg' alt=' Duke MTMC camera views for 8 cameras deployed on campus © megapixels.cc'><div class='caption'> Duke MTMC camera views for 8 cameras deployed on campus © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliencies.jpg' alt=' Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus © megapixels.cc'><div class='caption'> Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus © megapixels.cc</div></div></section><section>
<h3>Who Used the Duke MTMC Dataset?</h3>
<p>
This bar chart ranks the countries where citations of the dataset originated. Mouse over individual columns to see yearly totals. At most the top 10 countries are shown.
</p>
</section>
<section class="applet_container">
<div class="applet" data-payload="{"command": "chart"}"></div>
</section>
<section class="applet_container">
<div class="applet" data-payload="{"command": "piechart"}"></div>
</section>
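<!--
The applets on this page are initialized client-side by /assets/js/dist/index.js,
which reads each .applet element's data-payload JSON and dispatches on its
"command" value. Below is a minimal TypeScript sketch of that pattern; the real
bundle internals are not published here, so renderChart, renderPieChart,
renderMap, and renderCitations are hypothetical names used for illustration only.

// Hypothetical renderers; the actual implementations live in the site bundle.
function renderChart(el: HTMLElement): void { /* bar chart of citing countries */ }
function renderPieChart(el: HTMLElement): void { /* sector breakdown */ }
function renderMap(el: HTMLElement): void { /* Leaflet map of citing institutions */ }
function renderCitations(el: HTMLElement): void { /* verified citation table */ }

interface AppletPayload {
  command: 'chart' | 'piechart' | 'map' | 'citations';
}

function initApplets(): void {
  document.querySelectorAll<HTMLElement>('.applet').forEach((el) => {
    const raw = el.dataset.payload; // raw JSON string from the data-payload attribute
    if (!raw) return;
    const payload = JSON.parse(raw) as AppletPayload;
    switch (payload.command) {
      case 'chart': renderChart(el); break;
      case 'piechart': renderPieChart(el); break;
      case 'map': renderMap(el); break;
      case 'citations': renderCitations(el); break;
    }
  });
}

document.addEventListener('DOMContentLoaded', initApplets);
-->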
<section>
<h3>Biometric Trade Routes</h3>
<p>
To help understand how the Duke MTMC dataset has been used around the world by commercial, military, and academic organizations, publicly available research citing the Duke Multi-Target, Multi-Camera Tracking Project was collected, verified, and geocoded to show the biometric trade routes of the people appearing in the images. Click on the markers to reveal research projects at each location.
</p>
</section>
<section class="applet_container fullwidth">
<div class="applet" data-payload="{"command": "map"}"></div>
</section>
<div class="caption">
<ul class="map-legend">
<li class="edu">Academic</li>
<li class="com">Commercial</li>
<li class="gov">Military / Government</li>
</ul>
<div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
</div>
<section class="applet_container">
<h3>Dataset Citations</h3>
<p>
The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website that aggregates and indexes research papers. Each citation was geocoded using the names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to confirm that the researchers downloaded and used the dataset to train or test machine learning algorithms.
</p>
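<!--
A minimal sketch of the geocoding step described above, assuming citations have
already been exported from Semantic Scholar and that institution coordinates
come from a hand-maintained gazetteer. The Citation shape, the GAZETTEER table,
and the geocode function are hypothetical names used for illustration only.

interface Citation {
  title: string;
  year: number;
  institutions: string[]; // names parsed from the paper's front matter
}

interface GeocodedCitation extends Citation {
  coords: Array<[number, number]>; // [latitude, longitude] per matched institution
}

// Hypothetical gazetteer mapping institution names to coordinates.
const GAZETTEER: Record<string, [number, number]> = {
  'Duke University': [36.0014, -78.9382],
};

function geocode(citations: Citation[]): GeocodedCitation[] {
  return citations.map((c) => ({
    ...c,
    coords: c.institutions
      .map((name) => GAZETTEER[name])
      .filter((p): p is [number, number] => p !== undefined),
  }));
}
-->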
<div class="applet" data-payload="{"command": "citations"}"></div>
</section><section>
<div class="hr-wave-holder">
<div class="hr-wave-line hr-wave-line1"></div>
<div class="hr-wave-line hr-wave-line2"></div>
</div>
<h2>Supplementary Information</h2>
</section><section><h3>Notes</h3>
<p>The Duke MTMC dataset paper mentions 2,700 identities, but its ground-truth file only lists annotations for 1,812.</p>
</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^sensetime_qz]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime_qz]_1">a</a></span><p><a href="https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/">https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/</a></p>
</li><li><a name="[^sensenets_uyghurs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_uyghurs]_1">a</a></span><p><a href="https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/">https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/</a></p>
</li><li><a name="[^sensenets_sensetime]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_sensetime]_1">a</a></span><p>"Attention-Aware Compositional Network for Person Re-identification". 2018. <a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">Source</a></p>
</li><li><a name="[^sensetime1]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime1]_1">a</a></span><p>"End-to-End Deep Kronecker-Product Matching for Person Re-identification". 2018. <a href="https://www.semanticscholar.org/paper/End-to-End-Deep-Kronecker-Product-Matching-for-Shen-Xiao/947954cafdefd471b75da8c3bb4c21b9e6d57838">source</a></p>
</li><li><a name="[^sensetime2]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime2]_1">a</a></span><p>"Person Re-identification with Deep Similarity-Guided Graph Neural Network". 2018. <a href="https://www.semanticscholar.org/paper/Person-Re-identification-with-Deep-Graph-Neural-Shen-Li/08d2a558ea2deb117dd8066e864612bf2899905b">Source</a></p>
</li><li><a name="[^duke_mtmc_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^duke_mtmc_orig]_1">a</a><a href="#[^duke_mtmc_orig]_2">b</a><a href="#[^duke_mtmc_orig]_3">c</a></span><p>"Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking". 2016. <a href="https://www.semanticscholar.org/paper/Performance-Measures-and-a-Data-Set-for-Tracking-Ristani-Solera/27a2fad58dd8727e280f97036e0d2bc55ef5424c">Source</a></p>
</li><li><a name="[^cn_defense1]" class="footnote_shim"></a><span class="backlinks"><a href="#[^cn_defense1]_1">a</a></span><p>"Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers". 2018. <a href="https://www.semanticscholar.org/paper/Tracking-by-Animation%3A-Unsupervised-Learning-of-He-Liu/e90816e1a0e14ea1e7039e0b2782260999aef786">Source</a></p>
</li><li><a name="[^cn_defense2]" class="footnote_shim"></a><span class="backlinks"><a href="#[^cn_defense2]_1">a</a></span><p>"Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks". 2018. <a href="https://www.semanticscholar.org/paper/Unsupervised-Multi-Object-Detection-for-Video-Using-He-He/59f357015054bab43fb8cbfd3f3dbf17b1d1f881">Source</a></p>
</li><li><a name="[^iarpa_ibm]" class="footnote_shim"></a><span class="backlinks"><a href="#[^iarpa_ibm]_1">a</a></span><p>"Horizontal Pyramid Matching for Person Re-identification". 2019. <a href="https://www.semanticscholar.org/paper/Horizontal-Pyramid-Matching-for-Person-Fu-Wei/c2a5f27d97744bc1f96d7e1074395749e3c59bc8">Source</a></p>
</li><li><a name="[^us_dhs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^us_dhs]_1">a</a></span><p>"Re-Identification with Consistent Attentive Siamese Networks". 2018. <a href="https://www.semanticscholar.org/paper/Re-Identification-with-Consistent-Attentive-Siamese-Zheng-Karanam/24d6d3adf2176516ef0de2e943ce2084e27c4f94">Source</a></p>
</li></ul></section></section>
</div>
<footer>
<div>
<a href="/">MegaPixels.cc</a>
<a href="/datasets/">Datasets</a>
<a href="/about/">About</a>
<a href="/about/press/">Press</a>
<a href="/about/legal/">Legal and Privacy</a>
</div>
<div>
MegaPixels ©2017-19 Adam R. Harvey /
<a href="https://ahprojects.com">ahprojects.com</a>
</div>
</footer>
<script src="/assets/js/dist/index.js"></script>
</body>
</html>