<!doctype html>
<html>
<head>
  <title>MegaPixels</title>
  <meta charset="utf-8" />
  <meta name="author" content="Adam Harvey" />
  <meta name="description" content="Duke MTMC is a dataset of surveillance camera footage of students on Duke University campus" />
  <meta name="referrer" content="no-referrer" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
  <link rel='stylesheet' href='/assets/css/fonts.css' />
  <link rel='stylesheet' href='/assets/css/css.css' />
  <link rel='stylesheet' href='/assets/css/leaflet.css' />
  <link rel='stylesheet' href='/assets/css/applets.css' />
</head>
<body>
  <header>
    <a class='slogan' href="/">
      <div class='logo'></div>
      <div class='site_name'>MegaPixels</div>
      <div class='splash'>Duke MTMC Dataset</div>
    </a>
    <div class='links'>
      <a href="/datasets/">Datasets</a>
      <a href="/about/">About</a>
    </div>
  </header>
  <div class="content content-dataset">
    
  <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Duke MTMC</span> is a dataset of surveillance camera footage of students on Duke University campus</span></div><div class='hero_subdesc'><span class='bgpad'>Duke MTMC contains over 2 million video frames and 2,700 unique identities collected from 8 HD cameras on the Duke University campus in March 2014
</span></div></div></section><section><h2>Duke MTMC</h2>
</section><div class='right-sidebar'><div class='meta'>
    <div class='gray'>Published</div>
    <div>2016</div>
  </div><div class='meta'>
    <div class='gray'>Images</div>
    <div>2,000,000 </div>
  </div><div class='meta'>
    <div class='gray'>Identities</div>
    <div>2,700 </div>
  </div><div class='meta'>
    <div class='gray'>Purpose</div>
    <div>Person re-identification, multi-camera tracking</div>
  </div><div class='meta'>
    <div class='gray'>Created by</div>
    <div>Computer Science Department, Duke University, Durham, US</div>
  </div><div class='meta'>
    <div class='gray'>Website</div>
    <div><a href='http://vision.cs.duke.edu/DukeMTMC/' target='_blank' rel='nofollow noopener'>duke.edu</a></div>
  </div></div><section>
<p>Duke MTMC (Multi-Target, Multi-Camera Tracking) is a dataset of video recorded on the Duke University campus for the research and development of networked camera surveillance systems. MTMC tracking algorithms are used in citywide dragnet surveillance systems, such as those deployed throughout China by SenseTime<a class="footnote_shim" name="[^sensetime_qz]_1"> </a><a href="#[^sensetime_qz]" class="footnote" title="Footnote 1">1</a>, and in the oppressive monitoring of 2.5 million Uyghurs in Xinjiang by SenseNets<a class="footnote_shim" name="[^sensenets_uyghurs]_1"> </a><a href="#[^sensenets_uyghurs]" class="footnote" title="Footnote 2">2</a>. In fact, researchers from both SenseTime<a class="footnote_shim" name="[^sensetime1]_1"> </a><a href="#[^sensetime1]" class="footnote" title="Footnote 4">4</a> <a class="footnote_shim" name="[^sensetime2]_1"> </a><a href="#[^sensetime2]" class="footnote" title="Footnote 5">5</a> and SenseNets<a class="footnote_shim" name="[^sensenets_sensetime]_1"> </a><a href="#[^sensenets_sensetime]" class="footnote" title="Footnote 3">3</a> have used the Duke MTMC dataset in their research.</p>
<p>In this investigation into the Duke MTMC dataset, we found that researchers at Duke University in Durham, North Carolina, captured over 2,000 students, faculty members, and passersby into one of the most widely used public surveillance research datasets, relied on around the world by commercial and defense surveillance organizations.</p>
<p>Since its publication in 2016, the Duke MTMC dataset has been used in over 100 studies at organizations around the world, including SenseTime<a class="footnote_shim" name="[^sensetime1]_2"> </a><a href="#[^sensetime1]" class="footnote" title="Footnote 4">4</a> <a class="footnote_shim" name="[^sensetime2]_2"> </a><a href="#[^sensetime2]" class="footnote" title="Footnote 5">5</a>, SenseNets<a class="footnote_shim" name="[^sensenets_sensetime]_2"> </a><a href="#[^sensenets_sensetime]" class="footnote" title="Footnote 3">3</a>, IARPA and IBM<a class="footnote_shim" name="[^iarpa_ibm]_1"> </a><a href="#[^iarpa_ibm]" class="footnote" title="Footnote 9">9</a>, the Chinese National University of Defense<a class="footnote_shim" name="[^cn_defense1]_1"> </a><a href="#[^cn_defense1]" class="footnote" title="Footnote 7">7</a><a class="footnote_shim" name="[^cn_defense2]_1"> </a><a href="#[^cn_defense2]" class="footnote" title="Footnote 8">8</a>, the US Department of Homeland Security<a class="footnote_shim" name="[^us_dhs]_1"> </a><a href="#[^us_dhs]" class="footnote" title="Footnote 10">10</a>, Tencent, Microsoft, Microsoft Asia, Fraunhofer, Senstar Corp., Alibaba, Naver Labs, Google, and Hewlett-Packard Labs, to name only a few.</p>
<p>The creation and publication of the Duke MTMC dataset (recorded in 2014, published in 2016) was originally funded by the U.S. Army Research Office and the National Science Foundation<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a>. However, our analysis of the geographic origins of the publicly available research shows more than twice as many citations from researchers in China as from the United States (44% vs. 20%). In 2018 alone, there were 70 research project citations from China.</p>
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_reid_montage.jpg' alt='A collection of 1,600 of the 2,700 students and passersby captured into the Duke MTMC surveillance research and development dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. Open Data Commons Attribution License.'><div class='caption'>A collection of 1,600 of the 2,700 students and passersby captured into the Duke MTMC surveillance research and development dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. Open Data Commons Attribution License.</div></div></section><section><p>The 8 cameras deployed on Duke's campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy"<a class="footnote_shim" name="[^duke_mtmc_orig]_2"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a>. Camera 5 was positioned to capture students entering and exiting the university's main chapel. Each camera's location and approximate field of view are shown in the map below. The heat map visualization shows the locations where pedestrians were most frequently annotated in each video from the Duke MTMC dataset.</p>
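<p>Heat maps like those shown below can be approximated from the annotation data alone. The following is a minimal Python sketch, assuming the annotations are available as (left, top, width, height) bounding boxes per camera; the grid resolution and the use of each box's bottom-center as a pedestrian's standing position are illustrative choices, not the dataset authors' method.</p>
<pre>
import numpy as np

def heatmap(boxes, frame_size=(1080, 1920), bins=64):
    """Accumulate annotated pedestrian positions into a normalized heat map.

    boxes: iterable of (left, top, width, height) tuples for one camera.
    """
    h, w = frame_size
    grid = np.zeros((bins, bins))
    for left, top, bw, bh in boxes:
        # Use the bottom-center of each box as the standing position.
        x = (left + bw / 2.0) / w
        y = (top + bh) / h
        grid[min(int(y * bins), bins - 1), min(int(x * bins), bins - 1)] += 1
    return grid / grid.max() if grid.max() else grid
</pre>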
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_camera_map.jpg' alt=' Duke MTMC camera locations on Duke University campus. Open Data Commons Attribution License.'><div class='caption'> Duke MTMC camera locations on Duke University campus. Open Data Commons Attribution License.</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_cameras.jpg' alt=' Duke MTMC camera views for 8 cameras deployed on campus &copy; megapixels.cc'><div class='caption'> Duke MTMC camera views for 8 cameras deployed on campus &copy; megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliencies.jpg' alt=' Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus &copy; megapixels.cc'><div class='caption'> Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus &copy; megapixels.cc</div></div></section><section>
  <h3>Who used the Duke MTMC dataset?</h3>

  <p>
    This bar chart ranks the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. The chart shows at most the top 10 countries.
  </p>
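<p>Conceptually, the chart aggregates the verified citations by country and year. A hypothetical sketch of that aggregation, assuming each citation record carries "country" and "year" fields:</p>
<pre>
from collections import Counter, defaultdict

def chart_data(citations, n=10):
    """Yearly citation counts for the n most frequently citing countries."""
    totals = Counter(c["country"] for c in citations)
    top = [country for country, _ in totals.most_common(n)]
    yearly = defaultdict(Counter)
    for c in citations:
        if c["country"] in top:
            yearly[c["country"]][c["year"]] += 1
    return {country: dict(yearly[country]) for country in top}
</pre>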
 
 </section>

<section class="applet_container">
 <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
</section>

<section class="applet_container">
 <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
</section>

<section>
	
	<h3>Biometric Trade Routes</h3>

	<p>
		To help understand how the Duke MTMC dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the Duke Multi-Target, Multi-Camera Tracking Project was collected, verified, and geocoded to show the biometric trade routes of the people appearing in the images. Click on the markers to reveal research projects at each location.
	</p>
 
 </section>

<section class="applet_container fullwidth">
 <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
</section>

<div class="caption">
	<ul class="map-legend">
	<li class="edu">Academic</li>
	<li class="com">Commercial</li>
	<li class="gov">Military / Government</li>
	</ul>
	<div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
</div>


<section class="applet_container">

  <h3>Dataset Citations</h3>
  <p>
    The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website that aggregates and indexes research papers. Each citation was geocoded using the names of institutions found in the PDF front matter or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
  </p>
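<p>For reference, the collection step can be reproduced against Semantic Scholar's public API. This is a hedged, present-day sketch rather than the script used to build this page; the Graph API endpoint, response fields, and pagination behavior are assumptions about the current API. The paper ID is the Duke MTMC paper's Semantic Scholar ID.</p>
<pre>
import requests

PAPER_ID = "27a2fad58dd8727e280f97036e0d2bc55ef5424c"  # Duke MTMC paper
URL = f"https://api.semanticscholar.org/graph/v1/paper/{PAPER_ID}/citations"

def fetch_citations():
    """Page through all papers that cite the Duke MTMC paper."""
    citations, offset = [], 0
    while True:
        resp = requests.get(URL, params={"fields": "title,year,authors",
                                         "offset": offset, "limit": 100})
        resp.raise_for_status()
        page = resp.json()
        citations += [c["citingPaper"] for c in page["data"]]
        if "next" not in page:
            return citations  # last page reached
        offset = page["next"]
</pre>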

  <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
</section><section>

  <div class="hr-wave-holder">
      <div class="hr-wave-line hr-wave-line1"></div>
      <div class="hr-wave-line hr-wave-line2"></div>
  </div>

  <h2>Supplementary Information</h2>
  
</section><section><h4>Funding</h4>
<p>Original funding for the Duke MTMC dataset was provided by the Army Research Office under Grant No. W911NF-10-1-0387 and by the National Science Foundation under Grants IIS-10-17017 and IIS-14-20894.</p>
<h4>Video Timestamps</h4>
<p>The video timestamps contain the likely, but not yet confirmed, date and times of capture. Because the video timestamps align with the start and stop <a href="http://vision.cs.duke.edu/DukeMTMC/details.html#time-sync">time sync data</a> provided by the researchers, the relative timing can at least be established. The <a href="https://www.wunderground.com/history/daily/KIGX/date/2014-3-19?req_city=Durham&amp;req_state=NC&amp;req_statename=North%20Carolina&amp;reqdb.zip=27708&amp;reqdb.magic=1&amp;reqdb.wmo=99999">rainy weather</a> on that day also contributes to the likelihood of March 14, 2014.</p>
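<p>Given the start times in the tables below and a 60 fps frame rate (an assumption here, based on the dataset description), a camera-local frame number can be mapped to an estimated wall-clock time. A minimal sketch:</p>
<pre>
from datetime import datetime, timedelta

FPS = 60  # assumed frame rate
STARTS = {1: "2014-03-14 16:14", 2: "2014-03-14 16:13",
          5: "2014-03-14 16:12", 8: "2014-03-14 16:25"}  # from the tables below

def frame_time(camera, frame):
    """Estimate the wall-clock time of a camera-local frame number."""
    start = datetime.strptime(STARTS[camera], "%Y-%m-%d %H:%M")
    return start + timedelta(seconds=frame / FPS)

print(frame_time(5, 180000))  # frame 180,000 of camera 5: about 50 minutes in
</pre>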
</section><section><div class='columns columns-2'><div class='column'><table>
<thead><tr>
<th>Camera</th>
<th>Date</th>
<th>Start</th>
<th>End</th>
</tr>
</thead>
<tbody>
<tr>
<td>Camera 1</td>
<td>March 14, 2014</td>
<td>4:14PM</td>
<td>5:43PM</td>
</tr>
<tr>
<td>Camera 2</td>
<td>March 14, 2014</td>
<td>4:13PM</td>
<td>4:43PM</td>
</tr>
<tr>
<td>Camera 3</td>
<td>March 14, 2014</td>
<td>4:20PM</td>
<td>5:48PM</td>
</tr>
<tr>
<td>Camera 4</td>
<td>March 14, 2014</td>
<td>4:21PM</td>
<td>5:54PM</td>
</tr>
</tbody>
</table>
</div><div class='column'><table>
<thead><tr>
<th>Camera</th>
<th>Date</th>
<th>Start</th>
<th>End</th>
</tr>
</thead>
<tbody>
<tr>
<td>Camera 5</td>
<td>March 14, 2014</td>
<td>4:12PM</td>
<td>5:43PM</td>
</tr>
<tr>
<td>Camera 6</td>
<td>March 14, 2014</td>
<td>4:18PM</td>
<td>5:43PM</td>
</tr>
<tr>
<td>Camera 7</td>
<td>March 14, 2014</td>
<td>4:16PM</td>
<td>5:40PM</td>
</tr>
<tr>
<td>Camera 8</td>
<td>March 14, 2014</td>
<td>4:25PM</td>
<td>5:42PM</td>
</tr>
</tbody>
</table>
</div></div></section><section><h3>Opting Out</h3>
<p>If you attended Duke University and were captured by any of the 8 surveillance cameras positioned on campus in 2014, there is unfortunately no way to be removed from the dataset. The dataset files have been distributed throughout the world and it would not be possible to contact all of the owners for removal. The authors provide no option for students to opt out, nor did they inform students that they would be used as test subjects for surveillance research and development in a project funded, in part, by the United States Army Research Office.</p>
<h4>Notes</h4>
<ul>
<li>The Duke MTMC dataset paper mentions 2,700 identities, but the ground truth file only lists annotations for 1,812 (see the sketch below)</li>
</ul>
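<p>This discrepancy can be checked directly, assuming the ground truth ships as a MATLAB file with one annotation per row and the person ID in the second column (the file name and column layout here are assumptions, not confirmed):</p>
<pre>
import numpy as np
import scipy.io

gt = scipy.io.loadmat("trainval.mat")["trainData"]
ids = np.unique(gt[:, 1].astype(int))
print(len(ids))  # 1,812 annotated identities vs. the 2,700 stated in the paper
</pre>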
</section><section>

  <h4>Cite Our Work</h4>
  <p>
  	
  	If you use our data, research, or graphics, please cite our work:

<pre id="cite-bibtex">
@online{megapixels,
  author = {Harvey, Adam and LaPlace, Jules},
  title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets},
  year = 2019,
  url = {https://megapixels.cc/},
  urldate = {2019-04-20}
}</pre>

	</p>
</section><section><p>If you use any data from the Duke MTMC dataset, please follow their <a href="http://vision.cs.duke.edu/DukeMTMC/#how-to-cite">license</a> and cite their work as:</p>
<pre>
@inproceedings{ristani2016MTMC,
 title =        {Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking},
 author =       {Ristani, Ergys and Solera, Francesco and Zou, Roger and Cucchiara, Rita and Tomasi, Carlo},
 booktitle =    {European Conference on Computer Vision workshop on Benchmarking Multi-Target Tracking},
 year =         {2016}
}
</pre>
</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^sensetime_qz]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime_qz]_1">a</a></span><p><a href="https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/">https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/</a></p>
</li><li><a name="[^sensenets_uyghurs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_uyghurs]_1">a</a></span><p><a href="https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/">https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/</a></p>
</li><li><a name="[^sensenets_sensetime]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_sensetime]_1">a</a><a href="#[^sensenets_sensetime]_2">b</a></span><p>"Attention-Aware Compositional Network for Person Re-identification". 2018. <a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">SemanticScholar</a>, <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Attention-Aware_Compositional_Network_CVPR_2018_paper.pdf">PDF</a></p>
</li><li><a name="[^sensetime1]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime1]_1">a</a><a href="#[^sensetime1]_2">b</a></span><p>"End-to-End Deep Kronecker-Product Matching for Person Re-identification". 2018. <a href="https://www.semanticscholar.org/paper/End-to-End-Deep-Kronecker-Product-Matching-for-Shen-Xiao/947954cafdefd471b75da8c3bb4c21b9e6d57838">SemanticScholar</a>, <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_End-to-End_Deep_Kronecker-Product_CVPR_2018_paper.pdf">PDF</a></p>
</li><li><a name="[^sensetime2]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime2]_1">a</a><a href="#[^sensetime2]_2">b</a></span><p>"Person Re-identification with Deep Similarity-Guided Graph Neural Network". 2018. <a href="https://www.semanticscholar.org/paper/Person-Re-identification-with-Deep-Graph-Neural-Shen-Li/08d2a558ea2deb117dd8066e864612bf2899905b">SemanticScholar</a></p>
</li><li><a name="[^duke_mtmc_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^duke_mtmc_orig]_1">a</a><a href="#[^duke_mtmc_orig]_2">b</a></span><p>"Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking". 2016. <a href="https://www.semanticscholar.org/paper/Performance-Measures-and-a-Data-Set-for-Tracking-Ristani-Solera/27a2fad58dd8727e280f97036e0d2bc55ef5424c">SemanticScholar</a></p>
</li><li><a name="[^cn_defense1]" class="footnote_shim"></a><span class="backlinks"><a href="#[^cn_defense1]_1">a</a></span><p>"Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers". 2018. <a href="https://www.semanticscholar.org/paper/Tracking-by-Animation%3A-Unsupervised-Learning-of-He-Liu/e90816e1a0e14ea1e7039e0b2782260999aef786">SemanticScholar</a></p>
</li><li><a name="[^cn_defense2]" class="footnote_shim"></a><span class="backlinks"><a href="#[^cn_defense2]_1">a</a></span><p>"Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks". 2018. <a href="https://www.semanticscholar.org/paper/Unsupervised-Multi-Object-Detection-for-Video-Using-He-He/59f357015054bab43fb8cbfd3f3dbf17b1d1f881">SemanticScholar</a></p>
</li><li><a name="[^iarpa_ibm]" class="footnote_shim"></a><span class="backlinks"><a href="#[^iarpa_ibm]_1">a</a></span><p>"Horizontal Pyramid Matching for Person Re-identification". 2019. <a href="https://www.semanticscholar.org/paper/Horizontal-Pyramid-Matching-for-Person-Fu-Wei/c2a5f27d97744bc1f96d7e1074395749e3c59bc8">SemanticScholar</a></p>
</li><li><a name="[^us_dhs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^us_dhs]_1">a</a></span><p>"Re-Identification with Consistent Attentive Siamese Networks". 2018. <a href="https://www.semanticscholar.org/paper/Re-Identification-with-Consistent-Attentive-Siamese-Zheng-Karanam/24d6d3adf2176516ef0de2e943ce2084e27c4f94">SemanticScholar</a></p>
</li></ul></section></section>

  </div>
  <footer>
    <div>
      <a href="/">MegaPixels.cc</a>
      <a href="/datasets/">Datasets</a>
      <a href="/about/">About</a>
      <a href="/about/press/">Press</a>
      <a href="/about/legal/">Legal and Privacy</a>
    </div>
    <div>
      MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
      <a href="https://ahprojects.com">ahprojects.com</a>
    </div>
  </footer>
</body>

<script src="/assets/js/dist/index.js"></script>
</html>