path: root/site/public/datasets/duke_mtmc
author    adamhrv <adam@ahprojects.com>  2019-04-17 17:44:29 +0200
committer adamhrv <adam@ahprojects.com>  2019-04-17 17:44:29 +0200
commit    fa11005bed137f90f627eeacc0d264d77206b992 (patch)
tree      fcf5cda0d9a36323cc3ce6ed4f1ca68046cc228c /site/public/datasets/duke_mtmc
parent    80901bd8af4f78be8d3e697115f07d4e69473de5 (diff)
update duke, uccs
Diffstat (limited to 'site/public/datasets/duke_mtmc')
-rw-r--r--  site/public/datasets/duke_mtmc/index.html | 76
1 file changed, 36 insertions(+), 40 deletions(-)
diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html
index 7b965bd4..bd4fb8d9 100644
--- a/site/public/datasets/duke_mtmc/index.html
+++ b/site/public/datasets/duke_mtmc/index.html
@@ -61,30 +61,30 @@
</thead>
<tbody>
<tr>
-<td>SenseNets, SenseTime</td>
-<td>Attention-Aware Compositional Network for Person Re-identification</td>
-<td><a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">SemanticScholar</a></td>
+<td>Beihang University</td>
+<td>Orientation-Guided Similarity Learning for Person Re-identification</td>
+<td><a href="https://ieeexplore.ieee.org/document/8545620">ieee.org</a></td>
<td>2018</td>
<td>&#x2714;</td>
</tr>
<tr>
-<td>SenseTime</td>
-<td>End-to-End Deep Kronecker-Product Matching for Person Re-identification</td>
-<td><a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_End-to-End_Deep_Kronecker-Product_CVPR_2018_paper.pdf">thcvf.com</a></td>
+<td>Beihang University</td>
+<td>Online Inter-Camera Trajectory Association Exploiting Person Re-Identification and Camera Topology</td>
+<td><a href="https://dl.acm.org/citation.cfm?id=3240663">acm.org</a></td>
<td>2018</td>
<td>&#x2714;</td>
</tr>
<tr>
<td>CloudWalk</td>
-<td>Horizontal Pyramid Matching for Person Re-identification</td>
-<td><a href="https://arxiv.org/pdf/1804.05275.pdf">arxiv.org</a></td>
-<td>2018</td>
+<td>CloudWalk re-identification technology extends facial biometric tracking with improved accuracy</td>
+<td><a href="https://www.biometricupdate.com/201903/cloudwalk-re-identification-technology-extends-facial-biometric-tracking-with-improved-accuracy">BiometricUpdate.com</a></td>
+<td>2019</td>
<td>&#x2714;</td>
</tr>
<tr>
-<td>Megvii</td>
-<td>Multi-Target, Multi-Camera Tracking by Hierarchical Clustering: Recent Progress on DukeMTMC Project</td>
-<td><a href="https://www.semanticscholar.org/paper/Multi-Target%2C-Multi-Camera-Tracking-by-Hierarchical-Zhang-Wu/10c20cf47d61063032dce4af73a4b8e350bf1128">SemanticScholar</a></td>
+<td>CloudWalk</td>
+<td>Horizontal Pyramid Matching for Person Re-identification</td>
+<td><a href="https://arxiv.org/pdf/1804.05275.pdf">arxiv.org</a></td>
<td>2018</td>
<td>&#x2714;</td>
</tr>
@@ -97,15 +97,15 @@
</tr>
<tr>
<td>Megvii</td>
-<td>SCPNet: Spatial-Channel Parallelism Network for Joint Holistic and Partial PersonRe-Identification</td>
-<td><a href="https://arxiv.org/abs/1810.06996">arxiv.org</a></td>
+<td>Multi-Target, Multi-Camera Tracking by Hierarchical Clustering: Recent Progress on DukeMTMC Project</td>
+<td><a href="https://www.semanticscholar.org/paper/Multi-Target%2C-Multi-Camera-Tracking-by-Hierarchical-Zhang-Wu/10c20cf47d61063032dce4af73a4b8e350bf1128">SemanticScholar</a></td>
<td>2018</td>
<td>&#x2714;</td>
</tr>
<tr>
-<td>CloudWalk</td>
-<td>CloudWalk re-identification technology extends facial biometric tracking with improved accuracy</td>
-<td><a href="https://www.biometricupdate.com/201903/cloudwalk-re-identification-technology-extends-facial-biometric-tracking-with-improved-accuracy">BiometricUpdate.com</a></td>
+<td>Megvii</td>
+<td>SCPNet: Spatial-Channel Parallelism Network for Joint Holistic and Partial Person Re-Identification</td>
+<td><a href="https://arxiv.org/abs/1810.06996">arxiv.org</a></td>
<td>2018</td>
<td>&#x2714;</td>
</tr>
@@ -124,16 +124,16 @@
<td>&#x2714;</td>
</tr>
<tr>
-<td>Beihang University</td>
-<td>Orientation-Guided Similarity Learning for Person Re-identification</td>
-<td><a href="https://ieeexplore.ieee.org/document/8545620">ieee.org</a></td>
+<td>SenseNets, SenseTime</td>
+<td>Attention-Aware Compositional Network for Person Re-identification</td>
+<td><a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">SemanticScholar</a></td>
<td>2018</td>
<td>&#x2714;</td>
</tr>
<tr>
-<td>Beihang University</td>
-<td>Online Inter-Camera Trajectory Association Exploiting Person Re-Identification and Camera Topology</td>
-<td><a href="https://dl.acm.org/citation.cfm?id=3240663">acm.org</a></td>
+<td>SenseTime</td>
+<td>End-to-End Deep Kronecker-Product Matching for Person Re-identification</td>
+<td><a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_End-to-End_Deep_Kronecker-Product_CVPR_2018_paper.pdf">thecvf.com</a></td>
<td>2018</td>
<td>&#x2714;</td>
</tr>
@@ -180,19 +180,19 @@
<td>&#x2714;</td>
</tr>
<tr>
-<td>Vision Semantics Ltd.</td>
-<td>Unsupervised Person Re-identification by Deep Learning Tracklet Association</td>
-<td><a href="https://arxiv.org/abs/1809.02874">arxiv.org</a></td>
-<td>2018</td>
-<td>&#x2714;</td>
-</tr>
-<tr>
<td>US Dept. of Homeland Security</td>
<td>Re-Identification with Consistent Attentive Siamese Networks</td>
<td><a href="https://arxiv.org/abs/1811.07487/">arxiv.org</a></td>
<td>2019</td>
<td>&#x2714;</td>
</tr>
+<tr>
+<td>Vision Semantics Ltd.</td>
+<td>Unsupervised Person Re-identification by Deep Learning Tracklet Association</td>
+<td><a href="https://arxiv.org/abs/1809.02874">arxiv.org</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
</tbody>
</table>
<p>By some metrics the dataset is considered a huge success. It is regarded as highly influential research and has contributed to hundreds, if not thousands, of projects to advance artificial intelligence for person tracking and monitoring. All the above citations, regardless of which country is using it, align perfectly with the original <a href="http://vision.cs.duke.edu/DukeMTMC/">intent</a> of the Duke MTMC dataset: "to accelerate advances in multi-target multi-camera tracking".</p>
@@ -260,7 +260,7 @@
<h2>Supplementary Information</h2>
</section><section><h4>Video Timestamps</h4>
-<p>The video timestamps contain the likely, but not yet confirmed, date and times the video recorded. Because the video timestamps align with the start and stop <a href="http://vision.cs.duke.edu/DukeMTMC/details.html#time-sync">time sync data</a> provided by the researchers, it at least confirms the relative timing. The [<a href="https://www.wunderground.com/history/daily/KIGX/date/2014-3-19?req_city=Durham&amp;req_state=NC&amp;req_statename=North%20Carolina&amp;reqdb.zip=27708&amp;reqdb.magic=1&amp;reqdb.wmo=99999">precipitous weather</a> on March 14, 2014 in Durham, North Carolina supports, but does not confirm, that this day is a potential capture date.</p>
+<p>The video timestamps contain the likely, but not yet confirmed, dates and times the video was recorded. Because the video timestamps align with the start and stop <a href="http://vision.cs.duke.edu/DukeMTMC/details.html#time-sync">time sync data</a> provided by the researchers, they at least confirm the relative timing. The <a href="https://www.wunderground.com/history/daily/KIGX/date/2014-3-19?req_city=Durham&amp;req_state=NC&amp;req_statename=North%20Carolina&amp;reqdb.zip=27708&amp;reqdb.magic=1&amp;reqdb.wmo=99999">precipitation</a> recorded on March 14, 2014 in Durham, North Carolina supports, but does not confirm, that this day is a potential capture date.</p>
</section><section><div class='columns columns-2'><div class='column'><table>
<thead><tr>
<th>Camera</th>
@@ -332,9 +332,9 @@
</tbody>
</table>
</div></div></section><section><h4>Notes</h4>
-<ul>
-<li>The original Duke MTMC dataset paper mentions 2,700 identities, but their ground truth file only lists annotations for 1,812, and their own research typically mentions 2,000. For this write up we used 2,000 to describe the approximate number of students.</li>
-</ul>
+<p>The original Duke MTMC dataset paper mentions 2,700 identities, but their ground truth file only lists annotations for 1,812, and their own research typically mentions 2,000. For this write-up we used 2,000 to describe the approximate number of students.</p>
+<h4>Ethics</h4>
+<p>Please direct any questions about the ethics of the dataset to Duke University's <a href="https://hr.duke.edu/policies/expectations/compliance/">Institutional Ethics &amp; Compliance Office</a> using the number at the bottom of the page.</p>
</section><section>
<h4>Cite Our Work</h4>
@@ -348,7 +348,7 @@
title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets},
year = 2019,
url = {https://megapixels.cc/},
- urldate = {2019-04-20}
+ urldate = {2019-04-18}
}</pre>
</p>
@@ -360,11 +360,7 @@
booktitle = {European Conference on Computer Vision workshop on Benchmarking Multi-Target Tracking},
year = {2016}
}
-</pre><h4>ToDo</h4>
-<ul>
-<li>clean up citations, formatting</li>
-</ul>
-</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^xinjiang_nyt]" class="footnote_shim"></a><span class="backlinks"><a href="#[^xinjiang_nyt]_1">a</a></span><p>Mozur, Paul. "One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority". <a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html">https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html</a>. April 14, 2019.</p>
+</pre></section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^xinjiang_nyt]" class="footnote_shim"></a><span class="backlinks"><a href="#[^xinjiang_nyt]_1">a</a></span><p>Mozur, Paul. "One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority". <a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html">https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html</a>. April 14, 2019.</p>
</li><li><a name="[^sensetime_qz]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime_qz]_1">a</a></span><p><a href="https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/">https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/</a></p>
</li><li><a name="[^sensenets_uyghurs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_uyghurs]_1">a</a></span><p><a href="https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/">https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/</a></p>
</li><li><a name="[^duke_mtmc_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^duke_mtmc_orig]_1">a</a></span><p>"Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking". 2016. <a href="https://www.semanticscholar.org/paper/Performance-Measures-and-a-Data-Set-for-Tracking-Ristani-Solera/27a2fad58dd8727e280f97036e0d2bc55ef5424c">SemanticScholar</a></p>