diff options
| author | adamhrv <adam@ahprojects.com> | 2019-04-12 09:10:21 +0200 |
|---|---|---|
| committer | adamhrv <adam@ahprojects.com> | 2019-04-12 09:10:21 +0200 |
| commit | 1e98a1f926475d03077a7d18d3557140e83d41ba (patch) | |
| tree | 55c42f5320f0ef9472fc5b92844e7bb2109374a9 /site/public/datasets | |
| parent | 57fba037d519e45488599288f7753cb7a3cd32aa (diff) | |
| parent | 9b1e2709cbdb40eabb34d379df18e61c10e3737c (diff) | |
ugh, merge
Diffstat (limited to 'site/public/datasets')
| -rw-r--r-- | site/public/datasets/duke_mtmc/index.html | 144 |
| -rw-r--r-- | site/public/datasets/hrt_transgender/index.html | 67 |
| -rw-r--r-- | site/public/datasets/index.html | 109 |
| -rw-r--r-- | site/public/datasets/lfw/index.html | 166 |
| -rw-r--r-- | site/public/datasets/msceleb/index.html | 139 |
| -rw-r--r-- | site/public/datasets/oxford_town_centre/index.html | 149 |
6 files changed, 774 insertions, 0 deletions
diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html new file mode 100644 index 00000000..9bec47ed --- /dev/null +++ b/site/public/datasets/duke_mtmc/index.html @@ -0,0 +1,144 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="Duke MTMC is a dataset of surveillance camera footage of students on Duke University campus" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + <div class='splash'>Duke MTMC Dataset</div> + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + </div> + </header> + <div class="content content-dataset"> + + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Duke MTMC</span> is a dataset of surveillance camera footage of students on Duke University campus</span></div><div class='hero_subdesc'><span class='bgpad'>Duke MTMC contains over 2 million video frames and 2,700 unique identities collected from 8 HD cameras at Duke University campus in March 2014 +</span></div></div></section><section><div class='left-sidebar'><div class='meta'> + <div class='gray'>Published</div> + <div>2016</div> + </div><div class='meta'> + <div class='gray'>Images</div> + <div>2,000,000 </div> + </div><div class='meta'> + <div class='gray'>Identities</div> + <div>2,700 </div> + </div><div class='meta'> + <div class='gray'>Purpose</div> + <div>Person re-identification, multi-camera tracking</div> + </div><div class='meta'> + <div class='gray'>Created by</div> + <div>Computer Science Department, Duke University, Durham, US</div> + </div><div class='meta'> + <div class='gray'>Website</div> + <div><a href='http://vision.cs.duke.edu/DukeMTMC/' target='_blank' rel='nofollow noopener'>duke.edu</a></div> + </div></div><h2>Duke MTMC</h2> +<p>The Duke Multi-Target, Multi-Camera Tracking Dataset (MTMC) is a dataset of video recorded on Duke University campus for research and development of networked camera surveillance systems. MTMC tracking is used for citywide dragnet surveillance systems such as those used throughout China by SenseTime<a class="footnote_shim" name="[^sensetime_qz]_1"> </a><a href="#[^sensetime_qz]" class="footnote" title="Footnote 1">1</a> and the oppressive monitoring of 2.5 million Uyghurs in Xinjiang by SenseNets<a class="footnote_shim" name="[^sensenets_uyghurs]_1"> </a><a href="#[^sensenets_uyghurs]" class="footnote" title="Footnote 2">2</a>. 
In fact, researchers from both SenseTime<a class="footnote_shim" name="[^sensetime1]_1"> </a><a href="#[^sensetime1]" class="footnote" title="Footnote 4">4</a> <a class="footnote_shim" name="[^sensetime2]_1"> </a><a href="#[^sensetime2]" class="footnote" title="Footnote 5">5</a> and SenseNets<a class="footnote_shim" name="[^sensenets_sensetime]_1"> </a><a href="#[^sensenets_sensetime]" class="footnote" title="Footnote 3">3</a> used the Duke MTMC dataset for their research.</p> +<p>The Duke MTMC dataset is unique because it is the largest publicly available MTMC and person re-identification dataset and has the longest duration of annotated video. In total, the Duke MTMC dataset provides over 14 hours of 1080p video from 8 synchronized surveillance cameras.<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a> It is among the most widely used person re-identification datasets in the world. The approximately 2,700 unique people in the Duke MTMC videos, most of whom are students, are used for research and development of surveillance technologies by commercial, academic, and even defense organizations.</p> +</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_reid_montage.jpg' alt=' A collection of 1,600 out of the 2,700 students and passersby captured in the Duke MTMC surveillance research dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. © megapixels.cc'><div class='caption'> A collection of 1,600 out of the 2,700 students and passersby captured in the Duke MTMC surveillance research dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. © megapixels.cc</div></div></section><section><p>The creation and publication of the Duke MTMC dataset in 2016 was originally funded by the U.S. Army Research Laboratory and the National Science Foundation<a class="footnote_shim" name="[^duke_mtmc_orig]_2"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a>. Since 2016, use of the Duke MTMC dataset images has been publicly acknowledged in research funded by or on behalf of the Chinese National University of Defense<a class="footnote_shim" name="[^cn_defense1]_1"> </a><a href="#[^cn_defense1]" class="footnote" title="Footnote 7">7</a><a class="footnote_shim" name="[^cn_defense2]_1"> </a><a href="#[^cn_defense2]" class="footnote" title="Footnote 8">8</a>, IARPA and IBM<a class="footnote_shim" name="[^iarpa_ibm]_1"> </a><a href="#[^iarpa_ibm]" class="footnote" title="Footnote 9">9</a>, and the U.S. Department of Homeland Security<a class="footnote_shim" name="[^us_dhs]_1"> </a><a href="#[^us_dhs]" class="footnote" title="Footnote 10">10</a>.</p> +<p>The 8 cameras deployed on Duke's campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy".<a class="footnote_shim" name="[^duke_mtmc_orig]_3"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a> Cameras 7 and 2 capture large groups of prospective students and children. Camera 5 was positioned to capture students as they enter and exit Duke University's main chapel.
Each camera's location is documented below.</p> +</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_camera_map.jpg' alt=' Duke MTMC camera locations on Duke University campus © megapixels.cc'><div class='caption'> Duke MTMC camera locations on Duke University campus © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_cameras.jpg' alt=' Duke MTMC camera views for 8 cameras deployed on campus © megapixels.cc'><div class='caption'> Duke MTMC camera views for 8 cameras deployed on campus © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliencies.jpg' alt=' Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus © megapixels.cc'><div class='caption'> Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus © megapixels.cc</div></div></section><section> + <h3>Who used Duke MTMC?</h3> + + <p> + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. + </p> + + </section> + +<section class="applet_container"> + <div class="applet" data-payload='{"command": "chart"}'></div> +</section> + +<section class="applet_container"> + <div class="applet" data-payload='{"command": "piechart"}'></div> +</section> + +<section> + + <h3>Biometric Trade Routes</h3> + + <p> + To help understand how the Duke MTMC dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the Duke Multi-Target, Multi-Camera Tracking Project was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. + </p> + + </section> + +<section class="applet_container fullwidth"> + <div class="applet" data-payload='{"command": "map"}'></div> +</section> + +<div class="caption"> + <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + </ul> + <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>, then dataset usage is verified and geolocated.</div> +</div> + + +<section class="applet_container"> + + <h3>Dataset Citations</h3> + <p> + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ </p> + + <div class="applet" data-payload='{"command": "citations"}'></div> +</section><section> + + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h2>Supplementary Information</h2> + +</section><section><h3>Notes</h3> +<p>The Duke MTMC dataset paper mentions 2,700 identities, but its ground truth file only lists annotations for 1,812 (a count that can be reproduced with the sketch following this file's diff).</p> +</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^sensetime_qz]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime_qz]_1">a</a></span><p><a href="https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/">https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/</a></p> +</li><li><a name="[^sensenets_uyghurs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_uyghurs]_1">a</a></span><p><a href="https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/">https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/</a></p> +</li><li><a name="[^sensenets_sensetime]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_sensetime]_1">a</a></span><p>"Attention-Aware Compositional Network for Person Re-identification". 2018. <a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">Source</a></p> +</li><li><a name="[^sensetime1]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime1]_1">a</a></span><p>"End-to-End Deep Kronecker-Product Matching for Person Re-identification". 2018. <a href="https://www.semanticscholar.org/paper/End-to-End-Deep-Kronecker-Product-Matching-for-Shen-Xiao/947954cafdefd471b75da8c3bb4c21b9e6d57838">Source</a></p> +</li><li><a name="[^sensetime2]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime2]_1">a</a></span><p>"Person Re-identification with Deep Similarity-Guided Graph Neural Network". 2018. <a href="https://www.semanticscholar.org/paper/Person-Re-identification-with-Deep-Graph-Neural-Shen-Li/08d2a558ea2deb117dd8066e864612bf2899905b">Source</a></p> +</li><li><a name="[^duke_mtmc_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^duke_mtmc_orig]_1">a</a><a href="#[^duke_mtmc_orig]_2">b</a><a href="#[^duke_mtmc_orig]_3">c</a></span><p>"Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking". 2016. <a href="https://www.semanticscholar.org/paper/Performance-Measures-and-a-Data-Set-for-Tracking-Ristani-Solera/27a2fad58dd8727e280f97036e0d2bc55ef5424c">Source</a></p> +</li><li><a name="[^cn_defense1]" class="footnote_shim"></a><span class="backlinks"><a href="#[^cn_defense1]_1">a</a></span><p>"Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers". 2018. <a href="https://www.semanticscholar.org/paper/Tracking-by-Animation%3A-Unsupervised-Learning-of-He-Liu/e90816e1a0e14ea1e7039e0b2782260999aef786">Source</a></p> +</li><li><a name="[^cn_defense2]" class="footnote_shim"></a><span class="backlinks"><a href="#[^cn_defense2]_1">a</a></span><p>"Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks". 2018.
<a href="https://www.semanticscholar.org/paper/Unsupervised-Multi-Object-Detection-for-Video-Using-He-He/59f357015054bab43fb8cbfd3f3dbf17b1d1f881">Source</a></p> +</li><li><a name="[^iarpa_ibm]" class="footnote_shim"></a><span class="backlinks"><a href="#[^iarpa_ibm]_1">a</a></span><p>"Horizontal Pyramid Matching for Person Re-identification". 2019. <a href="https://www.semanticscholar.org/paper/Horizontal-Pyramid-Matching-for-Person-Fu-Wei/c2a5f27d97744bc1f96d7e1074395749e3c59bc8">Source</a></p> +</li><li><a name="[^us_dhs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^us_dhs]_1">a</a></span><p>"Re-Identification with Consistent Attentive Siamese Networks". 2018. <a href="https://www.semanticscholar.org/paper/Re-Identification-with-Consistent-Attentive-Siamese-Zheng-Karanam/24d6d3adf2176516ef0de2e943ce2084e27c4f94">Source</a></p> +</li></ul></section></section> + + </div> + <footer> + <div> + <a href="/">MegaPixels.cc</a> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + <a href="/about/press/">Press</a> + <a href="/about/legal/">Legal and Privacy</a> + </div> + <div> + MegaPixels ©2017-19 Adam R. Harvey / + <a href="https://ahprojects.com">ahprojects.com</a> + </div> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
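The 1,812 figure in the Notes above can be checked directly against the dataset's published ground truth. Below is a minimal sketch, assuming the ground truth is the MATLAB file "trainval.mat" with one annotation per row and the person ID in the second column; the filename, the "trainData" variable name, and the column layout are assumptions, not documented on this page.

```python
# Count unique annotated identities in the Duke MTMC ground truth.
import numpy as np
from scipy.io import loadmat

gt = loadmat("trainval.mat")["trainData"]   # rows: [camera, ID, frame, ...] (assumed layout)
ids = np.unique(gt[:, 1].astype(int))       # column 1 assumed to hold the person ID
print(f"{ids.size} unique annotated identities")  # the Notes above report 1,812
```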
\ No newline at end of file diff --git a/site/public/datasets/hrt_transgender/index.html b/site/public/datasets/hrt_transgender/index.html new file mode 100644 index 00000000..486b9122 --- /dev/null +++ b/site/public/datasets/hrt_transgender/index.html @@ -0,0 +1,67 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="TBD" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + <div class='splash'>HRT Transgender</div> + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + </div> + </header> + <div class="content content-dataset"> + + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/hrt_transgender/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>TBD</span></div><div class='hero_subdesc'><span class='bgpad'>TBD +</span></div></div></section><section><div class='left-sidebar'><div class='meta'> + <div class='gray'>Published</div> + <div>2013</div> + </div><div class='meta'> + <div class='gray'>Images</div> + <div>10,564 </div> + </div><div class='meta'> + <div class='gray'>Identities</div> + <div>38 </div> + </div><div class='meta'> + <div class='gray'>Purpose</div> + <div>Face recognition, gender transition biometrics</div> + </div><div class='meta'> + <div class='gray'>Website</div> + <div><a href='http://www.faceaginggroup.com/hrt-transgender/' target='_blank' rel='nofollow noopener'>faceaginggroup.com</a></div> + </div></div><h2>HRT Transgender Dataset</h2> +<p>[ page under development ]</p> +</section><section><p>{% include 'dashboard.html' %}</p> +</section> + + </div> + <footer> + <div> + <a href="/">MegaPixels.cc</a> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + <a href="/about/press/">Press</a> + <a href="/about/legal/">Legal and Privacy</a> + </div> + <div> + MegaPixels ©2017-19 Adam R. Harvey / + <a href="https://ahprojects.com">ahprojects.com</a> + </div> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
\ No newline at end of file diff --git a/site/public/datasets/index.html b/site/public/datasets/index.html new file mode 100644 index 00000000..b01c1ac1 --- /dev/null +++ b/site/public/datasets/index.html @@ -0,0 +1,109 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="Facial Recognition Datasets" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + </div> + </header> + <div class="content content-"> + + + <section><h1>Facial Recognition Datasets</h1> +<p>Explore publicly available facial recognition datasets. More datasets will be added throughout 2019.</p> +</section> + + <section class='applet_container autosize'><div class='applet' data-payload='{"command":"dataset_list"}'></div></section> + + <section class='wide dataset-intro'> + + <div class="dataset-list"> + + <a href="/datasets/brainwash/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/index.jpg)"> + <div class="dataset"> + <span class='title'>Brainwash</span> + <div class='fields'> + <div class='year visible'><span>2015</span></div> + <div class='purpose'><span>Head detection</span></div> + <div class='images'><span>11,917 images</span></div> + <div class='identities'><span></span></div> + </div> + </div> + </a> + + <a href="/datasets/duke_mtmc/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/index.jpg)"> + <div class="dataset"> + <span class='title'>Duke MTMC</span> + <div class='fields'> + <div class='year visible'><span>2016</span></div> + <div class='purpose'><span>Person re-identification, multi-camera tracking</span></div> + <div class='images'><span>2,000,000 images</span></div> + <div class='identities'><span>1,812 </span></div> + </div> + </div> + </a> + + <a href="/datasets/oxford_town_centre/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/index.jpg)"> + <div class="dataset"> + <span class='title'>Oxford Town Centre</span> + <div class='fields'> + <div class='year visible'><span>2011</span></div> + <div class='purpose'><span>Person detection, gaze estimation</span></div> + <div class='images'><span> images</span></div> + <div class='identities'><span></span></div> + </div> + </div> + </a> + + <a href="/datasets/uccs/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/index.jpg)"> + <div class="dataset"> + <span class='title'>UnConstrained College Students</span> + <div class='fields'> + <div class='year visible'><span>2016</span></div> + <div class='purpose'><span>Face recognition, face detection</span></div> + <div class='images'><span>16,149 images</span></div> + <div class='identities'><span>1,732 </span></div> + </div> + </div> + </a> + + </div> + </section> + + + </div> + <footer> + <div> + <a href="/">MegaPixels.cc</a> + <a 
href="/datasets/">Datasets</a> + <a href="/about/">About</a> + <a href="/about/press/">Press</a> + <a href="/about/legal/">Legal and Privacy</a> + </div> + <div> + MegaPixels ©2017-19 Adam R. Harvey / + <a href="https://ahprojects.com">ahprojects.com</a> + </div> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
\ No newline at end of file diff --git a/site/public/datasets/lfw/index.html b/site/public/datasets/lfw/index.html new file mode 100644 index 00000000..60a6bf0e --- /dev/null +++ b/site/public/datasets/lfw/index.html @@ -0,0 +1,166 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="Labeled Faces in The Wild (LFW) is the first facial recognition dataset created entirely from online photos" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + <div class='splash'>LFW</div> + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + </div> + </header> + <div class="content content-"> + + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/lfw/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Labeled Faces in The Wild (LFW)</span> is the first facial recognition dataset created entirely from online photos</span></div><div class='hero_subdesc'><span class='bgpad'>It includes 13,233 images of 5,749 people copied from the Internet during 2002-2004 and is the most frequently used dataset in the world for benchmarking face recognition algorithms. +</span></div></div></section><section><div class='left-sidebar'><div class='meta'> + <div class='gray'>Published</div> + <div>2007</div> + </div><div class='meta'> + <div class='gray'>Images</div> + <div>13,233 </div> + </div><div class='meta'> + <div class='gray'>Identities</div> + <div>5,749 </div> + </div><div class='meta'> + <div class='gray'>Purpose</div> + <div>Face recognition</div> + </div><div class='meta'> + <div class='gray'>Website</div> + <div><a href='http://vis-www.cs.umass.edu/lfw/' target='_blank' rel='nofollow noopener'>umass.edu</a></div> + </div></div><h2>Labeled Faces in the Wild</h2> +<p>[ PAGE UNDER DEVELOPMENT ]</p> +<p><em>Labeled Faces in The Wild</em> (LFW) is "a database of face photographs designed for studying the problem of unconstrained face recognition"<a class="footnote_shim" name="[^lfw_www]_1"> </a><a href="#[^lfw_www]" class="footnote" title="Footnote 1">1</a>. It is used to evaluate and improve the performance of facial recognition algorithms in academic, commercial, and government research. According to BiometricUpdate.com<a class="footnote_shim" name="[^lfw_pingan]_1"> </a><a href="#[^lfw_pingan]" class="footnote" title="Footnote 3">3</a>, LFW is "the most widely used evaluation set in the field of facial recognition, LFW attracts a few dozen teams from around the globe including Google, Facebook, Microsoft Research Asia, Baidu, Tencent, SenseTime, Face++ and Chinese University of Hong Kong."</p> +<p>The LFW dataset includes 13,233 images of 5,749 people that were collected between 2002 and 2004. LFW is a subset of <em>Names and Faces</em> and is part of the first facial recognition training dataset created entirely from images appearing on the Internet.
The people appearing in LFW are...</p> +<p>The <em>Names and Faces</em> dataset was the first face recognition dataset created entirely from online photos. However, <em>Names and Faces</em> and <em>LFW</em> are not the first face recognition datasets created entirely "in the wild". That title belongs to the <a href="/datasets/ucd_faces/">UCD dataset</a>. Obtaining images "in the wild" means using an image without explicit consent or awareness from the subject or photographer.</p> +</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/lfw/assets/lfw_montage_all_crop.jpg' alt='All 5,749 people in the Labeled Faces in The Wild Dataset. Showing one face per person'><div class='caption'>All 5,749 people in the Labeled Faces in The Wild Dataset. Showing one face per person</div></div></section><section> + <h3>Who used LFW?</h3> + + <p> + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. + </p> + + </section> + +<section class="applet_container"> + <div class="applet" data-payload='{"command": "chart"}'></div> +</section> + +<section class="applet_container"> + <div class="applet" data-payload='{"command": "piechart"}'></div> +</section> + +<section> + + <h3>Biometric Trade Routes</h3> + + <p> + To help understand how LFW has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Labeled Faces in the Wild was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ </p> + + </section> + +<section class="applet_container fullwidth"> + <div class="applet" data-payload='{"command": "map"}'></div> +</section> + +<div class="caption"> + <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + </ul> + <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>, then dataset usage is verified and geolocated.</div> +</div> + + +<section class="applet_container"> + + <h3>Dataset Citations</h3> + <p> + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. + </p> + + <div class="applet" data-payload='{"command": "citations"}'></div> +</section><section> + + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h2>Supplementary Information</h2> + +</section><section><h3>Commercial Use</h3> +<p>Add a paragraph about how usage extends far beyond academia into research centers for the largest companies in the world, and even funnels into CIA-funded research in the US and defense-industry usage in China.</p> +</section><section class='applet_container'><div class='applet' data-payload='{"command": "load_file assets/lfw_commercial_use.csv", "fields": ["name_display, company_url, example_url, country, description"]}'></div></section><section><h3>Research</h3> +<ul> +<li>"In our experiments, we used 10000 images and associated captions from the Faces in the wild data set [3]."</li> +<li>"This work was supported in part by the Center for Intelligent Information Retrieval, the Central Intelligence Agency, the National Security Agency and National Science Foundation under CAREER award IIS-0546666 and grant IIS-0326249."</li> +<li>From: "People-LDA: Anchoring Topics to People using Face Recognition" <a href="https://www.semanticscholar.org/paper/People-LDA%3A-Anchoring-Topics-to-People-using-Face-Jain-Learned-Miller/10f17534dba06af1ddab96c4188a9c98a020a459">https://www.semanticscholar.org/paper/People-LDA%3A-Anchoring-Topics-to-People-using-Face-Jain-Learned-Miller/10f17534dba06af1ddab96c4188a9c98a020a459</a> and <a href="https://ieeexplore.ieee.org/document/4409055">https://ieeexplore.ieee.org/document/4409055</a></li> +<li>This paper was presented at the IEEE 11th ICCV conference, Oct 14-21, and the main LFW paper "Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments" was also published that same year</li> +<li>This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract number 2014-14071600010.</li> +<li>From "Labeled Faces in the Wild: Updates and New Reporting Procedures"</li> +<li>70% of people in the dataset have only 1 image and 29% have 2 or more images</li> +<li>The LFW dataset is considered the "most popular benchmark for face recognition" <a class="footnote_shim" name="[^lfw_baidu]_1"> </a><a href="#[^lfw_baidu]" class="footnote" title="Footnote 2">2</a></li> +<li>The LFW dataset is "the most widely used evaluation set in the field of facial recognition" <a class="footnote_shim" name="[^lfw_pingan]_2"> </a><a href="#[^lfw_pingan]" class="footnote" title="Footnote 3">3</a></li> +<li>All images in the LFW dataset were obtained "in the wild", meaning without any consent from the subject or from the photographer</li> +<li>The faces in the LFW dataset were detected using the Viola-Jones Haar cascade face detector [^lfw_website] [^lfw-survey] (see the detection sketch following this file's diff)</li> +<li>The LFW dataset is used by several of the largest tech companies in the world including "Google, Facebook, Microsoft Research Asia, Baidu, Tencent, SenseTime, Face++ and Chinese University of Hong Kong." <a class="footnote_shim" name="[^lfw_pingan]_3"> </a><a href="#[^lfw_pingan]" class="footnote" title="Footnote 3">3</a></li> +<li>All images in the LFW dataset were copied from Yahoo News between 2002 and 2004</li> +<li>In 2014, two of the four original authors of the LFW dataset received funding from IARPA and ODNI for their followup paper <a href="https://www.semanticscholar.org/paper/Labeled-Faces-in-the-Wild-%3A-Updates-and-New-Huang-Learned-Miller/2d3482dcff69c7417c7b933f22de606a0e8e42d4">Labeled Faces in the Wild: Updates and New Reporting Procedures</a> via IARPA contract number 2014-14071600010</li> +<li>The dataset includes 2 images of <a href="http://vis-www.cs.umass.edu/lfw/person/George_Tenet.html">George Tenet</a>, the former Director of Central Intelligence (DCI) for the Central Intelligence Agency whose facial biometrics were eventually used to help train facial recognition software in China and Russia</li> +<li>./15/155205b8e288fd49bf203135871d66de879c8c04/paper.txt shows usage by DSTO Australia, supported parimal@iisc.ac.in</li> +</ul> +</section><section><div class='meta'><div><div class='gray'>Created</div><div>2002 – 2004</div></div><div><div class='gray'>Images</div><div>13,233</div></div><div><div class='gray'>Identities</div><div>5,749</div></div><div><div class='gray'>Origin</div><div>Yahoo! News Images</div></div><div><div class='gray'>Used by</div><div>Facebook, Google, Microsoft, Baidu, Tencent, SenseTime, Face++, CIA, NSA, IARPA</div></div><div><div class='gray'>Website</div><div><a href="http://vis-www.cs.umass.edu/lfw">umass.edu</a></div></div></div><section><section><ul> +<li>There are about 3 men for every 1 woman in the LFW dataset<a class="footnote_shim" name="[^lfw_www]_2"> </a><a href="#[^lfw_www]" class="footnote" title="Footnote 1">1</a></li> +<li>The person with the most images is <a href="http://vis-www.cs.umass.edu/lfw/person/George_W_Bush_comp.html">George W. Bush</a> with 530</li> +<li>There are about 3 George W.
Bush's for every 1 <a href="http://vis-www.cs.umass.edu/lfw/person/Tony_Blair.html">Tony Blair</a></li> +<li>The LFW dataset includes over 500 actors, 30 models, 10 presidents, 124 basketball players, 24 football players, 11 kings, 7 queens, and 1 <a href="http://vis-www.cs.umass.edu/lfw/person/Moby.html">Moby</a></li> +<li>In all 3 of the LFW publications [^lfw_original_paper], [^lfw_survey], [^lfw_tech_report] the words "ethics", "consent", and "privacy" appear 0 times</li> +<li>The word "future" appears 71 times</li> +<li>* denotes partial funding for related research</li> +</ul> +</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^lfw_www]" class="footnote_shim"></a><span class="backlinks"><a href="#[^lfw_www]_1">a</a><a href="#[^lfw_www]_2">b</a></span><p><a href="http://vis-www.cs.umass.edu/lfw/results.html">http://vis-www.cs.umass.edu/lfw/results.html</a></p> +</li><li><a name="[^lfw_baidu]" class="footnote_shim"></a><span class="backlinks"><a href="#[^lfw_baidu]_1">a</a></span><p>Jingtuo Liu, Yafeng Deng, Tao Bai, Zhengping Wei, Chang Huang. Targeting Ultimate Accuracy: Face Recognition via Deep Embedding. <a href="https://arxiv.org/abs/1506.07310">https://arxiv.org/abs/1506.07310</a></p> +</li><li><a name="[^lfw_pingan]" class="footnote_shim"></a><span class="backlinks"><a href="#[^lfw_pingan]_1">a</a><a href="#[^lfw_pingan]_2">b</a><a href="#[^lfw_pingan]_3">c</a></span><p>Lee, Justin. "PING AN Tech facial recognition receives high score in latest LFW test results". BiometricUpdate.com. Feb 13, 2017. <a href="https://www.biometricupdate.com/201702/ping-an-tech-facial-recognition-receives-high-score-in-latest-lfw-test-results">https://www.biometricupdate.com/201702/ping-an-tech-facial-recognition-receives-high-score-in-latest-lfw-test-results</a></p> +</li></ul></section></section> + + </div> + <footer> + <div> + <a href="/">MegaPixels.cc</a> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + <a href="/about/press/">Press</a> + <a href="/about/legal/">Legal and Privacy</a> + </div> + <div> + MegaPixels ©2017-19 Adam R. Harvey / + <a href="https://ahprojects.com">ahprojects.com</a> + </div> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
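The Research notes above state that LFW's faces were found with the Viola-Jones Haar cascade detector. Here is a minimal sketch of that detection step using OpenCV's bundled frontal-face cascade; the parameters and the example filename are illustrative assumptions, not the LFW authors' settings.

```python
# Viola-Jones face detection with OpenCV's stock Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
img = cv2.imread("Aaron_Eckhart_0001.jpg")  # any LFW-style image (hypothetical file)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```

The note that 70% of people have only one image can likewise be checked against LFW's standard directory layout (one folder per person, images named Name_NNNN.jpg), assuming a local copy of the dataset in a folder named "lfw":

```python
# Count images per identity from the standard LFW directory layout.
from pathlib import Path

counts = {p.name: len(list(p.glob("*.jpg")))
          for p in Path("lfw").iterdir() if p.is_dir()}
singletons = sum(1 for n in counts.values() if n == 1)
print(f"{singletons / len(counts):.0%} of identities have a single image")
```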
\ No newline at end of file diff --git a/site/public/datasets/msceleb/index.html b/site/public/datasets/msceleb/index.html new file mode 100644 index 00000000..cf3a654f --- /dev/null +++ b/site/public/datasets/msceleb/index.html @@ -0,0 +1,139 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="MS Celeb is a dataset of web images used for training and evaluating face recognition algorithms" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + <div class='splash'>Microsoft Celeb</div> + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + </div> + </header> + <div class="content content-dataset"> + + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>MS Celeb is a dataset of web images used for training and evaluating face recognition algorithms</span></div><div class='hero_subdesc'><span class='bgpad'>The MS Celeb dataset includes over 10,000,000 images and 93,000 identities of semi-public figures collected using the Bing search engine +</span></div></div></section><section><div class='left-sidebar'><div class='meta'> + <div class='gray'>Published</div> + <div>2016</div> + </div><div class='meta'> + <div class='gray'>Images</div> + <div>10,000,000 </div> + </div><div class='meta'> + <div class='gray'>Identities</div> + <div>100,000 </div> + </div><div class='meta'> + <div class='gray'>Purpose</div> + <div>Large-scale face recognition</div> + </div><div class='meta'> + <div class='gray'>Created by</div> + <div>Microsoft Research</div> + </div><div class='meta'> + <div class='gray'>Funded by</div> + <div>Microsoft Research</div> + </div><div class='meta'> + <div class='gray'>Website</div> + <div><a href='http://www.msceleb.org/' target='_blank' rel='nofollow noopener'>msceleb.org</a></div> + </div></div><h2>Microsoft Celeb Dataset (MS Celeb)</h2> +<p>[ PAGE UNDER DEVELOPMENT ]</p> +</section><section> + <h3>Who used Microsoft Celeb?</h3> + + <p> + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
+ </p> + + </section> + +<section class="applet_container"> + <div class="applet" data-payload='{"command": "chart"}'></div> +</section> + +<section class="applet_container"> + <div class="applet" data-payload='{"command": "piechart"}'></div> +</section> + +<section> + + <h3>Biometric Trade Routes</h3> + + <p> + To help understand how Microsoft Celeb has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the Microsoft Celebrity Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. + </p> + + </section> + +<section class="applet_container fullwidth"> + <div class="applet" data-payload='{"command": "map"}'></div> +</section> + +<div class="caption"> + <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + </ul> + <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>, then dataset usage is verified and geolocated.</div> +</div> + + +<section class="applet_container"> + + <h3>Dataset Citations</h3> + <p> + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms (a sketch of the geocoding step follows this file's diff). + </p> + + <div class="applet" data-payload='{"command": "citations"}'></div> +</section><section> + + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h2>Supplementary Information</h2> + +</section><section><h3>Additional Information</h3> +<ul> +<li>The dataset author spoke about his research at the CVPR conference in 2016 <a href="https://www.youtube.com/watch?v=Nl2fBKxwusQ">https://www.youtube.com/watch?v=Nl2fBKxwusQ</a></li> +</ul> +</section> + + </div> + <footer> + <div> + <a href="/">MegaPixels.cc</a> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + <a href="/about/press/">Press</a> + <a href="/about/legal/">Legal and Privacy</a> + </div> + <div> + MegaPixels ©2017-19 Adam R.
Harvey / + <a href="https://ahprojects.com">ahprojects.com</a> + </div> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
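The "Dataset Citations" sections above describe resolving institution names from paper front matter to map coordinates. Below is a hedged sketch of that geocoding step; geopy's Nominatim client is an assumed tool choice, and the site does not say which geocoder was actually used.

```python
# Geocode institution names (e.g. parsed from PDF front matter) to coordinates.
from geopy.geocoders import Nominatim

geocoder = Nominatim(user_agent="megapixels-citation-map")  # hypothetical agent string
institutions = ["Duke University", "Microsoft Research Asia"]  # example inputs
for name in institutions:
    loc = geocoder.geocode(name)
    if loc:
        print(name, (loc.latitude, loc.longitude))
```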
\ No newline at end of file diff --git a/site/public/datasets/oxford_town_centre/index.html b/site/public/datasets/oxford_town_centre/index.html new file mode 100644 index 00000000..63dc52d4 --- /dev/null +++ b/site/public/datasets/oxford_town_centre/index.html @@ -0,0 +1,149 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="Oxford Town Centre is a dataset of surveillance camera footage from Cornmarket St Oxford, England" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + <div class='splash'>TownCentre</div> + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + </div> + </header> + <div class="content content-dataset"> + + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Oxford Town Centre is a dataset of surveillance camera footage from Cornmarket St Oxford, England</span></div><div class='hero_subdesc'><span class='bgpad'>The Oxford Town Centre dataset includes approximately 2,200 identities and is used for research and development of face recognition systems +</span></div></div></section><section><div class='left-sidebar'><div class='meta'> + <div class='gray'>Published</div> + <div>2009</div> + </div><div class='meta'> + <div class='gray'>Videos</div> + <div>1 </div> + </div><div class='meta'> + <div class='gray'>Identities</div> + <div>2,200 </div> + </div><div class='meta'> + <div class='gray'>Purpose</div> + <div>Person detection, gaze estimation</div> + </div><div class='meta'> + <div class='gray'>Funded by</div> + <div>EU FP6 Hermes project and Oxford Risk </div> + </div><div class='meta'> + <div class='gray'>Download Size</div> + <div>0.147 GB</div> + </div><div class='meta'> + <div class='gray'>Website</div> + <div><a href='http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html' target='_blank' rel='nofollow noopener'>ox.ac.uk</a></div> + </div></div><h2>Oxford Town Centre</h2> +<p>The Oxford Town Centre dataset is a CCTV video of pedestrians in a busy downtown area in Oxford used for research and development of activity and face recognition systems.<a class="footnote_shim" name="[^ben_benfold_orig]_1"> </a><a href="#[^ben_benfold_orig]" class="footnote" title="Footnote 1">1</a> The CCTV video was obtained from a public surveillance camera at the corner of Cornmarket and Market St. in Oxford, England and includes approximately 2,200 people. 
Since its publication in 2009<a class="footnote_shim" name="[^guiding_surveillance]_1"> </a><a href="#[^guiding_surveillance]" class="footnote" title="Footnote 2">2</a>, the Oxford Town Centre dataset has been used in over 80 verified research projects, including commercial research by Amazon, Disney, OSRAM, and Huawei; and academic research in China, Israel, Russia, Singapore, the US, and Germany, among dozens more.</p> +<p>The Oxford Town Centre dataset is unique in that it uses footage from a public surveillance camera that would otherwise be designated for public safety. The video shows that the pedestrians act naturally and unrehearsed, indicating they neither knew of nor consented to participation in the research project.</p> +</section><section> + <h3>Who used TownCentre?</h3> + + <p> + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. + </p> + + </section> + +<section class="applet_container"> + <div class="applet" data-payload='{"command": "chart"}'></div> +</section> + +<section class="applet_container"> + <div class="applet" data-payload='{"command": "piechart"}'></div> +</section> + +<section> + + <h3>Biometric Trade Routes</h3> + + <p> + To help understand how TownCentre has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Oxford Town Centre was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. + </p> + + </section> + +<section class="applet_container fullwidth"> + <div class="applet" data-payload='{"command": "map"}'></div> +</section> + +<div class="caption"> + <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + </ul> + <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>, then dataset usage is verified and geolocated.</div> +</div> + + +<section class="applet_container"> + + <h3>Dataset Citations</h3> + <p> + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. + </p> + + <div class="applet" data-payload='{"command": "citations"}'></div> +</section><section> + + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h2>Supplementary Information</h2> + +</section><section><h3>Location</h3> +<p>The street location of the camera used for the Oxford Town Centre dataset was confirmed by matching the road, benches, and store signs (<a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">source</a>).
At that location, two public CCTV cameras are mounted on the side of the Northgate House building at 13-20 Cornmarket St. A view from a private camera in the building across the street can be ruled out because, given the direction of the lower camera's mounting pole, such a view would have to show more of the pole's silhouette. Two options remain: either the public CCTV camera mounted on the side of the building was used, or the researchers mounted their own camera in the same location. Because the researchers used many other existing public CCTV cameras for their <a href="http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html">research projects</a>, it is likely that they also had access to this camera.</p> +<p>Although Google Street View images only show this public CCTV camera pointing the other way, at least one public photo shows the upper CCTV camera <a href="https://www.oxcivicsoc.org.uk/northgate-house-cornmarket/">pointing in the same direction</a> as in the Oxford Town Centre dataset, proving the camera can be, and has been, rotated.</p> +<p>As for the capture date, the text on the storefront display shows a sale happening from December 2nd – 7th, indicating the capture date was between or just before those dates. The capture year is either 2007 or 2008, since prior to 2007 the Carphone Warehouse (<a href="https://www.flickr.com/photos/katieportwin/364492063/in/photolist-4meWFE-yd7rw-yd7X6-5sDHuc-yd7DN-59CpEK-5GoHAc-yd7Zh-3G2uJP-yd7US-5GomQH-4peYpq-4bAEwm-PALEr-58RkAp-5pHEkf-5v7fGq-4q1J9W-4kypQ2-5KX2Eu-yd7MV-yd7p6-4McgWb-5pJ55w-24N9gj-37u9LK-4FVcKQ-a81Enz-5qNhTG-59CrMZ-2yuwYM-5oagH5-59CdsP-4FVcKN-4PdxhC-5Lhr2j-2PAd2d-5hAwvk-zsQSG-4Cdr4F-3dUPEi-9B1RZ6-2hv5NY-4G5qwP-HCHBW-4JiuC4-4Pdr9Y-584aEV-2GYBEc-HCPkp/">photo</a>, <a href="http://www.oxfordhistory.org.uk/cornmarket/west/47_51.html">history</a>) did not exist at this location. Since the sweaters in the GAP window display are more similar to those in a <a href="web.archive.org/web/20081201002524/http://www.gap.com/">GAP website snapshot</a> from November 2007, our guess is that the footage was obtained during late November or early December 2007. The lack of street vendors and the slight waste residue near the bench suggest that it was probably a weekday after rubbish removal.</p> +</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/oxford_town_centre_cctv.jpg' alt=' Footage from this public CCTV camera was used to create the Oxford Town Centre dataset. Image sources: Google Street View (<a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">map</a>)'><div class='caption'> Footage from this public CCTV camera was used to create the Oxford Town Centre dataset.
Image sources: Google Street View (<a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">map</a>)</div></div></section><section><div class='columns columns-'><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/oxford_town_centre_sal_body.jpg' alt=' Heat map body visualization of the pedestrians detected in the Oxford Town Centre dataset © megapixels.cc'><div class='caption'> Heat map body visualization of the pedestrians detected in the Oxford Town Centre dataset © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/oxford_town_centre_sal_face.jpg' alt=' Heat map face visualization of the pedestrians detected in the Oxford Town Centre dataset © megapixels.cc'><div class='caption'> Heat map face visualization of the pedestrians detected in the Oxford Town Centre dataset © megapixels.cc</div></div></section></div></section><section><h3>Demo Videos Using Oxford Town Centre Dataset</h3> +<p>Several researchers have posted their demo videos using the Oxford Town Centre dataset on YouTube (a minimal detection sketch follows this file's diff):</p> +<ul> +<li><a href="https://www.youtube.com/watch?v=nO-3EM9dEd4">Multi target tracking on Oxford Dataset</a></li> +<li><a href="https://www.youtube.com/watch?v=nO-3EM9dEd4">Multi-pedestrian tracking (TownCentre dataset)</a></li> +<li><a href="https://www.youtube.com/watch?v=SKXk6uB8348">Multiple object tracking with kalman tracker and sort</a></li> +<li><a href="https://www.youtube.com/watch?v=RM_RdXH7pSY">Multi target tracking on Oxford dataset</a></li> +<li><a href="https://www.youtube.com/watch?v=ErLtfUAJA8U">towncentre</a></li> +<li><a href="https://www.youtube.com/watch?v=LwMOmqvhnoc">VTD - towncenter.avi</a></li> +</ul> +</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^ben_benfold_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^ben_benfold_orig]_1">a</a></span><p>Benfold, Ben and Reid, Ian. "Stable Multi-Target Tracking in Real-Time Surveillance Video". CVPR 2011. Pages 3457-3464.</p> +</li><li><a name="[^guiding_surveillance]" class="footnote_shim"></a><span class="backlinks"><a href="#[^guiding_surveillance]_1">a</a></span><p>"Guiding Visual Surveillance by Tracking Human Attention". 2009.</p> +</li></ul></section></section> + + </div> + <footer> + <div> + <a href="/">MegaPixels.cc</a> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + <a href="/about/press/">Press</a> + <a href="/about/legal/">Legal and Privacy</a> + </div> + <div> + MegaPixels ©2017-19 Adam R. Harvey / + <a href="https://ahprojects.com">ahprojects.com</a> + </div> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
\ No newline at end of file
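The Oxford Town Centre page above describes a single CCTV video used for person detection. Below is a minimal pedestrian-detection sketch over that video using OpenCV's default HOG people detector; the filename "TownCentreXVID.avi" and all parameters are assumptions, and the dataset's published baselines use their own trackers, not this detector.

```python
# Run OpenCV's stock HOG pedestrian detector over the Town Centre video.
import cv2

cap = cv2.VideoCapture("TownCentreXVID.avi")  # assumed filename of the dataset video
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect pedestrians in each frame; returns bounding boxes and confidence weights.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cap.release()
```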
