Diffstat (limited to 'site/public/datasets')
-rw-r--r--  site/public/datasets/duke_mtmc/index.html            2
-rw-r--r--  site/public/datasets/helen/index.html              169
-rw-r--r--  site/public/datasets/ibm_dif/index.html            172
-rw-r--r--  site/public/datasets/ijb_c/index.html               22
-rw-r--r--  site/public/datasets/megaface/index.html           172
-rw-r--r--  site/public/datasets/msceleb/index.html              2
-rw-r--r--  site/public/datasets/oxford_town_centre/index.html   6
-rw-r--r--  site/public/datasets/who_goes_there/index.html     157
8 files changed, 681 insertions(+), 21 deletions(-)
diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html
index 9bae51a1..9a70a3f6 100644
--- a/site/public/datasets/duke_mtmc/index.html
+++ b/site/public/datasets/duke_mtmc/index.html
@@ -75,7 +75,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://vision.cs.duke.edu/DukeMTMC/' target='_blank' rel='nofollow noopener'>duke.edu</a></div>
- </div></div><p>Duke MTMC (Multi-Target, Multi-Camera) is a dataset of surveillance video footage taken on Duke University's campus in 2014 and is used for research and development of video tracking systems, person re-identification, and low-resolution facial recognition. The dataset contains over 14 hours of synchronized surveillance video from 8 cameras at 1080p and 60 FPS, with over 2 million frames of 2,000 students walking to and from classes. The 8 surveillance cameras deployed on campus were specifically setup to capture students "during periods between lectures, when pedestrian traffic is heavy"<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 1">1</a>.</p>
+ </div></div><p>Duke MTMC (Multi-Target, Multi-Camera) is a dataset of surveillance video footage taken on Duke University's campus in 2014 and is used for research and development of video tracking systems, person re-identification, and low-resolution facial recognition. The dataset contains over 14 hours of synchronized surveillance video from 8 cameras at 1080p and 60 FPS, with over 2 million frames of 2,000 students walking to and from classes. The 8 surveillance cameras deployed on campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy".<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 1">1</a></p>
<p>For this analysis of the Duke MTMC dataset, over 100 publicly available research papers that used the dataset were analyzed to find out who's using the dataset and where it's being used. The results show that the Duke MTMC dataset has spread far beyond its origins and intentions in academic research projects at Duke University. Since its publication in 2016, more than twice as many research citations originated in China as in the United States. Among these citations were papers with links to the Chinese military and several of the companies known to provide Chinese authorities with the oppressive surveillance technology used to monitor millions of Uighur Muslims.</p>
<p>In one 2018 <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Attention-Aware_Compositional_Network_CVPR_2018_paper.pdf">paper</a> jointly published by researchers from SenseNets and SenseTime (and funded by SenseTime Group Limited) entitled <a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">Attention-Aware Compositional Network for Person Re-identification</a>, the Duke MTMC dataset was used for "extensive experiments" on improving person re-identification across multiple surveillance cameras with important applications in suspect tracking. Both SenseNets and SenseTime have been linked to providing the surveillance technology used to monitor Uighur Muslims in China. <a class="footnote_shim" name="[^xinjiang_nyt]_1"> </a><a href="#[^xinjiang_nyt]" class="footnote" title="Footnote 4">4</a><a class="footnote_shim" name="[^sensetime_qz]_1"> </a><a href="#[^sensetime_qz]" class="footnote" title="Footnote 2">2</a><a class="footnote_shim" name="[^sensenets_uyghurs]_1"> </a><a href="#[^sensenets_uyghurs]" class="footnote" title="Footnote 3">3</a></p>
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_reid_montage.jpg' alt=' A collection of 1,600 out of the approximately 2,000 students and pedestrians in the Duke MTMC dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification, and eventually the QMUL SurvFace face recognition dataset. Open Data Commons Attribution License.'><div class='caption'> A collection of 1,600 out of the approximately 2,000 students and pedestrians in the Duke MTMC dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification, and eventually the QMUL SurvFace face recognition dataset. Open Data Commons Attribution License.</div></div></section><section><p>Despite <a href="https://www.hrw.org/news/2017/11/19/china-police-big-data-systems-violate-privacy-target-dissent">repeated</a> <a href="https://www.hrw.org/news/2018/02/26/china-big-data-fuels-crackdown-minority-region">warnings</a> by Human Rights Watch that the authoritarian surveillance used in China represents a humanitarian crisis, researchers at Duke University continued to provide open access to their dataset for anyone to use for any project. As the surveillance crisis in China grew, so did the number of citations with links to organizations complicit in the crisis. In 2018 alone there were over 90 research projects happening in China that publicly acknowledged using the Duke MTMC dataset. Amongst these were projects from CloudWalk, Hikvision, Megvii (Face++), SenseNets, SenseTime, Beihang University, China's National University of Defense Technology, and the PLA's Army Engineering University.</p>
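As a quick plausibility check on the figures quoted above (14 hours of synchronized 8-camera footage at 60 FPS, "over 2 million frames"), the arithmetic can be reproduced in a few lines of JavaScript. Treating the 14 hours as cumulative footage across all cameras is an assumption made for this sketch, not a figure stated by the dataset authors.

// Plausibility check for the Duke MTMC figures quoted above.
// Assumption: the "14 hours" is cumulative across all 8 cameras.
const cameras = 8;
const totalHours = 14;  // cumulative footage, per the text
const fps = 60;         // 1080p @ 60 FPS, per the text

const totalFrames = totalHours * 3600 * fps;
console.log(`${(totalHours / cameras).toFixed(2)} h per camera`);  // 1.75 h
console.log(`${totalFrames.toLocaleString()} frames in total`);    // 3,024,000
// 3,024,000 frames is consistent with the "over 2 million frames" claim.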
diff --git a/site/public/datasets/helen/index.html b/site/public/datasets/helen/index.html
new file mode 100644
index 00000000..a7ada42a
--- /dev/null
+++ b/site/public/datasets/helen/index.html
@@ -0,0 +1,169 @@
+<!doctype html>
+<html>
+<head>
+ <title>MegaPixels: HELEN</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="HELEN Face Dataset" />
+ <meta property="og:title" content="MegaPixels: HELEN"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/helen/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/datasets/helen/"/>
+ <meta property="og:site_name" content="MegaPixels" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+ <link rel='stylesheet' href='/assets/css/leaflet.css' />
+ <link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+ <div class='page_name'>Helen Dataset</div>
+ </a>
+ <div class='links'>
+ <a href="/datasets/">Datasets</a>
+ <a href="/about/">About</a>
+ <a href="/about/news">News</a>
+ </div>
+ </header>
+ <div class="content content-dataset">
+
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/helen/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>HELEN Face Dataset</span></div><div class='hero_subdesc'><span class='bgpad'>HELEN (under development)
+</span></div></div></section><section><h2>HELEN</h2>
+</section><section><div class='right-sidebar'><div class='meta'>
+ <div class='gray'>Published</div>
+ <div>2012</div>
+ </div><div class='meta'>
+ <div class='gray'>Images</div>
+ <div>2,330 </div>
+ </div><div class='meta'>
+ <div class='gray'>Purpose</div>
+ <div>facial feature localization algorithm</div>
+ </div><div class='meta'>
+ <div class='gray'>Website</div>
+ <div><a href='http://www.ifp.illinois.edu/~vuongle2/helen/' target='_blank' rel='nofollow noopener'>illinois.edu</a></div>
+ </div></div><p>[ page under development ]</p>
+</section><section>
+  <h3>Who used the Helen Dataset?</h3>
+
+ <p>
+ This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
+ </p>
+
+ </section>
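The ranking behind this bar chart reduces to counting verified citations per country and keeping the ten largest totals. A minimal JavaScript sketch follows; citation objects carrying country and year fields are an assumed shape from the geocoding step, not the site's actual data model.

// Sketch of the per-country ranking described above. Citation objects are
// assumed (for illustration) to carry the country assigned during geocoding.
function topCountries(citations, limit = 10) {
  const counts = new Map();
  for (const { country } of citations) {
    counts.set(country, (counts.get(country) ?? 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])  // most-cited countries first
    .slice(0, limit);
}

// Example usage with a tiny hand-made sample:
console.log(topCountries([
  { country: "China", year: 2018 },
  { country: "United States", year: 2017 },
  { country: "China", year: 2017 },
])); // → [["China", 2], ["United States", 1]]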
+
+<section class="applet_container">
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
+</section>
+
+<section class="applet_container">
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
+</section>
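Each visualization on this page is mounted by the bundled script at /assets/js/dist/index.js from the data-payload attribute shown above. The sketch below shows one way such a loader can dispatch on the payload's command field; the renderer functions are hypothetical placeholders, not the actual internals of the bundle.

// Sketch of an applet loader keyed on the data-payload attribute used above.
// The renderer bodies are hypothetical placeholders.
const renderers = {
  chart: (el) => { /* draw the per-country bar chart into el */ },
  piechart: (el) => { /* draw the sector pie chart into el */ },
  map: (el) => { /* mount the Leaflet citation map into el */ },
  citations: (el) => { /* render the citation table into el */ },
};

document.querySelectorAll(".applet").forEach((el) => {
  const payload = JSON.parse(el.dataset.payload); // e.g. {"command": "chart"}
  const render = renderers[payload.command];
  if (render) render(el);
});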
+
+<section>
+
+  <h3>Information Supply Chain</h3>
+
+ <p>
+    To help understand how the Helen Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the Helen Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ </p>
+
+ </section>
+
+<section class="applet_container fullwidth">
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
+</section>
+
+<div class="caption">
+ <ul class="map-legend">
+ <li class="edu">Academic</li>
+ <li class="com">Commercial</li>
+ <li class="gov">Military / Government</li>
+ </ul>
+ <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+</div>
+
+
+<section class="applet_container">
+
+ <h3>Dataset Citations</h3>
+ <p>
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
+ </p>
+
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
+</section><section>
+
+ <div class="hr-wave-holder">
+ <div class="hr-wave-line hr-wave-line1"></div>
+ <div class="hr-wave-line hr-wave-line2"></div>
+ </div>
+
+ <h2>Supplementary Information</h2>
+
+</section><section>
+
+ <h4>Cite Our Work</h4>
+ <p>
+
+ If you find this analysis helpful, please cite our work:
+
+<pre id="cite-bibtex">
+@online{megapixels,
+  author = {Harvey, Adam and LaPlace, Jules},
+ title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets},
+ year = 2019,
+ url = {https://megapixels.cc/},
+ urldate = {2019-04-18}
+}</pre>
+
+ </p>
+</section>
+
+ </div>
+ <footer>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/news/">News</a></li>
+ <li><a href="/about/legal/">Legal &amp; Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
+ </footer>
+</body>
+
+<script src="/assets/js/dist/index.js"></script>
+</html> \ No newline at end of file
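The citation pipeline described on this page starts from Semantic Scholar. Below is a hedged JavaScript sketch of collecting citing papers through the public Semantic Scholar Graph API; DATASET_PAPER_ID and the institution lookup table are illustrative placeholders, and the manual-verification and geocoding steps are only gestured at, since the MegaPixels tooling itself is not published here.

// Sketch: fetch papers citing a dataset's original publication via the
// public Semantic Scholar Graph API, then geocode by institution name.
// DATASET_PAPER_ID is a placeholder, not a real identifier.
const DATASET_PAPER_ID = "...";

async function fetchCitations(paperId) {
  const url =
    `https://api.semanticscholar.org/graph/v1/paper/${paperId}/citations` +
    "?fields=title,year,authors&limit=100";
  const res = await fetch(url);
  const data = await res.json();
  return data.data.map((entry) => entry.citingPaper);
}

// Hypothetical geocoding table: institution strings found in the PDF front
// matter are mapped to coordinates during manual verification.
const institutionCoords = {
  "Duke University": { lat: 36.0014, lon: -78.9382 },
  // ...filled in as citations are verified
};

const papers = await fetchCitations(DATASET_PAPER_ID);
console.log(`${papers.length} citing papers fetched for verification`);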
diff --git a/site/public/datasets/ibm_dif/index.html b/site/public/datasets/ibm_dif/index.html
new file mode 100644
index 00000000..1c465f93
--- /dev/null
+++ b/site/public/datasets/ibm_dif/index.html
@@ -0,0 +1,172 @@
+<!doctype html>
+<html>
+<head>
+  <title>MegaPixels: IBM Diversity in Faces</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="MegaFace Dataset" />
+ <meta property="og:title" content="MegaPixels: MegaFace"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/ibm_dif/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/datasets/ibm_dif/"/>
+ <meta property="og:site_name" content="MegaPixels" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+ <link rel='stylesheet' href='/assets/css/leaflet.css' />
+ <link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+      <div class='page_name'>IBM Diversity in Faces Dataset</div>
+ </a>
+ <div class='links'>
+ <a href="/datasets/">Datasets</a>
+ <a href="/about/">About</a>
+ <a href="/about/news">News</a>
+ </div>
+ </header>
+ <div class="content content-dataset">
+
+    <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/ibm_dif/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>IBM Diversity in Faces Dataset</span></div><div class='hero_subdesc'><span class='bgpad'>IBM Diversity in Faces (page under development)
+</span></div></div></section><section><h2>IBM Diversity in Faces</h2>
+</section><section><div class='right-sidebar'></div><p>[ page under development ]</p>
+</section><section>
+  <h3>Who used the IBM Diversity in Faces Dataset?</h3>
+
+ <p>
+ This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
+ </p>
+
+ </section>
+
+<section class="applet_container">
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
+</section>
+
+<section class="applet_container">
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
+</section>
+
+<section>
+
+  <h3>Information Supply Chain</h3>
+
+ <p>
+    To help understand how the IBM Diversity in Faces Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ </p>
+
+ </section>
+
+<section class="applet_container fullwidth">
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
+</section>
+
+<div class="caption">
+ <ul class="map-legend">
+ <li class="edu">Academic</li>
+ <li class="com">Commercial</li>
+ <li class="gov">Military / Government</li>
+ </ul>
+ <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+</div>
+
+
+<section class="applet_container">
+
+ <h3>Dataset Citations</h3>
+ <p>
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
+ </p>
+
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
+</section><section>
+
+ <div class="hr-wave-holder">
+ <div class="hr-wave-line hr-wave-line1"></div>
+ <div class="hr-wave-line hr-wave-line2"></div>
+ </div>
+
+ <h2>Supplementary Information</h2>
+
+</section><section>
+
+ <h4>Cite Our Work</h4>
+ <p>
+
+ If you find this analysis helpful, please cite our work:
+
+<pre id="cite-bibtex">
+@online{megapixels,
+  author = {Harvey, Adam and LaPlace, Jules},
+ title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets},
+ year = 2019,
+ url = {https://megapixels.cc/},
+ urldate = {2019-04-18}
+}</pre>
+
+ </p>
+</section>
+
+ </div>
+ <footer>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/news/">News</a></li>
+ <li><a href="/about/legal/">Legal &amp; Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
+ </footer>
+</body>
+
+<script src="/assets/js/dist/index.js"></script>
+</html> \ No newline at end of file
diff --git a/site/public/datasets/ijb_c/index.html b/site/public/datasets/ijb_c/index.html
index ccb7d90d..a36fac14 100644
--- a/site/public/datasets/ijb_c/index.html
+++ b/site/public/datasets/ijb_c/index.html
@@ -76,26 +76,16 @@
<div class='gray'>Website</div>
<div><a href='https://www.nist.gov/programs-projects/face-challenges' target='_blank' rel='nofollow noopener'>nist.gov</a></div>
</div></div><p>[ page under development ]</p>
-<p>The IARPA Janus Benchmark C (IJB&ndash;C) is a dataset of web images used for face recognition research and development. The IJB&ndash;C dataset contains 3,531 people</p>
-<p>Among the target list of 3,531 names are activists, artists, journalists, foreign politicians,</p>
+<p>The IARPA Janus Benchmark C (IJB&ndash;C) is a dataset of web images used for face recognition research and development. The IJB&ndash;C dataset contains 3,531 people across 21,294 images and 3,531 videos. The list of 3,531 names includes activists, artists, journalists, foreign politicians, and public speakers.</p>
+<p>Key Findings:</p>
<ul>
-<li>Subjects 3531</li>
-<li>Templates: 140739</li>
-<li>Genuine Matches: 7819362</li>
-<li>Impostor Matches: 39584639</li>
-</ul>
-<p>Why not include US Soliders instead of activists?</p>
-<p>was creted by Nobilis, a United States Government contractor is used to develop software for the US intelligence agencies as part of the IARPA Janus program.</p>
-<p>The IARPA Janus program is</p>
-<p>these representations must address the challenges of Aging, Pose, Illumination, and Expression (A-PIE) by exploiting all available imagery.</p>
-<ul>
-<li>metadata annotations were created using crowd annotations</li>
-<li>created by Nobilis</li>
-<li>used mechanical turk</li>
+<li>metadata annotations were created using crowd workers on Amazon Mechanical Turk</li>
+<li>the dataset was created by Noblis</li>
<li>made for intelligence analysts</li>
<li>improve performance of face recognition tools by fusing the rich spatial, temporal, and contextual information available from the multiple views captured by today’s "media in the wild"</li>
</ul>
+<p>The dataset includes Creative Commons images.</p>
<p>The name list includes:</p>
<ul>
<li>2 videos from CCC<ul>
@@ -134,7 +124,7 @@
<p>The first 777 names are in non-alphabetical order; names 777&ndash;3531 are alphabetical.</p>
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/ijb_c/assets/ijb_c_montage.jpg' alt=' A visualization of the IJB-C dataset'><div class='caption'> A visualization of the IJB-C dataset</div></div></section><section><h2>Research notes</h2>
<p>From original papers: <a href="https://noblis.org/wp-content/uploads/2018/03/icb2018.pdf">https://noblis.org/wp-content/uploads/2018/03/icb2018.pdf</a></p>
-<p>Collection for the dataset began by identifying CreativeCommons subject videos, which are often more scarce thanCreative Commons subject images. Search terms that re-sulted in large quantities of person-centric videos (e.g. “in-terview”) were generated and translated into numerous lan-guages including Arabic, Korean, Swahili, and Hindi to in-crease diversity of the subject pool. Certain YouTube userswho upload well-labeled, person-centric videos, such as the World Economic Forum and the International University Sports Federation were also identified. Titles of videos per-taining to these search terms and usernames were scrapedusing the YouTube Data API and translated into English us-ing the Yandex Translate API4. Pattern matching was per-formed to extract potential names of subjects from the trans-lated titles, and these names were searched using the Wiki-data API to verify the subject’s existence and status as a public figure, and to check for Wikimedia Commons im-agery. Age, gender, and geographic region were collectedusing the Wikipedia API.Using the candidate subject names, Creative Commonsimages were scraped from Google and Wikimedia Com-mons, and Creative Commons videos were scraped fromYouTube. After images and videos of the candidate subjectwere identified, AMT Workers were tasked with validat-ing the subject’s presence throughout the video. The AMTWorkers marked segments of the video in which the subjectwas present, and key frames</p>
+<p>Collection for the dataset began by identifying Creative Commons subject videos, which are often more scarce than Creative Commons subject images. Search terms that resulted in large quantities of person-centric videos (e.g. “interview”) were generated and translated into numerous languages including Arabic, Korean, Swahili, and Hindi to increase diversity of the subject pool. Certain YouTube users who upload well-labeled, person-centric videos, such as the World Economic Forum and the International University Sports Federation, were also identified. Titles of videos pertaining to these search terms and usernames were scraped using the YouTube Data API and translated into English using the Yandex Translate API. Pattern matching was performed to extract potential names of subjects from the translated titles, and these names were searched using the Wikidata API to verify the subject’s existence and status as a public figure, and to check for Wikimedia Commons imagery. Age, gender, and geographic region were collected using the Wikipedia API. Using the candidate subject names, Creative Commons images were scraped from Google and Wikimedia Commons, and Creative Commons videos were scraped from YouTube. After images and videos of the candidate subject were identified, AMT Workers were tasked with validating the subject’s presence throughout the video. The AMT Workers marked segments of the video in which the subject was present, and key frames</p>
<p>IARPA funds Italian researcher <a href="https://www.micc.unifi.it/projects/glaivejanus/">https://www.micc.unifi.it/projects/glaivejanus/</a></p>
</section><section>
<h3>Who used IJB-C?</h3>
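The collection pipeline quoted above chains several public APIs: YouTube Data API title scraping, machine translation, name pattern matching, and Wikidata verification. A hedged JavaScript sketch of the scrape-and-verify steps follows; the search term, credential, helper names, and field choices are illustrative assumptions rather than the Noblis implementation, though the two API endpoints themselves are real.

// Illustrative sketch of the IJB-C collection steps described above.
// YT_API_KEY is a hypothetical credential; search terms and field choices
// are examples, not those used by Noblis.
const YT_API_KEY = "...";

// 1. Scrape titles of person-centric, Creative Commons-licensed videos.
async function searchVideoTitles(term) {
  const url = "https://www.googleapis.com/youtube/v3/search?" +
    new URLSearchParams({
      part: "snippet",
      q: term,
      type: "video",
      videoLicense: "creativeCommon",
      key: YT_API_KEY,
    });
  const res = await fetch(url);
  const data = await res.json();
  return data.items.map((item) => item.snippet.title);
}

// 2. Naive pattern matching: treat capitalized word pairs as candidate names.
function extractCandidateNames(title) {
  const match = title.match(/\b[A-Z][a-z]+ [A-Z][a-z]+\b/);
  return match ? [match[0]] : [];
}

// 3. Check a candidate name against Wikidata to confirm a public figure.
async function verifyOnWikidata(name) {
  const url = "https://www.wikidata.org/w/api.php?" +
    new URLSearchParams({
      action: "wbsearchentities",
      search: name,
      language: "en",
      format: "json",
      origin: "*",
    });
  const res = await fetch(url);
  const data = await res.json();
  return data.search.length > 0;
}

// Example run for one translated search term:
for (const title of await searchVideoTitles("interview")) {
  for (const name of extractCandidateNames(title)) {
    if (await verifyOnWikidata(name)) console.log(name, "→ candidate subject");
  }
}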
diff --git a/site/public/datasets/megaface/index.html b/site/public/datasets/megaface/index.html
new file mode 100644
index 00000000..33abf6c1
--- /dev/null
+++ b/site/public/datasets/megaface/index.html
@@ -0,0 +1,172 @@
+<!doctype html>
+<html>
+<head>
+ <title>MegaPixels: MegaFace</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="MegaFace Dataset" />
+ <meta property="og:title" content="MegaPixels: MegaFace"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/megaface/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/datasets/megaface/"/>
+ <meta property="og:site_name" content="MegaPixels" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+ <link rel='stylesheet' href='/assets/css/leaflet.css' />
+ <link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+ <div class='page_name'>MegaFace Dataset</div>
+ </a>
+ <div class='links'>
+ <a href="/datasets/">Datasets</a>
+ <a href="/about/">About</a>
+ <a href="/about/news">News</a>
+ </div>
+ </header>
+ <div class="content content-dataset">
+
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/megaface/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>MegaFace Dataset</span></div><div class='hero_subdesc'><span class='bgpad'>MegaFace contains 670K identities and 4.7M images
+</span></div></div></section><section><h2>MegaFace</h2>
+</section><section><div class='right-sidebar'><div class='meta'>
+ <div class='gray'>Published</div>
+ <div>2016</div>
+ </div><div class='meta'>
+ <div class='gray'>Images</div>
+ <div>4,753,520 </div>
+ </div><div class='meta'>
+ <div class='gray'>Identities</div>
+ <div>672,057 </div>
+ </div><div class='meta'>
+ <div class='gray'>Purpose</div>
+ <div>face recognition</div>
+ </div><div class='meta'>
+ <div class='gray'>Website</div>
+ <div><a href='http://megaface.cs.washington.edu/' target='_blank' rel='nofollow noopener'>washington.edu</a></div>
+ </div></div><p>[ page under development ]</p>
+</section><section>
+  <h3>Who used the MegaFace Dataset?</h3>
+
+ <p>
+ This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
+ </p>
+
+ </section>
+
+<section class="applet_container">
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
+</section>
+
+<section class="applet_container">
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
+</section>
+
+<section>
+
+  <h3>Information Supply Chain</h3>
+
+ <p>
+    To help understand how the MegaFace Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the MegaFace Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ </p>
+
+ </section>
+
+<section class="applet_container fullwidth">
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
+</section>
+
+<div class="caption">
+ <ul class="map-legend">
+ <li class="edu">Academic</li>
+ <li class="com">Commercial</li>
+ <li class="gov">Military / Government</li>
+ </ul>
+ <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+</div>
+
+
+<section class="applet_container">
+
+ <h3>Dataset Citations</h3>
+ <p>
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
+ </p>
+
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
+</section><section>
+
+ <div class="hr-wave-holder">
+ <div class="hr-wave-line hr-wave-line1"></div>
+ <div class="hr-wave-line hr-wave-line2"></div>
+ </div>
+
+ <h2>Supplementary Information</h2>
+
+</section><section>
+
+ <h4>Cite Our Work</h4>
+ <p>
+
+ If you find this analysis helpful, please cite our work:
+
+<pre id="cite-bibtex">
+@online{megapixels,
+  author = {Harvey, Adam and LaPlace, Jules},
+ title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets},
+ year = 2019,
+ url = {https://megapixels.cc/},
+ urldate = {2019-04-18}
+}</pre>
+
+ </p>
+</section>
+
+ </div>
+ <footer>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/news/">News</a></li>
+ <li><a href="/about/legal/">Legal &amp; Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
+ </footer>
+</body>
+
+<script src="/assets/js/dist/index.js"></script>
+</html> \ No newline at end of file
diff --git a/site/public/datasets/msceleb/index.html b/site/public/datasets/msceleb/index.html
index 2e326416..7109cc9b 100644
--- a/site/public/datasets/msceleb/index.html
+++ b/site/public/datasets/msceleb/index.html
@@ -206,7 +206,7 @@
<p>Earlier in 2019, Microsoft President and Chief Legal Officer <a href="https://blogs.microsoft.com/on-the-issues/2018/12/06/facial-recognition-its-time-for-action/">Brad Smith</a> called for the governmental regulation of face recognition, citing the potential for misuse, a rare admission that Microsoft's surveillance-driven business model had lost its bearings. More recently Smith also <a href="https://www.reuters.com/article/us-microsoft-ai/microsoft-turned-down-facial-recognition-sales-on-human-rights-concerns-idUSKCN1RS2FV">announced</a> that Microsoft would seemingly take a stand against such potential misuse, and had decided to not sell face recognition to an unnamed United States agency, citing a lack of accuracy. In effect, Microsoft's face recognition software was not suitable to be used on minorities because it was trained mostly on white male faces.</p>
<p>What the decision to block the sale announces is not so much that Microsoft had upgraded their ethics policy, but that Microsoft publicly acknowledged it can't sell a data-driven product without data. In other words, Microsoft can't sell face recognition if they don't have enough face training data to build it.</p>
<p>Until now, that data has been freely harvested from the Internet and packaged in training sets like MS Celeb, which are overwhelmingly <a href="https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html">white</a> and <a href="https://gendershades.org">male</a>. Without balanced data, facial recognition contains blind spots. But without large-scale datasets like MS Celeb, powerful yet inaccurate facial recognition services like Microsoft Azure Cognitive Services would be even less usable.</p>
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/msceleb_montage.jpg' alt=' A visualization of 2,000 of the 100,000 identities included in the MS-Celeb-1M dataset distributed by Microsoft Research. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A visualization of 2,000 of the 100,000 identities included in the MS-Celeb-1M dataset distributed by Microsoft Research. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section><p>Microsoft didn't only create MS Celeb for other researchers to use, they also used it internally. In a publicly available 2017 Microsoft Research project called "<a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">One-shot Face Recognition by Promoting Underrepresented Classes</a>," Microsoft used the MS Celeb face dataset to build their algorithms and advertise the results. Interestingly, Microsoft's <a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">corporate version</a> of the paper does not mention they used the MS Celeb datset, but the <a href="https://www.semanticscholar.org/paper/One-shot-Face-Recognition-by-Promoting-Classes-Guo/6cacda04a541d251e8221d70ac61fda88fb61a70">open-access version</a> published on arxiv.org does. It states that Microsoft Research analyzed their algorithms using "the MS-Celeb-1M low-shot learning benchmark task."<a class="footnote_shim" name="[^one_shot]_1"> </a><a href="#[^one_shot]" class="footnote" title="Footnote 5">5</a></p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/msceleb_montage.jpg' alt=' A visualization of 2,000 of the 100,000 identities included in the MS-Celeb-1M dataset distributed by Microsoft Research. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A visualization of 2,000 of the 100,000 identities included in the MS-Celeb-1M dataset distributed by Microsoft Research. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section><p>Microsoft didn't only create MS Celeb for other researchers to use; they also used it internally. In a publicly available 2017 Microsoft Research project called "<a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">One-shot Face Recognition by Promoting Underrepresented Classes</a>," Microsoft used the MS Celeb face dataset to build their algorithms and advertise the results. Interestingly, Microsoft's <a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">corporate version</a> of the paper does not mention they used the MS Celeb dataset, but the <a href="https://www.semanticscholar.org/paper/One-shot-Face-Recognition-by-Promoting-Classes-Guo/6cacda04a541d251e8221d70ac61fda88fb61a70">open-access version</a> published on arxiv.org does. It states that Microsoft analyzed their algorithms "on the MS-Celeb-1M low-shot learning <a href="https://www.microsoft.com/en-us/research/publication/ms-celeb-1m-dataset-benchmark-large-scale-face-recognition-2/">benchmark task</a>"<a class="footnote_shim" name="[^one_shot]_1"> </a><a href="#[^one_shot]" class="footnote" title="Footnote 5">5</a>, which is described as a refined version of the original MS-Celeb-1M face dataset.</p>
<p>Typically researchers will phrase this differently and say that they only use a dataset to validate their algorithm. But validation data can't be easily separated from the training process. To develop a neural network model, image training datasets are split into three parts: train, test, and validation. Training data is used to fit a model, and the validation and test data are used to provide feedback about the hyperparameters, biases, and outputs. In reality, test and validation data steer and influence the final results of neural networks.</p>
<h2>Runaway Data</h2>
<p>Despite the recent termination of the <a href="https://msceleb.org">msceleb.org</a> website, the dataset still exists in several repositories on GitHub, the hard drives of countless researchers, and will likely continue to be used in research projects around the world.</p>
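To make the three-way split described above concrete, here is a minimal JavaScript sketch of partitioning a dataset into train, validation, and test subsets; the 80/10/10 ratio is a common convention assumed for illustration, not a figure from the MS Celeb paper.

// Minimal sketch: shuffle, then split samples into train/validation/test.
// The 80/10/10 ratio is an assumed convention for illustration.
function splitDataset(samples, trainFrac = 0.8, valFrac = 0.1) {
  const shuffled = [...samples].sort(() => Math.random() - 0.5); // demo-only shuffle
  const nTrain = Math.floor(shuffled.length * trainFrac);
  const nVal = Math.floor(shuffled.length * valFrac);
  return {
    train: shuffled.slice(0, nTrain),           // fits the model
    val: shuffled.slice(nTrain, nTrain + nVal), // steers hyperparameters
    test: shuffled.slice(nTrain + nVal),        // held out for final evaluation
  };
}

const { train, val, test } = splitDataset([...Array(100).keys()]);
console.log(train.length, val.length, test.length); // 80 10 10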
diff --git a/site/public/datasets/oxford_town_centre/index.html b/site/public/datasets/oxford_town_centre/index.html
index 6f6bd70b..40f8bbc6 100644
--- a/site/public/datasets/oxford_town_centre/index.html
+++ b/site/public/datasets/oxford_town_centre/index.html
@@ -141,9 +141,9 @@
<h2>Supplementary Information</h2>
</section><section><h3>Location</h3>
-<p>The street location of the camera used for the Oxford Town Centre dataset was confirmed by matching the road, benches, and store signs <a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">source</a>. At that location, two public CCTV cameras exist mounted on the side of the Northgate House building at 13-20 Cornmarket St. Because of the lower camera's mounting pole directionality, a view from a private camera in the building across the street can be ruled out because it would have to show more of silhouette of the lower camera's mounting pole. Two options remain: either the public CCTV camera mounted to the side of the building was used or the researchers mounted their own camera to the side of the building in the same location. Because the researchers used many other existing public CCTV cameras for their <a href="http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html">research projects</a> it is increasingly likely that they would also be able to access to this camera.</p>
-<p>Next, to discredit the theory that this public CCTV is only seen pointing the other way in Google Street View images, at least one public photo shows the upper CCTV camera <a href="https://www.oxcivicsoc.org.uk/northgate-house-cornmarket/">pointing in the same direction</a> as the Oxford Town Centre dataset, proving the camera can and has been rotated before.</p>
-<p>As for the capture date, the text on the storefront display shows a sale happening from December 2nd &ndash; 7th indicating the capture date was between or just before those dates. The capture year is either 2008 or 2007, since prior to 2007 the Carphone Warehouse (<a href="https://www.flickr.com/photos/katieportwin/364492063/in/photolist-4meWFE-yd7rw-yd7X6-5sDHuc-yd7DN-59CpEK-5GoHAc-yd7Zh-3G2uJP-yd7US-5GomQH-4peYpq-4bAEwm-PALEr-58RkAp-5pHEkf-5v7fGq-4q1J9W-4kypQ2-5KX2Eu-yd7MV-yd7p6-4McgWb-5pJ55w-24N9gj-37u9LK-4FVcKQ-a81Enz-5qNhTG-59CrMZ-2yuwYM-5oagH5-59CdsP-4FVcKN-4PdxhC-5Lhr2j-2PAd2d-5hAwvk-zsQSG-4Cdr4F-3dUPEi-9B1RZ6-2hv5NY-4G5qwP-HCHBW-4JiuC4-4Pdr9Y-584aEV-2GYBEc-HCPkp/">photo</a>, <a href="http://www.oxfordhistory.org.uk/cornmarket/west/47_51.html">history</a>) did not exist at this location. Since the sweaters in the GAP window display are more similar to those in a <a href="web.archive.org/web/20081201002524/http://www.gap.com/">GAP website snapshot</a> from November 2007, our guess is that the footage was obtained during late November or early December 2007. The lack of street vendors and slight waste residue near the bench suggests that it was probably a weekday after rubbish removal.</p>
+<p>The street location of the camera used for the Oxford Town Centre dataset was confirmed by matching the road, benches, and store signs (<a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">source</a>). At that location, two public CCTV cameras are mounted on the side of the Northgate House building at 13-20 Cornmarket St. The upper camera, a public CCTV camera installed for security, is most likely the camera used to create this dataset.</p>
+<p>The camera can be seen pointing in the same direction as the dataset's view in this <a href="https://www.oxcivicsoc.org.uk/northgate-house-cornmarket/">public image</a>, and the researchers used other existing public CCTV cameras for additional <a href="http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html">research projects</a>, increasing the likelihood that they could have had access to this camera.</p>
+<p>The capture date is estimated to be during late November or early December in 2007 or 2008. The text on the storefront display shows a sale happening from December 2nd &ndash; 7th, indicating the capture date was likely around this time. Prior to 2007 the Carphone Warehouse (<a href="https://www.flickr.com/photos/katieportwin/364492063/in/photolist-4meWFE-yd7rw-yd7X6-5sDHuc-yd7DN-59CpEK-5GoHAc-yd7Zh-3G2uJP-yd7US-5GomQH-4peYpq-4bAEwm-PALEr-58RkAp-5pHEkf-5v7fGq-4q1J9W-4kypQ2-5KX2Eu-yd7MV-yd7p6-4McgWb-5pJ55w-24N9gj-37u9LK-4FVcKQ-a81Enz-5qNhTG-59CrMZ-2yuwYM-5oagH5-59CdsP-4FVcKN-4PdxhC-5Lhr2j-2PAd2d-5hAwvk-zsQSG-4Cdr4F-3dUPEi-9B1RZ6-2hv5NY-4G5qwP-HCHBW-4JiuC4-4Pdr9Y-584aEV-2GYBEc-HCPkp/">photo</a>, <a href="http://www.oxfordhistory.org.uk/cornmarket/west/47_51.html">history</a>) did not exist at this location. And since the sweaters in the GAP window display are more similar to those in a <a href="https://web.archive.org/web/20081201002524/http://www.gap.com/">GAP website snapshot</a> from November 2007, it was probably recorded in 2007. The slight waste residue near the bench and the lack of street vendors, which typically appear on a weekend, suggest that it was perhaps a weekday after rubbish removal.</p>
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/oxford_town_centre_cctv.jpg' alt=' Footage from this public CCTV camera was used to create the Oxford Town Centre dataset. Image sources: Google Street View (<a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">map</a>)'><div class='caption'> Footage from this public CCTV camera was used to create the Oxford Town Centre dataset. Image sources: Google Street View (<a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">map</a>)</div></div></section><section><div class='columns columns-'><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/oxford_town_centre_sal_body.jpg' alt=' Heat map body visualization of the pedestrians detected in the Oxford Town Centre dataset &copy; megapixels.cc'><div class='caption'> Heat map body visualization of the pedestrians detected in the Oxford Town Centre dataset &copy; megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/oxford_town_centre_sal_face.jpg' alt=' Heat map face visualization of the pedestrians detected in the Oxford Town Centre dataset &copy; megapixels.cc'><div class='caption'> Heat map face visualization of the pedestrians detected in the Oxford Town Centre dataset &copy; megapixels.cc</div></div></section></div></section><section>
<h4>Cite Our Work</h4>
diff --git a/site/public/datasets/who_goes_there/index.html b/site/public/datasets/who_goes_there/index.html
new file mode 100644
index 00000000..3db77ff7
--- /dev/null
+++ b/site/public/datasets/who_goes_there/index.html
@@ -0,0 +1,157 @@
+<!doctype html>
+<html>
+<head>
+ <title>MegaPixels: Who Goes There Dataset</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="Who Goes There Dataset" />
+ <meta property="og:title" content="MegaPixels: Who Goes There Dataset"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/who_goes_there/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/datasets/who_goes_there/"/>
+ <meta property="og:site_name" content="MegaPixels" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+ <link rel='stylesheet' href='/assets/css/leaflet.css' />
+ <link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+ <div class='page_name'>Who Goes There Dataset</div>
+ </a>
+ <div class='links'>
+ <a href="/datasets/">Datasets</a>
+ <a href="/about/">About</a>
+ <a href="/about/news">News</a>
+ </div>
+ </header>
+ <div class="content content-dataset">
+
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/who_goes_there/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Who Goes There Dataset</span></div><div class='hero_subdesc'><span class='bgpad'>Who Goes There (page under development)
+</span></div></div></section><section><h2>Who Goes There</h2>
+</section><section><div class='right-sidebar'></div><p>[ page under development ]</p>
+</section><section>
+  <h3>Who used the Who Goes There Dataset?</h3>
+
+ <p>
+ This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
+ </p>
+
+ </section>
+
+<section class="applet_container">
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
+</section>
+
+<section class="applet_container">
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
+</section>
+
+<section>
+
+  <h3>Information Supply Chain</h3>
+
+ <p>
+    To help understand how the Who Goes There Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the Who Goes There Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ </p>
+
+ </section>
+
+<section class="applet_container fullwidth">
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
+</section>
+
+<div class="caption">
+ <ul class="map-legend">
+ <li class="edu">Academic</li>
+ <li class="com">Commercial</li>
+ <li class="gov">Military / Government</li>
+ </ul>
+ <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+</div>
+
+
+<section class="applet_container">
+
+ <h3>Dataset Citations</h3>
+ <p>
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
+ </p>
+
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
+</section><section>
+
+ <div class="hr-wave-holder">
+ <div class="hr-wave-line hr-wave-line1"></div>
+ <div class="hr-wave-line hr-wave-line2"></div>
+ </div>
+
+ <h2>Supplementary Information</h2>
+
+</section><section>
+
+ <h4>Cite Our Work</h4>
+ <p>
+
+ If you find this analysis helpful, please cite our work:
+
+<pre id="cite-bibtex">
+@online{megapixels,
+  author = {Harvey, Adam and LaPlace, Jules},
+ title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets},
+ year = 2019,
+ url = {https://megapixels.cc/},
+ urldate = {2019-04-18}
+}</pre>
+
+ </p>
+</section>
+
+ </div>
+ <footer>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/news/">News</a></li>
+ <li><a href="/about/legal/">Legal &amp; Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
+ </footer>
+</body>
+
+<script src="/assets/js/dist/index.js"></script>
+</html> \ No newline at end of file