Diffstat (limited to 'site/public/datasets')
-rw-r--r--  site/public/datasets/50_people_one_question/index.html    27
-rw-r--r--  site/public/datasets/afad/index.html                      27
-rw-r--r--  site/public/datasets/brainwash/index.html                 66
-rw-r--r--  site/public/datasets/caltech_10k/index.html               27
-rw-r--r--  site/public/datasets/celeba/index.html                    27
-rw-r--r--  site/public/datasets/cofw/index.html                      29
-rw-r--r--  site/public/datasets/duke_mtmc/index.html                 63
-rw-r--r--  site/public/datasets/feret/index.html                     27
-rw-r--r--  site/public/datasets/hrt_transgender/index.html           53
-rw-r--r--  site/public/datasets/ijb_c/index.html                     49
-rw-r--r--  site/public/datasets/index.html                           77
-rw-r--r--  site/public/datasets/lfpw/index.html                      27
-rw-r--r--  site/public/datasets/lfw/index.html                       27
-rw-r--r--  site/public/datasets/market_1501/index.html               27
-rw-r--r--  site/public/datasets/msceleb/index.html                  264
-rw-r--r--  site/public/datasets/oxford_town_centre/index.html        61
-rw-r--r--  site/public/datasets/pipa/index.html                      27
-rw-r--r--  site/public/datasets/pubfig/index.html                    27
-rw-r--r--  site/public/datasets/uccs/index.html                      57
-rw-r--r--  site/public/datasets/vgg_face2/index.html                 27
-rw-r--r--  site/public/datasets/viper/index.html                     27
-rw-r--r--  site/public/datasets/youtube_celebrities/index.html       27
22 files changed, 603 insertions, 467 deletions
diff --git a/site/public/datasets/50_people_one_question/index.html b/site/public/datasets/50_people_one_question/index.html
index dc7919f7..bc879799 100644
--- a/site/public/datasets/50_people_one_question/index.html
+++ b/site/public/datasets/50_people_one_question/index.html
@@ -11,13 +11,14 @@
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>50 People One Question Dataset</div>
+ <div class='page_name'>50 People One Question Dataset</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -88,7 +89,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -96,17 +97,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/afad/index.html b/site/public/datasets/afad/index.html
index f2b0a5ba..f5a04251 100644
--- a/site/public/datasets/afad/index.html
+++ b/site/public/datasets/afad/index.html
@@ -11,13 +11,14 @@
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>Asian Face Age Dataset</div>
+ <div class='page_name'>Asian Face Age Dataset</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -90,7 +91,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -109,17 +110,17 @@ Motivation</p>
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/brainwash/index.html b/site/public/datasets/brainwash/index.html
index b17617a6..8ae6b122 100644
--- a/site/public/datasets/brainwash/index.html
+++ b/site/public/datasets/brainwash/index.html
@@ -5,19 +5,46 @@
<meta charset="utf-8" />
<meta name="author" content="Adam Harvey" />
<meta name="description" content="Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco in 2014" />
+ <meta property="og:title" content="MegaPixels: Brainwash Dataset"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/datasets/brainwash/"/>
+ <meta property="og:site_name" content="MegaPixels" />
<meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
<link rel='stylesheet' href='/assets/css/fonts.css' />
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>Brainwash Dataset</div>
+ <div class='page_name'>Brainwash Dataset</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -49,10 +76,11 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='https://purl.stanford.edu/sx925dc9385' target='_blank' rel='nofollow noopener'>stanford.edu</a></div>
- </div></div><p>Brainwash is a dataset of livecam images taken from San Francisco's Brainwash Cafe. It includes 11,918 images of "everyday life of a busy downtown cafe"<a class="footnote_shim" name="[^readme]_1"> </a><a href="#[^readme]" class="footnote" title="Footnote 1">1</a> captured at 100 second intervals throught the entire day. The Brainwash dataset includes 3 full days of webcam images taken on October 27, November 13, and November 24 in 2014. According the author's <a href="https://www.semanticscholar.org/paper/End-to-End-People-Detection-in-Crowded-Scenes-Stewart-Andriluka/1bd1645a629f1b612960ab9bba276afd4cf7c666">reserach paper</a> introducing the dataset, the images were acquired with the help of Angelcam.com<a class="footnote_shim" name="[^end_to_end]_1"> </a><a href="#[^end_to_end]" class="footnote" title="Footnote 2">2</a></p>
-<p>The Brainwash dataset is unique because it uses images from a publicly available webcam that records people inside a privately owned business without any consent. No ordinary cafe custom could ever suspect there image would end up in dataset used for surveillance reserach and development, but that is exactly what happened to customers at Brainwash cafe in San Francisco.</p>
-<p>Although Brainwash appears to be a less popular dataset, it was used in 2016 and 2017 by researchers from the National University of Defense Technology in China took note of the dataset and used it for two <a href="https://www.semanticscholar.org/paper/Localized-region-context-and-object-feature-fusion-Li-Dou/b02d31c640b0a31fb18c4f170d841d8e21ffb66c">research</a> <a href="https://www.semanticscholar.org/paper/A-Replacement-Algorithm-of-Non-Maximum-Suppression-Zhao-Wang/591a4bfa6380c9fcd5f3ae690e3ac5c09b7bf37b">projects</a> on advancing the capabilities of object detection to more accurately isolate the target region in an image (<a href="https://www.itm-conferences.org/articles/itmconf/pdf/2017/04/itmconf_ita2017_05006.pdf">PDF</a>). <a class="footnote_shim" name="[^localized_region_context]_1"> </a><a href="#[^localized_region_context]" class="footnote" title="Footnote 3">3</a> <a class="footnote_shim" name="[^replacement_algorithm]_1"> </a><a href="#[^replacement_algorithm]" class="footnote" title="Footnote 4">4</a>. The dataset also appears in a 2017 <a href="https://ieeexplore.ieee.org/document/7877809">research paper</a> from Peking University for the purpose of improving surveillance capabilities for "people detection in the crowded scenes".</p>
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_grid.jpg' alt=' A visualization of 81,973 head annotations from the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A visualization of 81,973 head annotations from the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section>
+ </div></div><p>Brainwash is a dataset of livecam images taken from San Francisco's Brainwash Cafe. It includes 11,917 images of "everyday life of a busy downtown cafe"<a class="footnote_shim" name="[^readme]_1"> </a><a href="#[^readme]" class="footnote" title="Footnote 1">1</a> captured at 100-second intervals throughout the entire day. The Brainwash dataset includes 3 full days of webcam images taken on October 27, November 13, and November 24 in 2014. According to the author's <a href="https://www.semanticscholar.org/paper/End-to-End-People-Detection-in-Crowded-Scenes-Stewart-Andriluka/1bd1645a629f1b612960ab9bba276afd4cf7c666">research paper</a> introducing the dataset, the images were acquired with the help of Angelcam.com<a class="footnote_shim" name="[^end_to_end]_1"> </a><a href="#[^end_to_end]" class="footnote" title="Footnote 2">2</a></p>
+<p>The Brainwash dataset is unique because it uses images from a publicly available webcam that records people inside a privately owned business without any consent. No ordinary cafe customer would ever suspect that their image would end up in a dataset used for surveillance research and development, but that is exactly what happened to customers at Brainwash cafe in San Francisco.</p>
+<p>Although Brainwash appears to be a less popular dataset, it was notably used in 2016 and 2017 by researchers affiliated with the National University of Defense Technology in China for two <a href="https://www.semanticscholar.org/paper/Localized-region-context-and-object-feature-fusion-Li-Dou/b02d31c640b0a31fb18c4f170d841d8e21ffb66c">research</a> <a href="https://www.semanticscholar.org/paper/A-Replacement-Algorithm-of-Non-Maximum-Suppression-Zhao-Wang/591a4bfa6380c9fcd5f3ae690e3ac5c09b7bf37b">projects</a> on advancing the capabilities of object detection to more accurately isolate the target region in an image. <a class="footnote_shim" name="[^localized_region_context]_1"> </a><a href="#[^localized_region_context]" class="footnote" title="Footnote 3">3</a> <a class="footnote_shim" name="[^replacement_algorithm]_1"> </a><a href="#[^replacement_algorithm]" class="footnote" title="Footnote 4">4</a>. The <a href="https://en.wikipedia.org/wiki/National_University_of_Defense_Technology">National University of Defense Technology</a> is controlled by China's top military body, the Central Military Commission.</p>
+<p>The dataset also appears in a 2017 <a href="https://ieeexplore.ieee.org/document/7877809">research paper</a> from Peking University for the purpose of improving surveillance capabilities for "people detection in the crowded scenes".</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_grid.jpg' alt=' Nine of 11,917 images from the Brainwash dataset. Graphics credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> Nine of 11,917 images from the Brainwash dataset. Graphics credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section>
<h3>Who used Brainwash Dataset?</h3>
<p>
@@ -99,7 +127,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -112,7 +140,7 @@
<h2>Supplementary Information</h2>
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_example.jpg' alt=' An sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The datset contains 11,916 more images like this one. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> An sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The datset contains 11,916 more images like this one. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_saliency_map.jpg' alt=' A visualization of the active regions for 81,973 head annotations from the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A visualization of the active regions for 81,973 head annotations from the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_example.jpg' alt=' A sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The dataset contains a total of 11,917 images and 81,973 annotated heads. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The dataset contains a total of 11,917 images and 81,973 annotated heads. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_saliency_map.jpg' alt=' A visualization of the active regions for 81,973 head annotations in the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A visualization of the active regions for 81,973 head annotations in the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section>
<h4>Cite Our Work</h4>
<p>
@@ -137,17 +165,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/caltech_10k/index.html b/site/public/datasets/caltech_10k/index.html
index 04d63ee3..5848b804 100644
--- a/site/public/datasets/caltech_10k/index.html
+++ b/site/public/datasets/caltech_10k/index.html
@@ -11,13 +11,14 @@
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>Brainwash Dataset</div>
+ <div class='page_name'>Caltech 10K Web Faces Dataset</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -96,7 +97,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -106,17 +107,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/celeba/index.html b/site/public/datasets/celeba/index.html
index c72f3798..92c0e334 100644
--- a/site/public/datasets/celeba/index.html
+++ b/site/public/datasets/celeba/index.html
@@ -11,13 +11,14 @@
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>CelebA Dataset</div>
+ <div class='page_name'>CelebA Dataset</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -94,7 +95,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -108,17 +109,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/cofw/index.html b/site/public/datasets/cofw/index.html
index eef8cf5e..fd6d86ae 100644
--- a/site/public/datasets/cofw/index.html
+++ b/site/public/datasets/cofw/index.html
@@ -11,13 +11,14 @@
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>COFW Dataset</div>
+ <div class='page_name'>COFW Dataset</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -87,7 +88,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -138,7 +139,7 @@ To increase the number of training images, and since COFW has the exact same la
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -161,17 +162,17 @@ To increase the number of training images, and since COFW has the exact same la
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html
index 14e6bee0..24ee6cc2 100644
--- a/site/public/datasets/duke_mtmc/index.html
+++ b/site/public/datasets/duke_mtmc/index.html
@@ -5,19 +5,46 @@
<meta charset="utf-8" />
<meta name="author" content="Adam Harvey" />
<meta name="description" content="Duke MTMC is a dataset of surveillance camera footage of students on Duke University campus" />
+ <meta property="og:title" content="MegaPixels: Duke MTMC Dataset"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/datasets/duke_mtmc/"/>
+ <meta property="og:site_name" content="MegaPixels" />
<meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
<link rel='stylesheet' href='/assets/css/fonts.css' />
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>Duke MTMC Dataset</div>
+ <div class='page_name'>Duke MTMC Dataset</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -46,7 +73,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://vision.cs.duke.edu/DukeMTMC/' target='_blank' rel='nofollow noopener'>duke.edu</a></div>
- </div></div><p>Duke MTMC (Multi-Target, Multi-Camera) is a dataset of surveillance video footage taken on Duke University's campus in 2014 and is used for research and development of video tracking systems, person re-identification, and low-resolution facial recognition. The dataset contains over 14 hours of synchronized surveillance video from 8 cameras at 1080p and 60FPS with over 2 million frames of 2,000 students walking to and from classes. The 8 surveillance cameras deployed on campus were specifically setup to capture students "during periods between lectures, when pedestrian traffic is heavy"<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 1">1</a>.</p>
+ </div></div><p>Duke MTMC (Multi-Target, Multi-Camera) is a dataset of surveillance video footage taken on Duke University's campus in 2014 and is used for research and development of video tracking systems, person re-identification, and low-resolution facial recognition. The dataset contains over 14 hours of synchronized surveillance video from 8 cameras at 1080p and 60 FPS, with over 2 million frames of 2,000 students walking to and from classes. The 8 surveillance cameras deployed on campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy"<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 1">1</a>.</p>
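+<p>As a rough consistency check on those figures (reading the 14 hours as the combined total across all 8 cameras): 14 hours × 3,600 seconds/hour × 60 frames/second ≈ 3.0 million frames, in line with the stated "over 2 million frames".</p>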
<p>In this investigation into the Duke MTMC dataset we tracked down over 100 publicly available research papers that explicitly acknowledged using Duke MTMC. Our analysis shows that the dataset has spread far beyond its origins and intentions in academic research projects at Duke University. Since its publication in 2016, more than twice as many research citations originated in China as in the United States. Among these citations were papers with explicit and direct links to the Chinese military and several of the companies known to provide Chinese authorities with the oppressive surveillance technology used to monitor millions of Uighur Muslims.</p>
<p>In one 2018 <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Attention-Aware_Compositional_Network_CVPR_2018_paper.pdf">paper</a> jointly published by researchers from SenseNets and SenseTime (and funded by SenseTime Group Limited) entitled <a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">Attention-Aware Compositional Network for Person Re-identification</a>, the Duke MTMC dataset was used for "extensive experiments" on improving person re-identification across multiple surveillance cameras with important applications in "finding missing elderly and children, and suspect tracking, etc." Both SenseNets and SenseTime have been directly linked to providing surveillance technology used to monitor Uighur Muslims in China. <a class="footnote_shim" name="[^xinjiang_nyt]_1"> </a><a href="#[^xinjiang_nyt]" class="footnote" title="Footnote 4">4</a><a class="footnote_shim" name="[^sensetime_qz]_1"> </a><a href="#[^sensetime_qz]" class="footnote" title="Footnote 2">2</a><a class="footnote_shim" name="[^sensenets_uyghurs]_1"> </a><a href="#[^sensenets_uyghurs]" class="footnote" title="Footnote 3">3</a></p>
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_reid_montage.jpg' alt=' A collection of 1,600 out of the approximately 2,000 students and pedestrians in the Duke MTMC dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification, and eventually the QMUL SurvFace face recognition dataset. Open Data Commons Attribution License.'><div class='caption'> A collection of 1,600 out of the approximately 2,000 students and pedestrians in the Duke MTMC dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification, and eventually the QMUL SurvFace face recognition dataset. Open Data Commons Attribution License.</div></div></section><section><p>Despite <a href="https://www.hrw.org/news/2017/11/19/china-police-big-data-systems-violate-privacy-target-dissent">repeated</a> <a href="https://www.hrw.org/news/2018/02/26/china-big-data-fuels-crackdown-minority-region">warnings</a> by Human Rights Watch that the authoritarian surveillance used in China represents a violation of human rights, researchers at Duke University continued to provide open access to their dataset for anyone to use for any project. As the surveillance crisis in China grew, so did the number of citations with links to organizations complicit in the crisis. In 2018 alone there were over 70 research projects happening in China that publicly acknowledged benefiting from the Duke MTMC dataset. Amongst these were projects from SenseNets, SenseTime, CloudWalk, Megvii, Beihang University, and the PLA's National University of Defense Technology.</p>
@@ -139,8 +166,8 @@
</tr>
</tbody>
</table>
-<p>The reasons that companies in China use the Duke MTMC dataset for research are technically no different than the reasons it is used in the United States and Europe. In fact, the original creators of the dataset published a follow up report in 2017 titled <a href="https://www.semanticscholar.org/paper/Tracking-Social-Groups-Within-and-Across-Cameras-Solera-Calderara/9e644b1e33dd9367be167eb9d832174004840400">Tracking Social Groups Within and Across Cameras</a> with specific applications to "automated analysis of crowds and social gatherings for surveillance and security applications". Their work, as well as the creation of the original dataset in 2014 were both supported in part by the United States Army Research Laboratory.</p>
-<p>Citations from the United States and Europe show a similar trend to that in China, including publicly acknowledged and verified usage of the Duke MTMC dataset supported or carried out by the United States Department of Homeland Security, IARPA, IBM, Microsoft (who provides surveillance to ICE), and Vision Semantics (who works with the UK Ministry of Defence). One <a href="https://pdfs.semanticscholar.org/59f3/57015054bab43fb8cbfd3f3dbf17b1d1f881.pdf">paper</a> is even jointly published by researchers affiliated with both the University College of London and the National University of Defense Technology in China.</p>
+<p>The reasons that companies in China use the Duke MTMC dataset for research are technically no different from the reasons it is used in the United States and Europe. In fact, the original creators of the dataset published a follow-up report in 2017 titled "<a href="https://www.semanticscholar.org/paper/Tracking-Social-Groups-Within-and-Across-Cameras-Solera-Calderara/9e644b1e33dd9367be167eb9d832174004840400">Tracking Social Groups Within and Across Cameras</a>" with specific applications to "automated analysis of crowds and social gatherings for surveillance and security applications". Their work, as well as the creation of the original dataset in 2014, was supported in part by the United States Army Research Laboratory.</p>
+<p>Citations from the United States and Europe show a similar trend to that in China, including publicly acknowledged and verified usage of the Duke MTMC dataset supported or carried out by the United States Department of Homeland Security, IARPA, IBM, Microsoft (who has provided surveillance to ICE), and Vision Semantics (who has worked with the UK Ministry of Defence). One <a href="https://pdfs.semanticscholar.org/59f3/57015054bab43fb8cbfd3f3dbf17b1d1f881.pdf">paper</a> was even jointly published by researchers affiliated with both University College London and the National University of Defense Technology in China.</p>
<table>
<thead><tr>
<th>Organization</th>
@@ -246,7 +273,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -260,7 +287,7 @@
<h2>Supplementary Information</h2>
</section><section><h4>Video Timestamps</h4>
-<p>The video timestamps contain the likely, but not yet confirmed, date and times the video recorded. Because the video timestamps align with the start and stop <a href="http://vision.cs.duke.edu/DukeMTMC/details.html#time-sync">time sync data</a> provided by the researchers, it at least confirms the relative timing. The <a href="https://www.wunderground.com/history/daily/KIGX/date/2014-3-19?req_city=Durham&amp;req_state=NC&amp;req_statename=North%20Carolina&amp;reqdb.zip=27708&amp;reqdb.magic=1&amp;reqdb.wmo=99999">precipitous weather</a> on March 14, 2014 in Durham, North Carolina supports, but does not confirm, that this day is a potential capture date.</p>
+<p>The video timestamps contain the likely, but not yet confirmed, dates and times the video was recorded. Because the video timestamps align with the start and stop <a href="http://vision.cs.duke.edu/DukeMTMC/details.html#time-sync">time sync data</a> provided by the researchers, they at least confirm the relative timing. The <a href="https://www.wunderground.com/history/daily/KIGX/date/2014-3-19?req_city=Durham&amp;req_state=NC&amp;req_statename=North%20Carolina&amp;reqdb.zip=27708&amp;reqdb.magic=1&amp;reqdb.wmo=99999">precipitous weather</a> on March 14, 2014 in Durham, North Carolina supports, but does not confirm, that this day is the likely capture date.</p>
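+<p>As a minimal illustration of this kind of check (all formats below are assumptions, since the timestamp and sync files themselves are not reproduced on this page): the duration implied by the on-screen timestamps should match the duration implied by a camera's start/stop frame numbers at 60 FPS.</p>
+<pre>
+# Sketch: confirming *relative* timing as described above. Assumes the
+# overlaid timestamps have been parsed to datetimes and that the sync
+# data gives per-camera start/stop frame numbers; both are assumptions.
+from datetime import datetime
+
+FPS = 60
+
+def relative_timing_consistent(ts_start, ts_end, frame_start, frame_end, tol=1.0):
+    # Seconds implied by the on-screen timestamps...
+    ts_seconds = (ts_end - ts_start).total_seconds()
+    # ...should match seconds implied by the frame span at 60 FPS.
+    frame_seconds = (frame_end - frame_start) / FPS
+    return abs(ts_seconds - frame_seconds) <= tol
+
+start = datetime(2014, 3, 14, 16, 19, 0)
+end = datetime(2014, 3, 14, 17, 44, 0)
+print(relative_timing_consistent(start, end, 1, 306001))  # 85 min of video
+</pre>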
</section><section><div class='columns columns-2'><div class='column'><table>
<thead><tr>
<th>Camera</th>
@@ -369,17 +396,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/feret/index.html b/site/public/datasets/feret/index.html
index 387826b0..88b025ae 100644
--- a/site/public/datasets/feret/index.html
+++ b/site/public/datasets/feret/index.html
@@ -11,13 +11,14 @@
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>LFW</div>
+ <div class='page_name'>FERET Dataset</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -90,7 +91,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -119,17 +120,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/hrt_transgender/index.html b/site/public/datasets/hrt_transgender/index.html
index 6b9ae7be..1dde3ded 100644
--- a/site/public/datasets/hrt_transgender/index.html
+++ b/site/public/datasets/hrt_transgender/index.html
@@ -5,19 +5,46 @@
<meta charset="utf-8" />
<meta name="author" content="Adam Harvey" />
<meta name="description" content="TBD" />
+ <meta property="og:title" content="MegaPixels: HRT Transgender Dataset"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/hrt_transgender/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/datasets/hrt_transgender/"/>
+ <meta property="og:site_name" content="MegaPixels" />
<meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
<link rel='stylesheet' href='/assets/css/fonts.css' />
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>HRT Transgender</div>
+ <div class='page_name'>HRT Transgender</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -49,17 +76,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/ijb_c/index.html b/site/public/datasets/ijb_c/index.html
index b6a16bfe..3bc23ca5 100644
--- a/site/public/datasets/ijb_c/index.html
+++ b/site/public/datasets/ijb_c/index.html
@@ -4,20 +4,47 @@
<title>MegaPixels</title>
<meta charset="utf-8" />
<meta name="author" content="Adam Harvey" />
- <meta name="description" content="IJB-C is a datset ..." />
+ <meta name="description" content="IARPA Janus Benchmark C is a dataset of web images used" />
+ <meta property="og:title" content="MegaPixels: IJB-C"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/ijb_c/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/datasets/ijb_c/"/>
+ <meta property="og:site_name" content="MegaPixels" />
<meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
<link rel='stylesheet' href='/assets/css/fonts.css' />
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>IJB-C</div>
+ <div class='page_name'>IJB-C</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -26,7 +53,7 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/ijb_c/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>IJB-C is a datset ...</span></div><div class='hero_subdesc'><span class='bgpad'>The IJB-C dataset contains...
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/ijb_c/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>IARPA Janus Benchmark C is a dataset of web images used for face recognition research</span></div><div class='hero_subdesc'><span class='bgpad'>The IJB-C dataset contains 21,294 images and 11,779 videos of 3,531 identities
</span></div></div></section><section><h2>IARPA Janus Benchmark C (IJB-C)</h2>
</section><section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
@@ -42,13 +69,13 @@
<div>3,531 </div>
</div><div class='meta'>
<div class='gray'>Purpose</div>
- <div>face recognition challenge by NIST in full motion videos</div>
+ <div>Face recognition</div>
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='https://www.nist.gov/programs-projects/face-challenges' target='_blank' rel='nofollow noopener'>nist.gov</a></div>
- </div></div><p>Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor</p>
-<p>Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor Loren ipsum dolor</p>
-</section><section>
+ </div></div><p>[ page under development ]</p>
+<p>The IARPA Janus Benchmark C is a dataset created by NIST for the IARPA Janus face recognition challenge.</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/ijb_c/assets/ijb_c_montage.jpg' alt=' A visualization of the IJB-C dataset'><div class='caption'> A visualization of the IJB-C dataset</div></div></section><section>
<h3>Who used IJB-C?</h3>
<p>
@@ -125,11 +152,7 @@
}</pre>
</p>
-</section><section><h3>References</h3><section><ul class="footnotes"><li>1 <a name="[^readme]" class="footnote_shim"></a><span class="backlinks"></span>"readme.txt" <a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">https://exhibits.stanford.edu/data/catalog/sx925dc9385</a>.
-</li><li>2 <a name="[^end_to_end]" class="footnote_shim"></a><span class="backlinks"></span>Stewart, Russel. Andriluka, Mykhaylo. "End-to-end people detection in crowded scenes". 2016.
-</li><li>3 <a name="[^localized_region_context]" class="footnote_shim"></a><span class="backlinks"></span>Li, Y. and Dou, Y. and Liu, X. and Li, T. Localized Region Context and Object Feature Fusion for People Head Detection. ICIP16 Proceedings. 2016. Pages 594-598.
-</li><li>4 <a name="[^replacement_algorithm]" class="footnote_shim"></a><span class="backlinks"></span>Zhao. X, Wang Y, Dou, Y. A Replacement Algorithm of Non-Maximum Suppression Base on Graph Clustering.
-</li></ul></section></section>
+</section>
</div>
<footer>
diff --git a/site/public/datasets/index.html b/site/public/datasets/index.html
index b463b378..b5fe52ed 100644
--- a/site/public/datasets/index.html
+++ b/site/public/datasets/index.html
@@ -5,12 +5,39 @@
<meta charset="utf-8" />
<meta name="author" content="Adam Harvey" />
<meta name="description" content="Facial Recognition Datasets" />
+ <meta property="og:title" content="MegaPixels: MegaPixels: Face Recognition Datasets"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/datasets/"/>
+ <meta property="og:site_name" content="MegaPixels" />
<meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
<link rel='stylesheet' href='/assets/css/fonts.css' />
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
@@ -28,8 +55,8 @@
<div class='dataset-heading'>
- <section><h1>Facial Recognition Datasets</h1>
-<p>Explore publicly available facial recognition datasets feeding into research and development of biometric surveillance technologies at the largest technology companies and defense contractors in the world.</p>
+ <section><h1>Face Recognition Datasets</h1>
+<p>Explore face recognition datasets contributing to the growing crisis of authoritarian biometric surveillance technologies. This first group of 5 datasets focuses on image usage connected to foreign surveillance and defense organizations.</p>
</section>
</div>
@@ -41,7 +68,7 @@
<a href="/datasets/brainwash/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/index.jpg)">
<div class="dataset">
- <span class='title'>Brainwash</span>
+ <span class='title'>Brainwash Dataset</span>
<div class='fields'>
<div class='year visible'><span>2015</span></div>
<div class='purpose'><span>Head detection</span></div>
@@ -53,7 +80,7 @@
<a href="/datasets/duke_mtmc/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/index.jpg)">
<div class="dataset">
- <span class='title'>Duke MTMC</span>
+ <span class='title'>Duke MTMC Dataset</span>
<div class='fields'>
<div class='year visible'><span>2016</span></div>
<div class='purpose'><span>Person re-identification, multi-camera tracking</span></div>
@@ -63,21 +90,9 @@
</div>
</a>
- <a href="/datasets/hrt_transgender/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/hrt_transgender/assets/index.jpg)">
- <div class="dataset">
- <span class='title'>HRT Transgender Dataset</span>
- <div class='fields'>
- <div class='year visible'><span>2013</span></div>
- <div class='purpose'><span>Face recognition, gender transition biometrics</span></div>
- <div class='images'><span>10,564 images</span></div>
- <div class='identities'><span>38 </span></div>
- </div>
- </div>
- </a>
-
<a href="/datasets/msceleb/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/index.jpg)">
<div class="dataset">
- <span class='title'>Microsoft Celeb</span>
+ <span class='title'>Microsoft Celeb Dataset</span>
<div class='fields'>
<div class='year visible'><span>2016</span></div>
<div class='purpose'><span>Large-scale face recognition</span></div>
@@ -89,7 +104,7 @@
<a href="/datasets/oxford_town_centre/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/index.jpg)">
<div class="dataset">
- <span class='title'>Oxford Town Centre</span>
+ <span class='title'>Oxford Town Centre Dataset</span>
<div class='fields'>
<div class='year visible'><span>2009</span></div>
<div class='purpose'><span>Person detection, gaze estimation</span></div>
@@ -101,7 +116,7 @@
<a href="/datasets/uccs/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/index.jpg)">
<div class="dataset">
- <span class='title'>UnConstrained College Students</span>
+ <span class='title'>UnConstrained College Students Dataset</span>
<div class='fields'>
<div class='year visible'><span>2016</span></div>
<div class='purpose'><span>Face recognition, face detection</span></div>
@@ -117,17 +132,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/lfpw/index.html b/site/public/datasets/lfpw/index.html
index 45de2599..68c3e033 100644
--- a/site/public/datasets/lfpw/index.html
+++ b/site/public/datasets/lfpw/index.html
@@ -11,13 +11,14 @@
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>LFWP</div>
+ <div class='page_name'>LFWP</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -83,7 +84,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -98,17 +99,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/lfw/index.html b/site/public/datasets/lfw/index.html
index ca17b1cd..7ae440a8 100644
--- a/site/public/datasets/lfw/index.html
+++ b/site/public/datasets/lfw/index.html
@@ -11,13 +11,14 @@
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>LFW</div>
+ <div class='page_name'>LFW</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -97,7 +98,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -148,17 +149,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/market_1501/index.html b/site/public/datasets/market_1501/index.html
index 7c545335..0415f969 100644
--- a/site/public/datasets/market_1501/index.html
+++ b/site/public/datasets/market_1501/index.html
@@ -11,13 +11,14 @@
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>Market 1501</div>
+ <div class='page_name'>Market 1501</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -91,7 +92,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -114,17 +115,17 @@ organization={Springer}
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/msceleb/index.html b/site/public/datasets/msceleb/index.html
index fa485ac0..f1d59366 100644
--- a/site/public/datasets/msceleb/index.html
+++ b/site/public/datasets/msceleb/index.html
@@ -4,20 +4,47 @@
<title>MegaPixels</title>
<meta charset="utf-8" />
<meta name="author" content="Adam Harvey" />
- <meta name="description" content="Microsoft Celeb 1M is a target list and dataset of web images used for research and development of face recognition technologies" />
+ <meta name="description" content="Microsoft Celeb 1M is a target list and dataset of web images used for research and development of face recognition" />
+ <meta property="og:title" content="MegaPixels: Microsoft Celeb Dataset"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/datasets/msceleb/"/>
+ <meta property="og:site_name" content="MegaPixels" />
<meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
<link rel='stylesheet' href='/assets/css/fonts.css' />
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>Microsoft Celeb</div>
+ <div class='page_name'>Microsoft Celeb</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -26,20 +53,20 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Microsoft Celeb 1M is a target list and dataset of web images used for research and development of face recognition technologies</span></div><div class='hero_subdesc'><span class='bgpad'>The MS Celeb dataset includes over 10 million images of about 100K people and a target list of 1 million individuals
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Microsoft Celeb 1M is a target list and dataset of web images used for research and development of face recognition</span></div><div class='hero_subdesc'><span class='bgpad'>The MS Celeb dataset includes over 10 million images of about 100K people and a target list of 1 million individuals
</span></div></div></section><section><h2>Microsoft Celeb Dataset (MS Celeb)</h2>
</section><section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2016</div>
</div><div class='meta'>
<div class='gray'>Images</div>
- <div>1,000,000 </div>
+ <div>10,000,000 </div>
</div><div class='meta'>
<div class='gray'>Identities</div>
<div>100,000 </div>
</div><div class='meta'>
<div class='gray'>Purpose</div>
- <div>Large-scale face recognition</div>
+ <div>Face recognition</div>
</div><div class='meta'>
<div class='gray'>Created by</div>
<div>Microsoft Research</div>
@@ -49,210 +76,133 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.msceleb.org/' target='_blank' rel='nofollow noopener'>msceleb.org</a></div>
- </div></div><p>Microsoft Celeb (MS Celeb) is a dataset of 10 million face images scraped from the Internet and used for research and development of large-scale biometric recognition systems. According to Microsoft Research who created and published the <a href="http://msceleb.org">dataset</a> in 2016, MS Celeb is the largest publicly available face recognition dataset in the world, containing over 10 million images of nearly 100,000 individuals. Microsoft's goal in building this dataset was to distribute the initial training dataset of 100,000 individuals images and use this to accelerate reserch into recognizing a target list of one million individuals from their face images "using all the possibly collected face images of this individual on the web as training data".<a class="footnote_shim" name="[^msceleb_orig]_1"> </a><a href="#[^msceleb_orig]" class="footnote" title="Footnote 2">2</a></p>
-<p>These one million people, defined as Micrsoft Research as "celebrities", are often merely people who must maintain an online presence for their professional lives. Microsoft's list of 1 million people is an expansive exploitation of the current reality that for many people including academics, policy makers, writers, artists, and especially journalists maintaining an online presence is mandatory and should not allow Microsoft (or anyone else) to use their biometrics for reserach and development of surveillance technology. Many of names in target list even include people critical of the very technology Microsoft is using their name and biometric information to build. The list includes digital rights activists like Jillian York and [add more]; artists critical of surveillance including Trevor Paglen, Hito Steryl, Kyle McDonald, Jill Magid, and Aram Bartholl; Intercept founders Laura Poitras, Jeremy Scahill, and Glen Greenwald; Data and Society founder danah boyd; and even Julie Brill the former FTC commissioner responsible for protecting consumer’s privacy to name a few.</p>
+ </div></div><p>Microsoft Celeb (MS Celeb) is a dataset of 10 million face images scraped from the Internet and used for research and development of large-scale biometric recognition systems. According to Microsoft Research, who created and published the <a href="https://www.microsoft.com/en-us/research/publication/ms-celeb-1m-dataset-benchmark-large-scale-face-recognition-2/">dataset</a> in 2016, MS Celeb is the largest publicly available face recognition dataset in the world, containing over 10 million images of nearly 100,000 individuals. Microsoft's goal in building this dataset was to distribute an initial training dataset of 100,000 individuals' images to accelerate research into recognizing a larger target list of one million people "using all the possibly collected face images of this individual on the web as training data".<a class="footnote_shim" name="[^msceleb_orig]_1"> </a><a href="#[^msceleb_orig]" class="footnote" title="Footnote 1">1</a></p>
+<p>These one million people, defined by Microsoft Research as "celebrities", are often merely people who must maintain an online presence for their professional lives. Microsoft's list of 1 million people is an expansive exploitation of the current reality that for many people, including academics, policy makers, writers, artists, and especially journalists, maintaining an online presence is mandatory. This fact should not allow Microsoft, or anyone else, to use their biometrics for research and development of surveillance technology. The target list even includes many people critical of the very technology Microsoft is using their names and biometric information to build: digital rights activists like Jillian York; artists critical of surveillance including Trevor Paglen, Jill Magid, and Aram Bartholl; Intercept founders Laura Poitras, Jeremy Scahill, and Glenn Greenwald; Data and Society founder danah boyd; and even Julie Brill, the former FTC commissioner responsible for protecting consumer privacy, to name a few.</p>
<h3>Microsoft's 1 Million Target List</h3>
-<p>Below is a list of names that were included in list of 1 million individuals curated to illustrate Microsoft's expansive and exploitative practice of scraping the Internet for biometric training data. The entire name file can be downloaded from <a href="https://msceleb.org">msceleb.org</a>. Names appearing with * indicate that Microsoft also distributed imaged.</p>
-<p>[ cleaning this up ]</p>
+<p>Below is a selection of 24 names from the full target list, curated to illustrate Microsoft's expansive and exploitative practice of scraping the Internet for biometric training data. The entire name file can be downloaded from <a href="https://www.msceleb.org">msceleb.org</a>. You can email <a href="mailto:msceleb@microsoft.com?subject=MS-Celeb-1M Removal Request&body=Dear%20Microsoft%2C%0A%0AI%20recently%20discovered%20that%20you%20use%20my%20identity%20for%20commercial%20purposes%20in%20your%20MS-Celeb-1M%20dataset%20used%20for%20research%20and%20development%20of%20face%20recognition.%20I%20do%20not%20wish%20to%20be%20included%20in%20your%20dataset%20in%20any%20format.%20%0A%0APlease%20remove%20my%20name%20and%2For%20any%20associated%20images%20immediately%20and%20send%20a%20confirmation%20once%20you've%20updated%20your%20%22Top1M_MidList.Name.tsv%22%20file.%0A%0AThanks%20for%20promptly%20handling%20this%2C%0A%5B%20your%20name%20%5D">msceleb@microsoft.com</a> to have your name removed. Names appearing with * indicate that Microsoft also distributed their images.</p>
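+<p>A minimal sketch of how such a prewritten removal-request link can be generated (the address is real, but the subject and body below are illustrative rather than the exact wording used above):</p>
+<pre>
+from urllib.parse import quote
+
+# Illustrative subject and body; a mailto: URL carries both as percent-encoded query parameters
+subject = "MS-Celeb-1M Removal Request"
+body = ("Dear Microsoft,\n\n"
+        "Please remove my name and any associated images from the "
+        "MS-Celeb-1M dataset and send a confirmation once done.\n\n"
+        "[ your name ]")
+
+link = "mailto:msceleb@microsoft.com?subject=" + quote(subject) + "&body=" + quote(body)
+print(link)
+</pre>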
</section><section><div class='columns columns-2'><div class='column'><table>
<thead><tr>
<th>Name</th>
-<th>ID</th>
<th>Profession</th>
-<th>Images</th>
</tr>
</thead>
<tbody>
<tr>
-<td>Jeremy Scahill</td>
-<td>/m/02p_8_n</td>
+<td>Adrian Chen</td>
<td>Journalist</td>
-<td>x</td>
-</tr>
-<tr>
-<td>Jillian York</td>
-<td>/m/0g9_3c3</td>
-<td>Digital rights activist</td>
-<td>x</td>
</tr>
<tr>
-<td>Astra Taylor</td>
-<td>/m/05f6_39</td>
-<td>Author, activist</td>
-<td>x</td>
-</tr>
-<tr>
-<td>Jonathan Zittrain</td>
-<td>/m/01f75c</td>
-<td>EFF board member</td>
-<td>no</td>
+<td>Ai Weiwei*</td>
+<td>Artist</td>
</tr>
<tr>
-<td>Julie Brill</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Aram Bartholl</td>
+<td>Internet artist</td>
</tr>
<tr>
-<td>Jonathan Zittrain</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Astra Taylor</td>
+<td>Author, director, activist</td>
</tr>
<tr>
-<td>Bruce Schneier</td>
-<td>m.095js</td>
-<td>Cryptologist and author</td>
-<td>yes</td>
+<td>Alexis Madrigal</td>
+<td>Journalist</td>
</tr>
<tr>
-<td>Julie Brill</td>
-<td>m.0bs3s9g</td>
-<td>x</td>
-<td>x</td>
+<td>Bruce Schneier*</td>
+<td>Cryptologist</td>
</tr>
<tr>
-<td>Kim Zetter</td>
-<td>/m/09r4j3</td>
-<td>x</td>
-<td>x</td>
+<td>danah boyd</td>
+<td>Data &amp; Society founder</td>
</tr>
<tr>
-<td>Ethan Zuckerman</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Edward Felten</td>
+<td>Former FTC Chief Technologist</td>
</tr>
<tr>
-<td>Jill Magid</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Evgeny Morozov*</td>
+<td>Tech writer, researcher</td>
</tr>
<tr>
-<td>Kyle McDonald</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Glenn Greenwald*</td>
+<td>Journalist, author</td>
</tr>
<tr>
-<td>Trevor Paglen</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Hito Steyerl</td>
+<td>Artist, writer</td>
</tr>
<tr>
-<td>R. Luke DuBois</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>James Risen</td>
+<td>Journalist</td>
</tr>
</tbody>
</table>
</div><div class='column'><table>
<thead><tr>
<th>Name</th>
-<th>ID</th>
<th>Profession</th>
-<th>Images</th>
</tr>
</thead>
<tbody>
<tr>
-<td>Trevor Paglen</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
-</tr>
-<tr>
-<td>Ai Weiwei</td>
-<td>/m/0278dyq</td>
-<td>x</td>
-<td>x</td>
-</tr>
-<tr>
-<td>Jer Thorp</td>
-<td>/m/01h8lg</td>
-<td>x</td>
-<td>x</td>
+<td>Jeremy Scahill*</td>
+<td>Journalist</td>
</tr>
<tr>
-<td>Edward Felten</td>
-<td>/m/028_7k</td>
-<td>x</td>
-<td>x</td>
+<td>Jill Magid</td>
+<td>Artist</td>
</tr>
<tr>
-<td>Evgeny Morozov</td>
-<td>/m/05sxhgd</td>
-<td>Scholar and technology critic</td>
-<td>yes</td>
+<td>Jillian York</td>
+<td>Digital rights activist</td>
</tr>
<tr>
-<td>danah boyd</td>
-<td>/m/06zmx5</td>
-<td>Data and Society founder</td>
-<td>x</td>
+<td>Jonathan Zittrain</td>
+<td>EFF board member</td>
</tr>
<tr>
-<td>Bruce Schneier</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Julie Brill</td>
+<td>Former FTC Commissioner</td>
</tr>
<tr>
-<td>Laura Poitras</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Kim Zetter</td>
+<td>Journalist, author</td>
</tr>
<tr>
-<td>Trevor Paglen</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Laura Poitras*</td>
+<td>Filmmaker</td>
</tr>
<tr>
-<td>Astra Taylor</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Luke DuBois</td>
+<td>Artist</td>
</tr>
<tr>
-<td>Shoshanaa Zuboff</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Michael Anti</td>
+<td>Political blogger</td>
</tr>
<tr>
-<td>Eyal Weizman</td>
-<td>m.0g54526</td>
-<td>x</td>
-<td>x</td>
+<td>Manal al-Sharif*</td>
+<td>Women's rights activist</td>
</tr>
<tr>
-<td>Aram Bartholl</td>
-<td>m.06_wjyc</td>
-<td>x</td>
-<td>x</td>
+<td>Shoshana Zuboff</td>
+<td>Author, academic</td>
</tr>
<tr>
-<td>James Risen</td>
-<td>m.09pk6b</td>
-<td>x</td>
-<td>x</td>
+<td>Trevor Paglen</td>
+<td>Artist, researcher</td>
</tr>
</tbody>
</table>
-</div></div></section><section><p>After publishing this list, researchers from Microsoft Asia then worked with researchers affilliated with China's National University of Defense Technology (controlled by China's Central Military Commission) and used the the MS Celeb dataset for their <a href="https://www.semanticscholar.org/paper/Faces-as-Lighting-Probes-via-Unsupervised-Deep-Yi-Zhu/b301fd2fc33f24d6f75224e7c0991f4f04b64a65">research paper</a> on using "Faces as Lighting Probes via Unsupervised Deep Highlight Extraction" with potential applications in 3D face recognition.</p>
-<p>In an article published by the Financial Times based on data discovered during this investigation, Samm Sacks (senior fellow at New American and China tech policy expert) commented that this research raised "red flags because of the nature of the technology, the authors affilliations, combined with the what we know about how this technology is being deployed in China right now".<a class="footnote_shim" name="[^madhu_ft]_1"> </a><a href="#[^madhu_ft]" class="footnote" title="Footnote 3">3</a></p>
-<p>Four more papers published by SenseTime which also use the MS Celeb dataset raise similar flags. SenseTime is Beijing based company providing surveillance to Chinese authorities including [ add context here ] has been <a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html">flagged</a> as complicity in potential human rights violations.</p>
-<p>One of the 4 SenseTime papers, "Exploring Disentangled Feature Representation Beyond Face Identification", shows how SenseTime is developing automated face analysis technology to infer race, narrow eyes, nose size, and chin size, all of which could be used to target vulnerable ethnic groups based on their facial appearances.<a class="footnote_shim" name="[^disentangled]_1"> </a><a href="#[^disentangled]" class="footnote" title="Footnote 4">4</a></p>
-<p>Earlier in 2019, Microsoft CEO <a href="https://blogs.microsoft.com/on-the-issues/2018/12/06/facial-recognition-its-time-for-action/">Brad Smith</a> called for the governmental regulation of face recognition, citing the potential for misuse, a rare admission that Microsoft's surveillance-driven business model had lost its bearing. More recently Smith also <a href="https://www.reuters.com/article/us-microsoft-ai/microsoft-turned-down-facial-recognition-sales-on-human-rights-concerns-idUSKCN1RS2FV">announced</a> that Microsoft would seemingly take stand against potential misuse and decided to not sell face recognition to an unnamed United States law enforcement agency, citing that their technology was not accurate enough to be used on minorities because it was trained mostly on white male faces.</p>
-<p>What the decision to block the sale announces is not so much that Microsoft has upgraded their ethics, but that it publicly acknolwedged it can't sell a data-driven product without data. Microsoft can't sell face recognition for faces they can't train on.</p>
-<p>Until now, that data has been freely harvested from the Internet and packaged in training sets like MS Celeb, which are overwhelmingly <a href="https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html">white</a> and <a href="https://gendershades.org">male</a>. Without balanced data, facial recognition contains blind spots. And without datasets like MS Celeb, the powerful yet innaccurate facial recognition services like Microsoft's Azure Cognitive Service also would not be able to see at all.</p>
-<p>Microsoft didn't only create MS Celeb for other researchers to use, they also used it internally. In a publicly available 2017 Microsoft Research project called "(<a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">One-shot Face Recognition by Promoting Underrepresented Classes</a>)", Microsoft leveraged the MS Celeb dataset to analyse their algorithms and advertise the results. Interestingly, the Microsoft's <a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">corporate version</a> does not mention they used the MS Celeb datset, but the <a href="https://www.semanticscholar.org/paper/One-shot-Face-Recognition-by-Promoting-Classes-Guo/6cacda04a541d251e8221d70ac61fda88fb61a70">open-acess version</a> of the paper published on arxiv.org that same year explicity mentions that Microsoft Research tested their algorithms "on the MS-Celeb-1M low-shot learning benchmark task."</p>
-<p>We suggest that if Microsoft Research wants biometric data for surveillance research and development, they should start with own researcher's biometric data instead of scraping the Internet for journalists, artists, writers, and academics.</p>
+</div></div></section><section><p>After publishing this list, researchers affiliated with Microsoft Asia worked with researchers from China's National University of Defense Technology (controlled by China's Central Military Commission), using the MS Celeb dataset for their <a href="https://www.semanticscholar.org/paper/Faces-as-Lighting-Probes-via-Unsupervised-Deep-Yi-Zhu/b301fd2fc33f24d6f75224e7c0991f4f04b64a65">research paper</a> "Faces as Lighting Probes via Unsupervised Deep Highlight Extraction", which has potential applications in 3D face recognition.</p>
+<p>In an April 10, 2019 <a href="https://www.ft.com/content/9378e7ee-5ae6-11e9-9dde-7aedca0a081a">article</a> published by the Financial Times based on data surfaced during this investigation, Samm Sacks (a senior fellow at the New America think tank) commented that this research raised "red flags because of the nature of the technology, the authors' affiliations, combined with what we know about how this technology is being deployed in China right now", adding that "the [Chinese] government is using these technologies to build surveillance systems and to detain minorities [in Xinjiang]".<a class="footnote_shim" name="[^madhu_ft]_1"> </a><a href="#[^madhu_ft]" class="footnote" title="Footnote 2">2</a></p>
+<p>Four more papers published by SenseTime, which also use the MS Celeb dataset, raise similar flags. SenseTime is a computer vision surveillance company that until <a href="https://uhrp.org/news-commentary/china%E2%80%99s-sensetime-sells-out-xinjiang-security-joint-venture">April 2019</a> provided surveillance technology to Chinese authorities to monitor and track Uighur Muslims in Xinjiang province, and has been <a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html">flagged</a> numerous times as having potential links to human rights violations.</p>
+<p>One of the 4 SenseTime papers, "<a href="https://www.semanticscholar.org/paper/Exploring-Disentangled-Feature-Representation-Face-Liu-Wei/1fd5d08394a3278ef0a89639e9bfec7cb482e0bf">Exploring Disentangled Feature Representation Beyond Face Identification</a>", shows how SenseTime was developing automated face analysis technology to infer attributes such as race, narrow eyes, nose size, and chin size, all of which could be used to target vulnerable ethnic groups based on their facial appearance.</p>
+<p>In December 2018, Microsoft President and Chief Legal Officer <a href="https://blogs.microsoft.com/on-the-issues/2018/12/06/facial-recognition-its-time-for-action/">Brad Smith</a> called for the governmental regulation of face recognition, citing the potential for misuse, a rare admission that Microsoft's surveillance-driven business model had lost its bearings. More recently, Smith <a href="https://www.reuters.com/article/us-microsoft-ai/microsoft-turned-down-facial-recognition-sales-on-human-rights-concerns-idUSKCN1RS2FV">announced</a> that Microsoft would seemingly take a stand against such potential misuse and had decided not to sell face recognition to an unnamed United States law enforcement agency, citing a lack of accuracy: the software was not suitable for use on minorities because it was trained mostly on white male faces.</p>
+<p>What the decision to block the sale signals is not so much that Microsoft has upgraded its ethics, but that Microsoft has publicly acknowledged it can't sell a data-driven product without data. In other words, Microsoft can't sell face recognition for faces it can't train on.</p>
+<p>Until now, that data has been freely harvested from the Internet and packaged in training sets like MS Celeb, which are overwhelmingly <a href="https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html">white</a> and <a href="https://gendershades.org">male</a>. Without balanced data, facial recognition contains blind spots. And without datasets like MS Celeb, the powerful yet inaccurate facial recognition services like Microsoft's Azure Cognitive Service also would not be able to see at all.</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/msceleb_montage.jpg' alt=' A visualization of 2,000 of the 100,000 identities included in the image dataset distributed by Microsoft Research. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A visualization of 2,000 of the 100,000 identities included in the image dataset distributed by Microsoft Research. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section><p>Microsoft didn't only create MS Celeb for other researchers to use, they also used it internally. In a publicly available 2017 Microsoft Research project called "<a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">One-shot Face Recognition by Promoting Underrepresented Classes</a>," Microsoft leveraged the MS Celeb dataset to analyze their algorithms and advertise the results. Interestingly, Microsoft's <a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">corporate version</a> of the paper does not mention they used the MS Celeb dataset, but the <a href="https://www.semanticscholar.org/paper/One-shot-Face-Recognition-by-Promoting-Classes-Guo/6cacda04a541d251e8221d70ac61fda88fb61a70">open-access version</a> published on arxiv.org explicitly mentions that Microsoft Research evaluated their algorithms "on the MS-Celeb-1M low-shot learning benchmark task."</p>
+<p>We suggest that if Microsoft Research wants to make biometric data publicly available for surveillance research and development, they should start by releasing their own researchers' biometric data instead of scraping the Internet for journalists, artists, writers, actors, athletes, musicians, and academics.</p>
</section><section>
<h3>Who used Microsoft Celeb?</h3>
@@ -300,7 +250,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -313,25 +263,23 @@
<h2>Supplementary Information</h2>
-</section><section><h3>References</h3><section><ul class="footnotes"><li>1 <a name="[^brad_smith]" class="footnote_shim"></a><span class="backlinks"></span>Brad Smith cite
-</li><li>2 <a name="[^msceleb_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^msceleb_orig]_1">a</a></span>MS-Celeb-1M: A Dataset and Benchmark for Large-Scale Face Recognition
-</li><li>3 <a name="[^madhu_ft]" class="footnote_shim"></a><span class="backlinks"><a href="#[^madhu_ft]_1">a</a></span>Microsoft worked with Chinese military university on artificial intelligence
-</li><li>4 <a name="[^disentangled]" class="footnote_shim"></a><span class="backlinks"><a href="#[^disentangled]_1">a</a></span>"Exploring Disentangled Feature Representation Beyond Face Identification"
+</section><section><h3>References</h3><section><ul class="footnotes"><li>1 <a name="[^msceleb_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^msceleb_orig]_1">a</a></span>MS-Celeb-1M: A Dataset and Benchmark for Large-Scale Face Recognition
+</li><li>2 <a name="[^madhu_ft]" class="footnote_shim"></a><span class="backlinks"><a href="#[^madhu_ft]_1">a</a></span>Murgia, Madhumita. Microsoft worked with Chinese military university on artificial intelligence. Financial Times. April 10, 2019.
</li></ul></section></section>
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/oxford_town_centre/index.html b/site/public/datasets/oxford_town_centre/index.html
index 4fbcaccb..0cf55b5c 100644
--- a/site/public/datasets/oxford_town_centre/index.html
+++ b/site/public/datasets/oxford_town_centre/index.html
@@ -5,19 +5,46 @@
<meta charset="utf-8" />
<meta name="author" content="Adam Harvey" />
<meta name="description" content="Oxford Town Centre is a dataset of surveillance camera footage from Cornmarket St Oxford, England" />
+ <meta property="og:title" content="MegaPixels: Oxford Town Centre Dataset"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/datasets/oxford_town_centre/"/>
+ <meta property="og:site_name" content="MegaPixels" />
<meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
<link rel='stylesheet' href='/assets/css/fonts.css' />
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>TownCentre</div>
+ <div class='page_name'>TownCentre</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -50,7 +77,7 @@
<div class='gray'>Website</div>
<div><a href='http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html' target='_blank' rel='nofollow noopener'>ox.ac.uk</a></div>
</div></div><p>The Oxford Town Centre dataset is a CCTV video of pedestrians in a busy downtown area in Oxford used for research and development of activity and face recognition systems.<a class="footnote_shim" name="[^ben_benfold_orig]_1"> </a><a href="#[^ben_benfold_orig]" class="footnote" title="Footnote 1">1</a> The CCTV video was obtained from a surveillance camera at the corner of Cornmarket and Market St. in Oxford, England and includes approximately 2,200 people. Since its publication in 2009<a class="footnote_shim" name="[^guiding_surveillance]_1"> </a><a href="#[^guiding_surveillance]" class="footnote" title="Footnote 2">2</a> the <a href="http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html">Oxford Town Centre dataset</a> has been used in over 80 verified research projects including commercial research by Amazon, Disney, OSRAM, and Huawei; and academic research in China, Israel, Russia, Singapore, the US, and Germany among dozens more.</p>
-<p>The Oxford Town Centre dataset is unique in that it uses footage from a public surveillance camera that would otherwise be designated for public safety. The video shows that the pedestrians act normally and unrehearsed indicating they neither knew of or consented to participation in the research project.</p>
+<p>The Oxford Town Centre dataset is unique in that it uses footage from a public surveillance camera that would otherwise be designated for public safety. The video shows that the pedestrians act normally and unrehearsed indicating they neither knew of nor consented to participation in the research project.</p>
</section><section>
<h3>Who used TownCentre?</h3>
@@ -98,7 +125,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -113,8 +140,8 @@
</section><section><h3>Location</h3>
 <p>The street location of the camera used for the Oxford Town Centre dataset was confirmed by matching the road, benches, and store signs (<a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">source</a>). At that location, two public CCTV cameras are mounted on the side of the Northgate House building at 13-20 Cornmarket St. A view from a private camera in the building across the street can be ruled out, because such a view would have to show more of the silhouette of the lower camera's mounting pole. Two options remain: either the public CCTV camera mounted to the side of the building was used, or the researchers mounted their own camera on the side of the building in the same location. Because the researchers used many other existing public CCTV cameras for their <a href="http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html">research projects</a>, it is likely that they also had access to this camera.</p>
-<p>Next, to discredit the theory that this public CCTV is only seen pointing the other way in Google Street View images, at least one public photo shows the upper CCTV camera <a href="https://www.oxcivicsoc.org.uk/northgate-house-cornmarket/">pointing in the same direction</a> as the Oxford Town Centre dataset proving the camera can and has been rotated before.</p>
-<p>As for the capture date, the text on the storefront display shows a sale happening from December 2nd &ndash; 7th indicating the capture date was between or just before those dates. The capture year is either 2008 or 2007 since prior to 2007 the Carphone Warehouse (<a href="https://www.flickr.com/photos/katieportwin/364492063/in/photolist-4meWFE-yd7rw-yd7X6-5sDHuc-yd7DN-59CpEK-5GoHAc-yd7Zh-3G2uJP-yd7US-5GomQH-4peYpq-4bAEwm-PALEr-58RkAp-5pHEkf-5v7fGq-4q1J9W-4kypQ2-5KX2Eu-yd7MV-yd7p6-4McgWb-5pJ55w-24N9gj-37u9LK-4FVcKQ-a81Enz-5qNhTG-59CrMZ-2yuwYM-5oagH5-59CdsP-4FVcKN-4PdxhC-5Lhr2j-2PAd2d-5hAwvk-zsQSG-4Cdr4F-3dUPEi-9B1RZ6-2hv5NY-4G5qwP-HCHBW-4JiuC4-4Pdr9Y-584aEV-2GYBEc-HCPkp/">photo</a>, <a href="http://www.oxfordhistory.org.uk/cornmarket/west/47_51.html">history</a>) did not exist at this location. Since the sweaters in the GAP window display are more similar to those in a <a href="web.archive.org/web/20081201002524/http://www.gap.com/">GAP website snapshot</a> from November 2007, our guess is that the footage was obtained during late November or early December 2007. The lack of street vendors and slight waste residue near the bench suggests that is was probably a weekday after rubbish removal.</p>
+<p>Next, to discredit the theory that this public CCTV camera is only ever seen pointing the other way in Google Street View imagery, at least one public photo shows the upper CCTV camera <a href="https://www.oxcivicsoc.org.uk/northgate-house-cornmarket/">pointing in the same direction</a> as in the Oxford Town Centre dataset, proving the camera can be and has been rotated.</p>
+<p>As for the capture date, the text on the storefront display shows a sale happening from December 2nd &ndash; 7th, indicating the capture date was between or just before those dates. The capture year is either 2008 or 2007, since prior to 2007 the Carphone Warehouse (<a href="https://www.flickr.com/photos/katieportwin/364492063/in/photolist-4meWFE-yd7rw-yd7X6-5sDHuc-yd7DN-59CpEK-5GoHAc-yd7Zh-3G2uJP-yd7US-5GomQH-4peYpq-4bAEwm-PALEr-58RkAp-5pHEkf-5v7fGq-4q1J9W-4kypQ2-5KX2Eu-yd7MV-yd7p6-4McgWb-5pJ55w-24N9gj-37u9LK-4FVcKQ-a81Enz-5qNhTG-59CrMZ-2yuwYM-5oagH5-59CdsP-4FVcKN-4PdxhC-5Lhr2j-2PAd2d-5hAwvk-zsQSG-4Cdr4F-3dUPEi-9B1RZ6-2hv5NY-4G5qwP-HCHBW-4JiuC4-4Pdr9Y-584aEV-2GYBEc-HCPkp/">photo</a>, <a href="http://www.oxfordhistory.org.uk/cornmarket/west/47_51.html">history</a>) did not exist at this location. Since the sweaters in the GAP window display are more similar to those in a <a href="https://web.archive.org/web/20081201002524/http://www.gap.com/">GAP website snapshot</a> from November 2007, our guess is that the footage was obtained during late November or early December 2007. The lack of street vendors and slight waste residue near the bench suggests that it was probably a weekday after rubbish removal.</p>
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/oxford_town_centre_cctv.jpg' alt=' Footage from this public CCTV camera was used to create the Oxford Town Centre dataset. Image sources: Google Street View (<a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">map</a>)'><div class='caption'> Footage from this public CCTV camera was used to create the Oxford Town Centre dataset. Image sources: Google Street View (<a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">map</a>)</div></div></section><section><div class='columns columns-'><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/oxford_town_centre_sal_body.jpg' alt=' Heat map body visualization of the pedestrians detected in the Oxford Town Centre dataset &copy; megapixels.cc'><div class='caption'> Heat map body visualization of the pedestrians detected in the Oxford Town Centre dataset &copy; megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/oxford_town_centre_sal_face.jpg' alt=' Heat map face visualization of the pedestrians detected in the Oxford Town Centre dataset &copy; megapixels.cc'><div class='caption'> Heat map face visualization of the pedestrians detected in the Oxford Town Centre dataset &copy; megapixels.cc</div></div></section></div></section><section>
<h4>Cite Our Work</h4>
@@ -138,17 +165,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/pipa/index.html b/site/public/datasets/pipa/index.html
index 6c920b46..065f3e47 100644
--- a/site/public/datasets/pipa/index.html
+++ b/site/public/datasets/pipa/index.html
@@ -11,13 +11,14 @@
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>PIPA Dataset</div>
+ <div class='page_name'>PIPA Dataset</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -94,7 +95,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
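A minimal sketch of the geocoding step described above, in Python. geopy's Nominatim geocoder, the user-agent string, and the example institution name are stand-ins; the page does not specify which geocoding service the project actually used:

```python
# Sketch: geocode an institution name extracted from a paper's front matter.
# Nominatim is a stand-in; the project's actual geocoding service is unspecified.
from geopy.geocoders import Nominatim

geocoder = Nominatim(user_agent="citation-mapper-example")  # hypothetical agent string

def locate(institution):
    """Return (latitude, longitude) for an institution name, or None if not found."""
    hit = geocoder.geocode(institution)
    return (hit.latitude, hit.longitude) if hit else None

print(locate("University of Oxford"))  # e.g. roughly (51.75, -1.25)
```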
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -102,17 +103,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/pubfig/index.html b/site/public/datasets/pubfig/index.html
index e81e12bc..79644e40 100644
--- a/site/public/datasets/pubfig/index.html
+++ b/site/public/datasets/pubfig/index.html
@@ -11,13 +11,14 @@
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>PubFig</div>
+ <div class='page_name'>PubFig</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -91,7 +92,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -99,17 +100,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/uccs/index.html b/site/public/datasets/uccs/index.html
index 23aeeff1..b5ceebd3 100644
--- a/site/public/datasets/uccs/index.html
+++ b/site/public/datasets/uccs/index.html
@@ -5,19 +5,46 @@
<meta charset="utf-8" />
<meta name="author" content="Adam Harvey" />
<meta name="description" content="UnConstrained College Students is a dataset of long-range surveillance photos of students on University of Colorado in Colorado Springs campus" />
+ <meta property="og:title" content="MegaPixels: UnConstrained College Students Dataset"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/datasets/uccs/"/>
+ <meta property="og:site_name" content="MegaPixels" />
<meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
<link rel='stylesheet' href='/assets/css/fonts.css' />
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>UCCS</div>
+ <div class='page_name'>UCCS</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -56,7 +83,7 @@ Their setup made it impossible for students to know they were being photographed
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_grid.jpg' alt=' Example images from the UnConstrained College Students Dataset. '><div class='caption'> Example images from the UnConstrained College Students Dataset. </div></div></section><section><p>The EXIF data embedded in the images shows that the photo capture times follow a pattern similar to that outlined by the researchers, but also reveals that the vast majority of photos (over 7,000) were taken on Tuesdays around noon during students' lunch break. The absence of any photos taken from Friday through Sunday shows that the researchers were only interested in capturing images of students during peak campus hours.</p>
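The per-weekday tally described above can be sketched in a few lines of Python, assuming the released images are ordinary JPEGs carrying a standard EXIF DateTimeOriginal tag; the "uccs_images" directory name is hypothetical:

```python
# Sketch: tally photo capture times by weekday from EXIF data.
from collections import Counter
from datetime import datetime
from pathlib import Path

from PIL import Image  # Pillow

EXIF_IFD = 0x8769           # pointer to the Exif sub-IFD
DATETIME_ORIGINAL = 0x9003  # capture time, stored in the Exif sub-IFD
DATETIME = 0x0132           # base-IFD fallback tag

def capture_time(path):
    exif = Image.open(path).getexif()
    raw = exif.get_ifd(EXIF_IFD).get(DATETIME_ORIGINAL) or exif.get(DATETIME)
    return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S") if raw else None

weekday_counts = Counter(
    t.strftime("%A")
    for p in sorted(Path("uccs_images").glob("*.jpg"))
    if (t := capture_time(p)) is not None
)
print(weekday_counts.most_common())  # per the plots below, Tuesdays dominate
```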
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_exif_plot_days.png' alt=' UCCS photos captured per weekday &copy; megapixels.cc'><div class='caption'> UCCS photos captured per weekday &copy; megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_exif_plot.png' alt=' UCCS photos captured per weekday &copy; megapixels.cc'><div class='caption'> UCCS photos captured per weekday &copy; megapixels.cc</div></div></section><section><p>The two research papers associated with the release of the UCCS dataset (<a href="https://www.semanticscholar.org/paper/Unconstrained-Face-Detection-and-Open-Set-Face-G%C3%BCnther-Hu/d4f1eb008eb80595bcfdac368e23ae9754e1e745">Unconstrained Face Detection and Open-Set Face Recognition Challenge</a> and <a href="https://www.semanticscholar.org/paper/Large-scale-unconstrained-open-set-face-database-Sapkota-Boult/07fcbae86f7a3ad3ea1cf95178459ee9eaf77cb1">Large Scale Unconstrained Open Set Face Database</a>) acknowledge that the primary funding sources for their work were United States defense and intelligence agencies. Specifically, development of the UnConstrained College Students dataset was funded by the Intelligence Advanced Research Projects Activity (IARPA), the Office of the Director of National Intelligence (ODNI), the Office of Naval Research and Department of Defense Multidisciplinary University Research Initiative (ONR MURI), and the Special Operations Command Small Business Innovation Research program (SOCOM SBIR), among others. UCCS's VAST site also explicitly <a href="https://vast.uccs.edu/project/iarpa-janus/">states</a> its involvement in the <a href="https://www.iarpa.gov/index.php/research-programs/janus">IARPA Janus</a> face recognition project, developed to serve the needs of national intelligence, establishing that the immediate beneficiaries of this dataset include United States defense and intelligence agencies, though it would go on to benefit other similar organizations.</p>
<p>In 2017, one year after its public release, the UCCS face dataset formed the basis for a defense- and intelligence-agency-funded <a href="http://www.face-recognition-challenge.com/">face recognition challenge</a> at the International Joint Conference on Biometrics in Denver, CO. In 2018 the dataset was used again for the <a href="https://erodner.github.io/ial2018eccv/">2nd Unconstrained Face Detection and Open Set Recognition Challenge</a> at the European Conference on Computer Vision (ECCV) in Munich, Germany.</p>
-<p>As of April 15, 2019, the UCCS dataset is no longer available for public download. But during the three years it was publicly available (2016-2019) the UCCS dataset appeared in at least 6 publicly available research papers including verified usage from Beihang University who is known to provide research and development for China's military; and Vision Semantics Ltd who lists the UK Ministory of Defence as a project partner.</p>
+<p>As of April 15, 2019, the UCCS dataset is no longer available for public download. But during the three years it was publicly available (2016-2019), the UCCS dataset appeared in at least six publicly available research papers, including verified usage by Beihang University, which is known to provide research and development for China's military, and by Vision Semantics Ltd, which lists the UK Ministry of Defence as a project partner.</p>
</section><section>
<h3>Who used UCCS?</h3>
@@ -104,7 +131,7 @@ Their setup made it impossible for students to know they were being photographed
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -258,17 +285,17 @@ Their setup made it impossible for students to know they were being photographed
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/vgg_face2/index.html b/site/public/datasets/vgg_face2/index.html
index a9d318f1..7844f5f4 100644
--- a/site/public/datasets/vgg_face2/index.html
+++ b/site/public/datasets/vgg_face2/index.html
@@ -11,13 +11,14 @@
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>Brainwash Dataset</div>
+ <div class='page_name'>VGG Face2 Dataset</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -96,7 +97,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -124,17 +125,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/viper/index.html b/site/public/datasets/viper/index.html
index bc4ddd3d..320899ea 100644
--- a/site/public/datasets/viper/index.html
+++ b/site/public/datasets/viper/index.html
@@ -11,13 +11,14 @@
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>VIPeR</div>
+ <div class='page_name'>VIPeR</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -96,7 +97,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -104,17 +105,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/youtube_celebrities/index.html b/site/public/datasets/youtube_celebrities/index.html
index 69b3a02e..b871ab18 100644
--- a/site/public/datasets/youtube_celebrities/index.html
+++ b/site/public/datasets/youtube_celebrities/index.html
@@ -11,13 +11,14 @@
<link rel='stylesheet' href='/assets/css/css.css' />
<link rel='stylesheet' href='/assets/css/leaflet.css' />
<link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
</head>
<body>
<header>
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='splash'>YouTube Celebrities</div>
+ <div class='page_name'>YouTube Celebrities</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -75,7 +76,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -95,17 +96,17 @@ the views of our sponsors.</li>
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>