Diffstat (limited to 'site/public/datasets')
-rw-r--r--  site/public/datasets/adience/index.html              9
-rw-r--r--  site/public/datasets/brainwash/index.html           19
-rw-r--r--  site/public/datasets/duke_mtmc/index.html           13
-rw-r--r--  site/public/datasets/helen/index.html               96
-rw-r--r--  site/public/datasets/ibm_dif/index.html             31
-rw-r--r--  site/public/datasets/ijb_c/index.html                9
-rw-r--r--  site/public/datasets/index.html                     32
-rw-r--r--  site/public/datasets/lfpw/index.html                17
-rw-r--r--  site/public/datasets/megaface/index.html            37
-rw-r--r--  site/public/datasets/msceleb/index.html             31
-rw-r--r--  site/public/datasets/oxford_town_centre/index.html  12
-rw-r--r--  site/public/datasets/pipa/index.html                 9
-rw-r--r--  site/public/datasets/uccs/index.html                13
-rw-r--r--  site/public/datasets/who_goes_there/index.html       9
14 files changed, 239 insertions, 98 deletions
diff --git a/site/public/datasets/adience/index.html b/site/public/datasets/adience/index.html
index b2aa2733..9f621441 100644
--- a/site/public/datasets/adience/index.html
+++ b/site/public/datasets/adience/index.html
@@ -55,8 +55,7 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/adience/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Adience ...</span></div><div class='hero_subdesc'><span class='bgpad'>Adience ...
-</span></div></div></section><section><h2>Adience</h2>
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/adience/assets/background.jpg)'></section><section><h2>Adience</h2>
</section><section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2014</div>
@@ -97,10 +96,10 @@
<section>
- <h3>Information Supply chain</h3>
+ <h3>Information Supply Chain</h3>
<p>
- To help understand how Adience Benchmark Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing Adience Benchmark was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ To help understand how the Adience Benchmark Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Adience Benchmark was collected, verified, and geocoded to show how AI training data has proliferated around the world. Click on the markers to reveal research projects at that location.
</p>
</section>
@@ -115,7 +114,7 @@
<li class="com">Commercial</li>
<li class="gov">Military / Government</li>
</ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+ <div class="source">Citation data is collected using SemanticScholar.org, then dataset usage is verified and geolocated. Citations are used to provide an overview of how and where images were used.</div>
</div>
diff --git a/site/public/datasets/brainwash/index.html b/site/public/datasets/brainwash/index.html
index 18600b6f..31390edf 100644
--- a/site/public/datasets/brainwash/index.html
+++ b/site/public/datasets/brainwash/index.html
@@ -55,8 +55,8 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco</span></div><div class='hero_subdesc'><span class='bgpad'>It includes 11,917 images of "everyday life of a busy downtown cafe" and is used for training face and head detection algorithms
-</span></div></div></section><section><h2>Brainwash Dataset</h2>
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/background.jpg)'></section><section><div class='image'><div class='intro-caption caption'>One of the 11,917 images in the Brainwash dataset captured from the Brainwash Cafe in San Francisco</div></div></section><section><h1>Brainwash Dataset</h1>
+<p>Update: In response to the publication of this report, the Brainwash dataset has been "removed from access at the request of the depositor."</p>
</section><section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2015</div>
@@ -78,7 +78,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='https://purl.stanford.edu/sx925dc9385' target='_blank' rel='nofollow noopener'>stanford.edu</a></div>
- </div></div><p>Brainwash is a dataset of livecam images taken from San Francisco's Brainwash Cafe. It includes 11,917 images of "everyday life of a busy downtown cafe"<a class="footnote_shim" name="[^readme]_1"> </a><a href="#[^readme]" class="footnote" title="Footnote 1">1</a> captured at 100 second intervals throughout the day. The Brainwash dataset includes 3 full days of webcam images taken on October 27, November 13, and November 24 in 2014. According the author's <a href="https://www.semanticscholar.org/paper/End-to-End-People-Detection-in-Crowded-Scenes-Stewart-Andriluka/1bd1645a629f1b612960ab9bba276afd4cf7c666">research paper</a> introducing the dataset, the images were acquired with the help of Angelcam.com. <a class="footnote_shim" name="[^end_to_end]_1"> </a><a href="#[^end_to_end]" class="footnote" title="Footnote 2">2</a></p>
+ </div><div class='meta'><div class='gray'>Press coverage</div><div><a href="https://www.nytimes.com/2019/07/13/technology/">New York Times</a>, <a href="https://www.tijd.be/dossier/legrandinconnu/brainwash/10136670.html">De Tijd</a></div></div></div><p>Brainwash is a dataset of livecam images taken from San Francisco's Brainwash Cafe. It includes 11,917 images of "everyday life of a busy downtown cafe"<a class="footnote_shim" name="[^readme]_1"> </a><a href="#[^readme]" class="footnote" title="Footnote 1">1</a> captured at 100-second intervals throughout the day. The Brainwash dataset includes 3 full days of webcam images taken on October 27, November 13, and November 24 in 2014. According to the author's <a href="https://www.semanticscholar.org/paper/End-to-End-People-Detection-in-Crowded-Scenes-Stewart-Andriluka/1bd1645a629f1b612960ab9bba276afd4cf7c666">research paper</a> introducing the dataset, the images were acquired with the help of Angelcam.com. <a class="footnote_shim" name="[^end_to_end]_1"> </a><a href="#[^end_to_end]" class="footnote" title="Footnote 2">2</a></p>
<p>The Brainwash dataset is unique because it uses images from a publicly available webcam that records people inside a privately owned business without their consent. No ordinary cafe customer could ever suspect that their image would end up in a dataset used for surveillance research and development, but that is exactly what happened to customers at Brainwash Cafe in San Francisco.</p>
<p>Although Brainwash appears to be a less popular dataset, it was notably used in 2016 and 2017 by researchers affiliated with the National University of Defense Technology in China for two <a href="https://www.semanticscholar.org/paper/Localized-region-context-and-object-feature-fusion-Li-Dou/b02d31c640b0a31fb18c4f170d841d8e21ffb66c">research</a> <a href="https://www.semanticscholar.org/paper/A-Replacement-Algorithm-of-Non-Maximum-Suppression-Zhao-Wang/591a4bfa6380c9fcd5f3ae690e3ac5c09b7bf37b">projects</a> on advancing the capabilities of object detection to more accurately isolate the target region in an image. <a class="footnote_shim" name="[^localized_region_context]_1"> </a><a href="#[^localized_region_context]" class="footnote" title="Footnote 3">3</a> <a class="footnote_shim" name="[^replacement_algorithm]_1"> </a><a href="#[^replacement_algorithm]" class="footnote" title="Footnote 4">4</a> The <a href="https://en.wikipedia.org/wiki/National_University_of_Defense_Technology">National University of Defense Technology</a> is controlled by China's top military body, the Central Military Commission.</p>
<p>The Brainwash dataset also appears in a 2018 research paper affiliated with Megvii (Face++) that used images from Brainwash cafe "to validate the generalization ability of [their] CrowdHuman dataset for head detection."<a class="footnote_shim" name="[^crowdhuman]_1"> </a><a href="#[^crowdhuman]" class="footnote" title="Footnote 5">5</a> Megvii is the parent company of Face++, which has provided surveillance technology to <a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html">monitor Uighur Muslims</a> in Xinjiang and may be <a href="https://www.bloomberg.com/news/articles/2019-05-22/trump-weighs-blacklisting-two-chinese-surveillance-companies">blacklisted</a> in the United States.</p>
@@ -106,10 +106,10 @@
<section>
- <h3>Information Supply chain</h3>
+ <h3>Information Supply Chain</h3>
<p>
- To help understand how Brainwash Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing Brainwash Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ To help understand how the Brainwash Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Brainwash Dataset was collected, verified, and geocoded to show how AI training data has proliferated around the world. Click on the markers to reveal research projects at that location.
</p>
</section>
@@ -124,7 +124,7 @@
<li class="com">Commercial</li>
<li class="gov">Military / Government</li>
</ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+ <div class="source">Citation data is collected using SemanticScholar.org, then dataset usage is verified and geolocated. Citations are used to provide an overview of how and where images were used.</div>
</div>
@@ -145,7 +145,12 @@
<h2>Supplementary Information</h2>
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_grid.jpg' alt=' Nine of 11,917 images from the the Brainwash dataset. Graphic: megapixels.cc based on Brainwash dataset by Russel et. al. License: <a href="https://opendatacommons.org/licenses/pddl/summary/index.html">Open Data Commons Public Domain Dedication</a> (PDDL)'><div class='caption'> Nine of 11,917 images from the the Brainwash dataset. Graphic: megapixels.cc based on Brainwash dataset by Russel et. al. License: <a href="https://opendatacommons.org/licenses/pddl/summary/index.html">Open Data Commons Public Domain Dedication</a> (PDDL)</div></div></section><section>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_grid.jpg' alt=' Nine of the 11,917 images from the Brainwash dataset. Graphic: megapixels.cc based on the Brainwash dataset by Russel et al. License: <a href="https://opendatacommons.org/licenses/pddl/summary/index.html">Open Data Commons Public Domain Dedication</a> (PDDL)'><div class='caption'> Nine of the 11,917 images from the Brainwash dataset. Graphic: megapixels.cc based on the Brainwash dataset by Russel et al. License: <a href="https://opendatacommons.org/licenses/pddl/summary/index.html">Open Data Commons Public Domain Dedication</a> (PDDL)</div></div></section><section><h3>Press Coverage</h3>
+<ul>
+<li>New York Times: <a href="https://www.nytimes.com/2019/07/13/technology/">Facial Recognition Tech Is Growing Stronger, Thanks to Your Face</a></li>
+<li>De Tijd: <a href="https://www.tijd.be/dossier/legrandinconnu/brainwash/10136670.html">Brainwash</a></li>
+</ul>
+</section><section>
<h4>Cite Our Work</h4>
<p>
diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html
index fc141450..e86afe63 100644
--- a/site/public/datasets/duke_mtmc/index.html
+++ b/site/public/datasets/duke_mtmc/index.html
@@ -55,8 +55,8 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Duke MTMC</span> is a dataset of surveillance camera footage of students on Duke University campus</span></div><div class='hero_subdesc'><span class='bgpad'>Duke MTMC contains over 2 million video frames and 2,700 unique identities collected from 8 HD cameras at Duke University campus in March 2014
-</span></div></div></section><section><h2>Duke MTMC</h2>
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/background.jpg)'></section><section><div class='image'><div class='intro-caption caption'>A still frame from the Duke MTMC (Multi-Target-Multi-Camera) CCTV dataset captured on Duke University campus in 2014. The dataset has now been terminated by the author in response to this report.</div></div></section><section><h1>Duke MTMC</h1>
+<p>Update: In response to this report and an <a href="https://www.ft.com/content/cf19b956-60a2-11e9-b285-3acd5d43599e">investigation</a> by the Financial Times, Duke University has terminated the Duke MTMC dataset.</p>
</section><section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2016</div>
@@ -75,7 +75,8 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://vision.cs.duke.edu/DukeMTMC/' target='_blank' rel='nofollow noopener'>duke.edu</a></div>
- </div></div><p>Duke MTMC (Multi-Target, Multi-Camera) is a dataset of surveillance video footage taken on Duke University's campus in 2014 and is used for research and development of video tracking systems, person re-identification, and low-resolution facial recognition. The dataset contains over 14 hours of synchronized surveillance video from 8 cameras at 1080p and 60 FPS, with over 2 million frames of 2,000 students walking to and from classes. The 8 surveillance cameras deployed on campus were specifically setup to capture students "during periods between lectures, when pedestrian traffic is heavy".<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 1">1</a></p>
+ </div></div><p>Duke MTMC (Multi-Target, Multi-Camera) is a dataset of surveillance video footage taken on Duke University's campus in 2014 and is used for research and development of video tracking systems, person re-identification, and low-resolution facial recognition.</p>
+<p>The dataset contains over 14 hours of synchronized surveillance video from 8 cameras at 1080p and 60 FPS, with over 2 million frames of 2,000 students walking to and from classes. The 8 surveillance cameras deployed on campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy".<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 1">1</a></p>
<p>For this analysis of the Duke MTMC dataset, over 100 publicly available research papers that used the dataset were analyzed to find out who's using the dataset and where it's being used. The results show that the Duke MTMC dataset has spread far beyond its origins and intentions in academic research projects at Duke University. Since its publication in 2016, more than twice as many research citations originated in China as in the United States. Among these citations were papers with links to the Chinese military and several of the companies known to provide Chinese authorities with the oppressive surveillance technology used to monitor millions of Uighur Muslims.</p>
<p>In one 2018 <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Attention-Aware_Compositional_Network_CVPR_2018_paper.pdf">paper</a> jointly published by researchers from SenseNets and SenseTime (and funded by SenseTime Group Limited) entitled <a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">Attention-Aware Compositional Network for Person Re-identification</a>, the Duke MTMC dataset was used for "extensive experiments" on improving person re-identification across multiple surveillance cameras with important applications in suspect tracking. Both SenseNets and SenseTime have been linked to providing surveillance technology used to monitor Uighur Muslims in China. <a class="footnote_shim" name="[^xinjiang_nyt]_1"> </a><a href="#[^xinjiang_nyt]" class="footnote" title="Footnote 4">4</a><a class="footnote_shim" name="[^sensetime_qz]_1"> </a><a href="#[^sensetime_qz]" class="footnote" title="Footnote 2">2</a><a class="footnote_shim" name="[^sensenets_uyghurs]_1"> </a><a href="#[^sensenets_uyghurs]" class="footnote" title="Footnote 3">3</a></p>
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_reid_montage.jpg' alt=' A collection of 1,600 out of the approximately 2,000 students and pedestrians in the Duke MTMC dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification, and eventually the QMUL SurvFace face recognition dataset. Open Data Commons Attribution License.'><div class='caption'> A collection of 1,600 out of the approximately 2,000 students and pedestrians in the Duke MTMC dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification, and eventually the QMUL SurvFace face recognition dataset. Open Data Commons Attribution License.</div></div></section><section><p>Despite <a href="https://www.hrw.org/news/2017/11/19/china-police-big-data-systems-violate-privacy-target-dissent">repeated</a> <a href="https://www.hrw.org/news/2018/02/26/china-big-data-fuels-crackdown-minority-region">warnings</a> by Human Rights Watch that the authoritarian surveillance used in China represents a humanitarian crisis, researchers at Duke University continued to provide open access to their dataset for anyone to use for any project. As the surveillance crisis in China grew, so did the number of citations with links to organizations complicit in the crisis. In 2018 alone there were over 90 research projects happening in China that publicly acknowledged using the Duke MTMC dataset. Amongst these were projects from CloudWalk, Hikvision, Megvii (Face++), SenseNets, SenseTime, Beihang University, China's National University of Defense Technology, and the PLA's Army Engineering University.</p>
@@ -268,10 +269,10 @@
<section>
- <h3>Information Supply chain</h3>
+ <h3>Information Supply Chain</h3>
<p>
- To help understand how Duke MTMC Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing Duke Multi-Target, Multi-Camera Tracking Project was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ To help understand how the Duke MTMC Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the Duke Multi-Target, Multi-Camera Tracking Project was collected, verified, and geocoded to show how AI training data has proliferated around the world. Click on the markers to reveal research projects at that location.
</p>
</section>
@@ -286,7 +287,7 @@
<li class="com">Commercial</li>
<li class="gov">Military / Government</li>
</ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+ <div class="source">Citation data is collected using SemanticScholar.org, then dataset usage is verified and geolocated. Citations are used to provide an overview of how and where images were used.</div>
</div>
diff --git a/site/public/datasets/helen/index.html b/site/public/datasets/helen/index.html
index 44ef462e..08791d29 100644
--- a/site/public/datasets/helen/index.html
+++ b/site/public/datasets/helen/index.html
@@ -4,7 +4,7 @@
<title>MegaPixels: HELEN</title>
<meta charset="utf-8" />
<meta name="author" content="Adam Harvey" />
- <meta name="description" content="HELEN Face Dataset" />
+ <meta name="description" content="HELEN is a dataset of face images from Flickr used for training facial component localization algorithms" />
<meta property="og:title" content="MegaPixels: HELEN"/>
<meta property="og:type" content="website"/>
<meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created &quot;in the wild&quot;" />
@@ -55,8 +55,7 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/helen/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>HELEN Face Dataset</span></div><div class='hero_subdesc'><span class='bgpad'>HELEN (under development)
-</span></div></div></section><section><h2>HELEN</h2>
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/helen/assets/background.jpg)'></section><section><div class='image'><div class='intro-caption caption'>Example images from the HELEN dataset</div></div></section><section><h1>HELEN Dataset</h1>
</section><section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2012</div>
@@ -69,8 +68,74 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.ifp.illinois.edu/~vuongle2/helen/' target='_blank' rel='nofollow noopener'>illinois.edu</a></div>
- </div></div><p>[ page under development ]</p>
-</section><section>
+ </div></div><p>HELEN is a dataset of annotated face images used for facial component localization. It includes 2,330 images from Flickr found by searching for "portrait" combined with terms such as "family", "wedding", "boy", "outdoor", and "studio".<a class="footnote_shim" name="[^orig_paper]_1"> </a><a href="#[^orig_paper]" class="footnote" title="Footnote 1">1</a></p>
+<p>The dataset was published in 2012 with the primary motivation listed as facilitating "high quality editing of portraits". However, the paper's introduction also mentions that facial feature localization "is an essential component for face recognition, tracking and expression analysis."<a class="footnote_shim" name="[^orig_paper]_2"> </a><a href="#[^orig_paper]" class="footnote" title="Footnote 1">1</a></p>
+<p>Regardless of the authors' primary motivations, the HELEN dataset has become one of the most widely used datasets for training facial landmark algorithms, which are essential components of most facial recognition processing systems. Facial landmarks are used to isolate facial features such as the eyes, nose, jawline, and mouth in order to align faces to match a templated pose.</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/helen/assets/montage_lms_21_14_14_14_26.png' alt=' An example annotation from the HELEN dataset showing 194 points that were originally annotated by Mechanical Turk workers. Graphic &copy; 2019 MegaPixels.cc based on data from HELEN dataset by Le, Vuong et al.'><div class='caption'> An example annotation from the HELEN dataset showing 194 points that were originally annotated by Mechanical Turk workers. Graphic &copy; 2019 MegaPixels.cc based on data from HELEN dataset by Le, Vuong et al.</div></div></section><section><p>This analysis shows that since its initial publication in 2012, the HELEN dataset has been used in over 200 research projects related to facial recognition with the vast majority of research taking place in China.</p>
+<p>Commercial users include IBM, NVIDIA, NEC, Microsoft Research Asia, Google, Megvii, Microsoft, Intel, Daimler, Tencent, Baidu, Adobe, and Facebook.</p>
+<p>Military and defense users include the National University of Defense Technology (NUDT).</p>
+<p><a href="http://eccv2012.unifi.it/">http://eccv2012.unifi.it/</a></p>
+<p>TODO</p>
+<ul>
+<li>add proof of use in dlib and openface</li>
+<li>add proof of use in commercial use of dlib? ibm dif</li>
+<li>make landmark over blurred images</li>
+<li>add 6x6 grid for landmarks</li>
+<li>highlight key findings</li>
+<li>highlight key commercial usage</li>
+<li>look for most interesting research papers to provide example of how it's used for face recognition</li>
+<li>estimated time: 6 hours</li>
+<li>add data to github repo?</li>
+</ul>
+<table>
+<thead><tr>
+<th>Organization</th>
+<th>Paper</th>
+<th>Link</th>
+<th>Year</th>
+<th>Used Duke MTMC</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>SenseTime, Amazon</td>
+<td><a href="https://arxiv.org/pdf/1805.10483.pdf">Look at Boundary: A Boundary-Aware Face Alignment Algorithm</a></td>
+<td>2018</td>
+<td>year</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>SenseTime</td>
+<td><a href="https://arxiv.org/pdf/1807.11079.pdf">ReenactGAN: Learning to Reenact Faces via Boundary Transfer</a></td>
+<td>2018</td>
+<td>year</td>
+<td>&#x2714;</td>
+</tr>
+</tbody>
+</table>
+<p>The dataset was used for training the OpenFace software: "we used the HELEN and LFPW training subsets for training and the rest for testing". <a href="https://github.com/TadasBaltrusaitis/OpenFace/wiki/Datasets">https://github.com/TadasBaltrusaitis/OpenFace/wiki/Datasets</a></p>
+<p>The popular dlib facial landmark detector was also trained using HELEN.</p>
+<p>In addition to the 200+ verified citations, the HELEN dataset was also used in:</p>
+<ul>
+<li><a href="https://github.com/memoiry/face-alignment">https://github.com/memoiry/face-alignment</a></li>
+<li><a href="http://www.dsp.toronto.edu/projects/face_analysis/">http://www.dsp.toronto.edu/projects/face_analysis/</a></li>
+</ul>
+<p>It has been converted into new datasets, including:</p>
+<ul>
+<li><a href="https://github.com/JPlin/Relabeled-HELEN-Dataset">https://github.com/JPlin/Relabeled-HELEN-Dataset</a></li>
+<li><a href="https://www.kaggle.com/kmader/helen-eye-dataset">https://www.kaggle.com/kmader/helen-eye-dataset</a></li>
+</ul>
+<p>The original site</p>
+<ul>
+<li><a href="http://www.ifp.illinois.edu/~vuongle2/helen/">http://www.ifp.illinois.edu/~vuongle2/helen/</a></li>
+</ul>
+<h3>Example Images</h3>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/helen/assets/feature_outdoor_02.jpg' alt=' An image from the HELEN dataset "wedding" category used for training face recognition 2839127417_1.jpg for outdoor studio'><div class='caption'> An image from the HELEN dataset "wedding" category used for training face recognition 2839127417_1.jpg for outdoor studio</div></div>
+<div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/helen/assets/feature_graduation.jpg' alt=' An image from the HELEN dataset "wedding" category used for training face recognition 2325274893_1 '><div class='caption'> An image from the HELEN dataset "wedding" category used for training face recognition 2325274893_1 </div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/helen/assets/feature_wedding.jpg' alt=' An image from the HELEN dataset "wedding" category used for training face recognition 2325274893_1 '><div class='caption'> An image from the HELEN dataset "wedding" category used for training face recognition 2325274893_1 </div></div>
+<div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/helen/assets/feature_wedding_02.jpg' alt=' An image from the HELEN dataset "wedding" category used for training face recognition 2325274893_1 '><div class='caption'> An image from the HELEN dataset "wedding" category used for training face recognition 2325274893_1 </div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/helen/assets/feature_family.jpg' alt=' Original Flickr image used in HELEN facial analysis and recognition dataset for the keyword "family". 296814969'><div class='caption'> Original Flickr image used in HELEN facial analysis and recognition dataset for the keyword "family". 296814969</div></div>
+<div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/helen/assets/feature_family_05.jpg' alt=' Original Flickr image used in HELEN facial analysis and recognition dataset for the keyword "family". 296814969'><div class='caption'> Original Flickr image used in HELEN facial analysis and recognition dataset for the keyword "family". 296814969</div></div></section><section>
<h3>Who used Helen Dataset?</h3>
<p>
@@ -91,10 +156,10 @@
<section>
- <h3>Information Supply chain</h3>
+ <h3>Information Supply Chain</h3>
<p>
- To help understand how Helen Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing Helen Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ To help understand how the Helen Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the Helen Dataset was collected, verified, and geocoded to show how AI training data has proliferated around the world. Click on the markers to reveal research projects at that location.
</p>
</section>
@@ -109,7 +174,7 @@
<li class="com">Commercial</li>
<li class="gov">Military / Government</li>
</ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+ <div class="source">Citation data is collected using SemanticScholar.org, then dataset usage is verified and geolocated. Citations are used to provide an overview of how and where images were used.</div>
</div>
@@ -130,7 +195,10 @@
<h2>Supplementary Information</h2>
+</section><section><h3>Age and Gender Distribution</h3>
</section><section>
+ <p>Age and gender distributions were estimated by analyzing all faces in the dataset images. This may include additional faces appearing next to an annotated face, or may skip faces that were erroneously included as part of the original dataset. These numbers are provided as an estimate, not a factual representation of the exact gender and age of all faces.</p>
+</section><section><div class='columns columns-2'><section class='applet_container'><div class='applet' data-payload='{"command": "single_pie_chart /datasets/helen/assets/age.csv", "fields": ["Caption: HELEN dataset age distribution", "Top: 10", "OtherLabel: Other"]}'></div></section><section class='applet_container'><div class='applet' data-payload='{"command": "single_pie_chart /datasets/helen/assets/gender.csv", "fields": ["Caption: HELEN dataset gender distribution", "Top: 10", "OtherLabel: Other"]}'></div></section></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/helen/assets/montage_lms_21_15_15_7_26_0.png' alt=' Visualization of the HELEN dataset 194-point facial landmark annotations. Credit: graphic &copy; MegaPixels.cc 2019, data from HELEN dataset by Zhou, Brand, Lin 2013. If you use this image please credit both the graphic and data source.'><div class='caption'> Visualization of the HELEN dataset 194-point facial landmark annotations. Credit: graphic &copy; MegaPixels.cc 2019, data from HELEN dataset by Zhou, Brand, Lin 2013. If you use this image please credit both the graphic and data source.</div></div></section><section>
<h4>Cite Our Work</h4>
<p>
@@ -147,7 +215,17 @@
}</pre>
</p>
-</section>
+</section><section><h4>Cite the Original Authors' Work</h4>
+<p>If you find the HELEN dataset useful or reference it in your work, please cite the authors' original work as:</p>
+<pre>
+@inproceedings{Le2012InteractiveFF,
+ title={Interactive Facial Feature Localization},
+ author={Vuong Le and Jonathan Brandt and Zhe L. Lin and Lubomir D. Bourdev and Thomas S. Huang},
+ booktitle={ECCV},
+ year={2012}
+}
+</pre></section><section><h3>References</h3><section><ul class="footnotes"><li>1 <a name="[^orig_paper]" class="footnote_shim"></a><span class="backlinks"><a href="#[^orig_paper]_1">a</a><a href="#[^orig_paper]_2">b</a></span>Le, Vuong et al. “Interactive Facial Feature Localization.” ECCV (2012).
+</li></ul></section></section>
</div>
<footer>
diff --git a/site/public/datasets/ibm_dif/index.html b/site/public/datasets/ibm_dif/index.html
index be5dbfe4..924194a7 100644
--- a/site/public/datasets/ibm_dif/index.html
+++ b/site/public/datasets/ibm_dif/index.html
@@ -1,11 +1,11 @@
<!doctype html>
<html>
<head>
- <title>MegaPixels: MegaFace</title>
+ <title>MegaPixels: IBM DiF</title>
<meta charset="utf-8" />
<meta name="author" content="Adam Harvey" />
- <meta name="description" content="MegaFace Dataset" />
- <meta property="og:title" content="MegaPixels: MegaFace"/>
+ <meta name="description" content="Diversity in Faces Dataset" />
+ <meta property="og:title" content="MegaPixels: IBM DiF"/>
<meta property="og:type" content="website"/>
<meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/>
<meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/ibm_dif/assets/background.jpg" />
@@ -45,7 +45,7 @@
<a class='slogan' href="/">
<div class='logo'></div>
<div class='site_name'>MegaPixels</div>
- <div class='page_name'>MegaFace Dataset</div>
+ <div class='page_name'>IBM Diversity in Faces</div>
</a>
<div class='links'>
<a href="/datasets/">Datasets</a>
@@ -55,26 +55,19 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/ibm_dif/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>MegaFace Dataset</span></div><div class='hero_subdesc'><span class='bgpad'>MegaFace contains 670K identities and 4.7M images
-</span></div></div></section><section><h2>MegaFace</h2>
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/ibm_dif/assets/background.jpg)'></section><section><h2>IBM Diversity in Faces</h2>
</section><section><div class='right-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2016</div>
- </div><div class='meta'>
<div class='gray'>Images</div>
- <div>4,753,520 </div>
- </div><div class='meta'>
- <div class='gray'>Identities</div>
- <div>672,057 </div>
+ <div>1,070,000 </div>
</div><div class='meta'>
<div class='gray'>Purpose</div>
- <div>face recognition</div>
+ <div>Face recognition and cranio-facial analysis</div>
</div><div class='meta'>
<div class='gray'>Website</div>
- <div><a href='http://megaface.cs.washington.edu/' target='_blank' rel='nofollow noopener'>washington.edu</a></div>
+ <div><a href='https://www.research.ibm.com/artificial-intelligence/trusted-ai/diversity-in-faces/' target='_blank' rel='nofollow noopener'>ibm.com</a></div>
</div></div><p>[ page under development ]</p>
</section><section>
- <h3>Who used MegaFace Dataset?</h3>
+ <h3>Who used IBM Diversity in Faces?</h3>
<p>
This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
@@ -94,10 +87,10 @@
<section>
- <h3>Information Supply chain</h3>
+ <h3>Information Supply Chain</h3>
<p>
- To help understand how MegaFace Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing MegaFace Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ To help understand how IBM Diversity in Faces has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the Diversity in Faces dataset was collected, verified, and geocoded to show how AI training data has proliferated around the world. Click on the markers to reveal research projects at that location.
</p>
</section>
@@ -112,7 +105,7 @@
<li class="com">Commercial</li>
<li class="gov">Military / Government</li>
</ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+ <div class="source">Citation data is collected using SemanticScholar.org, then dataset usage is verified and geolocated. Citations are used to provide an overview of how and where images were used.</div>
</div>
diff --git a/site/public/datasets/ijb_c/index.html b/site/public/datasets/ijb_c/index.html
index abe7d5ed..05826c3f 100644
--- a/site/public/datasets/ijb_c/index.html
+++ b/site/public/datasets/ijb_c/index.html
@@ -55,8 +55,7 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/ijb_c/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>IARPA Janus Benchmark C is a dataset of web images used</span></div><div class='hero_subdesc'><span class='bgpad'>The IJB-C dataset contains 21,294 images and 11,779 videos of 3,531 identities
-</span></div></div></section><section><h2>IARPA Janus Benchmark C (IJB-C)</h2>
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/ijb_c/assets/background.jpg)'></section><section><h2>IARPA Janus Benchmark C (IJB-C)</h2>
</section><section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2017</div>
@@ -147,10 +146,10 @@
<section>
- <h3>Information Supply chain</h3>
+ <h3>Information Supply Chain</h3>
<p>
- To help understand how IJB-C has been used around the world by commercial, military, and academic organizations; existing publicly available research citing IARPA Janus Benchmark C was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ To help understand how IJB-C has been used around the world by commercial, military, and academic organizations, existing publicly available research citing IARPA Janus Benchmark C was collected, verified, and geocoded to show how AI training data has proliferated around the world. Click on the markers to reveal research projects at that location.
</p>
</section>
@@ -165,7 +164,7 @@
<li class="com">Commercial</li>
<li class="gov">Military / Government</li>
</ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+ <div class="source">Citation data is collected using SemanticScholar.org, then dataset usage is verified and geolocated. Citations are used to provide an overview of how and where images were used.</div>
</div>
diff --git a/site/public/datasets/index.html b/site/public/datasets/index.html
index d38feb2e..a354a2d5 100644
--- a/site/public/datasets/index.html
+++ b/site/public/datasets/index.html
@@ -53,13 +53,13 @@
<a href="/research">Research</a>
</div>
</header>
- <div class="content content-">
+ <div class="content content-dataset-list">
<div class='dataset-heading'>
<section><h1>Dataset Analyses</h1>
<p>Explore face and person recognition datasets contributing to the growing crisis of biometric surveillance technologies. This first group of 5 datasets focuses on image usage connected to foreign surveillance and defense organizations.</p>
-<p>In response to the analyses below, the <a href="https://purl.stanford.edu/sx925dc9385">Brainwash</a>, <a href="http://vision.cs.duke.edu/DukeMTMC/">Duke MTMC</a>, and <a href="http://msceleb.org/">MS Celeb</a> datasets have been taken down by their authors. The <a href="https://vast.uccs.edu/Opensetface/">UCCS</a> dataset was temporarily deactivated due to metadata exposure. Read more <a href="/about/news">news</a>. A more complete list of datasets and research will be published in September 2019. These 5 are only a preview.</p>
+<p>In response to the analyses below, the <a href="/datasets/brainwash">Brainwash</a>, <a href="/datasets/duke_mtmc">Duke MTMC</a>, and <a href="/datasets/msceleb/">MS Celeb</a> datasets have been taken down by their authors. The <a href="/datasets/uccs/">UCCS</a> dataset was temporarily deactivated due to metadata exposure. Read more <a href="/about/news">news</a>. A more complete list of datasets and research will be published in September 2019. These 5 are only a preview.</p>
</section>
</div>
@@ -97,6 +97,34 @@
</div>
</a>
+ <a href="/datasets/helen/">
+ <div class="dataset-image" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/helen/assets/index.jpg)"></div>
+ <div class="dataset">
+ <span class='title'>HELEN</span>
+ <div class='fields'>
+ <div class='year visible'><span>2012</span></div>
+ <div class='purpose'><span>facial feature localization algorithm</span></div>
+
+ <div class='images'><span>2,330 images</span></div>
+
+ </div>
+ </div>
+ </a>
+
+ <a href="/datasets/megaface/">
+ <div class="dataset-image" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/megaface/assets/index.jpg)"></div>
+ <div class="dataset">
+ <span class='title'>MegaFace</span>
+ <div class='fields'>
+ <div class='year visible'><span>2016</span></div>
+ <div class='purpose'><span>face recognition</span></div>
+
+ <div class='images'><span>4,753,520 images</span></div>
+
+ </div>
+ </div>
+ </a>
+
<a href="/datasets/msceleb/">
<div class="dataset-image" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/index.jpg)"></div>
<div class="dataset">
diff --git a/site/public/datasets/lfpw/index.html b/site/public/datasets/lfpw/index.html
index f2ddc636..cc2a2c3f 100644
--- a/site/public/datasets/lfpw/index.html
+++ b/site/public/datasets/lfpw/index.html
@@ -55,8 +55,7 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/lfpw/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Labeled Face Parts in the Wild Dataset</span></div><div class='hero_subdesc'><span class='bgpad'>Labeled Face Parts in the Wild ...
-</span></div></div></section><section><h2>Labeled Face Parts in the Wild</h2>
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/lfpw/assets/background.jpg)'></section><section><h2>Labeled Face Parts in the Wild</h2>
</section><section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2011</div>
@@ -69,7 +68,13 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://neerajkumar.org/databases/lfpw/' target='_blank' rel='nofollow noopener'>neerajkumar.org</a></div>
- </div></div><p>[ page under development ]</p>
+ </div></div><p>RESEARCH below this line</p>
+<blockquote><p>Release 1 of LFPW consists of 1,432 faces from images downloaded from the web using simple text queries on sites such as google.com, flickr.com, and yahoo.com. Each image was labeled by three MTurk workers, and 29 fiducial points, shown below, are included in dataset. LFPW was originally described in the following publication:</p>
+<p>Due to copyright issues, we cannot distribute image files in any format to anyone. Instead, we have made available a list of image URLs where you can download the images yourself. We realize that this makes it impossible to exactly compare numbers, as image links will slowly disappear over time, but we have no other option. This seems to be the way other large web-based databases seem to be evolving.</p>
+</blockquote>
+<p><a href="https://neerajkumar.org/databases/lfpw/">https://neerajkumar.org/databases/lfpw/</a></p>
+<blockquote><p>This research was performed at Kriegman-Belhumeur Vision Technologies and was funded by the CIA through the Office of the Chief Scientist. <a href="https://www.cs.cmu.edu/~peiyunh/topdown/">https://www.cs.cmu.edu/~peiyunh/topdown/</a> (nk_cvpr2011_faceparts.pdf)</p>
+</blockquote>
</section><section>
<h3>Who used LFPW?</h3>
@@ -91,10 +96,10 @@
<section>
- <h3>Information Supply chain</h3>
+ <h3>Information Supply Chain</h3>
<p>
- To help understand how LFPW has been used around the world by commercial, military, and academic organizations; existing publicly available research citing Labeled Face Parts in the Wild was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ To help understand how LFPW has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Labeled Face Parts in the Wild was collected, verified, and geocoded to show how AI training data has proliferated around the world. Click on the markers to reveal research projects at that location.
</p>
</section>
@@ -109,7 +114,7 @@
<li class="com">Commercial</li>
<li class="gov">Military / Government</li>
</ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+ <div class="source">Citation data is collected using SemanticScholar.org, then dataset usage is verified and geolocated. Citations are used to provide an overview of how and where images were used.</div>
</div>
diff --git a/site/public/datasets/megaface/index.html b/site/public/datasets/megaface/index.html
index 712af28a..d213293a 100644
--- a/site/public/datasets/megaface/index.html
+++ b/site/public/datasets/megaface/index.html
@@ -55,8 +55,7 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/megaface/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>MegaFace Dataset</span></div><div class='hero_subdesc'><span class='bgpad'>MegaFace contains 670K identities and 4.7M images
-</span></div></div></section><section><h2>MegaFace</h2>
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/megaface/assets/background.jpg)'></section><section><div class='image'><div class='intro-caption caption'>Images from the MegaFace face recognition training and benchmarking dataset</div></div></section><section><h1>MegaFace</h1>
</section><section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2016</div>
@@ -68,11 +67,32 @@
<div>672,057 </div>
</div><div class='meta'>
<div class='gray'>Purpose</div>
- <div>face recognition</div>
+ <div>Face recognition</div>
+ </div><div class='meta'>
+ <div class='gray'>Created by</div>
+ <div>Ira Kemelmacher-Shlizerman</div>
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://megaface.cs.washington.edu/' target='_blank' rel='nofollow noopener'>washington.edu</a></div>
- </div></div><p>[ page under development ]</p>
+ </div></div><p>MegaFace is a dataset of 4,700,000 face images of 672,000 individuals used for developing face recognition technologies. All images were downloaded from Flickr.</p>
+<h4>How was it made</h4>
+<p>MegaFace was developed by the University of Washington for the purpose of training, validating, and benchmarking face recognition algorithms.</p>
+<p>The images are from Flickr, but are they all from YFCC100M?</p>
+<h4>Who used it</h4>
+<p>MegaFace was used for research projects associated with SenseTime, Google, Mitsubishi, Vision Semantics Ltd, and Microsoft.</p>
+<h4>Subsets</h4>
+<p>MegaFace was also used to create the MegaFace Asian, MegaAge, and glasses subsets.</p>
+<h4>A sample of the research projects</h4>
+<p>Used for face recognition</p>
+<p>screenshots of papers</p>
+<h4>Visuals</h4>
+<ul>
+<li>facial landmarks</li>
+<li>bounding boxes</li>
+<li>animation of all the titles of the paper</li>
+</ul>
</section><section>
<h3>Who used MegaFace Dataset?</h3>
@@ -94,10 +114,10 @@
<section>
- <h3>Information Supply chain</h3>
+ <h3>Information Supply Chain</h3>
<p>
- To help understand how MegaFace Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing MegaFace Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ To help understand how the MegaFace Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the MegaFace Dataset was collected, verified, and geocoded to show how AI training data has proliferated around the world. Click on the markers to reveal research projects at that location.
</p>
</section>
@@ -112,7 +132,7 @@
<li class="com">Commercial</li>
<li class="gov">Military / Government</li>
</ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+ <div class="source">Citation data is collected using SemanticScholar.org, then dataset usage is verified and geolocated. Citations are used to provide an overview of how and where images were used.</div>
</div>
@@ -133,6 +153,9 @@
<h2>Supplementary Information</h2>
+</section><section><h3>Age and Gender Distribution</h3>
+</section><section><div class='columns columns-2'><section class='applet_container'><div class='applet' data-payload='{"command": "single_pie_chart /datasets/megaface/assets/age.csv", "fields": ["Caption: MegaFace dataset age distribution", "Top: 10", "OtherLabel: Other"]}'></div></section><section class='applet_container'><div class='applet' data-payload='{"command": "single_pie_chart /datasets/megaface/assets/gender.csv", "fields": ["Caption: MegaFace dataset gender distribution", "Top: 10", "OtherLabel: Other"]}'></div></section></div></section><section>
+ <p>Age and gender distributions were estimated by analyzing all faces in the dataset images. This may include additional faces appearing next to an annotated face, or it may skip false detections that were erroneously included in the original dataset. These numbers are provided as estimates, not as a factual representation of the exact gender and age of all faces.</p>
</section><section>
<h4>Cite Our Work</h4>
diff --git a/site/public/datasets/msceleb/index.html b/site/public/datasets/msceleb/index.html
index 42a44571..a664e99f 100644
--- a/site/public/datasets/msceleb/index.html
+++ b/site/public/datasets/msceleb/index.html
@@ -55,8 +55,8 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>MS Celeb is a dataset of 10 million face images harvested from the Internet</span></div><div class='hero_subdesc'><span class='bgpad'>The MS Celeb dataset includes 10 million images of 100,000 people and an additional target list of 1,000,000 individuals
-</span></div></div></section><section><h2>Microsoft Celeb Dataset (MS Celeb)</h2>
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg)'></section><section><div class='image'><div class='intro-caption caption'>Example images from the MS-Celeb-1M dataset</div></div></section><section><h1>Microsoft Celeb Dataset (MS Celeb)</h1>
+<p><em>Update: In response to this report and an <a href="https://www.ft.com/content/cf19b956-60a2-11e9-b285-3acd5d43599e">investigation</a> by the Financial Times, Microsoft has terminated their MS-Celeb website <a href="https://msceleb.org">https://msceleb.org</a>.</em></p>
</section><section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2016</div>
@@ -78,7 +78,8 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.msceleb.org/' target='_blank' rel='nofollow noopener'>msceleb.org</a></div>
- </div></div><p>Microsoft Celeb (MS-Celeb-1M) is a dataset of 10 million face images harvested from the Internet for the purpose of developing face recognition technologies. According to Microsoft Research, who created and published the <a href="https://www.microsoft.com/en-us/research/publication/ms-celeb-1m-dataset-benchmark-large-scale-face-recognition-2/">dataset</a> in 2016, MS Celeb is the largest publicly available face recognition dataset in the world, containing over 10 million images of nearly 100,000 individuals. Microsoft's goal in building this dataset was to distribute an initial training dataset of 100,000 individuals' biometric data to accelerate research into recognizing a larger target list of one million people "using all the possibly collected face images of this individual on the web as training data".<a class="footnote_shim" name="[^msceleb_orig]_1"> </a><a href="#[^msceleb_orig]" class="footnote" title="Footnote 1">1</a></p>
+ </div><div class='meta'><div class='gray'>Press coverage</div><div><a href="https://www.ft.com/content/cf19b956-60a2-11e9-b285-3acd5d43599e">Financial Times</a>, <a href="https://www.nytimes.com/2019/07/13/technology/databases-faces-facial-recognition-technology.html">New York Times</a>, <a href="https://www.bbc.com/news/technology-48555149">BBC</a>, <a href="https://www.spiegel.de/netzwelt/web/microsoft-gesichtserkennung-datenbank-mit-zehn-millionen-fotos-geloescht-a-1271221.html">Spiegel</a>, <a href="https://www.lesechos.fr/tech-medias/intelligence-artificielle/le-mariage-explosif-de-nos-donnees-et-de-lia-1031813">Les Echos</a>, <a href="https://www.lastampa.it/2019/06/22/tecnologia/microsoft-ha-cancellato-il-suo-database-per-il-riconoscimento-facciale-PWwLGmpO1fKQdykMZVBd9H/pagina.html">La Stampa</a></div></div></div><p>Microsoft Celeb (MS-Celeb-1M) is a dataset of 10 million face images harvested from the Internet for the purpose of developing face recognition technologies.</p>
+<p>According to Microsoft Research, who created and published the <a href="https://www.microsoft.com/en-us/research/publication/ms-celeb-1m-dataset-benchmark-large-scale-face-recognition-2/">dataset</a> in 2016, MS Celeb is the largest publicly available face recognition dataset in the world, containing over 10 million images of nearly 100,000 individuals. Microsoft's goal in building this dataset was to distribute an initial training dataset of 100,000 individuals' biometric data to accelerate research into recognizing a larger target list of one million people "using all the possibly collected face images of this individual on the web as training data".<a class="footnote_shim" name="[^msceleb_orig]_1"> </a><a href="#[^msceleb_orig]" class="footnote" title="Footnote 1">1</a></p>
<p>While the majority of people in this dataset are American and British actors, the exploitative use of the term "celebrity" extends far beyond Hollywood. Many of the names in the MS Celeb face recognition dataset are merely people who must maintain an online presence for their professional lives: journalists, artists, musicians, activists, policy makers, writers, and academics. Many people in the target list are even vocal critics of the very technology Microsoft is using their name and biometric information to build. It includes digital rights activists like Jillian York; artists critical of surveillance including Trevor Paglen, Jill Magid, and Aram Bartholl; Intercept founders Laura Poitras, Jeremy Scahill, and Glenn Greenwald; Data and Society founder danah boyd; Shoshana Zuboff, author of <em>Surveillance Capitalism</em>; and even Julie Brill, the former FTC commissioner responsible for protecting consumer privacy.</p>
<h3>Microsoft's 1 Million Target List</h3>
<p>Microsoft Research distributed two main digital assets: a dataset of approximately 10,000,000 images of 100,000 individuals and a target list of exactly 1 million names. The 900,000 names without images are the target list, which is used to gather more images for each subject.</p>
@@ -219,6 +220,8 @@
<p>In 2017 Microsoft Research organized a face recognition competition at the International Conference on Computer Vision (ICCV), one of the top 2 computer vision conferences worldwide, where industry and academia used the MS Celeb dataset to compete for the highest performance scores. The 2017 winner was Beijing-based OrionStar Technology Co., Ltd.. In their <a href="https://www.prnewswire.com/news-releases/orionstar-wins-challenge-to-recognize-one-million-celebrity-faces-with-artificial-intelligence-300494265.html">press release</a>, OrionStar boasted a 13% increase on the difficult set over last year's winner. The prior year's competitors included Beijing-based Faceall Technology Co., Ltd., a company providing face recognition for "smart city" applications.</p>
<p>Considering the multiple citations from commercial organizations (Canon, Hitachi, IBM, Megvii/Face++, Microsoft, Microsoft Asia, SenseTime, OrionStar, Faceall), military use (National University of Defense Technology in China), the proliferation of subset data (Racial Faces in the Wild), and the real-time visible proliferation via Academic Torrents it's fairly clear that Microsoft has lost control of their MS Celeb dataset and the biometric data of nearly 100,000 individuals.</p>
<p>To provide insight into where these 10 million faces images have traveled, over 100 research papers have been verified and geolocated to show who used the dataset and where they used it.</p>
+<h2>GDPR and MS-Celeb</h2>
+<p>[ in progress ]</p>
</section><section>
<h3>Who used Microsoft Celeb?</h3>
@@ -240,10 +243,10 @@
<section>
- <h3>Information Supply chain</h3>
+ <h3>Information Supply Chain</h3>
<p>
- To help understand how Microsoft Celeb has been used around the world by commercial, military, and academic organizations; existing publicly available research citing Microsoft Celebrity Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ To help understand how Microsoft Celeb has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the Microsoft Celeb dataset was collected, verified, and geocoded to show how AI training data has proliferated around the world. Click on the markers to reveal research projects at that location.
</p>
</section>
@@ -258,7 +261,7 @@
<li class="com">Commercial</li>
<li class="gov">Military / Government</li>
</ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+ <div class="source">Citation data is collected using SemanticScholar.org, then dataset usage is verified and geolocated. Citations are used to provide an overview of how and where images were used.</div>
</div>
@@ -279,11 +282,19 @@
<h2>Supplementary Information</h2>
-</section><section><h5>FAQs and Fact Check</h5>
+</section><section><h3>Age and Gender Distribution</h3>
+</section><section><div class='columns columns-2'><section class='applet_container'><div class='applet' data-payload='{"command": "single_pie_chart /datasets/msceleb/assets/age.csv", "fields": ["Caption: MS-Celeb dataset age distribution", "Top: 10", "OtherLabel: Other"]}'></div></section><section class='applet_container'><div class='applet' data-payload='{"command": "single_pie_chart /datasets/msceleb/assets/gender.csv", "fields": ["Caption: MS-Celeb dataset gender distribution", "Top: 10", "OtherLabel: Other"]}'></div></section></div></section><section><h5>FAQs and Fact Check</h5>
<ul>
-<li><strong>The MS Celeb images were not derived from Creative Commons sources</strong>. They were obtained by "retriev[ing] approximately 100 images per celebrity from popular search engines"<a class="footnote_shim" name="[^msceleb_orig]_2"> </a><a href="#[^msceleb_orig]" class="footnote" title="Footnote 1">1</a>. The dataset actually includes many copyrighted images. Microsoft doesn't provide any image URLs, but manually reviewing a small portion of images from the dataset shows many images with watermarked "Copyright" text over the image. TinEye could be used to more accurately determine the image origins in aggregate</li>
-<li><strong>Microsoft did not distribute images of all one million people.</strong> They distributed images for about 100,000 and then encouraged other researchers to download the remaining 900,000 people "by using all the possibly collected face images of this individual on the web as training data."<a class="footnote_shim" name="[^msceleb_orig]_3"> </a><a href="#[^msceleb_orig]" class="footnote" title="Footnote 1">1</a></li>
-<li><strong>Microsoft had not deleted or stopped distribution of their MS Celeb at the time of most press reports on June 4.</strong> Until at least June 6, 2019 the Microsoft Research data portal provided the MS Celeb dataset for download: <a href="http://web.archive.org/web/20190606150005/https://msropendata.com/datasets/98fdfc70-85ee-5288-a69f-d859bbe9c737">http://web.archive.org/web/20190606150005/https://msropendata.com/datasets/98fdfc70-85ee-5288-a69f-d859bbe9c737</a></li>
+<li><strong>Despite several erroneous reports claiming the MS-Celeb images were derived from Creative Commons licensed media, the MS Celeb images were obtained from web search engines</strong>. The authors state that they were obtained by "retriev[ing] approximately 100 images per celebrity from popular search engines"<a class="footnote_shim" name="[^msceleb_orig]_2"> </a><a href="#[^msceleb_orig]" class="footnote" title="Footnote 1">1</a>. Many, if not the vast majority, are copyrighted images. Microsoft doesn't provide image URLs, but manually reviewing a small portion of the dataset shows images with watermarked "Copyright" text and sources including stock photo agencies such as Getty. TinEye could be used to more accurately determine the image origins in aggregate.</li>
+<li><strong>Most reports incorrectly stated that Microsoft distributed images of all one million people. As this analysis mentions several times, Microsoft distributed images for 100,000 people and a separate target list of 900,000 more names.</strong> Other researchers were then expected and encouraged to download the remaining 900,000 people "by using all the possibly collected face images of this individual on the web as training data."<a class="footnote_shim" name="[^msceleb_orig]_3"> </a><a href="#[^msceleb_orig]" class="footnote" title="Footnote 1">1</a></li>
+<li><strong>Microsoft claimed to have deleted or stopped distribution of their MS Celeb dataset in April 2019 after the Financial Times investigation. This is false.</strong> Until at least June 6, 2019, the Microsoft Research data portal freely provided the full MS Celeb dataset for download: <a href="http://web.archive.org/web/20190606150005/https://msropendata.com/datasets/98fdfc70-85ee-5288-a69f-d859bbe9c737">http://web.archive.org/web/20190606150005/https://msropendata.com/datasets/98fdfc70-85ee-5288-a69f-d859bbe9c737</a></li>
+</ul>
+<h3>Press Coverage</h3>
+<ul>
+<li>Financial Times (original story): <a href="https://www.ft.com/content/cf19b956-60a2-11e9-b285-3acd5d43599e">Who’s using your face? The ugly truth about facial recognition</a> </li>
+<li>New York Times (front page story): <a href="https://www.nytimes.com/2019/07/13/technology/databases-faces-facial-recognition-technology.html">Facial Recognition Tech Is Growing Stronger, Thanks to Your Face</a></li>
+<li>BBC: <a href="https://www.bbc.com/news/technology-48555149">Microsoft deletes massive face recognition database</a></li>
+<li>Spiegel: <a href="https://www.spiegel.de/netzwelt/web/microsoft-gesichtserkennung-datenbank-mit-zehn-millionen-fotos-geloescht-a-1271221.html">Microsoft löscht Datenbank mit zehn Millionen Fotos</a></li>
</ul>
</section><section><h3>References</h3><section><ul class="footnotes"><li>1 <a name="[^msceleb_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^msceleb_orig]_1">a</a><a href="#[^msceleb_orig]_2">b</a><a href="#[^msceleb_orig]_3">c</a></span>MS-Celeb-1M: A Dataset and Benchmark for Large-Scale Face Recognition. Accessed April 18, 2019. <a href="http://web.archive.org/web/20190418151913/http://msceleb.org/">http://web.archive.org/web/20190418151913/http://msceleb.org/</a>
</li><li>2 <a name="[^madhu_ft]" class="footnote_shim"></a><span class="backlinks"><a href="#[^madhu_ft]_1">a</a></span>Murgia, Madhumita. Microsoft worked with Chinese military university on artificial intelligence. Financial Times. April 10, 2019.
diff --git a/site/public/datasets/oxford_town_centre/index.html b/site/public/datasets/oxford_town_centre/index.html
index 11fb436f..3a7eabf0 100644
--- a/site/public/datasets/oxford_town_centre/index.html
+++ b/site/public/datasets/oxford_town_centre/index.html
@@ -55,8 +55,7 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Oxford Town Centre is a dataset of surveillance camera footage from Cornmarket St Oxford, England</span></div><div class='hero_subdesc'><span class='bgpad'>The Oxford Town Centre dataset includes approximately 2,200 identities and is used for research and development of face recognition systems
-</span></div></div></section><section><h2>Oxford Town Centre</h2>
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/background.jpg)'></section><section><div class='image'><div class='intro-caption caption'>A still frame from the Oxford Town Centre CCTV video dataset</div></div></section><section><h1>Oxford Town Centre</h1>
</section><section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2009</div>
@@ -78,7 +77,8 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html' target='_blank' rel='nofollow noopener'>ox.ac.uk</a></div>
- </div></div><p>The Oxford Town Centre dataset is a CCTV video of pedestrians in a busy downtown area in Oxford used for research and development of activity and face recognition systems.<a class="footnote_shim" name="[^ben_benfold_orig]_1"> </a><a href="#[^ben_benfold_orig]" class="footnote" title="Footnote 1">1</a> The CCTV video was obtained from a surveillance camera at the corner of Cornmarket and Market St. in Oxford, England and includes approximately 2,200 people. Since its publication in 2009<a class="footnote_shim" name="[^guiding_surveillance]_1"> </a><a href="#[^guiding_surveillance]" class="footnote" title="Footnote 2">2</a> the <a href="http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html">Oxford Town Centre dataset</a> has been used in over 80 verified research projects including commercial research by Amazon, Disney, OSRAM, and Huawei; and academic research in China, Israel, Russia, Singapore, the US, and Germany among dozens more.</p>
+ </div></div><p>The Oxford Town Centre dataset is a CCTV video of pedestrians in a busy downtown area in Oxford used for research and development of activity and face recognition systems.<a class="footnote_shim" name="[^ben_benfold_orig]_1"> </a><a href="#[^ben_benfold_orig]" class="footnote" title="Footnote 1">1</a></p>
+<p>The CCTV video was obtained from a surveillance camera at the corner of Cornmarket and Market St. in Oxford, England and includes approximately 2,200 people. Since its publication in 2009<a class="footnote_shim" name="[^guiding_surveillance]_1"> </a><a href="#[^guiding_surveillance]" class="footnote" title="Footnote 2">2</a> the <a href="http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html">Oxford Town Centre dataset</a> has been used in over 80 verified research projects including commercial research by Amazon, Disney, OSRAM, and Huawei; and academic research in China, Israel, Russia, Singapore, the US, and Germany among dozens more.</p>
<p>The Oxford Town Centre dataset is unique in that it uses footage from a public surveillance camera that would otherwise be designated for public safety. The video shows that the pedestrians act normally and unrehearsed indicating they neither knew of nor consented to participation in the research project.</p>
</section><section>
<h3>Who used TownCentre?</h3>
@@ -101,10 +101,10 @@
<section>
- <h3>Information Supply chain</h3>
+ <h3>Information Supply Chain</h3>
<p>
- To help understand how TownCentre has been used around the world by commercial, military, and academic organizations; existing publicly available research citing Oxford Town Centre was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ To help understand how TownCentre has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Oxford Town Centre was collected, verified, and geocoded to show how AI training data has proliferated around the world. Click on the markers to reveal research projects at that location.
</p>
</section>
@@ -119,7 +119,7 @@
<li class="com">Commercial</li>
<li class="gov">Military / Government</li>
</ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+ <div class="source">Citation data is collected using SemanticScholar.org, then dataset usage is verified and geolocated. Citations are used to provide an overview of how and where images were used.</div>
</div>
diff --git a/site/public/datasets/pipa/index.html b/site/public/datasets/pipa/index.html
index 95b288fb..9c0a974a 100644
--- a/site/public/datasets/pipa/index.html
+++ b/site/public/datasets/pipa/index.html
@@ -55,8 +55,7 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/pipa/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>PIPA ...</span></div><div class='hero_subdesc'><span class='bgpad'>PIPA ...
-</span></div></div></section><section><h2>MegaFace</h2>
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/pipa/assets/background.jpg)'></section><section><h2>PIPA: People in Photo Albums</h2>
</section><section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2015</div>
@@ -97,10 +96,10 @@
<section>
- <h3>Information Supply chain</h3>
+ <h3>Information Supply Chain</h3>
<p>
- To help understand how PIPA Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing People in Photo Albums Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ To help understand how the PIPA Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the People in Photo Albums Dataset was collected, verified, and geocoded to show how AI training data has proliferated around the world. Click on the markers to reveal research projects at that location.
</p>
</section>
@@ -115,7 +114,7 @@
<li class="com">Commercial</li>
<li class="gov">Military / Government</li>
</ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+ <div class="source">Citation data is collected using SemanticScholar.org, then dataset usage is verified and geolocated. Citations are used to provide an overview of how and where images were used.</div>
</div>
diff --git a/site/public/datasets/uccs/index.html b/site/public/datasets/uccs/index.html
index 2dcf88a1..8cc11c90 100644
--- a/site/public/datasets/uccs/index.html
+++ b/site/public/datasets/uccs/index.html
@@ -55,8 +55,8 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">UnConstrained College Students</span> is a dataset of long-range surveillance photos of students on University of Colorado in Colorado Springs campus</span></div><div class='hero_subdesc'><span class='bgpad'>The UnConstrained College Students dataset includes 16,149 images of 1,732 students, faculty, and pedestrians and is used for developing face recognition and face detection algorithms
-</span></div></div></section><section><h2>UnConstrained College Students</h2>
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/background.jpg)'></section><section><div class='image'><div class='intro-caption caption'>One of 16,149 images from the UnConstrained College Students face recognition dataset captured at the University of Colorado, Colorado Springs</div></div></section><section><h1>UnConstrained College Students</h1>
+<p><em>Update: In response to this report and its earlier publication of metadata from UCCS dataset photos, UCCS has temporarily suspended distribution of the dataset but plans to release a new version.</em></p>
</section><section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Images</div>
<div>16,149 </div>
@@ -75,7 +75,8 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://vast.uccs.edu/Opensetface/' target='_blank' rel='nofollow noopener'>uccs.edu</a></div>
- </div></div><p>UnConstrained College Students (UCCS) is a dataset of long-range surveillance photos captured at University of Colorado Colorado Springs developed primarily for research and development of "face detection and recognition research towards surveillance applications"<a class="footnote_shim" name="[^uccs_vast]_1"> </a><a href="#[^uccs_vast]" class="footnote" title="Footnote 1">1</a>. According to the authors of <a href="https://www.semanticscholar.org/paper/Unconstrained-Face-Detection-and-Open-Set-Face-G%C3%BCnther-Hu/d4f1eb008eb80595bcfdac368e23ae9754e1e745">two</a> <a href="https://www.semanticscholar.org/paper/Large-scale-unconstrained-open-set-face-database-Sapkota-Boult/07fcbae86f7a3ad3ea1cf95178459ee9eaf77cb1">papers</a> associated with the dataset, over 1,700 students and pedestrians were "photographed using a long-range high-resolution surveillance camera without their knowledge".<a class="footnote_shim" name="[^funding_uccs]_1"> </a><a href="#[^funding_uccs]" class="footnote" title="Footnote 3">3</a> This analysis examines the <a href="http://vast.uccs.edu/Opensetface/">UCCS dataset</a> contents of the <a href="">dataset</a>, its funding sources, timestamp data, and information from publicly available research project citations.</p>
+ </div></div><p>UnConstrained College Students (UCCS) is a dataset of long-range surveillance photos captured at University of Colorado Colorado Springs developed primarily for research and development of "face detection and recognition research towards surveillance applications"<a class="footnote_shim" name="[^uccs_vast]_1"> </a><a href="#[^uccs_vast]" class="footnote" title="Footnote 1">1</a>.</p>
+<p>According to the authors of <a href="https://www.semanticscholar.org/paper/Unconstrained-Face-Detection-and-Open-Set-Face-G%C3%BCnther-Hu/d4f1eb008eb80595bcfdac368e23ae9754e1e745">two</a> <a href="https://www.semanticscholar.org/paper/Large-scale-unconstrained-open-set-face-database-Sapkota-Boult/07fcbae86f7a3ad3ea1cf95178459ee9eaf77cb1">papers</a> associated with the dataset, over 1,700 students and pedestrians were "photographed using a long-range high-resolution surveillance camera without their knowledge".<a class="footnote_shim" name="[^funding_uccs]_1"> </a><a href="#[^funding_uccs]" class="footnote" title="Footnote 3">3</a> This analysis examines the contents of the <a href="http://vast.uccs.edu/Opensetface/">UCCS dataset</a>, its funding sources, timestamp data, and information from publicly available research project citations.</p>
<p>The UCCS dataset includes over 1,700 unique identities, most of which are students walking to and from class. In 2018, it was the "largest surveillance [face recognition] benchmark in the public domain."<a class="footnote_shim" name="[^surv_face_qmul]_1"> </a><a href="#[^surv_face_qmul]" class="footnote" title="Footnote 4">4</a> The photos were taken during the spring semesters of 2012 &ndash; 2013 on the West Lawn of the University of Colorado Colorado Springs campus. The photographs were timed to capture students during breaks between their scheduled classes in the morning and afternoon during Monday through Thursday. "For example, a student taking Monday-Wednesday classes at 12:30 PM will show up in the camera on almost every Monday and Wednesday."<a class="footnote_shim" name="[^sapkota_boult]_1"> </a><a href="#[^sapkota_boult]" class="footnote" title="Footnote 2">2</a>.</p>
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_map_aerial.jpg' alt=' The location at University of Colorado Colorado Springs where students were surreptitiously photographed with a long-range surveillance camera for use in a defense and intelligence agency funded research project on face recognition. Image: Google Maps'><div class='caption'> The location at University of Colorado Colorado Springs where students were surreptitiously photographed with a long-range surveillance camera for use in a defense and intelligence agency funded research project on face recognition. Image: Google Maps</div></div></section><section><p>The long-range surveillance images in the UnConstrained College Students dataset were taken using a Canon 7D 18-megapixel digital camera fitted with a Sigma 800mm F5.6 EX APO DG HSM telephoto lens and pointed out an office window across the university's West Lawn. The students were photographed from a distance of approximately 150 meters through an office window. "The camera [was] programmed to start capturing images at specific time intervals between classes to maximize the number of faces being captured."<a class="footnote_shim" name="[^sapkota_boult]_2"> </a><a href="#[^sapkota_boult]" class="footnote" title="Footnote 2">2</a>
Their setup made it impossible for students to know they were being photographed, providing the researchers with realistic surveillance images to help build face recognition systems for real world applications for defense, intelligence, and commercial partners.</p>
@@ -107,10 +108,10 @@ Their setup made it impossible for students to know they were being photographed
<section>
- <h3>Information Supply chain</h3>
+ <h3>Information Supply Chain</h3>
<p>
- To help understand how UCCS has been used around the world by commercial, military, and academic organizations; existing publicly available research citing UnConstrained College Students Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ To help understand how UCCS has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the UnConstrained College Students Dataset was collected, verified, and geocoded to show how AI training data has proliferated around the world. Click on the markers to reveal research projects at that location.
</p>
</section>
@@ -125,7 +126,7 @@ Their setup made it impossible for students to know they were being photographed
<li class="com">Commercial</li>
<li class="gov">Military / Government</li>
</ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+ <div class="source">Citation data is collected using SemanticScholar.org, then dataset usage is verified and geolocated. Citations are used to provide an overview of how and where images were used.</div>
</div>
diff --git a/site/public/datasets/who_goes_there/index.html b/site/public/datasets/who_goes_there/index.html
index a00fd151..0d19da0b 100644
--- a/site/public/datasets/who_goes_there/index.html
+++ b/site/public/datasets/who_goes_there/index.html
@@ -55,8 +55,7 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/who_goes_there/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Who Goes There Dataset</span></div><div class='hero_subdesc'><span class='bgpad'>Who Goes There (page under development)
-</span></div></div></section><section><h2>Who Goes There</h2>
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/who_goes_there/assets/background.jpg)'></section><section><h2>Who Goes There</h2>
</section><section><div class='right-sidebar'></div><p>[ page under development ]</p>
</section><section>
<h3>Who used Who Goes There Dataset?</h3>
@@ -79,10 +78,10 @@
<section>
- <h3>Information Supply chain</h3>
+ <h3>Information Supply Chain</h3>
<p>
- To help understand how Who Goes There Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing WhoGoesThere was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ To help understand how the Who Goes There Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing WhoGoesThere was collected, verified, and geocoded to show how AI training data has proliferated around the world. Click on the markers to reveal research projects at that location.
</p>
</section>
@@ -97,7 +96,7 @@
<li class="com">Commercial</li>
<li class="gov">Military / Government</li>
</ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+ <div class="source">Citation data is collected using SemanticScholar.org, then dataset usage is verified and geolocated. Citations are used to provide an overview of how and where images were used.</div>
</div>