Diffstat (limited to 'site')
-rw-r--r--  site/assets/css/css.css                  |  17
-rw-r--r--  site/public/about/index.html             |  12
-rw-r--r--  site/public/datasets/index.html          |  10
-rw-r--r--  site/public/datasets/msceleb/index.html  | 203
4 files changed, 80 insertions, 162 deletions
diff --git a/site/assets/css/css.css b/site/assets/css/css.css
index e67b60d7..fa91a67f 100644
--- a/site/assets/css/css.css
+++ b/site/assets/css/css.css
@@ -767,7 +767,7 @@ section.fullwidth .image {
display: flex;
flex-direction: row;
flex-wrap: wrap;
- margin:0;
+ margin: 20px 0 0 0;
}
.dataset-list a {
text-decoration: none;
@@ -838,16 +838,23 @@ section.fullwidth .image {
display: inline-block;
margin: 0;
cursor: pointer;
- margin-right: 20px;
- font-size: 16px;
+ margin-right: 10px;
transition: background 0.2s;
background: #555;
color: #fff;
- padding: 4px 6px;
+ padding: 8px 12px;
border-radius: 4px;
font-weight: 500;
- font-size: 14px;
+ font-size: 11px;
cursor: pointer;
+ user-select: none;
+ -moz-user-select: none;
+ -khtml-user-select: none;
+ -webkit-user-select: none;
+ -o-user-select: none;
+}
+.dataset-list .sort-options li:last-child {
+ margin-right:0;
}
.dataset-list .sort-options li.active {
background: #fff;
diff --git a/site/public/about/index.html b/site/public/about/index.html
index b83736d3..86059e42 100644
--- a/site/public/about/index.html
+++ b/site/public/about/index.html
@@ -35,8 +35,7 @@
<li><a href="/about/legal/">Legal / Privacy</a></li>
</ul>
</section><p>MegaPixels is an independent art and research project by Adam Harvey and Jules LaPlace that investigates the ethics, origins, and individual privacy implications of face recognition image datasets and their role in the expansion of biometric surveillance technologies.</p>
-<p>MegaPixels is made possible with support from <a href="http://mozilla.org">Mozilla</a>, our primary funding partner.</p>
-<p>Additional support for MegaPixels is provided by the European ARTificial Intelligence Network (AI LAB) at the Ars Electronica Center, 1-year research-in-residence grant from Karlsruhe HfG, and sales from the Privacy Gift Shop.</p>
+<p>This project is made possible with support from <a href="http://mozilla.org">Mozilla</a>.</p>
<div class="flex-container team-photos-container">
<div class="team-member">
<h3>Adam Harvey</h3>
@@ -50,11 +49,10 @@
</p>
<p><a href="https://asdf.us/">asdf.us</a></p>
</div>
-</div><p>The MegaPixels website is based on an <a href="https://ahprojects.com/megapixels-glassroom/">earlier installation from 2017</a> and ongoing research and lectures (<a href="https://www.youtube.com/watch?v=bfhcco9gS30">TedX</a>, <a href="https://www.cpdpconferences.org/events/megapixels-is-in-publicly-available-facial-recognition-datasets">CPDP</a>) about facial recognition datasets. Over the last several years this project has evolved into a large-scale interrogation of hundreds of publicly-available face and person analysis datasets.</p>
-<p>MegaPixels aims to provide a critical perspective on machine learning image datsets, one that might otherwise escape academia and the industry funded artificial intelligence think tanks that are often supported by the same technology companies who have created many of the datasets presented on this site.</p>
-<p>MegaPixels is an independent project, designed as a public resource for educators, students, journalists, and researchers. Each dataset presented on this site undergoes a thorough review of its images, intent, and funding sources. Though the goals are similar to publishing a public academic paper, MegaPixels is a website-first reserch project aligns closley with the goals of pre-print academic publications. As such we welcome feedback and ways to improve this site and the clarity of the research.</p>
-<p>Because this project surfaces many funding issues with datasets (from datasets funded by the C.I.A. to the National Unviversity of Defense and Technology in China), it is important that we are transparent about own funding. The original MegaPixels installation in 2017 was built as a commission for and with support from Tactical Technology Collective and Mozilla. The bulk of the research and web-development during 2018 - 2018 was supported by a grant from Mozilla. Continued development in 2019 is partially supported by a 1-year Reseacher-in-Residence grant from Karlsruhe HfG, lecture and workshop fees, and from commissions and sales from the Privacy Gift Shop.</p>
-<p>Please get in touch if you are interested in supporting this project.</p>
+</div><p>MegaPixels is an art and research project first launched in 2017 as an <a href="https://ahprojects.com/megapixels-glassroom/">installation</a> about facial recognition datasets for Tactical Technology Collective's Glass Room. In 2018 it was extended to cover pedestrian analysis datasets for a <a href="https://esc.mur.at/de/node/2370">commission by the Elevate Arts festival</a> in Austria. Since then MegaPixels has evolved into a large-scale interrogation of hundreds of publicly available face and person analysis datasets.</p>
+<p>MegaPixels aims to provide a critical perspective on machine learning image datasets, one that might otherwise escape academia and the industry-funded artificial intelligence think tanks that are often supported by the same technology companies who have created many of the datasets presented on this site.</p>
+<p>MegaPixels is an independent project, designed as a public resource for educators, students, journalists, and researchers. Each dataset presented on this site undergoes a thorough review of its images, intent, and funding sources. Though its goals are similar to those of a public academic paper, MegaPixels is a website-first research project.</p>
+<p>One of the main focuses of the dataset investigations is uncovering where funding originated. Because of our emphasis on other researchers' funding sources, it is important that we are transparent about our own. This site and the past year of research have been primarily funded by a privacy art grant from Mozilla in 2018. The original MegaPixels installation in 2017 was built as a commission for and with support from Tactical Technology Collective and Mozilla. Continued development in 2019 is partially supported by a 1-year Researcher-in-Residence grant from Karlsruhe HfG and lecture and workshop fees.</p>
</section><section><div class='columns columns-3'><div class='column'><h5>Team</h5>
<ul>
<li>Adam Harvey: Concept, research and analysis, design, computer vision</li>
diff --git a/site/public/datasets/index.html b/site/public/datasets/index.html
index c6c4185a..047b6874 100644
--- a/site/public/datasets/index.html
+++ b/site/public/datasets/index.html
@@ -27,7 +27,7 @@
<div class="content content-">
- <section><h1>Facial Recognition Datasets</h1>
+ <section><h1>Face Recognition Datasets</h1>
<p>Explore publicly available facial recognition datasets feeding into research and development of biometric surveillance technologies at the largest technology companies and defense contractors in the world.</p>
</section>
@@ -56,7 +56,7 @@
<div class='year visible'><span>2016</span></div>
<div class='purpose'><span>Person re-identification, multi-camera tracking</span></div>
<div class='images'><span>2,000,000 images</span></div>
- <div class='identities'><span>1,812 </span></div>
+ <div class='identities'><span>2,700 </span></div>
</div>
</div>
</a>
@@ -66,7 +66,7 @@
<span class='title'>HRT Transgender Dataset</span>
<div class='fields'>
<div class='year visible'><span>2013</span></div>
- <div class='purpose'><span>gender transition and facial recognition</span></div>
+ <div class='purpose'><span>Face recognition, gender transition biometrics</span></div>
<div class='images'><span>10,564 images</span></div>
<div class='identities'><span>38 </span></div>
</div>
@@ -101,10 +101,10 @@
<div class="dataset">
<span class='title'>Oxford Town Centre</span>
<div class='fields'>
- <div class='year visible'><span>2011</span></div>
+ <div class='year visible'><span>2009</span></div>
<div class='purpose'><span>Person detection, gaze estimation</span></div>
<div class='images'><span> images</span></div>
- <div class='identities'><span></span></div>
+ <div class='identities'><span>2,200 </span></div>
</div>
</div>
</a>
diff --git a/site/public/datasets/msceleb/index.html b/site/public/datasets/msceleb/index.html
index c42a2767..5ed2f3a2 100644
--- a/site/public/datasets/msceleb/index.html
+++ b/site/public/datasets/msceleb/index.html
@@ -4,7 +4,7 @@
<title>MegaPixels</title>
<meta charset="utf-8" />
<meta name="author" content="Adam Harvey" />
- <meta name="description" content="Microsoft Celeb 1M is a target list and dataset of web images used for research and development of face recognition technologies" />
+ <meta name="description" content="Microsoft Celeb 1M is a target list and dataset of web images used for research and development of face recognition" />
<meta name="referrer" content="no-referrer" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
<link rel='stylesheet' href='/assets/css/fonts.css' />
@@ -26,7 +26,7 @@
</header>
<div class="content content-dataset">
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Microsoft Celeb 1M is a target list and dataset of web images used for research and development of face recognition technologies</span></div><div class='hero_subdesc'><span class='bgpad'>The MS Celeb dataset includes over 10 million images of about 100K people and a target list of 1 million individuals
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Microsoft Celeb 1M is a target list and dataset of web images used for research and development of face recognition</span></div><div class='hero_subdesc'><span class='bgpad'>The MS Celeb dataset includes over 10 million images of about 100K people and a target list of 1 million individuals
</span></div></div></section><section><h2>Microsoft Celeb Dataset (MS Celeb)</h2>
</section><section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
@@ -49,210 +49,125 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.msceleb.org/' target='_blank' rel='nofollow noopener'>msceleb.org</a></div>
- </div></div><p>Microsoft Celeb (MS Celeb) is a dataset of 10 million face images scraped from the Internet and used for research and development of large-scale biometric recognition systems. According to Microsoft Research who created and published the <a href="http://msceleb.org">dataset</a> in 2016, MS Celeb is the largest publicly available face recognition dataset in the world, containing over 10 million images of nearly 100,000 individuals. Microsoft's goal in building this dataset was to distribute the initial training dataset of 100,000 individuals images and use this to accelerate reserch into recognizing a target list of one million individuals from their face images "using all the possibly collected face images of this individual on the web as training data".<a class="footnote_shim" name="[^msceleb_orig]_1"> </a><a href="#[^msceleb_orig]" class="footnote" title="Footnote 2">2</a></p>
-<p>These one million people, defined as Micrsoft Research as "celebrities", are often merely people who must maintain an online presence for their professional lives. Microsoft's list of 1 million people is an expansive exploitation of the current reality that for many people including academics, policy makers, writers, artists, and especially journalists maintaining an online presence is mandatory and should not allow Microsoft (or anyone else) to use their biometrics for reserach and development of surveillance technology. Many of names in target list even include people critical of the very technology Microsoft is using their name and biometric information to build. The list includes digital rights activists like Jillian York and [add more]; artists critical of surveillance including Trevor Paglen, Hito Steryl, Kyle McDonald, Jill Magid, and Aram Bartholl; Intercept founders Laura Poitras, Jeremy Scahill, and Glen Greenwald; Data and Society founder danah boyd; and even Julie Brill the former FTC commissioner responsible for protecting consumer’s privacy to name a few.</p>
+ </div></div><p>Microsoft Celeb (MS Celeb) is a dataset of 10 million face images scraped from the Internet and used for research and development of large-scale biometric recognition systems. According to Microsoft Research, who created and published the <a href="https://www.microsoft.com/en-us/research/publication/ms-celeb-1m-dataset-benchmark-large-scale-face-recognition-2/">dataset</a> in 2016, MS Celeb is the largest publicly available face recognition dataset in the world, containing over 10 million images of nearly 100,000 individuals. Microsoft's goal in building this dataset was to distribute an initial training dataset of 100,000 individuals' images and use this to accelerate research into recognizing a target list of one million individuals from their face images "using all the possibly collected face images of this individual on the web as training data".<a class="footnote_shim" name="[^msceleb_orig]_1"> </a><a href="#[^msceleb_orig]" class="footnote" title="Footnote 1">1</a></p>
+<p>These one million people, defined by Microsoft Research as "celebrities", are often merely people who must maintain an online presence for their professional lives. Microsoft's list of 1 million people is an expansive exploitation of the current reality that, for many people including academics, policy makers, writers, artists, and especially journalists, maintaining an online presence is mandatory. That necessity should not allow Microsoft or anyone else to use their biometrics for research and development of surveillance technology. The target list even includes many people critical of the very technology that Microsoft is using their names and biometric information to build: digital rights activists like Jillian York and [add more]; artists critical of surveillance including Trevor Paglen, Hito Steyerl, Jill Magid, and Aram Bartholl; Intercept founders Laura Poitras, Jeremy Scahill, and Glenn Greenwald; Data and Society founder danah boyd; and even Julie Brill, the former FTC commissioner responsible for protecting consumers' privacy, to name a few.</p>
<h3>Microsoft's 1 Million Target List</h3>
-<p>Below is a list of names that were included in list of 1 million individuals curated to illustrate Microsoft's expansive and exploitative practice of scraping the Internet for biometric training data. The entire name file can be downloaded from <a href="https://msceleb.org">msceleb.org</a>. Names appearing with * indicate that Microsoft also distributed imaged.</p>
-<p>[ cleaning this up ]</p>
+<p>Below is a list of names that were included in the list of 1 million individuals, curated to illustrate Microsoft's expansive and exploitative practice of scraping the Internet for biometric training data. The entire name file can be downloaded from <a href="https://msceleb.org">msceleb.org</a>. Email <a href="mailto:msceleb@microsoft.com?subject=MS-Celeb-1M Removal Request&body=Dear%20Microsoft%2C%0A%0AI%20recently%20discovered%20that%20you%20use%20my%20identity%20for%20commercial%20use%20in%20your%20MS-Celeb-1M%20dataset%20used%20for%20research%20and%20development%20of%20face%20recognition.%20I%20do%20not%20wish%20to%20be%20included%20in%20your%20dataset%20in%20any%20format.%20%0A%0APlease%20remove%20my%20name%20and%2For%20any%20associated%20images%20immediately%20and%20send%20a%20confirmation%20once%20you've%20updated%20your%20%22Top1M_MidList.Name.tsv%22%20file.%0A%0AThanks%20for%20promptly%20handling%20this%2C%0A%5B%20your%20name%20%5D">msceleb@microsoft.com</a> to have your name removed. Names appearing with * indicate that Microsoft also distributed images.</p>
</section><section><div class='columns columns-2'><div class='column'><table>
<thead><tr>
<th>Name</th>
-<th>ID</th>
<th>Profession</th>
-<th>Images</th>
</tr>
</thead>
<tbody>
<tr>
-<td>Jeremy Scahill</td>
-<td>/m/02p_8_n</td>
+<td>Adrian Chen</td>
<td>Journalist</td>
-<td>x</td>
</tr>
<tr>
-<td>Jillian York</td>
-<td>/m/0g9_3c3</td>
-<td>Digital rights activist</td>
-<td>x</td>
-</tr>
-<tr>
-<td>Astra Taylor</td>
-<td>/m/05f6_39</td>
-<td>Author, activist</td>
-<td>x</td>
-</tr>
-<tr>
-<td>Jonathan Zittrain</td>
-<td>/m/01f75c</td>
-<td>EFF board member</td>
-<td>no</td>
-</tr>
-<tr>
-<td>Julie Brill</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Ai Weiwei*</td>
+<td>Artist</td>
</tr>
<tr>
-<td>Jonathan Zittrain</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Aram Bartholl</td>
+<td>Internet artist</td>
</tr>
<tr>
-<td>Bruce Schneier</td>
-<td>m.095js</td>
-<td>Cryptologist and author</td>
-<td>yes</td>
+<td>Astra Taylor</td>
+<td>Author, director, activist</td>
</tr>
<tr>
-<td>Julie Brill</td>
-<td>m.0bs3s9g</td>
-<td>x</td>
-<td>x</td>
+<td>Alexis Madrigal</td>
+<td>Journalist</td>
</tr>
<tr>
-<td>Kim Zetter</td>
-<td>/m/09r4j3</td>
-<td>x</td>
-<td>x</td>
+<td>Bruce Schneier*</td>
+<td>Cryptologist</td>
</tr>
<tr>
-<td>Ethan Zuckerman</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>danah boyd</td>
+<td>Data &amp; Society founder</td>
</tr>
<tr>
-<td>Jill Magid</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Edward Felten</td>
+<td>Former FTC Chief Technologist</td>
</tr>
<tr>
-<td>Kyle McDonald</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Evgeny Morozov*</td>
+<td>Tech writer, researcher</td>
</tr>
<tr>
-<td>Trevor Paglen</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Glenn Greenwald*</td>
+<td>Journalist, author</td>
</tr>
<tr>
-<td>R. Luke DuBois</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Hito Steyerl</td>
+<td>Artist, writer</td>
</tr>
</tbody>
</table>
</div><div class='column'><table>
<thead><tr>
<th>Name</th>
-<th>ID</th>
<th>Profession</th>
-<th>Images</th>
</tr>
</thead>
<tbody>
<tr>
-<td>Trevor Paglen</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
-</tr>
-<tr>
-<td>Ai Weiwei</td>
-<td>/m/0278dyq</td>
-<td>x</td>
-<td>x</td>
-</tr>
-<tr>
-<td>Jer Thorp</td>
-<td>/m/01h8lg</td>
-<td>x</td>
-<td>x</td>
-</tr>
-<tr>
-<td>Edward Felten</td>
-<td>/m/028_7k</td>
-<td>x</td>
-<td>x</td>
+<td>James Risen</td>
+<td>Journalist</td>
</tr>
<tr>
-<td>Evgeny Morozov</td>
-<td>/m/05sxhgd</td>
-<td>Scholar and technology critic</td>
-<td>yes</td>
+<td>Jeremy Scahill*</td>
+<td>Journalist</td>
</tr>
<tr>
-<td>danah boyd</td>
-<td>/m/06zmx5</td>
-<td>Data and Society founder</td>
-<td>x</td>
+<td>Jill Magid</td>
+<td>Artist</td>
</tr>
<tr>
-<td>Bruce Schneier</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Jillian York</td>
+<td>Digital rights activist</td>
</tr>
<tr>
-<td>Laura Poitras</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Jonathan Zittrain</td>
+<td>EFF board member</td>
</tr>
<tr>
-<td>Trevor Paglen</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Julie Brill</td>
+<td>Former FTC Commissioner</td>
</tr>
<tr>
-<td>Astra Taylor</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Kim Zetter</td>
+<td>Journalist, author</td>
</tr>
<tr>
-<td>Shoshanaa Zuboff</td>
-<td>x</td>
-<td>x</td>
-<td>x</td>
+<td>Laura Poitras*</td>
+<td>Filmmaker</td>
</tr>
<tr>
-<td>Eyal Weizman</td>
-<td>m.0g54526</td>
-<td>x</td>
-<td>x</td>
+<td>Luke DuBois</td>
+<td>Artist</td>
</tr>
<tr>
-<td>Aram Bartholl</td>
-<td>m.06_wjyc</td>
-<td>x</td>
-<td>x</td>
+<td>Shoshana Zuboff</td>
+<td>Author, academic</td>
</tr>
<tr>
-<td>James Risen</td>
-<td>m.09pk6b</td>
-<td>x</td>
-<td>x</td>
+<td>Trevor Paglen</td>
+<td>Artist, researcher</td>
</tr>
</tbody>
</table>
-</div></div></section><section><p>After publishing this list, researchers from Microsoft Asia then worked with researchers affilliated with China's National University of Defense Technology (controlled by China's Central Military Commission) and used the the MS Celeb dataset for their <a href="https://www.semanticscholar.org/paper/Faces-as-Lighting-Probes-via-Unsupervised-Deep-Yi-Zhu/b301fd2fc33f24d6f75224e7c0991f4f04b64a65">research paper</a> on using "Faces as Lighting Probes via Unsupervised Deep Highlight Extraction" with potential applications in 3D face recognition.</p>
-<p>In an article published by the Financial Times based on data discovered during this investigation, Samm Sacks (senior fellow at New American and China tech policy expert) commented that this research raised "red flags because of the nature of the technology, the authors affilliations, combined with the what we know about how this technology is being deployed in China right now".<a class="footnote_shim" name="[^madhu_ft]_1"> </a><a href="#[^madhu_ft]" class="footnote" title="Footnote 3">3</a></p>
-<p>Four more papers published by SenseTime which also use the MS Celeb dataset raise similar flags. SenseTime is Beijing based company providing surveillance to Chinese authorities including [ add context here ] has been <a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html">flagged</a> as complicity in potential human rights violations.</p>
-<p>One of the 4 SenseTime papers, "Exploring Disentangled Feature Representation Beyond Face Identification", shows how SenseTime is developing automated face analysis technology to infer race, narrow eyes, nose size, and chin size, all of which could be used to target vulnerable ethnic groups based on their facial appearances.<a class="footnote_shim" name="[^disentangled]_1"> </a><a href="#[^disentangled]" class="footnote" title="Footnote 4">4</a></p>
+</div></div></section><section><p>After publishing this list, researchers from Microsoft Asia worked with researchers affiliated with China's National University of Defense Technology (controlled by China's Central Military Commission) and used the MS Celeb dataset for their <a href="https://www.semanticscholar.org/paper/Faces-as-Lighting-Probes-via-Unsupervised-Deep-Yi-Zhu/b301fd2fc33f24d6f75224e7c0991f4f04b64a65">research paper</a> "Faces as Lighting Probes via Unsupervised Deep Highlight Extraction", which has potential applications in 3D face recognition.</p>
+<p>In an <a href="https://www.ft.com/content/9378e7ee-5ae6-11e9-9dde-7aedca0a081a">article</a> published by the Financial Times based on data surfaced during this investigation, Samm Sacks (a senior fellow at the New America think tank) commented that this research raised "red flags because of the nature of the technology, the authors' affiliations, combined with what we know about how this technology is being deployed in China right now", adding that "the [Chinese] government is using these technologies to build surveillance systems and to detain minorities [in Xinjiang]".<a class="footnote_shim" name="[^madhu_ft]_1"> </a><a href="#[^madhu_ft]" class="footnote" title="Footnote 2">2</a></p>
+<p>Four more papers published by SenseTime, which also use the MS Celeb dataset, raise similar flags. SenseTime is a computer vision surveillance company that until <a href="https://uhrp.org/news-commentary/china%E2%80%99s-sensetime-sells-out-xinjiang-security-joint-venture">April 2019</a> provided surveillance technology to Chinese authorities to monitor and track Uighur Muslims in Xinjiang province, and it has been <a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html">flagged</a> numerous times as having potential links to human rights violations.</p>
+<p>One of the 4 SenseTime papers, "<a href="https://www.semanticscholar.org/paper/Exploring-Disentangled-Feature-Representation-Face-Liu-Wei/1fd5d08394a3278ef0a89639e9bfec7cb482e0bf">Exploring Disentangled Feature Representation Beyond Face Identification</a>", shows how SenseTime was developing automated face analysis technology to infer race, narrow eyes, nose size, and chin size, all of which could be used to target vulnerable ethnic groups based on their facial appearances.</p>
<p>Earlier in 2019, Microsoft President <a href="https://blogs.microsoft.com/on-the-issues/2018/12/06/facial-recognition-its-time-for-action/">Brad Smith</a> called for governmental regulation of face recognition, citing the potential for misuse, a rare admission that Microsoft's surveillance-driven business model had lost its bearings. More recently, Smith <a href="https://www.reuters.com/article/us-microsoft-ai/microsoft-turned-down-facial-recognition-sales-on-human-rights-concerns-idUSKCN1RS2FV">announced</a> that Microsoft would take a stand against potential misuse by declining to sell face recognition to an unnamed United States law enforcement agency, citing that their technology was not accurate enough to be used on minorities because it was trained mostly on white male faces.</p>
-<p>What the decision to block the sale announces is not so much that Microsoft has upgraded their ethics, but that it publicly acknolwedged it can't sell a data-driven product without data. Microsoft can't sell face recognition for faces they can't train on.</p>
-<p>Until now, that data has been freely harvested from the Internet and packaged in training sets like MS Celeb, which are overwhelmingly <a href="https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html">white</a> and <a href="https://gendershades.org">male</a>. Without balanced data, facial recognition contains blind spots. And without datasets like MS Celeb, the powerful yet innaccurate facial recognition services like Microsoft's Azure Cognitive Service also would not be able to see at all.</p>
-<p>Microsoft didn't only create MS Celeb for other researchers to use, they also used it internally. In a publicly available 2017 Microsoft Research project called "(<a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">One-shot Face Recognition by Promoting Underrepresented Classes</a>)", Microsoft leveraged the MS Celeb dataset to analyse their algorithms and advertise the results. Interestingly, the Microsoft's <a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">corporate version</a> does not mention they used the MS Celeb datset, but the <a href="https://www.semanticscholar.org/paper/One-shot-Face-Recognition-by-Promoting-Classes-Guo/6cacda04a541d251e8221d70ac61fda88fb61a70">open-acess version</a> of the paper published on arxiv.org that same year explicity mentions that Microsoft Research tested their algorithms "on the MS-Celeb-1M low-shot learning benchmark task."</p>
-<p>We suggest that if Microsoft Research wants biometric data for surveillance research and development, they should start with own researcher's biometric data instead of scraping the Internet for journalists, artists, writers, and academics.</p>
+<p>What the decision to block the sale announces is not so much that Microsoft has upgraded their ethics, but that they have publicly acknowledged they can't sell a data-driven product without data. In other words, Microsoft can't sell face recognition for faces they can't train on.</p>
+<p>Until now, that data has been freely harvested from the Internet and packaged in training sets like MS Celeb, which are overwhelmingly <a href="https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html">white</a> and <a href="https://gendershades.org">male</a>. Without balanced data, facial recognition contains blind spots. And without datasets like MS Celeb, the powerful yet inaccurate facial recognition services like Microsoft's Azure Cognitive Service also would not be able to see at all.</p>
+<p>Microsoft didn't only create MS Celeb for other researchers to use, they also used it internally. In a publicly available 2017 Microsoft Research project called "<a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">One-shot Face Recognition by Promoting Underrepresented Classes</a>", Microsoft leveraged the MS Celeb dataset to analyze their algorithms and advertise the results. Interestingly, Microsoft's <a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">corporate version</a> of the paper does not mention they used the MS Celeb dataset, but the <a href="https://www.semanticscholar.org/paper/One-shot-Face-Recognition-by-Promoting-Classes-Guo/6cacda04a541d251e8221d70ac61fda88fb61a70">open-access version</a> published on arxiv.org explicitly mentions that Microsoft Research tested their algorithms "on the MS-Celeb-1M low-shot learning benchmark task."</p>
+<p>We suggest that if Microsoft Research wants to make biometric data publicly available for surveillance research and development, they should start with releasing their researchers' own biometric data instead of scraping the Internet for journalists, artists, writers, actors, athletes, musicians, and academics.</p>
</section><section>
<h3>Who used Microsoft Celeb?</h3>
@@ -313,10 +228,8 @@
<h2>Supplementary Information</h2>
-</section><section><h3>References</h3><section><ul class="footnotes"><li>1 <a name="[^brad_smith]" class="footnote_shim"></a><span class="backlinks"></span>Brad Smith cite
-</li><li>2 <a name="[^msceleb_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^msceleb_orig]_1">a</a></span>MS-Celeb-1M: A Dataset and Benchmark for Large-Scale Face Recognition
-</li><li>3 <a name="[^madhu_ft]" class="footnote_shim"></a><span class="backlinks"><a href="#[^madhu_ft]_1">a</a></span>Microsoft worked with Chinese military university on artificial intelligence
-</li><li>4 <a name="[^disentangled]" class="footnote_shim"></a><span class="backlinks"><a href="#[^disentangled]_1">a</a></span>"Exploring Disentangled Feature Representation Beyond Face Identification"
+</section><section><h3>References</h3><section><ul class="footnotes"><li>1 <a name="[^msceleb_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^msceleb_orig]_1">a</a></span>Guo, Yandong, et al. MS-Celeb-1M: A Dataset and Benchmark for Large-Scale Face Recognition. ECCV 2016.
+</li><li>2 <a name="[^madhu_ft]" class="footnote_shim"></a><span class="backlinks"><a href="#[^madhu_ft]_1">a</a></span>Murgia, Madhumita. Microsoft worked with Chinese military university on artificial intelligence. Financial Times. April 10, 2019.
</li></ul></section></section>
</div>