path: root/site/public/research
author     adamhrv <adam@ahprojects.com>  2019-06-27 23:58:23 +0200
committer  adamhrv <adam@ahprojects.com>  2019-06-27 23:58:23 +0200
commit     852e4c1e36c38f57f80fc5d441da82d5991b2212 (patch)
tree       0c8bc3bbcb6c679e28ba387d0c1e47fb3d16830a /site/public/research
parent     ae165ef1235a6997d5791ca241fd3fd134202c92 (diff)
update public
Diffstat (limited to 'site/public/research')
-rw-r--r--  site/public/research/00_introduction/index.html                  76
-rw-r--r--  site/public/research/01_munich_security_conference/index.html    94
-rw-r--r--  site/public/research/02_what_computers_can_see/index.html        18
-rw-r--r--  site/public/research/_from_1_to_100_pixels/index.html           172
-rw-r--r--  site/public/research/_introduction/index.html                   106
-rw-r--r--  site/public/research/_what_computers_can_see/index.html         357
-rw-r--r--  site/public/research/index.html                                   2
-rw-r--r--  site/public/research/munich_security_conference/index.html      123
8 files changed, 888 insertions, 60 deletions
diff --git a/site/public/research/00_introduction/index.html b/site/public/research/00_introduction/index.html
index e7f14be5..bfd048e9 100644
--- a/site/public/research/00_introduction/index.html
+++ b/site/public/research/00_introduction/index.html
@@ -1,11 +1,11 @@
<!doctype html>
<html>
<head>
- <title>MegaPixels: 00: Introduction</title>
+ <title>MegaPixels: Introducing MegaPixels</title>
<meta charset="utf-8" />
- <meta name="author" content="Megapixels" />
+ <meta name="author" content="Adam Harvey" />
<meta name="description" content="Introduction to Megapixels" />
- <meta property="og:title" content="MegaPixels: 00: Introduction"/>
+ <meta property="og:title" content="MegaPixels: Introducing MegaPixels"/>
<meta property="og:type" content="website"/>
<meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/>
<meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg" />
@@ -53,10 +53,10 @@
<a href="/about/news">News</a>
</div>
</header>
- <div class="content content-">
+ <div class="content content-dataset">
<section>
- <h1>00: Introduction</h1>
+ <h1>Introducing MegaPixels</h1>
<div class='meta'>
<div>
<div class='gray'>Posted</div>
@@ -64,65 +64,27 @@
</div>
<div>
<div class='gray'>By</div>
- <div>Megapixels</div>
+ <div>Adam Harvey</div>
</div>
</div>
</section>
- <section><div class='meta'><div><div class='gray'>Posted</div><div>Dec. 15</div></div><div><div class='gray'>Author</div><div>Adam Harvey</div></div></div><section><section><p>Facial recognition is a scam.</p>
-<p>It's an extractive and damaging industry that's built on the biometric backbone of the Internet.</p>
-<p>During the last 20 years, commercial, academic, and governmental agencies have promoted the false dream of a future with face recognition. This essay debunks the popular myth that such a thing ever existed.</p>
-<p>There is no such thing as <em>face recognition</em>. For the last 20 years, government agencies, commercial organizations, and academic institutions have played the public for a fool, selling a roadmap of the future that simply does not exist. Facial recognition, as it is currently defined, promoted, and sold to the public, government, and commercial sector is a scam.</p>
-<p>Committed to developing robust solutions with superhuman accuracy, the industry has repeatedly undermined itself by never actually developing anything close to "face recognition".</p>
-<p>There is only biased feature vector clustering and probabilistic thresholding.</p>
-<h2>If you don't have data, you don't have a product.</h2>
-<p>Yesterday's <a href="https://www.reuters.com/article/us-microsoft-ai/microsoft-turned-down-facial-recognition-sales-on-human-rights-concerns-idUSKCN1RS2FV">decision</a> by Brad Smith, president of Microsoft, not to sell facial recognition to a US law enforcement agency is not an about-face by Microsoft to become more humane; it's simply a perfect illustration of the value of training data. Without data, you don't have a product to sell. Microsoft realized it doesn't have enough training data to sell.</p>
-<h2>Cost of Faces</h2>
-<p>The University of Houston paid subjects $20 each
-<a href="http://web.archive.org/web/20170925053724/http://cbl.uh.edu/index.php/pages/research/collecting_facial_images_from_multiples_in_texas">http://web.archive.org/web/20170925053724/http://cbl.uh.edu/index.php/pages/research/collecting_facial_images_from_multiples_in_texas</a></p>
-<p>FaceMeta facedataset.com</p>
+ <section><p>Face recognition has become the focal point for ...</p>
+<p>Add 68pt landmarks animation</p>
+<p>But biometric currency is ...</p>
+<p>Add rotation 3D head</p>
+<p>Inflationary...</p>
+<p>Add Theresa May 3D</p>
+<p>(commission for CPDP)</p>
+<p>Add info from the AI Traps talk</p>
<ul>
-<li>BASIC: 15,000 images for $6,000 USD</li>
-<li>RECOMMENDED: 50,000 images for $12,000 USD</li>
-<li>ADVANCED: 100,000 images for $18,000 USD*</li>
+<li>Posted: Dec. 15</li>
+<li>Author: Adam Harvey</li>
</ul>
-<h2>Use Your Own Biometrics First</h2>
-<p>If researchers want faces, they should take selfies and create their own dataset. If researchers want images of families to build surveillance software, they should use and distribute their own family portraits.</p>
-<h3>Motivation</h3>
-<p>Ever since government agencies began developing face recognition in the early 1960s, datasets of face images have always been central to developing and validating face recognition technologies. Today, these datasets no longer originate in labs, but instead from family photo albums posted on photo sharing sites, surveillance camera footage from college campuses, search engine queries for celebrities, cafe livestreams, or <a href="https://www.theverge.com/2017/8/22/16180080/transgender-youtubers-ai-facial-recognition-dataset">videos on YouTube</a>.</p>
-<p>During the last year, hundreds of these facial analysis datasets created "in the wild" have been collected to understand how they contribute to a global supply chain of biometric data that is powering the global facial recognition industry.</p>
-<p>While many of these datasets include public figures such as politicians, athletes, and actors, they also include many non-public figures: digital activists, students, pedestrians, and semi-private shared photo albums are all considered "in the wild" and fair game for research projects. Some images are used under Creative Commons licenses, yet others were taken in unconstrained scenarios without awareness or consent. At first glance it appears many of the datasets were created for seemingly harmless academic research, but when examined further it becomes clear that they're also used by foreign defense agencies.</p>
-<p>The MegaPixels site is based on an earlier <a href="https://ahprojects.com/megapixels-glassroom">installation</a> (also supported by Mozilla) at the <a href="https://theglassroom.org/">Tactical Tech Glassroom</a> in London in 2017; a commission from the Elevate arts festival, curated by Berit Gilma, about pedestrian recognition datasets in 2018; and research during <a href="https://cvdazzle.com">CV Dazzle</a> from 2010-2015. Through the many prototypes, conversations, pitches, PDFs, and false starts this project has endured during the last 5 years, it eventually evolved into something much different than originally imagined. Now, as datasets become increasingly influential in shaping the computational future, it's clear that they must be critically analyzed to understand the biases, shortcomings, funding sources, and contributions to the surveillance industry. However, it's misguided to only criticize these datasets for their flaws without also praising their contribution to society. Without publicly available facial analysis datasets there would be less public discourse, less open-source software, and less peer-reviewed research. Public datasets can indeed become a vital public good for the information economy but as this project aims to illustrate, many ethical questions arise about consent, intellectual property, surveillance, and privacy.</p>
-<!-- who provided funding to research, development this project understand the role these datasets have played in creating biometric surveillance technologies. -->
-
-
-
-
-<p>Ever since the first computational facial recognition research project by the CIA in the early 1960s, data has always played a vital role in the development of our biometric future. Without facial recognition datasets there would be no facial recognition. Datasets are an indispensable part of any artificial intelligence system because, as Geoffrey Hinton points out:</p>
-<blockquote><p>Our relationship to computers has changed. Instead of programming them, we now show them and they figure it out. - <a href="https://www.youtube.com/watch?v=-eyhCTvrEtE">Geoffrey Hinton</a></p>
-</blockquote>
-<p>Algorithms learn from datasets. And we program algorithms by building datasets. But datasets aren't like code. There's no programming language made of data except for the data itself.</p>
-<p>Ignore content below these lines</p>
-<p>It was the early 2000s. Face recognition was new and no one seemed sure exactly how well it was going to perform in practice. In theory, face recognition was poised to be a game changer, a force multiplier, a strategic military advantage, a way to make cities safer and to secure borders. This was the future John Ashcroft demanded with the Total Information Awareness act of 2003 and that spooks had dreamed of for decades. It was a future that academics at Carnegie Mellon University and Colorado State University would help build. It was also a future that celebrities would play a significant role in building. And to the surprise of ordinary Internet users like myself and perhaps you, it was a future that millions of Internet users would unwittingly play a role in creating.</p>
-<p>Now the future has arrived and it doesn't make sense. Facial recognition works yet it doesn't actually work. Facial recognition is cheap and accessible but also expensive and out of control. Facial recognition research has achieved headline-grabbing superhuman accuracies over 99.9% yet facial recognition is also dangerously inaccurate. During a trial installation at Sudkreuz station in Berlin in 2018, 20% of the matches were wrong, an error rate so high that it should have no connection to law enforcement or justice. And in London, the Metropolitan Police had been using facial recognition software whose matches were incorrect an alarming 98% of the time <sup class="footnote-ref" id="fnref-met_police"><a href="#fn-met_police">1</a></sup>, which perhaps is a crime itself.</p>
-<p>MegaPixels is an online art project that explores the history of facial recognition from the perspective of datasets. To paraphrase the artist Trevor Paglen, whoever controls the dataset controls the meaning. MegaPixels aims to unravel the meanings behind the data and expose the darker corners of the biometric industry that have contributed to its growth. MegaPixels does not start with a conclusion or a moralistic slant.</p>
-<p>Whether or not to build facial recognition is a question that can no longer be asked. As an outspoken critic of face recognition I've developed, and hopefully furthered, my understanding during the last 10 years I've spent working with computer vision. Though I initially disagreed, I've come to see the technocratic perspective as a non-negotiable reality. As Oren (nytimes article) wrote in a NYT Op-Ed, "the horse is out of the barn" and the only thing we can do, collectively or individually, is to steer towards the least-worst outcome. Computational communication has entered a new era and it's both exciting and frightening to explore the potentials and opportunities. In 1997, getting access to 1 teraFLOPS of computational power would have cost you $55 million and required a strategic partnership with the Department of Defense. At the time of writing, anyone can rent 1 teraFLOPS on a cloud GPU marketplace for less than $1/day<sup class="footnote-ref" id="fnref-asci_option_red"><a href="#fn-asci_option_red">2</a></sup>.</p>
-<p>I hope that this project will illuminate the darker areas of the strange world of facial recognition that have not yet received attention and encourage discourse in academic and industry circles. By no means do I believe discourse can save the day. Nor do I think creating artwork can. In fact, I'm not exactly sure what the outcome of this project will be. The project is not so much what I publish here but what happens after. This entire project is only a prologue.</p>
-<p>As McLuhan wrote, "You can't have a static, fixed position in the electric age". And in our hyper-connected age of mass surveillance, artificial intelligence, and unevenly distributed virtual futures, the most irrational thing to be is rational. Increasingly, the world is becoming a contradiction where people use surveillance to protest surveillance.</p>
-<p>Like many projects, MegaPixels spent years meandering between formats and unfeasible budgets, and was generally too niche a subject. The basic idea for this project, as proposed to the original <a href="https://tacticaltech.org/projects/the-glass-room-nyc/">Glass Room</a> installation in 2016 in NYC, was to build an interactive mirror that showed people if they had been included in the <a href="/datasets/lfw">LFW</a> facial recognition dataset. The idea was based on my reaction to all the datasets I'd come across during research for the CV Dazzle project. I'd noticed strange datasets created for training and testing face detection algorithms. Most were created in laboratory settings and their interpretation of face data was very strict.</p>
-<h3>for other post</h3>
-<p>It was the early 2000s. Face recognition was new and no one seemed sure how well it was going to perform in practice. In theory, face recognition was poised to be a game changer, a force multiplier, a strategic military advantage, a way to make cities safer and to secure the borders. It was the future that John Ashcroft demanded with the Total Information Awareness act of 2003. It was a future that academics helped build. It was a future that celebrities helped build. And it was a future that</p>
-<p>A decade earlier the Department of Defense's Counterdrug Technology Development Program Office initiated a feasibility study called FERET (FacE REcognition Technology) to "develop automatic face recognition capabilities that could be employed to assist security, intelligence, and law enforcement personnel in the performance of their duties [^feret_website]."</p>
-<p>One problem with the FERET dataset was that the photos were taken in controlled settings. For face recognition to work it would have to be used in uncontrolled settings. Even newer datasets such as the Multi-PIE (Pose, Illumination, and Expression) dataset from Carnegie Mellon University included only indoor photos of cooperative subjects. Not only were the photos completely unrealistic, CMU's Multi-PIE included only 18 individuals, cost $500 for academic use [^cmu_multipie_cost], took years to create, and required consent from every participant.</p>
-<h2>Add progressive GAN of FERET</h2>
-<div class="footnotes">
-<hr>
-<ol><li id="fn-met_police"><p>Sharman, Jon. "Metropolitan Police's facial recognition technology 98% inaccurate, figures show". 2018. <a href="https://www.independent.co.uk/news/uk/home-news/met-police-facial-recognition-success-south-wales-trial-home-office-false-positive-a8345036.html">https://www.independent.co.uk/news/uk/home-news/met-police-facial-recognition-success-south-wales-trial-home-office-false-positive-a8345036.html</a><a href="#fnref-met_police" class="footnote">&#8617;</a></p></li>
-<li id="fn-asci_option_red"><p>Calle, Dan. "Supercomptuers". 1997. <a href="http://ei.cs.vt.edu/~history/SUPERCOM.Calle.HTML">http://ei.cs.vt.edu/~history/SUPERCOM.Calle.HTML</a><a href="#fnref-asci_option_red" class="footnote">&#8617;</a></p></li>
-</ol>
-</div>
-</section>
+</section><section class='applet_container'><div class='applet' data-payload='{"command": "load_file /site/research/00_introduction/assets/summary_countries_top.csv", "fields": ["country, Xcitations"]}'></div></section><section><p>Paragraph text to test css formatting. Paragraph text to test css formatting. Paragraph text to test css formatting. Paragraph text to test css formatting. Paragraph text to test css formatting.</p>
+<p>[ page under development ]</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/00_introduction/assets/test.png' alt=' This is the caption'><div class='caption'> This is the caption</div></div></section>
</div>
<footer>
diff --git a/site/public/research/01_munich_security_conference/index.html b/site/public/research/01_munich_security_conference/index.html
new file mode 100644
index 00000000..0598b1eb
--- /dev/null
+++ b/site/public/research/01_munich_security_conference/index.html
@@ -0,0 +1,94 @@
+<!doctype html>
+<html>
+<head>
+ <title>MegaPixels: Transnational Data Analysis of Publicly Available Face Recognition Training Datasets</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="Transnational Data Analysis of Publicly Available Face Recognition Training Datasets" />
+ <meta property="og:title" content="MegaPixels: Transnational Data Analysis of Publicly Available Face Recognition Training Datasets"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/research/01_munich_security_conference/"/>
+ <meta property="og:site_name" content="MegaPixels" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+ <link rel='stylesheet' href='/assets/css/leaflet.css' />
+ <link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+
+ </a>
+ <div class='links'>
+ <a href="/datasets/">Datasets</a>
+ <a href="/about/">About</a>
+ <a href="/about/news">News</a>
+ </div>
+ </header>
+ <div class="content content-">
+
+ <section>
+ <h1>Transnational Data Analysis of Publicly Available Face Recognition Training Datasets</h1>
+ <div class='meta'>
+ <div>
+ <div class='gray'>Posted</div>
+ <div>2018-12-15</div>
+ </div>
+ <div>
+ <div class='gray'>By</div>
+ <div>Adam Harvey</div>
+ </div>
+
+ </div>
+ </section>
+
+ <section><p>Add subtitle</p>
+<h2>Transnational Data Analysis of Publicly Available Face Recognition Training Datasets</h2>
+</section><section class='applet_container'><div class='applet' data-payload='{"command": "load_file /site/research/msc/assets/embassy_counts_public.csv", "fields": ["Name, Images, Year, Gender, Description, URL"]}'></div></section>
+
+ </div>
+ <footer>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/news/">News</a></li>
+ <li><a href="/about/legal/">Legal &amp; Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
+ </footer>
+</body>
+
+<script src="/assets/js/dist/index.js"></script>
+</html> \ No newline at end of file
diff --git a/site/public/research/02_what_computers_can_see/index.html b/site/public/research/02_what_computers_can_see/index.html
index 67dcbb5e..907b73d6 100644
--- a/site/public/research/02_what_computers_can_see/index.html
+++ b/site/public/research/02_what_computers_can_see/index.html
@@ -70,7 +70,21 @@
</div>
</section>
- <section><p>A list of 100 things computer vision can see, eg:</p>
+ <section><p>Rosalind Picard on Affective Computing Podcast with Lex Fridman</p>
+<ul>
+<li>with an ordinary camera on your phone, we can read from a neutral face whether your heart is racing</li>
+<li>whether your breathing is becoming irregular and showing signs of stress</li>
+<li>how your heart rate variability power is changing, even when your heart is not necessarily accelerating</li>
+<li>we can tell things about your stress even if you have a blank face</li>
+</ul>
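+<p>A rough sketch of the principle behind the first point above, remote photoplethysmography: average the green channel over a face region and find the dominant frequency in the heart-rate band. The clip path and face region below are placeholders, and this illustrates the general technique, not Picard's implementation.</p>
+<pre><code># Sketch: estimate pulse from subtle color changes in face video (rPPG).
+import cv2
+import numpy as np
+
+cap = cv2.VideoCapture("face_clip.mp4")    # placeholder input clip
+fps = cap.get(cv2.CAP_PROP_FPS)
+greens = []
+while True:
+    ok, frame = cap.read()
+    if not ok:
+        break
+    roi = frame[100:300, 200:400]          # placeholder face region; real systems track the face
+    greens.append(roi[:, :, 1].mean())     # green channel carries most of the pulse signal
+
+signal = np.array(greens) - np.mean(greens)
+spectrum = np.abs(np.fft.rfft(signal))
+freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
+band = np.logical_and(freqs >= 0.7, ~(freqs > 4.0))   # plausible pulse range: 42-240 bpm
+bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
+print(f"estimated pulse: {bpm:.0f} bpm")</code></pre>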
+<p>in emotion studies</p>
+<ul>
+<li>when participants use a smartphone and multiple data types are collected to understand patterns of life, tomorrow's mood can be predicted</li>
+<li>the best results are better than 80% accurate at predicting tomorrow's mood levels</li>
+</ul>
+<p>A list of 100 things computer vision can see, eg:</p>
<ul>
<li>age, race, gender, ancestral origin, body mass index</li>
<li>eye color, hair color, facial hair, glasses</li>
@@ -84,7 +98,7 @@
<h2>From SenseTime paper</h2>
<p>Exploring Disentangled Feature Representation Beyond Face Identification</p>
<p>From <a href="https://arxiv.org/pdf/1804.03487.pdf">https://arxiv.org/pdf/1804.03487.pdf</a>
-The attribute IDs from 1 to 40 corre-spond to: ‘5 o Clock Shadow’, ‘Arched Eyebrows’, ‘Attrac-tive’, ‘Bags Under Eyes’, ‘Bald’, ‘Bangs’, ‘Big Lips’, ‘BigNose’, ‘Black Hair’, ‘Blond Hair’, ‘Blurry’, ‘Brown Hair’,‘Bushy Eyebrows’, ‘Chubby’, ‘Double Chin’, ‘Eyeglasses’,‘Goatee’, ‘Gray Hair’, ‘Heavy Makeup’, ‘High Cheek-bones’, ‘Male’, ‘Mouth Slightly Open’, ‘Mustache’, ‘Nar-row Eyes’, ‘No Beard’, ‘Oval Face’, ‘Pale Skin’, ‘PointyNose’, ‘Receding Hairline’, ‘Rosy Cheeks’, ‘Sideburns’,‘Smiling’, ‘Straight Hair’, ‘Wavy Hair’, ‘Wearing Ear-rings’, ‘Wearing Hat’, ‘Wearing Lipstick’, ‘Wearing Neck-lace’, ‘Wearing Necktie’ and ‘Young’. It’</p>
+The attribute IDs from 1 to 40 correspond to: ‘5 o'Clock Shadow’, ‘Arched Eyebrows’, ‘Attractive’, ‘Bags Under Eyes’, ‘Bald’, ‘Bangs’, ‘Big Lips’, ‘Big Nose’, ‘Black Hair’, ‘Blond Hair’, ‘Blurry’, ‘Brown Hair’, ‘Bushy Eyebrows’, ‘Chubby’, ‘Double Chin’, ‘Eyeglasses’, ‘Goatee’, ‘Gray Hair’, ‘Heavy Makeup’, ‘High Cheekbones’, ‘Male’, ‘Mouth Slightly Open’, ‘Mustache’, ‘Narrow Eyes’, ‘No Beard’, ‘Oval Face’, ‘Pale Skin’, ‘Pointy Nose’, ‘Receding Hairline’, ‘Rosy Cheeks’, ‘Sideburns’, ‘Smiling’, ‘Straight Hair’, ‘Wavy Hair’, ‘Wearing Earrings’, ‘Wearing Hat’, ‘Wearing Lipstick’, ‘Wearing Necklace’, ‘Wearing Necktie’ and ‘Young’.</p>
<h2>From PubFig Dataset</h2>
<ul>
<li>Male</li>
diff --git a/site/public/research/_from_1_to_100_pixels/index.html b/site/public/research/_from_1_to_100_pixels/index.html
new file mode 100644
index 00000000..74f334cc
--- /dev/null
+++ b/site/public/research/_from_1_to_100_pixels/index.html
@@ -0,0 +1,172 @@
+<!doctype html>
+<html>
+<head>
+ <title>MegaPixels: From 1 to 100 Pixels</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="High resolution insights from low resolution imagery" />
+ <meta property="og:title" content="MegaPixels: From 1 to 100 Pixels"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/research/_from_1_to_100_pixels/assets/intro.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/research/_from_1_to_100_pixels/"/>
+ <meta property="og:site_name" content="MegaPixels" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+ <link rel='stylesheet' href='/assets/css/leaflet.css' />
+ <link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+
+ </a>
+ <div class='links'>
+ <a href="/datasets/">Datasets</a>
+ <a href="/about/">About</a>
+ <a href="/about/news">News</a>
+ </div>
+ </header>
+ <div class="content content-">
+
+ <section>
+ <h1>From 1 to 100 Pixels</h1>
+ <div class='meta'>
+ <div>
+ <div class='gray'>Posted</div>
+ <div>2018-12-04</div>
+ </div>
+ <div>
+ <div class='gray'>By</div>
+ <div>Adam Harvey</div>
+ </div>
+
+ </div>
+ </section>
+
+ <section><h3>High resolution insights from low resolution data</h3>
+<p>This post will be about the meaning of "face". How do people define it? How do biometrics researchers define it? How has it changed during the last decade?</p>
+<p>What can you know from a very small amount of information?</p>
+<ul>
+<li>1 pixel grayscale</li>
+<li>2x2 pixels grayscale, font example, can encode letters</li>
+<li>3x3 pixels: can create a font</li>
+<li>4x4 pixels: how many variations</li>
+<li>8x8 pixels: many more variations still</li>
+<li>5x7 face recognition </li>
+<li>12x16 activity recognition</li>
+<li>6/5 (up to 124/106) pixels in height/width, and the average is 24/20 for QMUL SurvFace</li>
+<li>(prepare a Progan render of the QMUL dataset and TinyFaces)</li>
+<li>20x16 tiny faces paper</li>
+<li>20x20 MNIST handwritten images <a href="http://yann.lecun.com/exdb/mnist/">http://yann.lecun.com/exdb/mnist/</a></li>
+<li>24x24 haarcascade detector idealized images</li>
+<li>32x32 CIFAR image dataset</li>
+<li>40x40 can do emotion detection, face recognition at scale, 3d modeling of the face. include datasets with faces at this resolution including pedestrian.</li>
+<li>NIST standards begin to appear from 40x40, distinguish occular pixels</li>
+<li>need more material from 60-100</li>
+<li>60x60 show how texture emerges and pupils, eye color, higher resolution of features and compare to lower resolution faces</li>
+<li>100x100 all you need for medical diagnosis</li>
+<li>100x100 0.5% of one Instagram photo</li>
+</ul>
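+<p>A minimal sketch for reproducing this scale with Pillow (the input path is a placeholder): downsample a face crop to each resolution listed above and inspect what information survives.</p>
+<pre><code># Sketch: downsample a face crop to the resolutions listed above.
+from PIL import Image
+
+face = Image.open("face_crop.jpg").convert("L")   # placeholder image path
+for size in [(1, 1), (2, 2), (3, 3), (4, 4), (5, 7), (12, 16),
+             (20, 16), (24, 24), (32, 32), (40, 40), (60, 60), (100, 100)]:
+    tiny = face.resize(size, Image.BILINEAR)
+    # upscale back with nearest-neighbor so every variant is easy to compare
+    tiny.resize((200, 200), Image.NEAREST).save(f"face_{size[0]}x{size[1]}.png")</code></pre>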
+<p>Notes:</p>
+<ul>
+<li>Google FaceNet used images with (face?) sizes: Input sizes range from 96x96 pixels to 224x224pixels in our experiments. FaceNet: A Unified Embedding for Face Recognition and Clustering <a href="https://arxiv.org/pdf/1503.03832.pdf">https://arxiv.org/pdf/1503.03832.pdf</a></li>
+</ul>
+<p>Ideas:</p>
+<ul>
+<li>Find specific cases of facial resolution being used in legal cases, forensic investigations, or military footage</li>
+<li>resolution of boston bomber face</li>
+<li>resolution of the state of the union image</li>
+</ul>
+<h3>Research</h3>
+<ul>
+<li>NIST report on sres states several resolutions</li>
+<li>"Results show that the tested face recognition systems yielded similar performance for query sets with eye-to-eye distance from 60 pixels to 30 pixels" <sup class="footnote-ref" id="fnref-nist_sres"><a href="#fn-nist_sres">1</a></sup></li>
+</ul>
+<ul>
+<li>"Note that we only keep the images with a minimal side length of 80 pixels." and "a face will be labeled as “Ignore” if it is very difficult to be detected due to blurring, severe deformation and unrecognizable eyes, or the side length of its bounding box is less than 32 pixels." Ge_Detecting_Masked_Faces_CVPR_2017_paper.pdf </li>
+<li>IBM DiF: "Faces with region size less than 50x50 or inter-ocular distance of less than 30 pixels were discarded. Faces with non-frontal pose, or anything beyond being slightly tilted to the left or the right, were also discarded."</li>
+</ul>
+<p>As the resolution increases, images are still formatted as rectangular databases of 16-bit RGB tuples or 8-bit grayscale values</p>
+<p>To consider how visual privacy applies to real world surveillance situations, the first</p>
+<p>A single 8-bit grayscale pixel with 256 values is enough to represent the entire alphabet <code>a-zA-Z0-9</code> with room to spare.</p>
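+<p>As a sanity check on that claim, a minimal sketch mapping one 8-bit value to one character of the 62-symbol alphabet:</p>
+<pre><code># Sketch: one 8-bit grayscale pixel (256 values) vs. a 62-symbol alphabet.
+import string
+
+alphabet = string.ascii_lowercase + string.ascii_uppercase + string.digits
+assert len(alphabet) == 62          # fits into 256 values almost 4 times over
+
+encode = {ch: i for i, ch in enumerate(alphabet)}   # character -> pixel value
+decode = dict(enumerate(alphabet))                  # pixel value -> character
+print(decode[encode["Z"]])          # round-trips: prints "Z"</code></pre>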
+<p>A 2x2 pixel grayscale image contains 256^4, or roughly 4.3 billion, possible states.</p>
+<p>Using a face image of no more than 42 pixels (a 6x7 image), researchers [cite] were able to correctly distinguish between a group of 50 people.</p>
+<p>The likely outcome of face recognition research is that more data is needed to improve. Indeed, resolution is the determining factor for all biometric systems, both as training data to increase</p>
+<p>Pixels, typically considered the building blocks of images and videos, can also be plotted as a graph of sensor values corresponding to the intensity of RGB-calibrated sensors.</p>
+<p>Wi-Fi and cameras present elevated risks for transmitting videos and image documentation from conflict zones, high-risk situations, or even sharing on social media. How can new developments in computer vision also be used in reverse, as a counter-forensic tool, to minimize an individual's privacy risk?</p>
+<p>As the global Internet becomes increasingly efficient at turning itself into a giant dataset for machine learning, forensics, and data analysis, it would be prudent to also consider tools for decreasing the resolution. The Visual Defense module is just that. What are new ways to minimize the adverse effects of surveillance by dulling the blade? For example, a research paper showed that by decreasing a face to 12x16 pixels it was still possible to achieve 98% accuracy with 50 people. This is clearly an example of</p>
+<p>This research module, tentatively called Visual Defense Tools, aims to explore the</p>
+<h3>Prior Research</h3>
+<ul>
+<li>MPI visual privacy advisor</li>
+<li>NIST: super resolution</li>
+<li>YouTube blur tool</li>
+<li>WITNESS: blur tool</li>
+<li>Pixellated text </li>
+<li>CV Dazzle</li>
+<li>Bellingcat guide to geolocation</li>
+<li>Peng! magic passport</li>
+</ul>
+<h3>Notes</h3>
+<ul>
+<li>In China, out of the approximately 200 million surveillance cameras, only about 15% have enough resolution for face recognition.</li>
+<li>In Apple's FaceID security guide, the probability of someone else's face unlocking your phone is 1 out of 1,000,000. </li>
+<li>In England, the Metropolitan Police reported a false-positive match rate of 98% when attempting to use face recognition to locate wanted criminals. </li>
+<li>In a face recognition trial at Berlin's Sudkreuz station, the false-match rate was 20%. </li>
+</ul>
+<p>What these examples illustrate is that face recognition is anything but absolute. In a 2017 talk, Jason Matheny, the former director of IARPA, admitted that face recognition is so brittle it can be subverted by using a magic marker and drawing "a few dots on your forehead". In fact, face recognition is a misleading term. Face recognition is a search engine for faces that can only ever show you the most likely match. This presents a real threat to privacy.</p>
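+<p>A minimal sketch of that idea, with hypothetical names and an arbitrary threshold: the embeddings could come from any face model, and "recognition" reduces to nearest-neighbor search over feature vectors plus a cutoff.</p>
+<pre><code># Sketch: face "recognition" as nearest-neighbor search over feature vectors.
+# Gallery, names, and threshold are hypothetical placeholders.
+import numpy as np
+
+def cosine(a, b):
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+gallery = {name: np.random.rand(128) for name in ["alice", "bob", "carol"]}
+probe = np.random.rand(128)          # embedding of the face being searched
+
+scores = {name: cosine(probe, vec) for name, vec in gallery.items()}
+best = max(scores, key=scores.get)
+THRESHOLD = 0.6                      # arbitrary operating point
+# The system never "recognizes"; it returns the most likely match, or nothing.
+print(best if scores[best] >= THRESHOLD else "no match")</code></pre>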
+<p>Globally, iPhone users relying on FaceID and TouchID to protect their information unwittingly agree to a 1-in-1,000,000 probability of a false match.</p>
+<div class="footnotes">
+<hr>
+<ol><li id="fn-nist_sres"><p>NIST 906932. Performance Assessment of Face Recognition Using Super-Resolution. Shuowen Hu, Robert Maschal, S. Susan Young, Tsai Hong Hong, Jonathon P. Phillips<a href="#fnref-nist_sres" class="footnote">&#8617;</a></p></li>
+</ol>
+</div>
+</section>
+
+ </div>
+ <footer>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/news/">News</a></li>
+ <li><a href="/about/legal/">Legal &amp; Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
+ </footer>
+</body>
+
+<script src="/assets/js/dist/index.js"></script>
+</html> \ No newline at end of file
diff --git a/site/public/research/_introduction/index.html b/site/public/research/_introduction/index.html
new file mode 100644
index 00000000..66905247
--- /dev/null
+++ b/site/public/research/_introduction/index.html
@@ -0,0 +1,106 @@
+<!doctype html>
+<html>
+<head>
+ <title>MegaPixels: Introducing MegaPixels</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="Introduction to Megapixels" />
+ <meta property="og:title" content="MegaPixels: Introducing MegaPixels"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/research/_introduction/"/>
+ <meta property="og:site_name" content="MegaPixels" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+ <link rel='stylesheet' href='/assets/css/leaflet.css' />
+ <link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+
+ </a>
+ <div class='links'>
+ <a href="/datasets/">Datasets</a>
+ <a href="/about/">About</a>
+ <a href="/about/news">News</a>
+ </div>
+ </header>
+ <div class="content content-dataset">
+
+ <section>
+ <h1>Introducing MegaPixels</h1>
+ <div class='meta'>
+ <div>
+ <div class='gray'>Posted</div>
+ <div>2018-12-15</div>
+ </div>
+ <div>
+ <div class='gray'>By</div>
+ <div>Adam Harvey</div>
+ </div>
+
+ </div>
+ </section>
+
+ <section><p>Face recognition has become the focal point for ...</p>
+<p>Add 68pt landmarks animation</p>
+<p>But biometric currency is ...</p>
+<p>Add rotation 3D head</p>
+<p>Inflationary...</p>
+<p>Add Theresa May 3D</p>
+<p>(commission for CPDP)</p>
+<p>Add info from the AI Traps talk</p>
+<ul>
+<li>Posted: Dec. 15</li>
+<li>Author: Adam Harvey</li>
+</ul>
+</section><section class='applet_container'><div class='applet' data-payload='{"command": "load_file /site/research/00_introduction/assets/summary_countries_top.csv", "fields": ["country, Xcitations"]}'></div></section><section><p>Paragraph text to test css formatting. Paragraph text to test css formatting. Paragraph text to test css formatting. Paragraph text to test css formatting. Paragraph text to test css formatting.</p>
+<p>[ page under development ]</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/_introduction/assets/test.png' alt=' This is the caption'><div class='caption'> This is the caption</div></div></section>
+
+ </div>
+ <footer>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/news/">News</a></li>
+ <li><a href="/about/legal/">Legal &amp; Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
+ </footer>
+</body>
+
+<script src="/assets/js/dist/index.js"></script>
+</html> \ No newline at end of file
diff --git a/site/public/research/_what_computers_can_see/index.html b/site/public/research/_what_computers_can_see/index.html
new file mode 100644
index 00000000..003dd733
--- /dev/null
+++ b/site/public/research/_what_computers_can_see/index.html
@@ -0,0 +1,357 @@
+<!doctype html>
+<html>
+<head>
+ <title>MegaPixels: What Computers Can See</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="What Computers Can See" />
+ <meta property="og:title" content="MegaPixels: What Computers Can See"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/research/_what_computers_can_see/"/>
+ <meta property="og:site_name" content="MegaPixels" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+ <link rel='stylesheet' href='/assets/css/leaflet.css' />
+ <link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+
+ </a>
+ <div class='links'>
+ <a href="/datasets/">Datasets</a>
+ <a href="/about/">About</a>
+ <a href="/about/news">News</a>
+ </div>
+ </header>
+ <div class="content content-">
+
+ <section>
+ <h1>What Computers Can See</h1>
+ <div class='meta'>
+ <div>
+ <div class='gray'>Posted</div>
+ <div>2018-12-15</div>
+ </div>
+ <div>
+ <div class='gray'>By</div>
+ <div>Adam Harvey</div>
+ </div>
+
+ </div>
+ </section>
+
+ <section><p>Rosalind Picard on Affective Computing Podcast with Lex Fridman</p>
+<ul>
+<li>with an ordinary camera on your phone, we can read from a neutral face whether your heart is racing</li>
+<li>whether your breathing is becoming irregular and showing signs of stress</li>
+<li>how your heart rate variability power is changing, even when your heart is not necessarily accelerating</li>
+<li>we can tell things about your stress even if you have a blank face</li>
+</ul>
+<p>in emotion studies</p>
+<ul>
+<li>when participants use a smartphone and multiple data types are collected to understand patterns of life, tomorrow's mood can be predicted</li>
+<li>the best results are better than 80% accurate at predicting tomorrow's mood levels</li>
+</ul>
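+<p>A toy sketch of that setup; the data here is synthetic and the features are placeholders (the real studies use weeks of logged sensor data), but the pipeline shape is the same: daily aggregates in, next-day mood out.</p>
+<pre><code># Sketch: predicting tomorrow's self-reported mood from today's phone data.
+import numpy as np
+from sklearn.linear_model import LogisticRegression
+
+rng = np.random.default_rng(0)
+X = rng.random((200, 4))   # placeholder features: sleep, screen time, mobility, calls
+y = (X[:, 0] + 0.5 * X[:, 2] + 0.2 * rng.random(200) > 1.0).astype(int)  # synthetic mood label
+
+model = LogisticRegression().fit(X[:150], y[:150])
+print("held-out accuracy:", model.score(X[150:], y[150:]))</code></pre>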
+<p>A list of 100 things computer vision can see, eg:</p>
+<ul>
+<li>age, race, gender, ancestral origin, body mass index</li>
+<li>eye color, hair color, facial hair, glasses</li>
+<li>beauty score, </li>
+<li>intelligence</li>
+<li>what you're looking at</li>
+<li>medical conditions</li>
+<li>tired, drowsiness in car</li>
+<li>affectiva: interest in product, intent to buy</li>
+</ul>
+<h2>From SenseTime paper</h2>
+<p>Exploring Disentangled Feature Representation Beyond Face Identification</p>
+<p>From <a href="https://arxiv.org/pdf/1804.03487.pdf">https://arxiv.org/pdf/1804.03487.pdf</a>
+The attribute IDs from 1 to 40 correspond to: ‘5 o'Clock Shadow’, ‘Arched Eyebrows’, ‘Attractive’, ‘Bags Under Eyes’, ‘Bald’, ‘Bangs’, ‘Big Lips’, ‘Big Nose’, ‘Black Hair’, ‘Blond Hair’, ‘Blurry’, ‘Brown Hair’, ‘Bushy Eyebrows’, ‘Chubby’, ‘Double Chin’, ‘Eyeglasses’, ‘Goatee’, ‘Gray Hair’, ‘Heavy Makeup’, ‘High Cheekbones’, ‘Male’, ‘Mouth Slightly Open’, ‘Mustache’, ‘Narrow Eyes’, ‘No Beard’, ‘Oval Face’, ‘Pale Skin’, ‘Pointy Nose’, ‘Receding Hairline’, ‘Rosy Cheeks’, ‘Sideburns’, ‘Smiling’, ‘Straight Hair’, ‘Wavy Hair’, ‘Wearing Earrings’, ‘Wearing Hat’, ‘Wearing Lipstick’, ‘Wearing Necklace’, ‘Wearing Necktie’ and ‘Young’.</p>
+<h2>From PubFig Dataset</h2>
+<ul>
+<li>Male</li>
+<li>Asian</li>
+<li>White</li>
+<li>Black</li>
+<li>Baby</li>
+<li>Child</li>
+<li>Youth</li>
+<li>Middle Aged</li>
+<li>Senior</li>
+<li>Black Hair</li>
+<li>Blond Hair</li>
+<li>Brown Hair</li>
+<li>Bald</li>
+<li>No Eyewear</li>
+<li>Eyeglasses</li>
+<li>Sunglasses</li>
+<li>Mustache</li>
+<li>Smiling</li>
+<li>Frowning</li>
+<li>Chubby</li>
+<li>Blurry</li>
+<li>Harsh Lighting</li>
+<li>Flash</li>
+<li>Soft Lighting</li>
+<li>Outdoor</li>
+<li>Curly Hair</li>
+<li>Wavy Hair</li>
+<li>Straight Hair</li>
+<li>Receding Hairline</li>
+<li>Bangs</li>
+<li>Sideburns</li>
+<li>Fully Visible Forehead </li>
+<li>Partially Visible Forehead </li>
+<li>Obstructed Forehead</li>
+<li>Bushy Eyebrows </li>
+<li>Arched Eyebrows</li>
+<li>Narrow Eyes</li>
+<li>Eyes Open</li>
+<li>Big Nose</li>
+<li>Pointy Nose</li>
+<li>Big Lips</li>
+<li>Mouth Closed</li>
+<li>Mouth Slightly Open</li>
+<li>Mouth Wide Open</li>
+<li>Teeth Not Visible</li>
+<li>No Beard</li>
+<li>Goatee </li>
+<li>Round Jaw</li>
+<li>Double Chin</li>
+<li>Wearing Hat</li>
+<li>Oval Face</li>
+<li>Square Face</li>
+<li>Round Face </li>
+<li>Color Photo</li>
+<li>Posed Photo</li>
+<li>Attractive Man</li>
+<li>Attractive Woman</li>
+<li>Indian</li>
+<li>Gray Hair</li>
+<li>Bags Under Eyes</li>
+<li>Heavy Makeup</li>
+<li>Rosy Cheeks</li>
+<li>Shiny Skin</li>
+<li>Pale Skin</li>
+<li>5 o' Clock Shadow</li>
+<li>Strong Nose-Mouth Lines</li>
+<li>Wearing Lipstick</li>
+<li>Flushed Face</li>
+<li>High Cheekbones</li>
+<li>Brown Eyes</li>
+<li>Wearing Earrings</li>
+<li>Wearing Necktie</li>
+<li>Wearing Necklace</li>
+</ul>
+<pre><code># Download the 20 ADL dataset videos
+for i in {1..9};   do wget http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_0$i.MP4; done
+for i in {10..20}; do wget http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_$i.MP4; done</code></pre>
+<h2>From Market 1501</h2>
+<p>The 27 attributes are:</p>
+<table>
+<thead><tr>
+<th style="text-align:center">attribute</th>
+<th style="text-align:center">representation in file</th>
+<th style="text-align:center">label</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td style="text-align:center">gender</td>
+<td style="text-align:center">gender</td>
+<td style="text-align:center">male(1), female(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">hair length</td>
+<td style="text-align:center">hair</td>
+<td style="text-align:center">short hair(1), long hair(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">sleeve length</td>
+<td style="text-align:center">up</td>
+<td style="text-align:center">long sleeve(1), short sleeve(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">length of lower-body clothing</td>
+<td style="text-align:center">down</td>
+<td style="text-align:center">long lower body clothing(1), short(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">type of lower-body clothing</td>
+<td style="text-align:center">clothes</td>
+<td style="text-align:center">dress(1), pants(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">wearing hat</td>
+<td style="text-align:center">hat</td>
+<td style="text-align:center">no(1), yes(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">carrying backpack</td>
+<td style="text-align:center">backpack</td>
+<td style="text-align:center">no(1), yes(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">carrying bag</td>
+<td style="text-align:center">bag</td>
+<td style="text-align:center">no(1), yes(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">carrying handbag</td>
+<td style="text-align:center">handbag</td>
+<td style="text-align:center">no(1), yes(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">age</td>
+<td style="text-align:center">age</td>
+<td style="text-align:center">young(1), teenager(2), adult(3), old(4)</td>
+</tr>
+<tr>
+<td style="text-align:center">8 color of upper-body clothing</td>
+<td style="text-align:center">upblack, upwhite, upred, uppurple, upyellow, upgray, upblue, upgreen</td>
+<td style="text-align:center">no(1), yes(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">9 color of lower-body clothing</td>
+<td style="text-align:center">downblack, downwhite, downpink, downpurple, downyellow, downgray, downblue, downgreen,downbrown</td>
+<td style="text-align:center">no(1), yes(2)</td>
+</tr>
+</tbody>
+</table>
+<p>source: <a href="https://github.com/vana77/Market-1501_Attribute/blob/master/README.md">https://github.com/vana77/Market-1501_Attribute/blob/master/README.md</a></p>
+<h2>From DukeMTMC</h2>
+<p>The 23 attributes are:</p>
+<table>
+<thead><tr>
+<th style="text-align:center">attribute</th>
+<th style="text-align:center">representation in file</th>
+<th style="text-align:center">label</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td style="text-align:center">gender</td>
+<td style="text-align:center">gender</td>
+<td style="text-align:center">male(1), female(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">length of upper-body clothing</td>
+<td style="text-align:center">top</td>
+<td style="text-align:center">short upper body clothing(1), long(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">wearing boots</td>
+<td style="text-align:center">boots</td>
+<td style="text-align:center">no(1), yes(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">wearing hat</td>
+<td style="text-align:center">hat</td>
+<td style="text-align:center">no(1), yes(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">carrying backpack</td>
+<td style="text-align:center">backpack</td>
+<td style="text-align:center">no(1), yes(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">carrying bag</td>
+<td style="text-align:center">bag</td>
+<td style="text-align:center">no(1), yes(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">carrying handbag</td>
+<td style="text-align:center">handbag</td>
+<td style="text-align:center">no(1), yes(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">color of shoes</td>
+<td style="text-align:center">shoes</td>
+<td style="text-align:center">dark(1), light(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">8 color of upper-body clothing</td>
+<td style="text-align:center">upblack, upwhite, upred, uppurple, upgray, upblue, upgreen, upbrown</td>
+<td style="text-align:center">no(1), yes(2)</td>
+</tr>
+<tr>
+<td style="text-align:center">7 color of lower-body clothing</td>
+<td style="text-align:center">downblack, downwhite, downred, downgray, downblue, downgreen, downbrown</td>
+<td style="text-align:center">no(1), yes(2)</td>
+</tr>
+</tbody>
+</table>
+<p>source: <a href="https://github.com/vana77/DukeMTMC-attribute/blob/master/README.md">https://github.com/vana77/DukeMTMC-attribute/blob/master/README.md</a></p>
+<h2>From H3D Dataset</h2>
+<ul>
+<li>The joints and other keypoints (eyes, ears, nose, shoulders, elbows, wrists, hips, knees and ankles)</li>
+<li>The 3D pose inferred from the keypoints</li>
+<li>Visibility boolean for each keypoint</li>
+<li>Region annotations (upper clothes, lower clothes, dress, socks, shoes, hands, gloves, neck, face, hair, hat, sunglasses, bag, occluder)</li>
+<li>Body type (male, female or child)</li>
+</ul>
+<p>source: <a href="https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/shape/h3d/">https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/shape/h3d/</a></p>
+<h2>From Leeds Sports Pose</h2>
+<ul>
+<li>Right ankle</li>
+<li>Right knee</li>
+<li>Right hip</li>
+<li>Left hip</li>
+<li>Left knee</li>
+<li>Left ankle</li>
+<li>Right wrist</li>
+<li>Right elbow</li>
+<li>Right shoulder</li>
+<li>Left shoulder</li>
+<li>Left elbow</li>
+<li>Left wrist</li>
+<li>Neck</li>
+<li>Head top</li>
+</ul>
+<p>source: <a href="http://web.archive.org/web/20170915023005/sam.johnson.io/research/lsp.html">http://web.archive.org/web/20170915023005/sam.johnson.io/research/lsp.html</a></p>
+</section>
+
+ </div>
+ <footer>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/news/">News</a></li>
+ <li><a href="/about/legal/">Legal &amp; Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
+ </footer>
+</body>
+
+<script src="/assets/js/dist/index.js"></script>
+</html> \ No newline at end of file
diff --git a/site/public/research/index.html b/site/public/research/index.html
index 007431bd..571b8230 100644
--- a/site/public/research/index.html
+++ b/site/public/research/index.html
@@ -56,7 +56,7 @@
<div class="content content-">
<section><h1>Research Blog</h1>
-</section>
+</section><div class='research_index'><a href='/research/_introduction/'><section class='wide'><img src='data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==' alt='Research post' /><section><h1>Introducing MegaPixels</h1><h2></h2></section></section></a><a href='/research/munich_security_conference/'><section class='wide'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/background.jpg' alt='Research post' /><section><h1>Transnational Data Analysis of Publicly Available Face Recognition Training Datasets</h1><h2></h2></section></section></a></div>
</div>
<footer>
diff --git a/site/public/research/munich_security_conference/index.html b/site/public/research/munich_security_conference/index.html
new file mode 100644
index 00000000..499d8e9f
--- /dev/null
+++ b/site/public/research/munich_security_conference/index.html
@@ -0,0 +1,123 @@
+<!doctype html>
+<html>
+<head>
+ <title>MegaPixels: MSC</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="Analyzing the Transnational Flow of Facial Recognition Data" />
+ <meta property="og:title" content="MegaPixels: MSC"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/research/munich_security_conference/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/research/munich_security_conference/"/>
+ <meta property="og:site_name" content="MegaPixels" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+ <link rel='stylesheet' href='/assets/css/leaflet.css' />
+ <link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+
+ </a>
+ <div class='links'>
+ <a href="/datasets/">Datasets</a>
+ <a href="/about/">About</a>
+ <a href="/about/news">News</a>
+ </div>
+ </header>
+ <div class="content content-dataset">
+
+ <section>
+ <h1>MSC</h1>
+ <div class='meta'>
+ <div>
+ <div class='gray'>Posted</div>
+        <div>2019-04-18</div>
+ </div>
+ <div>
+ <div class='gray'>By</div>
+ <div>Adam Harvey</div>
+ </div>
+
+ </div>
+ </section>
+
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Analyzing the Transnational Flow of Facial Recognition Data</span></div><div class='hero_subdesc'><span class='bgpad'>Where does face data originate and who's using it?
+</span></div></div></section><section><p>[page under development]</p>
+<p>Intro paragraph.</p>
+<p>[ add montage of extracted faces here]</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/montage_placeholder.jpg' alt=' Placeholder caption'><div class='caption'> Placeholder caption</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/bar_placeholder.png' alt=' Placeholder caption'><div class='caption'> Placeholder caption</div></div></section><section><div class='columns columns-2'><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/pie_placeholder.png' alt=' Placeholder caption'><div class='caption'> Placeholder caption</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/pie_placeholder.png' alt=' Placeholder caption'><div class='caption'> Placeholder caption</div></div></section></div></section><section>
+
+ <div class="hr-wave-holder">
+ <div class="hr-wave-line hr-wave-line1"></div>
+ <div class="hr-wave-line hr-wave-line2"></div>
+ </div>
+
+ <h2>Supplementary Information</h2>
+
+</section><section><p>[ add a download button for CSV data ]</p>
+</section><section class='applet_container'><div class='applet' data-payload='{"command": "load_file /site/research/munich_security_conference/assets/embassy_counts_public.csv", "fields": ["Images, Dataset, Embassy, Flickr ID, URL, Guest, Host"]}'></div></section><section>
+
+ <h4>Cite Our Work</h4>
+ <p>
+
+ If you find this analysis helpful, please cite our work:
+
+<pre id="cite-bibtex">
+@online{megapixels,
+  author       = {Harvey, Adam and LaPlace, Jules},
+ title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets},
+ year = 2019,
+ url = {https://megapixels.cc/},
+ urldate = {2019-04-18}
+}</pre>
+
+ </p>
+</section>
+
+ </div>
+ <footer>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/news/">News</a></li>
+ <li><a href="/about/legal/">Legal &amp; Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
+ </footer>
+</body>
+
+<script src="/assets/js/dist/index.js"></script>
+</html> \ No newline at end of file