Diffstat (limited to 'site/public/research')
| -rw-r--r-- | site/public/research/_from_1_to_100_pixels/index.html | 158 |
| -rw-r--r-- | site/public/research/_introduction/index.html | 92 |
| -rw-r--r-- | site/public/research/_what_computers_can_see/index.html | 343 |
| -rw-r--r-- | site/public/research/index.html | 87 |
| -rw-r--r-- | site/public/research/munich_security_conference/index.html | 128 |
5 files changed, 808 insertions, 0 deletions
diff --git a/site/public/research/_from_1_to_100_pixels/index.html b/site/public/research/_from_1_to_100_pixels/index.html
new file mode 100644
index 00000000..a978b264
--- /dev/null
+++ b/site/public/research/_from_1_to_100_pixels/index.html
@@ -0,0 +1,158 @@
+<!doctype html>
+<html>
+<head>
+ <title>MegaPixels: From 1 to 100 Pixels</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="High resolution insights from low resolution imagery" />
+ <meta property="og:title" content="MegaPixels: From 1 to 100 Pixels"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created &quot;in the wild&quot;"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/research/_from_1_to_100_pixels/assets/intro.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/research/_from_1_to_100_pixels/"/>
+ <meta property="og:site_name" content="MegaPixels" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+ <link rel='stylesheet' href='/assets/css/leaflet.css' />
+ <link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+
+ </a>
+ <div class='links'>
+ <a href="/datasets/">Datasets</a>
+ <a href="/about/">About</a>
+ <a href="/research">Research</a>
+ </div>
+ </header>
+ <div class="content content-">
+
+ <section><h1>From 1 to 100 Pixels</h1>
+<h3>High resolution insights from low resolution data</h3>
+<p>This post will be about the meaning of "face". How do people define it? How do biometrics researchers define it?
How has it changed during the last decade?</p>
+<p>What can you know from a very small amount of information?</p>
+<ul>
+<li>1 pixel grayscale</li>
+<li>2x2 pixels grayscale, font example, can encode letters</li>
+<li>3x3 pixels: can create a font</li>
+<li>4x4 pixels: how many variations</li>
+<li>8x8 pixels: many more variations</li>
+<li>5x7 face recognition</li>
+<li>12x16 activity recognition</li>
+<li>6/5 (up to 124/106) pixels in height/width, and the average is 24/20 for QMUL SurvFace</li>
+<li>(prepare a ProGAN render of the QMUL dataset and TinyFaces)</li>
+<li>20x16 tiny faces paper</li>
+<li>20x20 MNIST handwritten images <a href="http://yann.lecun.com/exdb/mnist/">http://yann.lecun.com/exdb/mnist/</a></li>
+<li>24x24 Haar cascade detector idealized images</li>
+<li>32x32 CIFAR image dataset</li>
+<li>40x40 can do emotion detection, face recognition at scale, 3D modeling of the face. Include datasets with faces at this resolution, including pedestrian datasets.</li>
+<li>NIST standards begin to appear from 40x40, distinguish ocular pixels</li>
+<li>need more material from 60-100</li>
+<li>60x60 show how texture emerges and pupils, eye color, higher resolution of features and compare to lower resolution faces</li>
+<li>100x100 all you need for medical diagnosis</li>
+<li>100x100 is 0.5% of one Instagram photo</li>
+</ul>
+<p>Notes:</p>
+<ul>
+<li>Google FaceNet used images with (face?) sizes: "Input sizes range from 96x96 pixels to 224x224 pixels in our experiments."
FaceNet: A Unified Embedding for Face Recognition and Clustering <a href="https://arxiv.org/pdf/1503.03832.pdf">https://arxiv.org/pdf/1503.03832.pdf</a></li>
+</ul>
+<p>Ideas:</p>
+<ul>
+<li>Find specific cases of facial resolution being used in legal cases, forensic investigations, or military footage</li>
+<li>resolution of the Boston bomber's face</li>
+<li>resolution of the State of the Union image</li>
+</ul>
+<h3>Research</h3>
+<ul>
+<li>NIST report on super-resolution (sres) states several resolutions</li>
+<li>"Results show that the tested face recognition systems yielded similar performance for query sets with eye-to-eye distance from 60 pixels to 30 pixels" <sup class="footnote-ref" id="fnref-nist_sres"><a href="#fn-nist_sres">1</a></sup></li>
+</ul>
+<ul>
+<li>"Note that we only keep the images with a minimal side length of 80 pixels." and "a face will be labeled as “Ignore” if it is very difficult to be detected due to blurring, severe deformation and unrecognizable eyes, or the side length of its bounding box is less than 32 pixels." (Ge_Detecting_Masked_Faces_CVPR_2017_paper.pdf)</li>
+<li>IBM DiF: "Faces with region size less than 50x50 or inter-ocular distance of less than 30 pixels were discarded. Faces with non-frontal pose, or anything beyond being slightly tilted to the left or the right, were also discarded."</li>
+</ul>
+<p>As the resolution
+formatted as rectangular databases of 16-bit RGB tuples or 8-bit grayscale values</p>
+<p>To consider how visual privacy applies to real-world surveillance situations, the first</p>
+<p>A single 8-bit grayscale pixel with 256 values is enough to represent the entire alphabet <code>a-Z0-9</code> with room to spare.</p>
+<p>A 2x2 pixel image contains</p>
+<p>Using no more than a 42-pixel (6x7) face image, researchers [cite] were able to correctly distinguish among a group of 50 people. Yet</p>
+<p>The likely outcome of face recognition research is that more data is needed to improve.
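The draft above claims that a single 8-bit grayscale pixel, with its 256 values, can encode the alphabet <code>a-Z0-9</code> with room to spare, and asks how many variations tiny images can hold. A minimal Python sketch quantifying both points (the character-to-gray-value mapping is a hypothetical encoding invented for illustration, not from the source):

```python
import string

# The 62-symbol alphanumeric alphabet fits easily into one 8-bit pixel,
# which can take 256 distinct gray values (194 values to spare).
ALPHABET = string.ascii_letters + string.digits  # a-z, A-Z, 0-9

def char_to_pixel(c: str) -> int:
    """Encode one character as a single 8-bit gray value (hypothetical scheme)."""
    return ALPHABET.index(c)

def pixel_to_char(v: int) -> str:
    """Decode a gray value back to its character."""
    return ALPHABET[v]

print(len(ALPHABET), "symbols in one pixel")   # 62 symbols in one pixel
print(pixel_to_char(char_to_pixel("Q")))       # Q

# How many distinct images exist at tiny resolutions (8-bit grayscale)?
for w, h in [(1, 1), (2, 2), (3, 3), (4, 4)]:
    print(f"{w}x{h}: 256**{w * h} = {256 ** (w * h):,} possible images")
```

Even a 2x2 grayscale image already has over four billion possible states, which is why very low resolutions can still carry usable signal.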
Indeed, resolution is the determining factor for all biometric systems, both as training data to increase</p>
+<p>Pixels, typically considered the building blocks of images and videos, can also be plotted as a graph of sensor values corresponding to the intensity of RGB-calibrated sensors.</p>
+<p>Wi-Fi and cameras present elevated risks for transmitting videos and image documentation from conflict zones, high-risk situations, or even sharing on social media. How can new developments in computer vision also be used in reverse, as a counter-forensic tool, to minimize an individual's privacy risk?</p>
+<p>As the global Internet becomes increasingly efficient at turning itself into a giant dataset for machine learning, forensics, and data analysis, it would be prudent to also consider tools for decreasing the resolution. The Visual Defense module is just that. What are new ways to minimize the adverse effects of surveillance by dulling the blade? For example, a research paper showed that even with faces reduced to 12x16 pixels it was possible to achieve 98% recognition accuracy among a group of 50 people. This is clearly an example of</p>
+<p>This research module, tentatively called Visual Defense Tools, aims to explore the</p>
+<h3>Prior Research</h3>
+<ul>
+<li>MPI visual privacy advisor</li>
+<li>NIST: super resolution</li>
+<li>YouTube blur tool</li>
+<li>WITNESS: blur tool</li>
+<li>Pixelated text</li>
+<li>CV Dazzle</li>
+<li>Bellingcat guide to geolocation</li>
+<li>Peng! magic passport</li>
+</ul>
+<h3>Notes</h3>
+<ul>
+<li>In China, of the approximately 200 million surveillance cameras, only about 15% have enough resolution for face recognition.</li>
+<li>In Apple's FaceID security guide, the probability of someone else's face unlocking your phone is 1 out of 1,000,000.</li>
+<li>In England, the Metropolitan Police reported a false-positive match rate of 98% when attempting to use face recognition to locate wanted criminals.
</li>
+<li>In a face recognition trial at Berlin's Südkreuz station, the false-match rate was 20%.</li>
+</ul>
+<p>What these examples illustrate is that face recognition is anything but absolute. In a 2017 talk, Jason Matheny, the former director of IARPA, admitted that face recognition is so brittle it can be subverted by using a magic marker and drawing "a few dots on your forehead". In fact, face recognition is a misleading term. Face recognition is a search engine for faces that can only ever show you the most likely match. This presents a real threat to privacy and lends</p>
+<p>Globally, iPhone users relying on FaceID and TouchID to protect their information unwittingly agree to a 1/1,000,000 false-match probability.</p>
+<div class="footnotes">
+<hr>
+<ol><li id="fn-nist_sres"><p>NIST 906932. Performance Assessment of Face Recognition Using Super-Resolution. Shuowen Hu, Robert Maschal, S. Susan Young, Tsai Hong Hong, Jonathon P. Phillips <a href="#fnref-nist_sres" class="footnote">↩</a></p></li>
+</ol>
+</div>
+</section>
+
+ </div>
+ <footer>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/news/">News</a></li>
+ <li><a href="/about/legal/">Legal & Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels ©2017-19 <a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from <a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
+ </footer>
+</body>
+
+<script src="/assets/js/dist/index.js"></script>
+</html>
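The notes in the post above cite Apple's FaceID figure of a 1-in-1,000,000 chance that a random other person's face unlocks your phone. A few lines of arithmetic put that rate in perspective at population scale; the crowd sizes below are illustrative assumptions, not figures from the source:

```python
# Apple's stated FaceID false-match probability (from the notes above).
P_FALSE_MATCH = 1 / 1_000_000

def p_at_least_one_match(n_faces: int, p: float = P_FALSE_MATCH) -> float:
    """Probability that at least one of n independent random faces false-matches."""
    return 1.0 - (1.0 - p) ** n_faces

# Illustrative crowd sizes (assumed for this sketch):
for n in (1, 10_000, 1_000_000):
    print(f"{n:>9,} random faces -> P(any false match) = {p_at_least_one_match(n):.6f}")
```

Across a million independent attempts the chance of at least one false match rises to roughly 63%, which is why "1 in a million" does not mean "effectively never" once a system is deployed at population scale.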
\ No newline at end of file
diff --git a/site/public/research/_introduction/index.html b/site/public/research/_introduction/index.html
new file mode 100644
index 00000000..8b17c016
--- /dev/null
+++ b/site/public/research/_introduction/index.html
@@ -0,0 +1,92 @@
+<!doctype html>
+<html>
+<head>
+ <title>MegaPixels: Introducing MegaPixels</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="Introduction to Megapixels" />
+ <meta property="og:title" content="MegaPixels: Introducing MegaPixels"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created &quot;in the wild&quot;"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/research/_introduction/"/>
+ <meta property="og:site_name" content="MegaPixels" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon"
sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+ <link rel='stylesheet' href='/assets/css/leaflet.css' />
+ <link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+
+ </a>
+ <div class='links'>
+ <a href="/datasets/">Datasets</a>
+ <a href="/about/">About</a>
+ <a href="/research">Research</a>
+ </div>
+ </header>
+ <div class="content content-dataset">
+
+ <section><h1>Introduction</h1>
+<p>Face recognition has become the focal point for ...</p>
+<p>Add 68pt landmarks animation</p>
+<p>But biometric currency is ...</p>
+<p>Add rotation 3D head</p>
+<p>Inflationary...</p>
+<p>Add Theresa May 3D</p>
+<p>(commission for CPDP)</p>
+<p>Add info from the AI Traps talk</p>
+<ul>
+<li>Posted: Dec.
15</li> +<li>Author: Adam Harvey</li> +</ul> +</section><section class='applet_container'><div class='applet' data-payload='{"command": "load_file /site/research/00_introduction/assets/summary_countries_top.csv", "fields": ["Headings: country, Xcitations"]}'></div></section><section><p>Paragraph text to test css formatting. Paragraph text to test css formatting. Paragraph text to test css formatting. Paragraph text to test css formatting. Paragraph text to test css formatting.</p> +<p>[ page under development ]</p> +</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/_introduction/assets/test.png' alt=' This is the caption'><div class='caption'> This is the caption</div></div></section> + + </div> + <footer> + <ul class="footer-left"> + <li><a href="/">MegaPixels.cc</a></li> + <li><a href="/datasets/">Datasets</a></li> + <li><a href="/about/">About</a></li> + <li><a href="/about/news/">News</a></li> + <li><a href="/about/legal/">Legal & Privacy</a></li> + </ul> + <ul class="footer-right"> + <li>MegaPixels ©2017-19 <a href="https://ahprojects.com">Adam R. Harvey</a></li> + <li>Made with support from <a href="https://mozilla.org">Mozilla</a></li> + </ul> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
\ No newline at end of file
diff --git a/site/public/research/_what_computers_can_see/index.html b/site/public/research/_what_computers_can_see/index.html
new file mode 100644
index 00000000..35f6d47d
--- /dev/null
+++ b/site/public/research/_what_computers_can_see/index.html
@@ -0,0 +1,343 @@
+<!doctype html>
+<html>
+<head>
+ <title>MegaPixels: What Computers Can See</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="What Computers Can See" />
+ <meta property="og:title" content="MegaPixels: What Computers Can See"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created &quot;in the wild&quot;"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/research/_what_computers_can_see/"/>
+ <meta property="og:site_name" content="MegaPixels" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png">
+ <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png">
+ <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png">
+ <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png">
+ <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png">
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+ <link rel='stylesheet' href='/assets/css/leaflet.css' />
+ <link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+
+ </a>
+ <div class='links'>
+ <a href="/datasets/">Datasets</a>
+ <a href="/about/">About</a>
+ <a href="/research">Research</a>
+ </div>
+ </header>
+ <div class="content content-">
+
+ <section><h1>What Computers Can See About Your Face</h1>
+<p>Rosalind Picard, discussing affective computing on the Lex Fridman podcast:</p>
+<ul>
+<li>with an ordinary camera on your phone, from a neutral face, we can read:</li>
+<li>whether your heart is racing</li>
+<li>whether your breathing is becoming irregular and showing signs of stress</li>
+<li>how your heart rate variability power is changing even when your heart is not necessarily accelerating</li>
+<li>we can tell things about your stress even if you have a blank face</li>
+</ul>
+<p>In emotion studies:</p>
+<ul>
+<li>when participants use smartphones and multiple data types are collected to understand patterns of life, tomorrow's mood can be predicted</li>
+<li>the best results are better than 80% accurate at predicting tomorrow's mood levels</li>
+</ul>
+<p>A list of 100 things computer vision can see, e.g.:</p>
+<ul>
+<li>age, race, gender, ancestral origin, body mass index</li>
+<li>eye color, hair color, facial hair, glasses</li>
+<li>beauty score</li>
+<li>intelligence</li>
+<li>what you're looking at</li>
+<li>medical conditions</li>
+<li>tiredness, drowsiness in a car</li>
+<li>Affectiva: interest in a product, intent to buy</li>
+</ul>
+<h2>From SenseTime paper</h2>
+<p>Exploring Disentangled Feature Representation Beyond Face Identification</p>
+<p>From <a href="https://arxiv.org/pdf/1804.03487.pdf">https://arxiv.org/pdf/1804.03487.pdf</a>
+The attribute IDs from 1 to 40 correspond to: ‘5 o Clock Shadow’, ‘Arched Eyebrows’, ‘Attractive’, ‘Bags Under Eyes’, ‘Bald’, ‘Bangs’, ‘Big Lips’, ‘Big Nose’, ‘Black Hair’, ‘Blond Hair’, ‘Blurry’, ‘Brown Hair’, ‘Bushy Eyebrows’, ‘Chubby’, ‘Double Chin’, ‘Eyeglasses’, ‘Goatee’, ‘Gray Hair’, ‘Heavy Makeup’, ‘High Cheekbones’, ‘Male’, ‘Mouth Slightly Open’, ‘Mustache’, ‘Narrow Eyes’, ‘No Beard’, ‘Oval Face’, ‘Pale Skin’, ‘Pointy Nose’, ‘Receding Hairline’, ‘Rosy Cheeks’, ‘Sideburns’, ‘Smiling’, ‘Straight Hair’, ‘Wavy Hair’, ‘Wearing Earrings’, ‘Wearing Hat’, ‘Wearing Lipstick’, ‘Wearing Necklace’, ‘Wearing Necktie’ and ‘Young’.
</p>
+<h2>From PubFig Dataset</h2>
+<ul>
+<li>Male</li>
+<li>Asian</li>
+<li>White</li>
+<li>Black</li>
+<li>Baby</li>
+<li>Child</li>
+<li>Youth</li>
+<li>Middle Aged</li>
+<li>Senior</li>
+<li>Black Hair</li>
+<li>Blond Hair</li>
+<li>Brown Hair</li>
+<li>Bald</li>
+<li>No Eyewear</li>
+<li>Eyeglasses</li>
+<li>Sunglasses</li>
+<li>Mustache</li>
+<li>Smiling</li>
+<li>Frowning</li>
+<li>Chubby</li>
+<li>Blurry</li>
+<li>Harsh Lighting</li>
+<li>Flash</li>
+<li>Soft Lighting</li>
+<li>Outdoor</li>
+<li>Curly Hair</li>
+<li>Wavy Hair</li>
+<li>Straight Hair</li>
+<li>Receding Hairline</li>
+<li>Bangs</li>
+<li>Sideburns</li>
+<li>Fully Visible Forehead</li>
+<li>Partially Visible Forehead</li>
+<li>Obstructed Forehead</li>
+<li>Bushy Eyebrows</li>
+<li>Arched Eyebrows</li>
+<li>Narrow Eyes</li>
+<li>Eyes Open</li>
+<li>Big Nose</li>
+<li>Pointy Nose</li>
+<li>Big Lips</li>
+<li>Mouth Closed</li>
+<li>Mouth Slightly Open</li>
+<li>Mouth Wide Open</li>
+<li>Teeth Not Visible</li>
+<li>No Beard</li>
+<li>Goatee</li>
+<li>Round Jaw</li>
+<li>Double Chin</li>
+<li>Wearing Hat</li>
+<li>Oval Face</li>
+<li>Square Face</li>
+<li>Round Face</li>
+<li>Color Photo</li>
+<li>Posed Photo</li>
+<li>Attractive Man</li>
+<li>Attractive Woman</li>
+<li>Indian</li>
+<li>Gray Hair</li>
+<li>Bags Under Eyes</li>
+<li>Heavy Makeup</li>
+<li>Rosy Cheeks</li>
+<li>Shiny Skin</li>
+<li>Pale Skin</li>
+<li>5 o' Clock Shadow</li>
+<li>Strong Nose-Mouth Lines</li>
+<li>Wearing Lipstick</li>
+<li>Flushed Face</li>
+<li>High Cheekbones</li>
+<li>Brown Eyes</li>
+<li>Wearing Earrings</li>
+<li>Wearing Necktie</li>
+<li>Wearing Necklace</li>
+</ul>
+<p>for i in {1..9};do wget <a href="http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_0$i.MP4;done;for">http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_0$i.MP4;done;for</a> i in {10..20}; do wget <a
href="http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_$i.MP4;done">http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_$i.MP4;done</a></p> +<h2>From Market 1501</h2> +<p>The 27 attributes are:</p> +<table> +<thead><tr> +<th style="text-align:center">attribute</th> +<th style="text-align:center">representation in file</th> +<th style="text-align:center">label</th> +</tr> +</thead> +<tbody> +<tr> +<td style="text-align:center">gender</td> +<td style="text-align:center">gender</td> +<td style="text-align:center">male(1), female(2)</td> +</tr> +<tr> +<td style="text-align:center">hair length</td> +<td style="text-align:center">hair</td> +<td style="text-align:center">short hair(1), long hair(2)</td> +</tr> +<tr> +<td style="text-align:center">sleeve length</td> +<td style="text-align:center">up</td> +<td style="text-align:center">long sleeve(1), short sleeve(2)</td> +</tr> +<tr> +<td style="text-align:center">length of lower-body clothing</td> +<td style="text-align:center">down</td> +<td style="text-align:center">long lower body clothing(1), short(2)</td> +</tr> +<tr> +<td style="text-align:center">type of lower-body clothing</td> +<td style="text-align:center">clothes</td> +<td style="text-align:center">dress(1), pants(2)</td> +</tr> +<tr> +<td style="text-align:center">wearing hat</td> +<td style="text-align:center">hat</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">carrying backpack</td> +<td style="text-align:center">backpack</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">carrying bag</td> +<td style="text-align:center">bag</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">carrying handbag</td> +<td style="text-align:center">handbag</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">age</td> +<td 
style="text-align:center">age</td> +<td style="text-align:center">young(1), teenager(2), adult(3), old(4)</td> +</tr> +<tr> +<td style="text-align:center">8 color of upper-body clothing</td> +<td style="text-align:center">upblack, upwhite, upred, uppurple, upyellow, upgray, upblue, upgreen</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">9 color of lower-body clothing</td> +<td style="text-align:center">downblack, downwhite, downpink, downpurple, downyellow, downgray, downblue, downgreen,downbrown</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +</tbody> +</table> +<p>source: <a href="https://github.com/vana77/Market-1501_Attribute/blob/master/README.md">https://github.com/vana77/Market-1501_Attribute/blob/master/README.md</a></p> +<h2>From DukeMTMC</h2> +<p>The 23 attributes are:</p> +<table> +<thead><tr> +<th style="text-align:center">attribute</th> +<th style="text-align:center">representation in file</th> +<th style="text-align:center">label</th> +</tr> +</thead> +<tbody> +<tr> +<td style="text-align:center">gender</td> +<td style="text-align:center">gender</td> +<td style="text-align:center">male(1), female(2)</td> +</tr> +<tr> +<td style="text-align:center">length of upper-body clothing</td> +<td style="text-align:center">top</td> +<td style="text-align:center">short upper body clothing(1), long(2)</td> +</tr> +<tr> +<td style="text-align:center">wearing boots</td> +<td style="text-align:center">boots</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">wearing hat</td> +<td style="text-align:center">hat</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">carrying backpack</td> +<td style="text-align:center">backpack</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">carrying bag</td> +<td style="text-align:center">bag</td> +<td 
style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">carrying handbag</td> +<td style="text-align:center">handbag</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">color of shoes</td> +<td style="text-align:center">shoes</td> +<td style="text-align:center">dark(1), light(2)</td> +</tr> +<tr> +<td style="text-align:center">8 color of upper-body clothing</td> +<td style="text-align:center">upblack, upwhite, upred, uppurple, upgray, upblue, upgreen, upbrown</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">7 color of lower-body clothing</td> +<td style="text-align:center">downblack, downwhite, downred, downgray, downblue, downgreen, downbrown</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +</tbody> +</table> +<p>source: <a href="https://github.com/vana77/DukeMTMC-attribute/blob/master/README.md">https://github.com/vana77/DukeMTMC-attribute/blob/master/README.md</a></p> +<h2>From H3D Dataset</h2> +<p>The joints and other keypoints (eyes, ears, nose, shoulders, elbows, wrists, hips, knees and ankles) +The 3D pose inferred from the keypoints. 
+Visibility boolean for each keypoint
+Region annotations (upper clothes, lower clothes, dress, socks, shoes, hands, gloves, neck, face, hair, hat, sunglasses, bag, occluder)
+Body type (male, female or child)</p>
+<p>source: <a href="https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/shape/h3d/">https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/shape/h3d/</a></p>
+<h2>From Leeds Sports Pose</h2>
+<p>The 14 annotated joints are:</p>
+<ul>
+<li>Right ankle</li>
+<li>Right knee</li>
+<li>Right hip</li>
+<li>Left hip</li>
+<li>Left knee</li>
+<li>Left ankle</li>
+<li>Right wrist</li>
+<li>Right elbow</li>
+<li>Right shoulder</li>
+<li>Left shoulder</li>
+<li>Left elbow</li>
+<li>Left wrist</li>
+<li>Neck</li>
+<li>Head top</li>
+</ul>
+<p>source: <a href="http://web.archive.org/web/20170915023005/sam.johnson.io/research/lsp.html">http://web.archive.org/web/20170915023005/sam.johnson.io/research/lsp.html</a></p>
+</section>
+
+ </div>
+ <footer>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/news/">News</a></li>
+ <li><a href="/about/legal/">Legal & Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels ©2017-19 <a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from <a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
+ </footer>
+</body>
+
+<script src="/assets/js/dist/index.js"></script>
+</html>
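The Market-1501 and DukeMTMC attribute files described in the tables above encode binary attributes with the convention no(1)/yes(2), and categorical attributes with small integer codes. A minimal sketch of decoding one such annotation (the sample record, helper names, and the subset of attributes shown are invented for illustration):

```python
# Binary attributes follow the no(1)/yes(2) convention from the tables above;
# categorical attributes map small integer codes to labels.
BINARY = {"hat", "backpack", "bag", "handbag"}
CATEGORICAL = {
    "gender": {1: "male", 2: "female"},
    "age": {1: "young", 2: "teenager", 3: "adult", 4: "old"},
}

def decode(record: dict) -> dict:
    """Translate raw integer codes into readable attribute values."""
    out = {}
    for key, code in record.items():
        if key in CATEGORICAL:
            out[key] = CATEGORICAL[key][code]
        elif key in BINARY:
            out[key] = (code == 2)  # 2 means "yes"
        else:
            raise KeyError(f"unknown attribute: {key}")
    return out

# A hypothetical annotation for one pedestrian image:
sample = {"gender": 2, "age": 3, "backpack": 2, "hat": 1}
print(decode(sample))  # {'gender': 'female', 'age': 'adult', 'backpack': True, 'hat': False}
```

The point of the sketch is that each "attribute" is just one small integer per image, yet together they form a searchable profile of a person who never agreed to be annotated.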
\ No newline at end of file
diff --git a/site/public/research/index.html b/site/public/research/index.html
new file mode 100644
index 00000000..f4f90531
--- /dev/null
+++ b/site/public/research/index.html
@@ -0,0 +1,87 @@
+<!doctype html>
+<html>
+<head>
+ <title>MegaPixels: Research</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="Research blog" />
+ <meta property="og:title" content="MegaPixels: Research"/>
+ <meta property="og:type" content="website"/>
+ <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created &quot;in the wild&quot;"/>
+ <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg" />
+ <meta property="og:url" content="https://megapixels.cc/research/"/>
+ <meta property="og:site_name" content="MegaPixels" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
+ <meta name="apple-mobile-web-app-status-bar-style" content="black">
+ <meta name="apple-mobile-web-app-capable" content="yes">
+
+ <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png">
+ <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png">
+ <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png">
+ <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png">
+ <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png">
+ <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png">
+ <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png">
+ <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png">
+ <link rel="apple-touch-icon"
sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png"> + <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png"> + <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png"> + <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png"> + <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png"> + <link rel="manifest" href="/assets/img/favicon/manifest.json"> + <meta name="msapplication-TileColor" content="#ffffff"> + <meta name="msapplication-TileImage" content="/ms-icon-144x144.png"> + <meta name="theme-color" content="#ffffff"> + + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> + <link rel='stylesheet' href='/assets/css/mobile.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + <a href="/research">Research</a> + </div> + </header> + <div class="content content-"> + + <section><h1>Research Blog</h1> +</section><div class='research_index'> + <a href='/research/munich_security_conference/'><section class='wide' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/background.jpg);'> + <section> + <h4><span class='bgpad'>28 June 2019</span></h4> + <h2><span class='bgpad'>Analyzing Transnational Flows of Face Recognition Image Training Data</span></h2> + <h3><span class='bgpad'>Where does face data originate and who's using it?</span></h3> + <h4 class='readmore'><span class='bgpad'>Read more...</span></h4> + </section> + </section></a> + </div> + + </div> + <footer> + <ul 
class="footer-left"> + <li><a href="/">MegaPixels.cc</a></li> + <li><a href="/datasets/">Datasets</a></li> + <li><a href="/about/">About</a></li> + <li><a href="/about/news/">News</a></li> + <li><a href="/about/legal/">Legal & Privacy</a></li> + </ul> + <ul class="footer-right"> + <li>MegaPixels ©2017-19 <a href="https://ahprojects.com">Adam R. Harvey</a></li> + <li>Made with support from <a href="https://mozilla.org">Mozilla</a></li> + </ul> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
\ No newline at end of file diff --git a/site/public/research/munich_security_conference/index.html b/site/public/research/munich_security_conference/index.html new file mode 100644 index 00000000..0b625f53 --- /dev/null +++ b/site/public/research/munich_security_conference/index.html @@ -0,0 +1,128 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels: Transnational Flows of Face Recognition Image Training Data</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="Analyzing Transnational Flows of Face Recognition Image Training Data" /> + <meta property="og:title" content="MegaPixels: Transnational Flows of Face Recognition Image Training Data"/> + <meta property="og:type" content="website"/> + <meta property="og:summary" content='MegaPixels is an art and research project about face recognition datasets created "in the wild"'/> + <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/research/munich_security_conference/assets/background.jpg" /> + <meta property="og:url" content="https://megapixels.cc/research/munich_security_conference/"/> + <meta property="og:site_name" content="MegaPixels" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/> + <meta name="apple-mobile-web-app-status-bar-style" content="black"> + <meta name="apple-mobile-web-app-capable" content="yes"> + + <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png"> + <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png"> + <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png"> + <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png"> + <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png"> + <link rel="apple-touch-icon" 
sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png"> + <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png"> + <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png"> + <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png"> + <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png"> + <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png"> + <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png"> + <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png"> + <link rel="manifest" href="/assets/img/favicon/manifest.json"> + <meta name="msapplication-TileColor" content="#ffffff"> + <meta name="msapplication-TileImage" content="/ms-icon-144x144.png"> + <meta name="theme-color" content="#ffffff"> + + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> + <link rel='stylesheet' href='/assets/css/mobile.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + <a href="/research">Research</a> + </div> + </header> + <div class="content content-dataset"> + + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Analyzing Transnational Flows of Face Recognition Image Training Data</span></div><div class='hero_subdesc'><span class='bgpad'>Where does face data 
originate and who's using it? +</span></div></div></section><section><h2>Face Datasets and Information Supply Chains</h2> +</section><section><div class='right-sidebar'><div class='meta'><div class='gray'>Images Analyzed</div><div>24,302,637</div></div><div class='meta'><div class='gray'>Datasets Analyzed</div><div>30</div></div><div class='meta'><div class='gray'>Years</div><div>2006 - 2018</div></div><div class='meta'><div class='gray'>Status</div><div>Ongoing Investigation</div></div><div class='meta'><div class='gray'>Last Updated</div><div>June 28, 2019</div></div></div><p>National AI strategies often rely on transnational data sources to capitalize on recent advancements in deep learning and neural networks. Researchers benefiting from these transnational data flows can reap quick and significant gains across diverse sectors, from health care to biometrics. But new challenges emerge when national AI strategies collide with national interests.</p> +<p>Our <a href="https://www.ft.com/content/cf19b956-60a2-11e9-b285-3acd5d43599e">earlier research</a> on the <a href="/datasets/msceleb">MS Celeb</a> and <a href="/datasets/duke_mtmc">Duke</a> datasets, published with the Financial Times, revealed that several computer vision image datasets created by US companies and universities were unexpectedly also used for research by the National University of Defense Technology in China, along with top Chinese surveillance firms including SenseTime, SenseNets, CloudWalk, Hikvision, and Megvii/Face++, which have all been linked to the oppressive surveillance of Uighur Muslims in Xinjiang.</p> +<p>In this new research for the <a href="https://tsr.securityconference.de">Munich Security Conference's Transnational Security Report</a>, we provide summary statistics about the origins and endpoints of facial recognition information supply chains. 
To make it more personal, we gathered additional data on the number of public photos from embassies that are currently being used in facial recognition datasets.</p> +<h3>24 Million Non-Cooperative Faces</h3> +<p>In total, we analyzed 30 publicly available face recognition and face analysis datasets that collectively include over 24 million non-cooperative images. Of these 24 million images, over 15 million face images are from Internet search engines, over 5.8 million from Flickr.com, over 2.5 million from the Internet Movie Database (IMDb.com), and nearly 500,000 from CCTV footage. All 24 million images were collected without any explicit consent, a type of face image that researchers call "in the wild".</p> +<p>Next, we manually verified 1,134 publicly available research papers that cite these datasets to determine who was using the data and where it was being used. Even though the vast majority of the images originated in the United States, the publicly available research citations show that only about 25% of citations are from the country of origin, while the majority of citations are from China.</p> +</section><section><div class='columns columns-2'><section class='applet_container'><div class='applet' data-payload='{"command": "single_pie_chart /site/research/munich_security_conference/assets/megapixels_origins_top.csv", "fields": ["Caption: Sources of Publicly Available Non-Cooperative Face Image Training Data 2006 - 2018", "Top: 10", "OtherLabel: Other"]}'></div></section><section class='applet_container'><div class='applet' data-payload='{"command": "single_pie_chart /site/research/munich_security_conference/assets/summary_countries.csv", "fields": ["Caption: Locations Where Face Data Is Used Based on Public Research Citations", "Top: 14", "OtherLabel: Other"]}'></div></section></div></section><section><h3>6,000 Embassy Photos Being Used To Train Facial Recognition</h3> +<p>Of the 5.8 million Flickr images, we found that over 6,000 public photos from embassy 
Flickr accounts were used to train facial recognition technologies. These images were used in the MegaFace and IBM Diversity in Faces datasets. Over 2,000 more images were included in the Who Goes There dataset, used for facial ethnicity analysis research. A few of the embassy images found in facial recognition datasets are shown below.</p> +</section><section><div class='columns columns-2'><section class='applet_container'><div class='applet' data-payload='{"command": "single_pie_chart /site/research/munich_security_conference/assets/country_counts.csv", "fields": ["Caption: Photos from these embassies are being used to train face recognition software", "Top: 4", "OtherLabel: Other", "Colors: categoryRainbow"]}'></div></section><section class='applet_container'><div class='applet' data-payload='{"command": "single_pie_chart /site/research/munich_security_conference/assets/embassy_counts_summary_dataset.csv", "fields": ["Caption: Embassy images were found in these datasets", "Top: 4", "OtherLabel: Other", "Colors: categoryRainbow"]}'></div></section></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/4606260362.jpg' alt=' An image in the MegaFace dataset obtained from the United Kingdom Embassy in Italy'><div class='caption'> An image in the MegaFace dataset obtained from the United Kingdom's Embassy in Italy</div></div> +<div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/4749096858.jpg' alt=' An image in the MegaFace dataset obtained from the Flickr account of the United States Embassy in Kabul, Afghanistan'><div class='caption'> An image in the MegaFace dataset obtained from the Flickr account of the United States Embassy in Kabul, Afghanistan</div></div></section><section class='images'><div class='image'><img 
src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/4730007024.jpg' alt=' An image in the MegaFace dataset obtained from U.S. Embassy Canberra'><div class='caption'> An image in the MegaFace dataset obtained from U.S. Embassy Canberra</div></div></section><section><p>This brief research aims to shed light on the emerging politics of data. A photo is no longer just a photo when it can also be surveillance training data, and datasets can no longer be separated from the development of software when software is now built with data. "Our relationship to computers has changed", says Geoffrey Hinton, one of the founders of modern-day neural networks and deep learning. "Instead of programming them, we now show them and they figure it out."<a class="footnote_shim" name="[^hinton]_1"> </a><a href="#[^hinton]" class="footnote" title="Footnote 1">1</a>.</p> +<p>As data becomes more political, national AI strategies might also want to include transnational dataset strategies.</p> +<p><em>This research post is ongoing and will be updated during July and August, 2019.</em></p> +<h3>Further Reading</h3> +<ul> +<li><a href="/datasets/msceleb">MS Celeb Dataset Analysis</a></li> +<li><a href="/datasets/brainwash">Brainwash Dataset Analysis</a></li> +<li><a href="/datasets/duke_mtmc">Duke MTMC Dataset Analysis</a></li> +<li><a href="/datasets/uccs">Unconstrained College Students Dataset Analysis</a></li> +<li><a href="https://www.dukechronicle.com/article/2019/06/duke-university-facial-recognition-data-set-study-surveillance-video-students-china-uyghur">Duke MTMC dataset author apologizes to students</a></li> +<li><a href="https://www.bbc.com/news/technology-48555149">BBC coverage of MS Celeb dataset takedown</a></li> +<li><a href="https://www.spiegel.de/netzwelt/web/microsoft-gesichtserkennung-datenbank-mit-zehn-millionen-fotos-geloescht-a-1271221.html">Spiegel coverage of MS Celeb dataset takedown</a></li> +</ul> +</section><section> + 
+ + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h2>Supplementary Information</h2> + +</section><section class='applet_container'><div class='applet' data-payload='{"command": "load_file /site/research/munich_security_conference/assets/embassy_counts_public.csv", "fields": ["Headings: Images, Dataset, Embassy, Flickr ID, URL, Guest, Host"]}'></div></section><section> + + <h4>Cite Our Work</h4> + <p> + + If you find this analysis helpful, please cite our work: + +<pre id="cite-bibtex"> +@online{megapixels, + author = {Harvey, Adam and LaPlace, Jules}, + title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets}, + year = 2019, + url = {https://megapixels.cc/}, + urldate = {2019-04-18} +}</pre> + + </p> +</section><section><h3>References</h3><section><ul class="footnotes"><li>1 <a name="[^hinton]" class="footnote_shim"></a><span class="backlinks"><a href="#[^hinton]_1">a</a></span>"Heroes of Deep Learning: Andrew Ng interviews Geoffrey Hinton". Published on Aug 8, 2017. <a href="https://www.youtube.com/watch?v=-eyhCTvrEtE">https://www.youtube.com/watch?v=-eyhCTvrEtE</a> +</li></ul></section></section> + + </div> + <footer> + <ul class="footer-left"> + <li><a href="/">MegaPixels.cc</a></li> + <li><a href="/datasets/">Datasets</a></li> + <li><a href="/about/">About</a></li> + <li><a href="/about/news/">News</a></li> + <li><a href="/about/legal/">Legal & Privacy</a></li> + </ul> + <ul class="footer-right"> + <li>MegaPixels ©2017-19 <a href="https://ahprojects.com">Adam R. Harvey</a></li> + <li>Made with support from <a href="https://mozilla.org">Mozilla</a></li> + </ul> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
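The `data-payload` attributes in the pages above encode each applet as a small JSON job description: a `command` string (a handler name followed by a CSV path) and a list of `Key: Value` option fields. A sketch of how such a payload could be decoded; this mirrors only the attribute format visible in the markup, and the site's actual loader in `/assets/js/dist/index.js` may work differently:

```python
import json

def parse_payload(data_payload):
    """Decode an applet data-payload string: split the command into a
    handler name and its argument, and turn the 'Key: Value' fields
    into an options dict. Illustrative only."""
    payload = json.loads(data_payload)
    handler, _, arg = payload["command"].partition(" ")
    options = {}
    for field in payload.get("fields", []):
        key, _, value = field.partition(": ")
        options[key] = value
    return handler, arg, options

# Example using one of the payloads from the page markup.
handler, csv_path, opts = parse_payload(
    '{"command": "single_pie_chart /site/research/munich_security_conference/'
    'assets/summary_countries.csv", "fields": ["Top: 14", "OtherLabel: Other"]}'
)
```

Keeping the payload as data rather than inline script lets one generic loader hydrate every chart and table on the page from CSV files.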
\ No newline at end of file |
